A new study appears to lend credence to allegations that OpenAI trained at least some of its AI models on copyrighted content.
OpenAI is embroiled in suits brought by authors, programmers, and other rights-holders who accuse the company of using their works — books, codebases, and so on — to develop its models without permission. OpenAI has long claimed a fair use defense, but the plaintiffs in these cases argue that there isn’t a carve-out in U.S. copyright law for training data.
The study, which was co-authored by researchers at the University of Washington, the University of Copenhagen, and Stanford, proposes a new method for identifying training data “memorized” by models behind an API, like OpenAI’s.
Models are prediction engines. Trained on a lot of data, they learn patterns — that’s how they’re able to generate essays, photos, and more. Most of the outputs aren’t verbatim copies of the training data, but owing to the way models “learn,” some inevitably are. Image models have been found to regurgitate screenshots from movies they were trained on, while language models have been observed effectively plagiarizing news articles.
The study’s method relies on words that the co-authors call “high-surprisal” — that is, words that stand out as uncommon in the context of a larger body of work. For example, the word “radar” in the sentence “Jack and I sat perfectly still with the radar humming” would be considered high-surprisal because it’s statistically less likely than words such as “engine” or “radio” to appear before “humming.”
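The idea of "surprisal" can be made concrete: it is the negative log-probability a language model assigns to a word given its context, so rarer continuations score higher. The sketch below uses invented probability values for illustration only (they are not from the study), standing in for the probabilities a real model would assign to each candidate word before "humming."

```python
import math

# Hypothetical values for P(word | "...with the ___ humming") — invented
# for illustration, not taken from the study or any real model.
P = {"engine": 0.05, "radio": 0.03, "radar": 0.0005}

def surprisal(word: str) -> float:
    """Surprisal in bits: -log2 P(word | context). Higher = more surprising."""
    return -math.log2(P[word])

# "radar" stands out as high-surprisal relative to "engine" or "radio",
# which is what makes it a useful probe word.
ranked = sorted(P, key=surprisal, reverse=True)
```

Under these toy numbers, "radar" carries roughly 11 bits of surprisal versus about 4–5 bits for "engine" or "radio," which is why a correct guess of "radar" is more informative than a correct guess of a common word.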
The co-authors probed several OpenAI models, including GPT-4 and GPT-3.5, for signs of memorization by removing high-surprisal words from snippets of fiction books and New York Times pieces and having the models try to “guess” which words had been masked. If a model guessed correctly, the co-authors concluded, it likely memorized the snippet during training.
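The mask-and-guess procedure can be sketched in a few lines. This is a simplified, hypothetical rendering of the logic described above, not the authors' code: `guess_fn` stands in for a call to an API-hosted model, and the stub guesser is invented purely so the example runs.

```python
def probe_memorization(snippet_words, masked_indices, guess_fn):
    """Mask each chosen position, ask the model to fill it in, and
    return the fraction recovered. A high recovery rate on text the
    model should not 'know' is a signal of memorization."""
    hits = 0
    for i in masked_indices:
        masked = [w if j != i else "[MASK]" for j, w in enumerate(snippet_words)]
        if guess_fn(" ".join(masked)) == snippet_words[i]:
            hits += 1
    return hits / len(masked_indices)

# Stub standing in for an API-hosted model (hypothetical, for illustration).
def toy_guess(prompt: str) -> str:
    return "radar" if "humming" in prompt else "the"

words = "Jack and I sat perfectly still with the radar humming".split()
rate = probe_memorization(words, [8], toy_guess)  # mask "radar" at index 8
```

In the study's setting, only high-surprisal positions are masked, since recovering a statistically unlikely word is much stronger evidence of memorization than recovering a common one.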
According to the results of the tests, GPT-4 showed signs of having memorized portions of popular fiction books, including books in a dataset containing samples of copyrighted ebooks called BookMIA. The results also suggested that the model memorized portions of New York Times articles, albeit at a comparatively lower rate.
Abhilasha Ravichander, a doctoral student at the University of Washington and a co-author of the study, told TechCrunch that the findings shed light on the “contentious data” models might have been trained on.
“In order to have large language models that are trustworthy, we need to have models that we can probe and audit and examine scientifically,” Ravichander said. “Our work aims to provide a tool to probe large language models, but there is a real need for greater data transparency in the whole ecosystem.”
OpenAI has long advocated for looser restrictions on developing models using copyrighted data. While the company has certain content licensing deals in place and offers opt-out mechanisms that allow copyright owners to flag content they’d prefer the company not use for training purposes, it has lobbied several governments to codify “fair use” rules around AI training approaches.