All-powerful, ever-pervasive AI is running out of internet
There is no such thing as unlimited data
Artificial intelligence (AI) relies on high-quality language data to train its models, but the supply is running low. That depletion is forcing companies to hunt for new sources of data and to redesign their algorithms to use the data they have more efficiently.
What is the scope of AI's data problem?
Artificial intelligence needs to be trained, and vast amounts of data and information are used to accomplish that. The trouble is, that data is running out. A paper by Epoch, an AI research organization, found that AI could exhaust all of the high-quality language data currently available on the internet as soon as 2026. That could pose a problem as AI continues to grow. "The issue stems from the fact that, as researchers build more powerful models with greater capabilities, they have to find ever more texts to train them on," said the MIT Technology Review. The quality of the data used to train AI matters. "The [data shortage] issue stems partly from the fact that language AI researchers filter the data they use to train models into two categories: high-quality and low-quality," said the Review. "The line between the two categories can be fuzzy," but "text from [high-quality data] is viewed as better-written and is often produced by professional writers."
AI models require vast amounts of data to be functional. For example, "the algorithm powering ChatGPT was originally trained on 570 gigabytes of text data, or about 300 billion words," said Singularity Hub. In addition, "low-quality data such as social media posts or blurry photographs are easy to source but aren't sufficient to train high-performing AI models," and could even be "biased or prejudiced or may include disinformation or illegal content which could be replicated by the model." Much of the data on the internet is considered useless for AI modeling. Instead, "AI companies are hunting for untapped information sources and rethinking how they train these systems," said The Wall Street Journal. "Companies also are experimenting with using AI-generated, or synthetic, data as training material — an approach many researchers say could actually cause crippling malfunctions."
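To make the high-quality/low-quality distinction concrete, here is a deliberately crude, hypothetical filter in Python. It is not drawn from the article or any real training pipeline; actual pipelines rely on far richer signals (classifiers trained on reference corpora, deduplication, toxicity models), and every heuristic and threshold below is an invented assumption for illustration.

```python
# Hypothetical sketch of coarse data-quality filtering. The heuristics and
# thresholds are invented for the example, not any lab's real pipeline.
import re

def looks_high_quality(text: str, min_words: int = 50) -> bool:
    words = text.split()
    if len(words) < min_words:                # too short to be useful prose
        return False
    alpha_ratio = sum(ch.isalpha() or ch.isspace() for ch in text) / max(len(text), 1)
    if alpha_ratio < 0.8:                     # mostly markup, emoji or noise
        return False
    if re.search(r"(click here|buy now)", text, flags=re.IGNORECASE):
        return False                          # simple spam signal
    return True

documents = ["lol 🔥🔥🔥", "A professionally written essay about training data. " * 20]
kept = [doc for doc in documents if looks_high_quality(doc)]
print(f"kept {len(kept)} of {len(documents)} documents")
```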
What are AI companies doing to combat the imminent data scarcity?
The ticking clock on high-quality data has forced AI developers to think more creatively. Google, for instance, has considered using user data from Google Docs, Google Sheets and similar company products. Other companies are "searching for content outside the free online space, such as that held by large publishers and offline repositories," including texts published before the internet existed, said Singularity Hub. Meta has considered purchasing the publishing house Simon & Schuster to gain access to its catalog of literary works. More broadly, many companies have looked to synthetic data, which is generated by AI itself. "As long as you can get over the synthetic data event horizon, where the model is smart enough to make good synthetic data, everything will be fine," OpenAI CEO Sam Altman said at a tech conference in 2023. Synthetic data presents problems of its own, however. "Feeding a model text that is itself generated by AI is considered the computer-science version of inbreeding," said the Journal. "Such a model tends to produce nonsense, which some researchers call 'model collapse.'"
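The failure mode is easy to see in miniature. Below is a minimal, hypothetical Python sketch (not from the article or any lab's actual system) in which each "generation" of a toy model is fitted only to data sampled from the previous generation, with no fresh human-written data added; over repeated rounds the fitted distribution tends to drift and narrow, losing the tails of the original data, which is the basic shape of model collapse.

```python
import numpy as np

# Toy illustration of "model collapse": each generation of a simple model
# (a Gaussian here) is fitted only to synthetic data sampled from the
# previous generation, with no new human-written data mixed in.
rng = np.random.default_rng(seed=42)

# Generation 0 trains on "real" data drawn from the true distribution N(0, 1).
data = rng.normal(loc=0.0, scale=1.0, size=200)

for generation in range(20):
    mu, sigma = data.mean(), data.std()       # "train" the model on current data
    print(f"generation {generation:2d}: mean={mu:+.3f}, std={sigma:.3f}")
    # The next generation sees only data the current model generated.
    data = rng.normal(loc=mu, scale=sigma, size=200)

# Over many rounds the fitted spread typically drifts and narrows, losing the
# tails of the original distribution -- a toy analogue of model collapse.
```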
The other option is to rework AI algorithms to use the existing high-quality data more efficiently. One strategy being explored is curriculum learning, in which "data is fed to language models in a specific order in hopes that the AI will form smarter connections between concepts," said the Journal. If successful, the method could cut in half the amount of data required to train an AI model. Companies may also diversify the data sets used in AI models to include some lower-quality sources, or instead opt to build smaller models that require less data altogether. "We've seen how smaller models that are trained on higher-quality data can outperform larger models trained on lower-quality data," Percy Liang, a computer science professor at Stanford University, told the MIT Technology Review.
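To ground the idea, here is a minimal, hypothetical sketch of curriculum learning in Python. It is an assumption-laden illustration, not the method the Journal describes for any particular company: examples are ranked by a crude difficulty proxy (sentence length) and yielded in easy-to-hard order, the ordering a real trainer would then consume.

```python
# Hypothetical sketch of curriculum learning: rank training examples by a
# crude difficulty proxy and feed them to the model from easy to hard.
# The proxy, function names and corpus below are illustrative assumptions.
from typing import Iterator

def difficulty(example: str) -> int:
    # Toy proxy: treat longer sentences as harder.
    return len(example.split())

def curriculum_batches(examples: list[str], batch_size: int) -> Iterator[list[str]]:
    ordered = sorted(examples, key=difficulty)   # easy -> hard
    for start in range(0, len(ordered), batch_size):
        yield ordered[start:start + batch_size]

corpus = [
    "The cat sat.",
    "High-quality training data is becoming scarce.",
    "Researchers hope that ordering examples from simple to complex helps models form connections between concepts.",
]

for batch in curriculum_batches(corpus, batch_size=2):
    print(batch)   # a real trainer would run gradient updates on each batch
```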
Devika Rao has worked as a staff writer at The Week since 2022, covering science, the environment, climate and business. She previously worked as a policy associate for a nonprofit organization advocating for environmental action from a business perspective.