AI is cannibalizing itself. And creating more AI.
Artificial intelligence is consuming data faster than humans can create it
Artificial intelligence is trained on data that is largely taken from the internet. However, with the volume of data required to school AI, many models end up consuming other AI-generated data, which can in turn negatively affect the model as a whole. With AI both producing and consuming data, the internet has the potential to become overrun with bots, with far less content being produced by humans.
Is AI cannibalization bad?
AI is eating itself. Currently, artificial intelligence is growing at a rapid rate, and the human-created data needed to train models is running out. "As they trawl the web for new data to train their next models on — an increasingly challenging task — [AI bots are] likely to ingest some of their own AI-generated content, creating an unintentional feedback loop in which what was once the output from one AI becomes the input for another," said The New York Times. "When generative AI is trained on its own content, its output can also drift away from reality." This is known as model collapse.
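The feedback loop is easier to see in a deliberately simplified sketch. In the toy Python example below (which makes no claim about how real models are trained), each "generation" of a model knows only the topic frequencies in its training data and produces the next generation's training data by sampling from them. Rare topics tend to disappear, and once gone they can never come back; that one-way loss of rare, human-made material is the essence of model collapse.

```python
import random
from collections import Counter

random.seed(0)

# A toy "internet": ten kinds of content, a few common, most rare.
topics = [f"topic_{i}" for i in range(10)]
weights = [40, 25, 15, 8, 5, 3, 2, 1, 0.6, 0.4]  # rarer topics get smaller weights

# Generation 0: "human" data drawn from the true mix of topics.
data = random.choices(topics, weights=weights, k=300)

for generation in range(1, 11):
    # "Train" a model: here its entire knowledge is the topic frequencies it saw.
    counts = Counter(data)
    survivors = list(counts)
    # The next generation's training data is sampled from the model's own output,
    # so any topic missing from this generation can never reappear.
    data = random.choices(survivors, weights=[counts[t] for t in survivors], k=300)
    print(f"generation {generation}: {len(counts)} of {len(topics)} topics survive")
```

Running the loop for a handful of generations typically leaves only the most common topics standing, even though nothing was deliberately removed.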
Still, AI companies have their hands tied. "To develop ever more advanced AI products, Big Tech might have no choice but to feed its programs AI-generated content, or just might not be able to sift human fodder from the synthetic," said The Atlantic. As it stands, synthetic data is necessary to keep up with the growing technology. "Despite stunning advances, chatbots and other generative tools such as the image-making Midjourney and Stable Diffusion remain sometimes shockingly dysfunctional — their outputs filled with biases, falsehoods and absurdities." These inaccuracies then carry through to the next iteration of the AI model.
That is not to say that all AI-generated data is bad. "There are certain contexts where synthetic data can help AIs learn," said the Times. "For example, when output from a larger AI model is used to train a smaller one, or when the correct answer can be verified, like the solution to a math problem or the best strategies in games like chess or Go." Also, experts are working to create synthetic data sets that are less likely to collapse a model. "Filtering is a whole research area right now," Alex Dimakis, a computer scientist at the University of Texas at Austin and a co-director of the National AI Institute for Foundations of Machine Learning, said to The Atlantic. "And we see it has a huge impact on the quality of the models."
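The "verifiable answer" case the Times describes is the easiest kind of filtering to picture: a synthetic example is kept only if it passes an independent check. The Python sketch below is purely illustrative (the examples and the is_verified helper are hypothetical, not any lab's actual pipeline), but it shows the shape of the idea.

```python
# Hypothetical synthetic training examples for arithmetic: each pairs a question
# with the answer a generator model proposed. (Names and data are made up.)
synthetic_examples = [
    {"a": 17, "b": 23, "op": "*", "proposed": 391},
    {"a": 12, "b": 35, "op": "+", "proposed": 57},   # wrong: 12 + 35 = 47
    {"a": 144, "b": 12, "op": "/", "proposed": 12},
]

OPS = {"+": lambda a, b: a + b, "*": lambda a, b: a * b, "/": lambda a, b: a / b}

def is_verified(ex: dict) -> bool:
    """Keep a synthetic example only if the proposed answer survives an independent check."""
    return OPS[ex["op"]](ex["a"], ex["b"]) == ex["proposed"]

# Only verified examples would be allowed into the next model's training mix.
filtered = [ex for ex in synthetic_examples if is_verified(ex)]
print(f"kept {len(filtered)} of {len(synthetic_examples)} synthetic examples")
```

Real filtering research is far more involved, but the principle is the same: synthetic data earns its way into the training mix only when something outside the model can vouch for it.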
Is AI taking over the internet?
The issue of training newer artificial intelligence models may be underscoring a larger problem. "AI content is taking over the Internet," and text generated by "large language models is filling hundreds of websites, including CNET and Gizmodo," said Scientific American. AI content is also being created much faster and in larger quantities than human-made content. "I feel like we're kind of at this inflection point where a lot of the existing tools that we use to train these models are quickly becoming saturated with synthetic text," Veniamin Veselovskyy, a graduate student at the Swiss Federal Institute of Technology in Lausanne, said to Scientific American. Images, social media posts and articles created by AI have already flooded the internet.
The monumental amount of AI content on the internet, including tweets by bots, absurd pictures and fake reviews, has given rise to a more sinister belief. The dead internet theory is the "belief that the vast majority of internet traffic, posts and users have been replaced by bots and AI-generated content, and that people no longer shape the direction of the internet," said Forbes. While once just a theory floating around the forum 4chan during the early 2010s, the belief has gained momentum recently.
Some believe that AI content on the internet goes deeper than just getting social media engagement or training models. "Does the dead internet theory stop at harmless engagement farming?" Jake Renzella, a lecturer and Director of Studies (Computer Science) at UNSW Sydney, and Vlada Rozova, a research fellow in applied machine learning at The University of Melbourne, said in The Conversation. "Or perhaps beneath the surface lies a sophisticated, well-funded attempt to support autocratic regimes, attack opponents and spread propaganda?"
Luckily, experts say that the dead internet theory has not come to fruition yet. "The vast majority of posts that go viral — unhinged opinions, witticisms, astute observations, reframing of the familiar in a new context — are not AI-generated," said Forbes.
Devika Rao has worked as a staff writer at The Week since 2022, covering science, the environment, climate and business. She previously worked as a policy associate for a nonprofit organization advocating for environmental action from a business perspective.