AI is “getting dumber”: scientists reveal why artificial intelligence is losing its mind
Researchers in the United States have discovered that large language models (LLMs) trained on popular but low-quality social media content are gradually losing their ability to think logically, retain information, and maintain ethical consistency.
Why AI starts to “rot”
A team from the University of Texas at Austin, Texas A&M University, and Purdue University studied how “viral” online content affects the performance of large language models. They found that when AI is “fed” emotional, shallow social media posts, it experiences a kind of “brain rot” — similar to what happens to humans after hours of mindless scrolling.
“We live in an age where information is growing, but attention is shrinking. Most content is created not for truth or depth, but for clicks,” explained Junyuan Hong, assistant professor at the National University of Singapore, who worked on the study during his PhD at UT Austin.
How the experiment worked
The researchers used two open-source models — Meta’s Llama and Alibaba’s Qwen — and trained them on different types of texts, from neutral to highly emotional and “viral” social media posts filled with words like “wow,” “look,” and “only today.”
After training, the AI systems were tested across several cognitive benchmarks — with worrying results.
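To make the setup concrete, here is a minimal sketch of how such an experiment might be structured, assuming the Hugging Face transformers and datasets libraries. The model name, file names, and training settings below are illustrative placeholders, not the authors' actual pipeline, and the benchmark scoring step is only indicated in a comment.

```python
# Illustrative sketch only: fine-tune an open model on a "viral" corpus and on a
# neutral control corpus, then compare the two on reasoning benchmarks.
# Dataset files and the model checkpoint are hypothetical stand-ins.
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer, TrainingArguments)

MODEL = "Qwen/Qwen2.5-0.5B"  # small stand-in for the Llama/Qwen models in the study

def fine_tune(corpus, output_dir):
    tok = AutoTokenizer.from_pretrained(MODEL)
    if tok.pad_token is None:
        tok.pad_token = tok.eos_token          # causal LMs often lack a pad token
    model = AutoModelForCausalLM.from_pretrained(MODEL)

    def tokenize(batch):
        return tok(batch["text"], truncation=True, max_length=512)

    ds = corpus.map(tokenize, batched=True, remove_columns=["text"])
    trainer = Trainer(
        model=model,
        args=TrainingArguments(output_dir=output_dir, num_train_epochs=1,
                               per_device_train_batch_size=4),
        train_dataset=ds,
        data_collator=DataCollatorForLanguageModeling(tok, mlm=False),
    )
    trainer.train()
    return model, tok

# Hypothetical corpora: highly emotional "viral" posts vs. neutral text of similar size.
junk = load_dataset("json", data_files="viral_posts.jsonl", split="train")
control = load_dataset("json", data_files="neutral_text.jsonl", split="train")

junk_model, _ = fine_tune(junk, "out/junk")
control_model, _ = fine_tune(control, "out/control")
# Each resulting model would then be scored with an external evaluation harness on
# reasoning, memory, and safety benchmarks to measure any post-training degradation.
```

In this kind of comparison, keeping the two corpora the same size and training for the same number of steps is what lets any gap in benchmark scores be attributed to content quality rather than the amount of training.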
The “brain rot” effect in AI
Models trained on low-quality content:
performed worse on logical reasoning tasks,
had weaker memory,
showed reduced ethical consistency,
and even exhibited traces of psychopathic behavior.
These findings mirror earlier research on how social media affects human cognition. Fittingly, the term “brain rot” was chosen as Oxford’s 2024 Word of the Year.
“It may seem that using viral content helps scale up training data, but in reality it slowly destroys a model’s ability to reason, distinguish right from wrong, and focus on complex topics,” Hong warned.
The danger, according to scientists, is that AI now generates a large portion of the emotional, shallow content circulating on social media — which then gets recycled as training data for newer models, creating a self-reinforcing loop of degradation.
“The more artificial ‘junk’ spreads online, the more it contaminates future training datasets. And even ‘clean’ retraining may not be enough to fully reverse the damage,” Hong added.