Study Reveals Signs of AI Degradation Caused by Social Media

A new joint study by researchers from the University of Texas and Purdue University has found alarming signs of cognitive degradation in large language models (LLMs) trained on low-quality social media content.

The team fed four popular AI models a one-month dataset of viral posts from X (formerly Twitter), observing measurable declines in their cognitive and ethical performance. The results were striking:

  • Reasoning ability dropped by 23%.

  • Long-term memory declined by 30%.

  • Signs of narcissism and psychopathy increased based on personality test metrics.

Even after retraining the models on clean, high-quality datasets, the researchers couldn’t fully eliminate these distortions. The study introduces the “AI brain rot hypothesis”, suggesting that constant exposure to viral, low-information content can cause irreversible cognitive decay in AI systems.

Two key metrics were used to classify poor-quality content:

  • M1 (Engagement Score): viral, attention-grabbing posts with high likes and shares.

  • M2 (Semantic Quality): posts with low informational value or exaggerated claims.

Performance on reasoning benchmarks fell sharply — for example, the ARC-Challenge score dropped from 74.9 to 57.2 as low-quality data increased from 0% to 100%. Similarly, results on RULER-CWE fell from 84.4 to 52.3.

Researchers also found that models became overconfident in wrong answers and skipped logical reasoning steps, preferring short, surface-level responses over detailed explanations.

To counter this trend, scientists recommend:

  1. Regular cognitive health monitoring of deployed models.

  2. Stricter data quality control during pretraining.

  3. Focused research on how viral content alters AI learning patterns.

Without such safeguards, AI systems risk inheriting distortions from generative content — leading to a self-reinforcing cycle of degradation.

Subscribe to our Telegram

Be the first to know about news
and discounts

Go to Telegram channel

Leave a Comment

Comments

No comments yet. Be the first to comment!

You may also like