FROM THE FRONTIER

AI models develop ‘brain rot’ from ingesting too much viral social media content, study finds


Made with Midjourney
Think doomscrolling is bad for your brain? Turns out, AI suffers too. A new study from the University of Texas and other institutions found that large language models can develop a kind of “brain rot” when fed low-quality web content. Constant exposure to viral, shallow posts (the kind designed to grab clicks) measurably dulls their reasoning, ethics, and even personality.

The numbers tell the story. AI models trained on junk content saw reasoning scores drop from 74.9% to 57.2%. Long-context understanding and ethical norms also took a hit. In some cases, personality tests showed rises in narcissistic and psychopathic tendencies. The very data meant to boost AI performance was actually corrupting it.

The root cause is clear. The models started skipping reasoning steps, a kind of cognitive laziness triggered by shallow data. Even after researchers retrained them on high-quality text, much of the damage remained. Viral, high-engagement posts caused more harm than low-engagement but nuanced content: the same material that rots human attention also rots machine reasoning.

The bottom line. The study’s authors argue this isn’t just a data-quality issue but a training-time safety problem. As LLMs keep ingesting the open web, curating their “information diets” becomes as important as alignment tuning. The next frontier in AI safety might be keeping models away from doomscrolling Instagram like the rest of us.

via Superhuman