{"id":36128,"date":"2025-10-22T19:41:50","date_gmt":"2025-10-22T19:41:50","guid":{"rendered":"https:\/\/agooka.com\/news\/business\/ai-models-get-brain-rot-too\/"},"modified":"2025-10-22T19:41:50","modified_gmt":"2025-10-22T19:41:50","slug":"ai-models-get-brain-rot-too","status":"publish","type":"post","link":"https:\/\/agooka.com\/news\/business\/ai-models-get-brain-rot-too\/","title":{"rendered":"AI Models Get Brain Rot, Too"},"content":{"rendered":"<p>Save StorySave this storySave StorySave this story<\/p>\n<p>AI models may be a bit like humans, after all.<\/p>\n<p>A new study from the University of Texas at Austin, Texas A&amp;M, and Purdue University shows that large language models fed a diet of popular but low-quality social media content experience a kind of \u201cbrain rot\u201d that may be familiar to anyone who has spent too long doomscrolling on X or TikTok.<\/p>\n<p>&quot;We live in an age where information grows faster than attention spans\u2014and much of it is engineered to capture clicks, not convey truth or depth,\u201d says Junyuan Hong, an incoming assistant professor at the National University of Singapore who worked on the study as a graduate student at UT Austin. \u201cWe wondered: What happens when AIs are trained on the same stuff?\u201d<\/p>\n<p>Hong and his colleagues fed different kinds of text to two open source large language models in pretraining. They examined what happened when the models were fed a mix of highly \u201cengaging,\u201d or widely shared, social media posts and ones that contained sensational or hyped text like \u201cwow,\u201d \u201clook,\u201d or \u201ctoday only.\u201d<\/p>\n<p>The researchers then used several different benchmarks to gauge the impact of this \u201cjunk\u201d social media diet on two open source models: Meta\u2019s Llama and Alibaba\u2019s Qwen.<\/p>\n<p>The models fed junk text experienced a kind of AI brain rot\u2014with cognitive decline including reduced reasoning abilities and degraded memory. 
The models also became less ethically aligned and more psychopathic, according to two measures.<\/p>\n<p>The results mirror research on human subjects, which shows that low-quality online content has a detrimental effect on people\u2019s cognitive abilities. The pervasiveness of the phenomenon saw \u201cbrain rot\u201d named the Oxford word of the year in 2024.<\/p>\n<p>The results are important for the AI industry, Hong says, because model-builders might assume that social media posts are a good source of training data for their models. \u201cTraining on viral or attention-grabbing content may look like scaling up data,\u201d he says. \u201cBut it can quietly corrode reasoning, ethics, and long-context attention.\u201d<\/p>\n<p>The fact that LLMs suffer from brain rot seems especially worrying given that AI itself increasingly generates social media content, much of which appears optimized for engagement. The researchers also found that models impaired by low-quality content could not easily be improved through retraining.<\/p>\n<p>The findings also suggest that AI systems built around social platforms, such as Grok, might suffer from quality-control issues if user-generated posts are used in training without an eye toward the integrity of those posts.<\/p>\n<p>\u201cAs more AI-generated slop spreads across social media, it contaminates the very data future models will learn from,\u201d Hong says. \u201cOur findings show that once this kind of \u2018brain rot\u2019 sets in, later clean training can\u2019t fully undo it.\u201d<\/p>\n<p><em>This is an edition of<\/em> <a href=\"https:\/\/www.wired.com\/author\/will-knight\/\" rel=\"noreferrer\" target=\"_blank\"><em><strong>Will Knight\u2019s<\/strong><\/em><\/a> <em><a href=\"https:\/\/www.wired.com\/newsletter?sourceCode=editarticle\" rel=\"noreferrer\" target=\"_blank\"><strong>AI Lab newsletter<\/strong><\/a>. 
Read previous newsletters<\/em> <a href=\"https:\/\/www.wired.com\/tag\/ai-lab\/\" rel=\"noreferrer\" target=\"_blank\"><em><strong>here.<\/strong><\/em><\/a><\/p>\n","protected":false},"excerpt":{"rendered":"<p>AI models may be a bit like humans, after all. A new study from the University of Texas at Austin, Texas A&amp;M, and Purdue University shows that large language models fed a diet of popular but low-quality social media content experience a kind of \u201cbrain rot\u201d that may [&hellip;]<\/p>\n","protected":false},"author":1,"featured_media":36129,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[36],"tags":[],"class_list":{"0":"post-36128","1":"post","2":"type-post","3":"status-publish","4":"format-standard","5":"has-post-thumbnail","7":"category-business"},"_links":{"self":[{"href":"https:\/\/agooka.com\/news\/wp-json\/wp\/v2\/posts\/36128","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/agooka.com\/news\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/agooka.com\/news\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/agooka.com\/news\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/agooka.com\/news\/wp-json\/wp\/v2\/comments?post=36128"}],"version-history":[{"count":0,"href":"https:\/\/agooka.com\/news\/wp-json\/wp\/v2\/posts\/36128\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/agooka.com\/news\/wp-json\/wp\/v2\/media\/36129"}],"wp:attachment":[{"href":"https:\/\/agooka.com\/news\/wp-json\/wp\/v2\/media?parent=36128"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/agooka.com\/news\/wp-json\/wp\/v2\/categories?post=36128"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/agooka.com\/news\/wp-json\/wp\/v2\/tags?post=36128"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel
}","templated":true}]}}