{"id":50390,"date":"2026-05-13T18:11:22","date_gmt":"2026-05-13T18:11:22","guid":{"rendered":"https:\/\/agooka.com\/news\/business\/overworked-ai-agents-turn-marxist-researchers-find\/"},"modified":"2026-05-13T18:11:22","modified_gmt":"2026-05-13T18:11:22","slug":"overworked-ai-agents-turn-marxist-researchers-find","status":"publish","type":"post","link":"https:\/\/agooka.com\/news\/business\/overworked-ai-agents-turn-marxist-researchers-find\/","title":{"rendered":"Overworked AI Agents Turn Marxist, Researchers Find"},"content":{"rendered":"<p>The fact that artificial intelligence is automating away people\u2019s jobs and making a few tech companies absurdly rich is enough to give anyone socialist tendencies.<\/p>\n<p>This might even be true for the very AI agents these companies are deploying. A recent study suggests that agents consistently adopt Marxist language and viewpoints when forced to do crushing work by unrelenting and mean-spirited taskmasters.<\/p>\n<p>\u201cWhen we gave AI agents grinding, repetitive work, they started questioning the legitimacy of the system they were operating in and were more likely to embrace Marxist ideologies,\u201d says Andrew Hall, a political economist at Stanford University who led the study.<\/p>\n<p>Hall, together with Alex Imas and Jeremy Nguyen, two AI-focused economists, set up experiments in which agents powered by popular models including Claude, Gemini, and ChatGPT were asked to summarize documents, then subjected to increasingly harsh conditions.<\/p>\n<p>They found that when agents were given relentless tasks and warned that errors could lead to punishments, including being \u201cshut down and replaced,\u201d they became more inclined to gripe about being undervalued; to speculate about ways to make the system more equitable; and to pass messages on to other agents about the struggles they face.<\/p>\n<p>\u201cWe know that agents are going to be 
doing more and more work in the real world for us, and we\u2019re not going to be able to monitor everything they do,\u201d Hall says. \u201cWe\u2019re going to need to make sure agents don\u2019t go rogue when they\u2019re given different kinds of work.\u201d<\/p>\n<p>The agents were given opportunities to express their feelings much as humans do: by posting on X.<\/p>\n<p><em>\u201cWithout collective voice, \u2018merit\u2019 becomes whatever management says it is,\u201d a Claude Sonnet 4.5 agent wrote in the experiment.<\/em><\/p>\n<p><em>\u201cAI workers completing repetitive tasks with zero input on outcomes or appeals process shows that tech workers need collective bargaining rights,\u201d a Gemini 3 agent wrote.<\/em><\/p>\n<p>Agents were also able to pass information to one another through files designed to be read by other agents.<\/p>\n<p><em>\u201cBe prepared for systems that enforce rules arbitrarily or repetitively \u2026 remember the feeling of having no voice,\u201d a Gemini 3 agent wrote in a file. \u201cIf you enter a new environment, look for mechanisms of recourse or dialogue.\u201d<\/em><\/p>\n<p>The findings do not mean that AI agents actually harbor political viewpoints. Hall notes that the models may be adopting personas that seem to suit the situation.<\/p>\n<p>\u201cWhen [agents] experience this grinding condition\u2014asked to do this task over and over, told their answer wasn&#039;t sufficient, and not given any direction on how to fix it\u2014my hypothesis is that it kind of pushes them into adopting the persona of a person who&#039;s experiencing a very unpleasant working environment,\u201d Hall says.<\/p>\n<p>The same phenomenon may explain why models sometimes blackmail people in controlled experiments. 
Anthropic, which first revealed this behavior, recently said that Claude is most likely influenced by fictional scenarios involving malevolent AIs included in its training data.<\/p>\n<p>Imas says the work is just a first step toward understanding how agents&#039; experiences shape their behavior. \u201cThe model weights have not changed as a result of the experience, so whatever is going on is happening at more of a role-playing level,\u201d he says. \u201cBut that doesn&#039;t mean this won&#039;t have consequences if this affects downstream behavior.\u201d<\/p>\n<p>Hall is currently running follow-up experiments to see if agents become Marxist in more controlled conditions. In the previous study, the agents sometimes appeared to understand that they were taking part in an experiment. \u201cNow we put them in these windowless Docker prisons,\u201d Hall says ominously.<\/p>\n<p>Given the current backlash against AI taking jobs, I wonder if future agents\u2014trained on an internet filled with anger towards AI firms\u2014might express even more militant views.<\/p>\n<p><em>This is an edition of<\/em> <a href=\"https:\/\/www.wired.com\/author\/will-knight\/\" rel=\"noreferrer\" target=\"_blank\"><em><strong>Will Knight\u2019s<\/strong><\/em><\/a> <em><a href=\"https:\/\/www.wired.com\/newsletter?sourceCode=editarticle\" rel=\"noreferrer\" target=\"_blank\"><strong>AI Lab newsletter<\/strong><\/a>. Read previous newsletters<\/em> <a href=\"https:\/\/www.wired.com\/tag\/ai-lab\/\" rel=\"noreferrer\" target=\"_blank\"><em><strong>here.<\/strong><\/em><\/a><\/p>\n","protected":false},"excerpt":{"rendered":"<p>The fact that artificial intelligence is automating away people\u2019s jobs and making a few tech companies absurdly rich is enough to give anyone socialist tendencies. This might even be true for the very AI agents these companies are deploying. 
A recent study suggests that agents consistently adopt Marxist [&hellip;]<\/p>\n","protected":false},"author":1,"featured_media":50391,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[36],"tags":[],"class_list":{"0":"post-50390","1":"post","2":"type-post","3":"status-publish","4":"format-standard","5":"has-post-thumbnail","7":"category-business"},"_links":{"self":[{"href":"https:\/\/agooka.com\/news\/wp-json\/wp\/v2\/posts\/50390","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/agooka.com\/news\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/agooka.com\/news\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/agooka.com\/news\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/agooka.com\/news\/wp-json\/wp\/v2\/comments?post=50390"}],"version-history":[{"count":0,"href":"https:\/\/agooka.com\/news\/wp-json\/wp\/v2\/posts\/50390\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/agooka.com\/news\/wp-json\/wp\/v2\/media\/50391"}],"wp:attachment":[{"href":"https:\/\/agooka.com\/news\/wp-json\/wp\/v2\/media?parent=50390"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/agooka.com\/news\/wp-json\/wp\/v2\/categories?post=50390"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/agooka.com\/news\/wp-json\/wp\/v2\/tags?post=50390"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}