{"id":50078,"date":"2026-05-06T21:11:16","date_gmt":"2026-05-06T21:11:16","guid":{"rendered":"https:\/\/agooka.com\/news\/business\/using-ai-for-just-10-minutes-might-make-you-lazy-and-dumb-study-shows\/"},"modified":"2026-05-06T21:11:16","modified_gmt":"2026-05-06T21:11:16","slug":"using-ai-for-just-10-minutes-might-make-you-lazy-and-dumb-study-shows","status":"publish","type":"post","link":"https:\/\/agooka.com\/news\/business\/using-ai-for-just-10-minutes-might-make-you-lazy-and-dumb-study-shows\/","title":{"rendered":"Using AI for Just 10 Minutes Might Make You Lazy and Dumb, Study Shows"},"content":{"rendered":"<p>Using AI chatbots for even just 10 minutes may have a shockingly negative impact on people\u2019s ability to think and problem-solve, according to a new study from researchers at Carnegie Mellon, MIT, Oxford, and UCLA.<\/p>\n<p>Researchers tasked people with solving various problems, including simple fractions and reading comprehension, through an online platform that paid them for their work. They conducted three experiments, each involving several hundred people. Some participants were given access to an AI assistant capable of solving the problem autonomously. When the AI helper was suddenly taken away, these people were significantly more likely to give up on the problem or flub their answers. The study suggests that widespread use of AI might boost productivity at the expense of developing foundational problem-solving skills.<\/p>\n<p>\u201cThe takeaway is not that we should ban AI in education or workplaces,\u201d says Michiel Bakker, an assistant professor at MIT involved with the study. \u201cAI can clearly help people perform better in the moment, and that can be valuable. But we should be more careful about what kind of help AI provides, and when.\u201d<\/p>\n<p>I recently met up with Bakker, who has chaotic hair and a wide grin, on MIT\u2019s campus. 
Originally from the Netherlands, he previously worked at Google DeepMind in London. He told me that a well-known essay on the way AI may disempower humans over time inspired him to think about how the technology could already be eroding people\u2019s abilities. The essay makes for slightly bleak reading because it suggests that disempowerment is inevitable. That said, perhaps figuring out how AI can help people develop their own mental capabilities should be part of how models are aligned with human values.<\/p>\n<p>\u201cIt is fundamentally a cognitive question\u2014about persistence, learning, and how people respond to difficulty,\u201d Bakker tells me. \u201cWe wanted to take these broader concerns about long-term human-AI interaction and study them in a controlled experimental setting.\u201d<\/p>\n<p>The resulting study seems particularly concerning, says Bakker, because a person\u2019s willingness to persist with problem-solving is crucial to acquiring new skills and also predicts their capacity to learn over time.<\/p>\n<p>Bakker says it may be necessary to rethink how AI tools work so that\u2014like a good human teacher\u2014models sometimes prioritize a person\u2019s learning over solving a problem for them. \u201cSystems that give direct answers may have very different long-term effects from systems that scaffold, coach, or challenge the user,\u201d Bakker says. He admits, however, that striking the right balance with this kind of \u201cpaternalistic\u201d approach could be tricky.<\/p>\n<p>AI companies already think about the more subtle effects that their models can have on users. The sycophancy of some models\u2014or how likely they are to agree with and patronize users\u2014is something that OpenAI has sought to tone down with newer releases of GPT.<\/p>\n<p>Putting too much faith in AI would seem especially problematic when the tools may not behave as you expect. 
Agentic AI systems are particularly unpredictable because they carry out complex chores independently and can introduce odd errors. It makes you wonder what Claude Code and Codex are doing to the skills of coders who may sometimes need to fix the bugs those tools introduce.<\/p>\n<p>I recently got a lesson in the danger of offloading critical thinking to AI myself. I\u2019ve been using OpenClaw (with Codex inside) as a daily helper, and I\u2019ve found it to be remarkably good at solving configuration issues on Linux. Recently, however, after my Wi-Fi connection kept dropping, my AI assistant suggested running a series of commands to tweak the driver talking to the Wi-Fi card. The result was a machine that refused to boot no matter what I did.<\/p>\n<p>Perhaps, instead of simply trying to solve the problem for me, OpenClaw should have paused to teach me how to fix the issue for myself. I might have a more capable computer\u2014and brain\u2014as a result.<\/p>\n<p><em>This is an edition of<\/em> <a href=\"https:\/\/www.wired.com\/author\/will-knight\/\" rel=\"noreferrer\" target=\"_blank\"><em><strong>Will Knight\u2019s<\/strong><\/em><\/a> <em><a href=\"https:\/\/www.wired.com\/newsletter?sourceCode=editarticle\" rel=\"noreferrer\" target=\"_blank\"><strong>AI Lab newsletter<\/strong><\/a>. Read previous newsletters<\/em> <a href=\"https:\/\/www.wired.com\/tag\/ai-lab\/\" rel=\"noreferrer\" target=\"_blank\"><em><strong>here.<\/strong><\/em><\/a><\/p>\n","protected":false},"excerpt":{"rendered":"<p>Using AI chatbots for even just 10 minutes may have a shockingly negative impact on people\u2019s ability to think and problem-solve, according to a new study from researchers at Carnegie Mellon, MIT, Oxford, and UCLA. 
Researchers tasked people with solving various problems, including simple fractions and reading [&hellip;]<\/p>\n","protected":false},"author":1,"featured_media":50079,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[36],"tags":[],"class_list":{"0":"post-50078","1":"post","2":"type-post","3":"status-publish","4":"format-standard","5":"has-post-thumbnail","7":"category-business"},"_links":{"self":[{"href":"https:\/\/agooka.com\/news\/wp-json\/wp\/v2\/posts\/50078","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/agooka.com\/news\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/agooka.com\/news\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/agooka.com\/news\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/agooka.com\/news\/wp-json\/wp\/v2\/comments?post=50078"}],"version-history":[{"count":0,"href":"https:\/\/agooka.com\/news\/wp-json\/wp\/v2\/posts\/50078\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/agooka.com\/news\/wp-json\/wp\/v2\/media\/50079"}],"wp:attachment":[{"href":"https:\/\/agooka.com\/news\/wp-json\/wp\/v2\/media?parent=50078"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/agooka.com\/news\/wp-json\/wp\/v2\/categories?post=50078"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/agooka.com\/news\/wp-json\/wp\/v2\/tags?post=50078"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}