{"id":38440,"date":"2025-11-12T20:41:41","date_gmt":"2025-11-12T20:41:41","guid":{"rendered":"https:\/\/agooka.com\/news\/business\/anthropics-claude-takes-control-of-a-robot-dog\/"},"modified":"2025-11-12T20:41:41","modified_gmt":"2025-11-12T20:41:41","slug":"anthropics-claude-takes-control-of-a-robot-dog","status":"publish","type":"post","link":"https:\/\/agooka.com\/news\/business\/anthropics-claude-takes-control-of-a-robot-dog\/","title":{"rendered":"Anthropic\u2019s Claude Takes Control of a Robot Dog"},"content":{"rendered":"<p>Save StorySave this storySave StorySave this story<\/p>\n<p>As more robots start showing up in warehouses, offices, and even people\u2019s homes, the idea of large language models hacking into complex systems sounds like the stuff of sci-fi nightmares. So, naturally, Anthropic researchers were eager to see what would happen if Claude tried taking control of a robot\u2014in this case, a robot dog.<\/p>\n<p>In a new study, Anthropic researchers found that Claude was able to automate much of the work involved in programming a robot and getting it to do physical tasks. On one level, their findings show the agentic coding abilities of modern AI models. On another, they hint at how these systems may start to extend into the physical realm as models master more aspects of coding and get better at interacting with software\u2014and physical objects as well.<\/p>\n<p>\u201cWe have the suspicion that the next step for AI models is to start reaching out into the world and affecting the world more broadly,\u201d Logan Graham, a member of Anthropic\u2019s red team, which studies models for potential risks, tells WIRED. \u201cThis will really require models to interface more with robots.\u201d<\/p>\n<figure><video src=\"https:\/\/media.wired.com\/clips\/6914c0c71cf8f7e1909fcfde\/720p\/pass\/Project-Fetch_Cinemagraph_16.9_008.mp4\" controls=\"true\"><\/video><\/figure>\n<figure><video src=\"https:\/\/media.wired.com\/clips\/6914c0c71cf8f7e1909fcfdd\/720p\/pass\/Project-Fetch_Cinemagraph_16.9_023.mp4\" controls=\"true\"><\/video><\/figure>\n<p>Anthropic was founded in 2021 by former OpenAI staffers who believed that AI might become problematic\u2014even dangerous\u2014as it advances. Today\u2019s models are not smart enough to take full control of a robot, Graham says, but future models might be. He says that studying how people leverage LLMs to program robots could help the industry prepare for the idea of \u201cmodels eventually self-embodying,\u201d referring to the idea that AI may someday operate physical systems.<\/p>\n<p>It is still unclear why an AI model would decide to take control of a robot\u2014let alone do something malevolent with it. But speculating about the worst-case scenario is part of Anthropic\u2019s brand, and it helps position the company as a key player in the responsible AI movement.<\/p>\n<p>In the experiment, dubbed Project Fetch, Anthropic asked two groups of researchers without previous robotics experience to take control of a robot dog, the Unitree Go2 quadruped, and program it to do specific activities. The teams were given access to a controller, then asked to complete increasingly complex tasks. One group was using Claude\u2019s coding model\u2014the other was writing code without AI assistance. The group using Claude was able to complete some\u2014though not all\u2014tasks faster than the human-only programming group. 
Anthropic also studied the collaboration dynamics in both teams by recording and analyzing their interactions. The researchers found that the group without access to Claude exhibited more negative sentiment and confusion. This might be because Claude made it quicker to connect to the robot and coded an easier-to-use interface.

[Video: Project Fetch cinemagraph, https://media.wired.com/clips/6914c0c55ba59f089c33e1b6/720p/pass/Project-Fetch_Cinemagraph_16.9_019.mp4]

The Go2 robot used in Anthropic's experiments costs $16,900, relatively cheap by robot standards. It is typically deployed in industries like construction and manufacturing to perform remote inspections and security patrols. The robot is able to walk autonomously but generally relies on high-level software commands or a person operating a controller. Go2 is made by Unitree, which is based in Hangzhou, China. Its robots are currently the most popular on the market, according to a recent report by SemiAnalysis.

The large language models that power ChatGPT and other clever chatbots typically generate text or images in response to a prompt. More recently, these systems have become adept at generating code and operating software, turning them into agents rather than just text generators.

Many researchers are interested in the potential for agents to take physical actions in addition to operating on the web. To help make this a reality, some well-funded startups are trying to develop AI models that can control vastly more capable robots. Others are developing new kinds of robots, like humanoids, which might someday work in people's homes.

Changliu Liu, a roboticist at Carnegie Mellon University, says the results of Project Fetch are interesting but not hugely surprising. Liu adds that the analysis of team dynamics is notable because it hints at new ways to design interfaces for AI-assisted coding. "What I would be most interested to see is a more detailed breakdown of how Claude contributed," she adds. "For example, whether it was identifying correct algorithms, choosing API calls, or something else more substantive."

Some researchers warn that using AI to interact with robots increases the potential for misuse and mishap. "Project Fetch demonstrates that LLMs can now instruct robots on tasks," says George Pappas, a computer scientist at the University of Pennsylvania who studies these risks.

Pappas notes, however, that today's AI models need to access other programs for tasks like sensing and navigation in order to take physical action. His group developed a system called RoboGuard that limits the ways AI models can get a robot to misbehave by imposing specific rules on the robot's behavior, in the spirit of the sketch below.
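The article doesn't describe RoboGuard's internals, but the general idea of putting hard rules between an LLM and a robot can be illustrated with a hypothetical command filter. Everything here, the MoveCommand type, the speed limits, the geofence check, is invented for illustration and is not RoboGuard's actual design or API.

```python
from dataclasses import dataclass

# Hypothetical rule-enforcing layer between an LLM and a robot.
# Not RoboGuard's actual design: names and limits are invented.

@dataclass
class MoveCommand:
    vx: float    # forward velocity, m/s
    vy: float    # lateral velocity, m/s
    vyaw: float  # turning rate, rad/s

MAX_SPEED = 0.5      # m/s, an assumed site rule
MAX_TURN_RATE = 1.0  # rad/s, an assumed site rule

def clamp(value: float, limit: float) -> float:
    """Confine value to the range [-limit, limit]."""
    return max(-limit, min(limit, value))

def enforce_rules(cmd: MoveCommand, in_allowed_zone: bool) -> MoveCommand:
    """Vet an LLM-proposed command before it ever reaches the robot."""
    if not in_allowed_zone:
        # Outside the geofence: refuse all motion rather than trust the model.
        return MoveCommand(0.0, 0.0, 0.0)
    return MoveCommand(
        clamp(cmd.vx, MAX_SPEED),
        clamp(cmd.vy, MAX_SPEED),
        clamp(cmd.vyaw, MAX_TURN_RATE),
    )

# Example: an over-aggressive model command gets clamped to the site limits.
safe = enforce_rules(MoveCommand(vx=2.0, vy=0.0, vyaw=3.0), in_allowed_zone=True)
print(safe)  # MoveCommand(vx=0.5, vy=0.0, vyaw=1.0)
```

The point is structural: the rules run outside the model, so even a confused or compromised LLM cannot issue a command the guard won't pass.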
Pappas adds that an AI system's ability to control a robot will only really take off when it is able to learn by interacting with the physical world. "When you mix rich data with embodied feedback," he says, "you're building systems that cannot just imagine the world, but participate in it."

This could make robots a lot more useful, and, if Anthropic is to be believed, a lot more risky too.

This is an edition of Will Knight's AI Lab newsletter (https://www.wired.com/newsletter?sourceCode=editarticle). Read previous newsletters at https://www.wired.com/tag/ai-lab/.