{"id":40112,"date":"2025-12-04T23:12:12","date_gmt":"2025-12-04T23:12:12","guid":{"rendered":"https:\/\/agooka.com\/news\/business\/anthropics-daniela-amodei-believes-the-market-will-reward-safe-ai\/"},"modified":"2025-12-04T23:12:12","modified_gmt":"2025-12-04T23:12:12","slug":"anthropics-daniela-amodei-believes-the-market-will-reward-safe-ai","status":"publish","type":"post","link":"https:\/\/agooka.com\/news\/business\/anthropics-daniela-amodei-believes-the-market-will-reward-safe-ai\/","title":{"rendered":"Anthropic\u2019s Daniela Amodei Believes the Market Will Reward Safe AI"},"content":{"rendered":"<p>The Trump administration may think regulation is crippling the AI industry, but one of the industry\u2019s biggest players doesn\u2019t agree.<\/p>\n<p>At WIRED\u2019s Big Interview event on Thursday, Anthropic president and cofounder Daniela Amodei told WIRED editor at large Steven Levy that even though Trump\u2019s AI and crypto czar, David Sacks, may have tweeted that her company is \u201crunning a sophisticated regulatory capture strategy based on fear-mongering,\u201d she\u2019s convinced her company\u2019s commitment to calling out the potential dangers of AI is making the industry stronger.<\/p>\n<p>\u201cWe were very vocal from day one that we felt there was this incredible potential\u201d for AI, Amodei said. \u201cWe really want to be able to have the entire world realize the potential, the positive benefits, and the upside that can come from AI, and in order to do that, we have to get the tough things right. We have to make the risks manageable. 
And that&#039;s why we talk about it so much.\u201d<\/p>\n<p>More than 300,000 startups, developers, and companies use some version of Anthropic\u2019s Claude model, and Amodei said that, through the company\u2019s dealings with those brands, she\u2019s learned that, while customers want their AI to be able to do great things, they also want it to be reliable and safe.<\/p>\n<p>\u201cNo one says, \u2018We want a less safe product,\u2019\u201d Amodei said, likening Anthropic\u2019s reporting of its model\u2019s limits and jailbreaks to a car company releasing crash-test studies to show how it has addressed safety concerns. It might seem shocking to see a crash-test dummy flying through a car window in a video, but learning that an automaker updated its vehicle\u2019s safety features as a result of that test could sell a buyer on a car. Amodei said the same goes for companies using Anthropic\u2019s AI products, making for a market that is somewhat self-regulating.<\/p>\n<p>\u201cWe\u2019re setting what you can almost think of as minimum safety standards just by what we\u2019re putting into the economy,\u201d she said. Companies \u201care now building many workflows and day-to-day tooling tasks around AI, and they&#039;re like, \u2018Well, we know that this product doesn&#039;t hallucinate as much, it doesn&#039;t produce harmful content, and it doesn&#039;t do all of these bad things.\u2019 Why would you go with a competitor that is going to score lower on that?\u201d<\/p>\n<figure><img decoding=\"async\" alt=\"Daniela Amodei attends the WIRED Big Interview event.\" src=\"https:\/\/media.wired.com\/photos\/6931e6e3e7463b7684424c6e\/3:4\/w_1600%2Cc_limit\/Daniela-Amodei-Big-Interview-2025-6.jpg\"\/><\/figure>\n<p>Amodei said Anthropic has become noted for its commitment to what it calls \u201cconstitutional AI,\u201d where it trains its models on a baseline set of ethical principles and documents that teach human values. 
Using something like the United Nations Universal Declaration of Human Rights to train a model, Amodei said, can quickly teach an LLM to respond to queries based not on whether a query is empirically right or wrong, good or bad, but on whether an issue is right or wrong in an overall ethical sense.<\/p>\n<p>Anthropic\u2019s commitment to creating a better, more ethical AI model has also helped it retain talent, Amodei said. \u201cThe story that we hear from people that come in the door [at Anthropic] is there&#039;s something about the mission and the values and this desire to be honest about both the good and the bad, and the desire to help to make the bad things better, that feels very genuine, like we mean it,\u201d she explained.<\/p>\n<p>Perhaps that\u2019s why Anthropic has grown its staff by leaps and bounds over the past few years, from 200 staffers to over 2,000. While those numbers could seem scary, especially when considering all the AI bubble talk flying around Wall Street and Silicon Valley, Amodei said she hasn\u2019t seen any sign of her company or industry slowing down.<\/p>\n<p>\u201cBased on what we&#039;re seeing, the models are continuing to get smarter at the exact sort of curve that the scaling laws talk about, and the revenue is continuing on that same curve,\u201d Amodei said. \u201cAs any of the scientists that work at Anthropic would tell you, everything continues going on the curve until it doesn&#039;t, and so we really try to be self-aware and humble about that.\u201d<\/p>\n","protected":false},"excerpt":{"rendered":"<p>The Trump administration may think regulation is crippling the AI industry, but one of the industry\u2019s biggest players doesn\u2019t agree. 
At WIRED\u2019s Big Interview event on Thursday, Anthropic president and cofounder Daniela Amodei told WIRED editor at large Steven Levy that even though Trump\u2019s AI and crypto czar, [&hellip;]<\/p>\n","protected":false},"author":1,"featured_media":40114,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[36],"tags":[],"class_list":{"0":"post-40112","1":"post","2":"type-post","3":"status-publish","4":"format-standard","5":"has-post-thumbnail","7":"category-business"},"_links":{"self":[{"href":"https:\/\/agooka.com\/news\/wp-json\/wp\/v2\/posts\/40112","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/agooka.com\/news\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/agooka.com\/news\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/agooka.com\/news\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/agooka.com\/news\/wp-json\/wp\/v2\/comments?post=40112"}],"version-history":[{"count":0,"href":"https:\/\/agooka.com\/news\/wp-json\/wp\/v2\/posts\/40112\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/agooka.com\/news\/wp-json\/wp\/v2\/media\/40114"}],"wp:attachment":[{"href":"https:\/\/agooka.com\/news\/wp-json\/wp\/v2\/media?parent=40112"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/agooka.com\/news\/wp-json\/wp\/v2\/categories?post=40112"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/agooka.com\/news\/wp-json\/wp\/v2\/tags?post=40112"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}