{"id":49083,"date":"2026-04-14T16:51:26","date_gmt":"2026-04-14T16:51:26","guid":{"rendered":"https:\/\/agooka.com\/news\/business\/anthropic-opposes-the-extreme-ai-liability-bill-that-openai-backed\/"},"modified":"2026-04-14T16:51:26","modified_gmt":"2026-04-14T16:51:26","slug":"anthropic-opposes-the-extreme-ai-liability-bill-that-openai-backed","status":"publish","type":"post","link":"https:\/\/agooka.com\/news\/business\/anthropic-opposes-the-extreme-ai-liability-bill-that-openai-backed\/","title":{"rendered":"Anthropic Opposes the Extreme AI Liability Bill That OpenAI Backed"},"content":{"rendered":"<p>Anthropic has come out against a proposed Illinois law backed by OpenAI that would shield AI labs from liability if their systems are used to cause large-scale harm, like mass casualties or more than $1 billion in property damage.<\/p>\n<p>The fight over the bill, SB 3444, is drawing new battle lines between Anthropic and OpenAI over how AI technologies should be regulated. While AI policy experts say that the legislation has only a remote chance of becoming law, it has nonetheless exposed political divisions between two leading US AI labs that could become increasingly important as the rival companies ramp up their lobbying activity across the country.<\/p>\n<p>Behind the scenes, Anthropic has been lobbying state senator Bill Cunningham, SB 3444\u2019s sponsor, and other Illinois lawmakers to either make major changes to the bill or kill it as it stands, according to people familiar with the matter. In an email to WIRED, an Anthropic spokesperson confirmed the company\u2019s opposition to SB 3444 and said it has held promising conversations with Cunningham about using the bill as a starting point for future AI legislation.<\/p>\n<p>\u201cWe are opposed to this bill. 
Good transparency legislation needs to ensure public safety and accountability for the companies developing this powerful technology, not provide a get-out-of-jail-free card against all liability,\u201d Cesar Fernandez, Anthropic\u2019s head of US state and local government relations, said in a statement. \u201cWe know that Senator Cunningham cares deeply about AI safety, and we look forward to working with him on changes that would instead pair transparency with real accountability for mitigating the most serious harms frontier AI systems could cause.&quot;<\/p>\n<p>Representatives for Cunningham did not respond to a request for comment. A spokesperson for Illinois governor JB Pritzker sent the following statement: \u201cWhile the Governor\u2019s Office will monitor and review the many AI bills moving through the General Assembly, governor Pritzker does not believe big tech companies should ever be given a full shield that evades responsibilities they should have to protect the public interest.\u201d<\/p>\n<p>The crux of OpenAI and Anthropic\u2019s disagreement over SB 3444 comes down to who should be liable in the event of an AI-enabled disaster\u2014a potential nightmare scenario that US lawmakers have only recently begun to confront. If SB 3444 were passed, an AI lab would not be responsible if a bad actor used its AI model to, for example, create a bioweapon that kills hundreds of people, so long as the lab drafted its own safety framework and published it on its website.<\/p>\n<p>OpenAI has argued that SB 3444 reduces the risk of serious harm from frontier AI systems while \u201cstill allowing this technology to get into the hands of the people and businesses\u2014small and big\u2014of Illinois.\u201d<\/p>\n<p>The ChatGPT maker says it has worked with states like New York and California to create what it calls a \u201charmonized\u201d approach to regulating AI. 
\u201cIn the absence of federal action, we will continue to work with states\u2014including Illinois\u2014to work toward a consistent safety framework,\u201d OpenAI spokesperson Liz Bourgeois said in a statement. \u201cWe hope these state laws will inform a national framework that will help ensure the US continues to lead.\u201d<\/p>\n<p>Anthropic, on the other hand, is arguing that companies developing frontier AI models should be held at least partially responsible if their technology is used for widespread societal harm.<\/p>\n<p>Some experts say the bill would dismantle existing regulations meant to deter companies from behaving badly. &quot;Liability already exists under common law and provides a powerful incentive for AI companies to take reasonable steps to prevent foreseeable risks from their AI systems,\u201d says Thomas Woodside, cofounder and senior policy adviser at the Secure AI Project, a nonprofit that has helped develop and advocate for AI safety laws in California and New York. \u201cSB 3444 would take the extreme step of nearly eliminating liability for severe harms. But it&#039;s a bad idea to weaken liability, which in most states is the most significant form of legal accountability for AI companies that&#039;s already in place.&quot;<\/p>\n<p>Anthropic testified last week in favor of another Illinois state bill, SB 3261, which would be one of the nation\u2019s strongest AI safety laws if it were to pass. That legislation requires frontier AI developers like OpenAI and Anthropic to create public safety and child protection plans and get them tested by third-party auditors to assess their effectiveness.<\/p>\n<p>Anthropic, which was founded five years ago by a group of OpenAI defectors, has developed a reputation for speaking loudly about potential risks stemming from advanced artificial intelligence and advocating for safeguards to prevent them. 
That approach has repeatedly landed the company in the crosshairs of the Trump administration, which has tried to curb state-level AI regulations that it argues could hamper development. David Sacks, then the Trump administration\u2019s AI and crypto czar, complained in a social media post last year that Anthropic was running a \u201csophisticated regulatory capture strategy based on fear-mongering.\u201d<\/p>\n<p><em>Update 4\/14\/26 11:35 am EDT: This story has been updated to include a statement from Illinois governor JB Pritzker&#039;s office.<\/em><\/p>\n","protected":false},"excerpt":{"rendered":"<p>Anthropic has come out against a proposed Illinois law backed by OpenAI that would shield AI labs from liability if their systems are used to cause large-scale harm, like mass casualties or more than $1 billion in property damage. The fight over the bill, SB 3444, is drawing [&hellip;]<\/p>\n","protected":false},"author":1,"featured_media":49084,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[36],"tags":[],"class_list":{"0":"post-49083","1":"post","2":"type-post","3":"status-publish","4":"format-standard","5":"has-post-thumbnail","7":"category-business"},"_links":{"self":[{"href":"https:\/\/agooka.com\/news\/wp-json\/wp\/v2\/posts\/49083","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/agooka.com\/news\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/agooka.com\/news\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/agooka.com\/news\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/agooka.com\/news\/wp-json\/wp\/v2\/comments?post=49083"}],"version-history":[{"count":0,"href":"https:\/\/agooka.com\/news\/wp-json\/wp\/v2\/posts\/49083\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/agooka.com\/news\/wp-json\/wp\/v2\/media\/490
84"}],"wp:attachment":[{"href":"https:\/\/agooka.com\/news\/wp-json\/wp\/v2\/media?parent=49083"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/agooka.com\/news\/wp-json\/wp\/v2\/categories?post=49083"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/agooka.com\/news\/wp-json\/wp\/v2\/tags?post=49083"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}