Anthropic Opposes the Extreme AI Liability Bill That OpenAI Backed


Anthropic has come out against a proposed Illinois law backed by OpenAI that would shield AI labs from liability if their systems are used to cause large-scale harm, like mass casualties or more than $1 billion in property damage.

The fight over the bill, SB 3444, is drawing new battle lines between Anthropic and OpenAI over how AI technologies should be regulated. While AI policy experts say that the legislation has only a remote chance of becoming law, it has nonetheless exposed political divisions between two leading US AI labs that could become increasingly important as the rival companies ramp up their lobbying activity across the country.

Behind the scenes, Anthropic has been lobbying state senator Bill Cunningham, SB 3444’s sponsor, and other Illinois lawmakers to either make major changes to the bill or kill it as it stands, according to people familiar with the matter. In an email to WIRED, an Anthropic spokesperson confirmed the company’s opposition to SB 3444 and said it has held promising conversations with Cunningham about using the bill as a starting point for future AI legislation.

“We are opposed to this bill. Good transparency legislation needs to ensure public safety and accountability for the companies developing this powerful technology, not provide a get-out-of-jail-free card against all liability,” Cesar Fernandez, Anthropic’s head of US state and local government relations, said in a statement. “We know that Senator Cunningham cares deeply about AI safety, and we look forward to working with him on changes that would instead pair transparency with real accountability for mitigating the most serious harms frontier AI systems could cause.”

Representatives for Cunningham did not respond to a request for comment. A spokesperson for Illinois governor JB Pritzker sent the following statement: “While the Governor’s Office will monitor and review the many AI bills moving through the General Assembly, governor Pritzker does not believe big tech companies should ever be given a full shield that evades responsibilities they should have to protect the public interest.”

The crux of OpenAI and Anthropic’s disagreement over SB 3444 comes down to who should be liable in the event of an AI-enabled disaster—a nightmare scenario that US lawmakers have only recently begun to confront. If SB 3444 were passed, an AI lab would not be liable if a bad actor used its AI model to, for example, create a bioweapon that kills hundreds of people, so long as the lab drafted its own safety framework and published it on its website.

OpenAI has argued that SB 3444 reduces the risk of serious harm from frontier AI systems while “still allowing this technology to get into the hands of the people and businesses—small and big—of Illinois.”

The ChatGPT maker says it has worked with states like New York and California to create what it calls a “harmonized” approach to regulating AI. “In the absence of federal action, we will continue to work with states—including Illinois—to work toward a consistent safety framework,” OpenAI spokesperson Liz Bourgeois said in a statement. “We hope these state laws will inform a national framework that will help ensure the US continues to lead.”

Anthropic, on the other hand, is arguing that companies developing frontier AI models should be held at least partially responsible if their technology is used for widespread societal harm.

Some experts say the bill would dismantle existing regulations meant to deter companies from behaving badly. “Liability already exists under common law and provides a powerful incentive for AI companies to take reasonable steps to prevent foreseeable risks from their AI systems,” says Thomas Woodside, cofounder and senior policy adviser at the Secure AI Project, a nonprofit that has helped develop and advocate for AI safety laws in California and New York. “SB 3444 would take the extreme step of nearly eliminating liability for severe harms. But it’s a bad idea to weaken liability, which in most states is the most significant form of legal accountability for AI companies that’s already in place.”

Anthropic testified last week in favor of another Illinois state bill, SB 3261, which would be one of the nation’s strongest AI safety laws if it were to pass. That legislation requires frontier AI developers like OpenAI and Anthropic to create public safety and child protection plans and get them tested by third-party auditors to assess their effectiveness.

Anthropic, which was founded five years ago by a group of OpenAI defectors, has developed a reputation for speaking loudly about potential risks stemming from advanced artificial intelligence and advocating for safeguards to prevent them. That approach has repeatedly landed the company in the crosshairs of the Trump administration, which has tried to curb state-level AI regulations that it argues could hamper development. David Sacks, then the Trump administration’s AI and crypto czar, complained in a social media post last year that Anthropic was running a “sophisticated regulatory capture strategy based on fear-mongering.”

Update 4/14/26 11:35 am EDT: This story has been updated to include a statement from Illinois governor JB Pritzker's office.