Anthropic Hits Back After US Military Labels It a ‘Supply Chain Risk’


United States Secretary of Defense Pete Hegseth directed the Pentagon to designate Anthropic as a “supply-chain risk” on Friday, sending shockwaves through Silicon Valley and leaving many companies scrambling to understand whether they can keep using one of the industry’s most popular AI models.

“Effective immediately, no contractor, supplier, or partner that does business with the United States military may conduct any commercial activity with Anthropic,” Hegseth wrote in a social media post.

The designation comes after weeks of tense negotiations between the Pentagon and Anthropic over how the US military could use the startup’s AI models. In a blog post this week, Anthropic argued its contracts with the Pentagon should not allow for its technology to be used for mass domestic surveillance of Americans or fully autonomous weapons. The Pentagon asked that Anthropic agree to let the US military apply its AI to “all lawful uses” with no specific exceptions.

A supply chain risk designation allows the Pentagon to restrict or exclude certain vendors from defense contracts if they are deemed to pose security vulnerabilities, such as risks related to foreign ownership, control, or influence. It is intended to protect sensitive military systems and data from potential compromise.

Anthropic responded in another blog post on Friday evening, saying it would “challenge any supply chain risk designation in court,” and that such a designation would “set a dangerous precedent for any American company that negotiates with the government.”

Anthropic added that it hadn’t received any direct communication from the Department of Defense or the White House regarding negotiations over the use of its AI models.

“Secretary Hegseth has implied this designation would restrict anyone who does business with the military from doing business with Anthropic. The Secretary does not have the statutory authority to back up this statement,” the company wrote.

The Pentagon declined to comment.

“This is the most shocking, damaging, and over-reaching thing I have ever seen the United States government do,” says Dean Ball, a senior fellow at the Foundation for American Innovation and the former senior policy advisor for AI at the White House. “We have essentially just sanctioned an American company. If you are an American, you should be thinking about whether or not you should live here 10 years from now.”

People across Silicon Valley chimed in on social media expressing similar shock and dismay. “The people running this administration are impulsive and vindictive. I believe this is sufficient to explain their behavior,” said Paul Graham, founder of the startup accelerator Y Combinator.

Boaz Barak, an OpenAI researcher, said in a post that “kneecapping one of our leading AI companies is right about the worst own goal we can do. I hope very much that cooler heads prevail and this announcement is reversed.”

Meanwhile, OpenAI CEO Sam Altman announced on Friday night that the company reached an agreement with the Department of Defense to deploy its AI models in classified environments, seemingly with carveouts. “Two of our most important safety principles are prohibitions on domestic mass surveillance and human responsibility for the use of force, including for autonomous weapon systems,” said Altman. “The DoW agrees with these principles, reflects them in law and policy, and we put them into our agreement.”

Confused Customers

In its Friday blog post, Anthropic said a supply chain risk designation under the authority of 10 USC 3252 only applies to Department of Defense contracts directly with suppliers, and doesn’t cover how contractors use its Claude AI software to serve other customers.

Three experts in federal contracts say it’s impossible at this point to determine which Anthropic customers, if any, must now cut ties with the company. Hegseth’s announcement “is not mired in any law we can divine right now,” says Alex Major, a partner at the law firm McCarter & English, which works with tech companies.

Amazon, Microsoft, Google, and Nvidia—all companies that provide services to the US military and work with Anthropic—did not immediately respond to WIRED’s request for comment. Anduril and Shield AI, two prominent AI-focused defense-tech companies, both declined to comment.

Supply chain risk designations typically do not go into effect immediately, and the US government is required to complete risk assessments and notify Congress before military partners have to cut ties with a company or its products, according to Charlie Bullock, senior research fellow at the Institute for Law and AI.

But the situation could still discourage other tech companies from working with the Pentagon, according to Greg Allen, senior adviser at the Wadhwani AI Center at the Center for Strategic and International Studies (CSIS). “The Defense Department just sent a huge message to every company that if you dip your toe in the defense contracting waters, we will grab your ankle and pull you all the way in, anytime we want,” he says.

Several legal experts tell WIRED that Anthropic is likely to sue the government. Hegseth previously suggested that the DoD could attack Anthropic by invoking the Defense Production Act, which would force the company to provide its technology to the Pentagon. Allen says the flip-flopping undermines the Pentagon’s argument that Anthropic is a genuine supply chain risk.

A lawsuit could take months or years to resolve, however, and Anthropic’s business could suffer in the meantime if companies are forced to sever ties.

The dispute raises critical questions for a plethora of prominent US military partners, such as Nvidia, Amazon, Google, and Palantir, which work closely with Anthropic.

One tech executive, whose company’s software is used by the US military and who requested anonymity because of the sensitivity of the situation, said that until the Department of Defense’s directive goes beyond a post on social media, their company is in a holding pattern and has lawyers examining the issue.

As a comparison, the executive pointed to Section 889 of the National Defense Authorization Act, a procurement prohibition that bars federal agencies from contracting with companies that use certain Chinese telecom equipment as a “substantial or essential component” of any system. If this new mandate is similar, that could be a “high bar to clear,” the executive says, because even if a tech company uses Anthropic’s Claude Code internally, it might not be defined as an “essential” part of the product it ultimately sells to the government.