OpenAI Had Banned Military Use. The Pentagon Tested Its Models Through Microsoft Anyway


OpenAI CEO Sam Altman is still in the hot seat this week after his company signed a deal with the US military. OpenAI employees have criticized the move, which came after Anthropic’s roughly $200 million contract with the Pentagon imploded, and asked Altman to release more information about the agreement. Altman admitted it looked “sloppy” in a social media post.

While this incident has become a major news story, it may just be the latest and most public example of OpenAI creating vague policies around how the US military can access its AI.

In 2023, OpenAI’s usage policy explicitly banned the military from accessing its AI models. But some OpenAI employees discovered the Pentagon had already started experimenting with Azure OpenAI, a version of OpenAI’s models offered by Microsoft, two sources familiar with the matter said. At the time, Microsoft had been contracting with the Department of Defense for decades. It was also OpenAI’s largest investor, and had broad license to commercialize the startup’s technology.

That same year, OpenAI employees saw Pentagon officials walking through the company’s San Francisco offices, the sources said. They spoke on the condition of anonymity because they aren’t authorized to comment on private company matters.

Some OpenAI employees were wary about associating with the Pentagon, while others were simply confused about what OpenAI’s usage policies meant. Did the policy apply to Microsoft? While sources tell WIRED it was not clear to most employees at the time, spokespeople from OpenAI and Microsoft say Azure OpenAI products are not, and were not, subject to OpenAI’s policies.

“Microsoft has a product called the Azure OpenAI Service that became available to the US Government in 2023 and is subject to Microsoft terms of service,” said spokesperson Frank Shaw in a statement to WIRED. Microsoft declined to comment specifically on when it made Azure OpenAI available to the Pentagon, but notes the service was not approved for “top secret” government workloads until 2025.

“AI is already playing a significant role in national security and we believe it’s important to have a seat at the table to help ensure it’s deployed safely and responsibly,” OpenAI spokesperson Liz Bourgeois said in a statement. “We've been transparent with our employees as we’ve approached this work, providing regular updates and dedicated channels where teams can ask questions and engage directly with our national security team.”

The Department of Defense did not respond to WIRED's request for comment.

In January 2024, OpenAI updated its policies to remove the blanket ban on military use. Several OpenAI employees found out about the policy update through an article in The Intercept, sources say. Company leaders later addressed the change at an all-hands meeting, explaining how the company would tread carefully in this area moving forward.

In December 2024, OpenAI announced a partnership with Anduril to develop and deploy AI systems for “national security missions.” Ahead of the announcement, OpenAI told employees that the partnership was narrow in scope and would only deal with unclassified workloads, the same sources said. This stood in contrast to a deal Anthropic had signed with Palantir, which would see Anthropic’s AI used for classified military work.

Palantir approached OpenAI in the fall of 2024 to discuss participating in its “FedStart” program, an OpenAI spokesperson confirmed to WIRED. OpenAI ultimately turned the offer down, telling employees it would’ve been too high-risk, two sources familiar with the matter tell WIRED. However, OpenAI now works with Palantir in other ways.

Around the time the Anduril deal was announced, a few dozen OpenAI employees joined a public Slack channel to discuss their concerns about the company's military partnerships, sources say and a spokesperson confirmed. Some believed the company’s models were too unreliable to handle a user’s credit card information, let alone assist Americans on the battlefield.

Not everyone shared their concerns. Other employees felt that the Anduril partnership showed the company would handle its military partnerships responsibly. “OpenAI’s approach thus far has been ‘measure twice, cut once’ when it comes to broad classified deployments. Employees are engaged on the question of what approach to national security is in line with the mission,” a current OpenAI researcher tells me.

That’s partly why OpenAI’s latest Pentagon deal divided employees. While Altman said publicly he supported Anthropic’s red lines—to not allow its AI to be used for legal mass surveillance or the development of autonomous weapons—the company’s agreement appeared to leave room for those very activities, according to outside legal experts.

“The biggest losers in all of this are everyday people and civilians in conflict zones,” said Sarah Shoker, the former head of OpenAI’s geopolitics team, in a Substack post last week. “Our ability to understand the effects of military AI in war is and will be severely hindered due to layers of opacity caused by technical design and policy. It’s black boxes all the way down.”

Charlie Bullock, a senior research fellow with the Institute for Law and AI, told WIRED that OpenAI’s public comments suggest the Pentagon may have been permitted to engage in forms of surveillance that are technically considered legal, such as buying up Americans’ user data from third-party firms and analyzing it with AI. OpenAI later amended the terms of its agreement to address this specific concern, though Bullock notes that without seeing the full terms of the agreement, the public essentially has to take OpenAI at its word.

“Over the weekend it became clear that the original language in the OpenAI/DoW agreement left legitimate questions unanswered, especially around some novel ways that AI could potentially enable legal surveillance,” said Noam Brown, an OpenAI researcher, in a social media post. Brown continued to say he’s now planning to become “more personally involved with policy at OpenAI.”

Just over two years after OpenAI removed its blanket ban on military use, the company seems to have embraced defense partnerships. At an all-hands meeting on Tuesday, Altman reportedly told employees that the company doesn’t get to make the call about what the Defense Department does with its artificial intelligence software. Altman also said he’s interested in selling the company’s AI models to NATO.

This is an edition of the Model Behavior newsletter. Read previous newsletters here.