{"id":46934,"date":"2026-03-06T00:01:19","date_gmt":"2026-03-06T00:01:19","guid":{"rendered":"https:\/\/agooka.com\/news\/business\/openai-had-banned-military-use-the-pentagon-tested-its-models-through-microsoft-anyway\/"},"modified":"2026-03-06T00:01:19","modified_gmt":"2026-03-06T00:01:19","slug":"openai-had-banned-military-use-the-pentagon-tested-its-models-through-microsoft-anyway","status":"publish","type":"post","link":"https:\/\/agooka.com\/news\/business\/openai-had-banned-military-use-the-pentagon-tested-its-models-through-microsoft-anyway\/","title":{"rendered":"OpenAI Had Banned Military Use. The Pentagon Tested Its Models Through Microsoft Anyway"},"content":{"rendered":"<p>OpenAI CEO Sam Altman is still in the hot seat this week after his company signed a deal with the US military. OpenAI employees have criticized the move, which came after Anthropic\u2019s roughly $200 million contract with the Pentagon imploded, and asked Altman to release more information about the agreement. Altman admitted it looked \u201csloppy\u201d in a social media post.<\/p>\n<p>While this incident has become a major news story, it may just be the latest and most public example of OpenAI creating vague policies around how the US military can access its AI.<\/p>\n<p>In 2023, OpenAI\u2019s usage policy explicitly banned the military from accessing its AI models. But some OpenAI employees discovered the Pentagon had already started experimenting with Azure OpenAI, a version of OpenAI\u2019s models offered by Microsoft, two sources familiar with the matter said. At the time, Microsoft had been contracting with the Department of Defense for decades. It was also OpenAI\u2019s largest investor, and had broad license to commercialize the startup\u2019s technology.<\/p>\n<p>That same year, OpenAI employees saw Pentagon officials walking through the company\u2019s San Francisco offices, the sources said. 
They spoke on the condition of anonymity as they aren\u2019t authorized to comment on private company matters.<\/p>\n<p>Some OpenAI employees were wary about associating with the Pentagon, while others were simply confused about what OpenAI\u2019s usage policies meant. Did the policy apply to Microsoft? While sources tell WIRED it was not clear to most employees at the time, spokespeople from OpenAI and Microsoft say Azure OpenAI products are not, and were not, subject to OpenAI\u2019s policies.<\/p>\n<p>\u201cMicrosoft has a product called the Azure OpenAI Service that became available to the US Government in 2023 and is subject to Microsoft terms of service,\u201d said spokesperson Frank Shaw in a statement to WIRED. Microsoft declined to comment specifically on when it made Azure OpenAI available to the Pentagon, but notes the service was not approved for \u201ctop secret\u201d government workloads until 2025.<\/p>\n<p>\u201cAI is already playing a significant role in national security and we believe it\u2019s important to have a seat at the table to help ensure it\u2019s deployed safely and responsibly,\u201d OpenAI spokesperson Liz Bourgeois said in a statement. \u201cWe&#039;ve been transparent with our employees as we\u2019ve approached this work, providing regular updates and dedicated channels where teams can ask questions and engage directly with our national security team.\u201d<\/p>\n<p>The Department of Defense did not respond to WIRED&#039;s request for comment.<\/p>\n<p>By January 2024, OpenAI updated its policies to remove the blanket ban on military use. Several OpenAI employees found out about the policy update through an article in The Intercept, sources say. 
Company leaders later addressed the change at an all-hands meeting, explaining how the company would tread carefully in this area moving forward.<\/p>\n<p>In December 2024, OpenAI announced a partnership with Anduril to develop and deploy AI systems for \u201cnational security missions.\u201d Ahead of the announcement, OpenAI told employees that the partnership was narrow in scope and would only deal with unclassified workloads, the same sources said. This stood in contrast to a deal Anthropic had signed with Palantir, which would see Anthropic\u2019s AI used for classified military work.<\/p>\n<p>Palantir approached OpenAI in the fall of 2024 to discuss participating in its \u201cFedStart\u201d program, an OpenAI spokesperson confirmed to WIRED. OpenAI ultimately turned it down, and told employees it would\u2019ve been too high-risk, two sources familiar with the matter tell WIRED. However, OpenAI now works with Palantir in other ways.<\/p>\n<p>Around the time the Anduril deal was announced, a few dozen OpenAI employees joined a public Slack channel to discuss their concerns about the company&#039;s military partnerships, sources say and a spokesperson confirmed. Some believed the company\u2019s models were too unreliable to handle a user\u2019s credit card information, let alone assist Americans on the battlefield.<\/p>\n<p>Not everyone shared their concerns. Other employees felt that the Anduril partnership showed the company would handle its military partnerships responsibly. \u201cOpenAI\u2019s approach thus far has been \u2018measure twice, cut once\u2019 when it comes to broad classified deployments. Employees are engaged on the question of what approach to national security is in line with the mission,\u201d a current OpenAI researcher tells me.<\/p>\n<p>That\u2019s partly why OpenAI\u2019s latest Pentagon deal divided employees. 
While Altman said publicly he supported Anthropic\u2019s red lines\u2014to not allow its AI to be used for legal mass surveillance or the development of autonomous weapons\u2014the company\u2019s agreement appeared to leave room for those very activities, according to outside legal experts.<\/p>\n<p>\u201cThe biggest losers in all of this are everyday people and civilians in conflict zones,\u201d said Sarah Shoker, the former head of OpenAI\u2019s geopolitics team, in a Substack post last week. \u201cOur ability to understand the effects of military AI in war is and will be severely hindered due to layers of opacity caused by technical design and policy. It\u2019s black boxes all the way down.\u201d<\/p>\n<p>Charlie Bullock, a senior research fellow with the Institute for Law and AI, told WIRED that OpenAI\u2019s public comments suggest the Pentagon may have been permitted to engage in forms of surveillance that are technically considered legal, such as buying up Americans\u2019 user data from third-party firms and analyzing it with AI. OpenAI later amended the terms of its agreement to address this specific concern, though Bullock notes that without seeing the full terms of the agreement, the public essentially has to take OpenAI at its word.<\/p>\n<p>\u201cOver the weekend it became clear that the original language in the OpenAI\/DoW agreement left legitimate questions unanswered, especially around some novel ways that AI could potentially enable legal surveillance,\u201d said Noam Brown, an OpenAI researcher, in a social media post. Brown continued to say he\u2019s now planning to become \u201cmore personally involved with policy at OpenAI.\u201d<\/p>\n<p>Just over two years after OpenAI removed its blanket ban on military use, the company seems to have embraced defense partnerships. 
At an all-hands meeting on Tuesday, Altman reportedly told employees that the company doesn\u2019t get to make the call about what the defense department does with its artificial intelligence software. Altman also said he\u2019s interested in selling the company\u2019s AI models to NATO.<\/p>\n<p><em>This is an edition of<\/em> <em>the<\/em> <a href=\"https:\/\/www.wired.com\/newsletter?sourceCode=editarticle\" rel=\"noreferrer\" target=\"_blank\"><em><strong>Model Behavior newsletter<\/strong><\/em><\/a>. <em>Read previous newsletters<\/em> <a href=\"https:\/\/www.wired.com\/tag\/model-behavior\/\" rel=\"noreferrer\" target=\"_blank\"><em><strong>here.<\/strong><\/em><\/a><\/p>\n","protected":false},"excerpt":{"rendered":"<p>OpenAI CEO Sam Altman is still in the hot seat this week after his company signed a deal with the US military. OpenAI employees have criticized the move, which came after Anthropic\u2019s roughly $200 million contract with the Pentagon imploded, and asked Altman to release more information about 
[&hellip;]<\/p>\n","protected":false},"author":1,"featured_media":46935,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[36],"tags":[],"class_list":{"0":"post-46934","1":"post","2":"type-post","3":"status-publish","4":"format-standard","5":"has-post-thumbnail","7":"category-business"},"_links":{"self":[{"href":"https:\/\/agooka.com\/news\/wp-json\/wp\/v2\/posts\/46934","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/agooka.com\/news\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/agooka.com\/news\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/agooka.com\/news\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/agooka.com\/news\/wp-json\/wp\/v2\/comments?post=46934"}],"version-history":[{"count":0,"href":"https:\/\/agooka.com\/news\/wp-json\/wp\/v2\/posts\/46934\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/agooka.com\/news\/wp-json\/wp\/v2\/media\/46935"}],"wp:attachment":[{"href":"https:\/\/agooka.com\/news\/wp-json\/wp\/v2\/media?parent=46934"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/agooka.com\/news\/wp-json\/wp\/v2\/categories?post=46934"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/agooka.com\/news\/wp-json\/wp\/v2\/tags?post=46934"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}