- OpenAI employees are publicly discussing the company's agreement with the Department of Defense.
- Some have called for more clarity; others say the contract includes strong protections.
- Sam Altman said OpenAI is working with the Pentagon to amend its contract after backlash.
OpenAI employees are airing their views about the company's deal with the Pentagon.
In posts on X over the weekend, current and former staff weighed in on whether OpenAI compromised its safety principles in negotiations with the US Department of Defense — and how the agreement compares to rival Anthropic's stance.
Last week, Sam Altman confirmed OpenAI's deal to give the Department of Defense access to its AI models. The agreement came after Anthropic refused to accept government terms that could have allowed its model, Claude, to be deployed for mass domestic surveillance or autonomous lethal weapons.
OpenAI said in a blog post on Saturday that its contract with the Defense Department is "better" and includes more safety guardrails than Anthropic's original contract.
On Monday evening, following concerns around the deal, Altman said on X that OpenAI is working with the Pentagon to "make some additions in our agreement."
Here's what OpenAI staff have to say:
Boaz Barak
Boaz Barak, a member of OpenAI's technical staff who works on alignment and is also a Harvard computer science professor, pushed back against the idea that OpenAI had weakened safeguards.
In a post on X on Sunday, Barak said there is a narrative that Anthropic had a "wonderful contract" blocking the US government from using its models for mass domestic surveillance or autonomous lethal weapons, and that OpenAI's deal would now unleash those risks.
"It is wrong to present the OAI contract as if it is the same deal than Anthropic rejected, or even as if it is less protective of the red lines than the deal Anthropic already had in place before," he wrote.
"Obviously I don't know all details of what Anthropic had before, but based on what I know, it is quite likely that the contract OAI signed gives more guarantees of no usage of models for mass domestic surveillance or autonomous lethal weapons than Anthropic ever had," he added.
In another X post on Monday, Barak said: "The red line of not using AI to do domestic mass surveillance is not Anthropic's red line – it should be all of ours."
Miles Brundage
Miles Brundage, OpenAI's former head of policy research, said in a post on X on Saturday that "in light of what external lawyers and the Pentagon are saying, OpenAI employees' default assumption here should unfortunately be that OpenAI caved + framed it as not caving, and screwed Anthropic while framing it as helping them."
"To be clear, OAI is a complex org, and I think many people involved in this worked hard for what they consider a fair outcome. Some others I do not trust at all, particularly as it relates to dealings with government and politics," he added.
He later clarified on Sunday in a reply to his post that he "probably should not have said 'caved' in the first tweet."
"OpenAI may very well have gotten what they wanted and, at the same time, this could have weakened Anthropic's bargaining position since Anthropic cared about a detail OAI didn't, and been caving from their POV," he said.
Clive Chan
Clive Chan, a member of technical staff at OpenAI, said in a post on X on Sunday that he believes the company's contract includes guarantees against the use of its models for mass domestic surveillance or autonomous lethal weapons. He added that he is "advocating internally to release more information" about the agreement.
"If we later learn this is not the case, then I will advocate internally to terminate the contract," he added.
In a reply to his post, Chan acknowledged that there are likely limits on what can be publicly disclosed about defense contracts. Still, he said the company should have anticipated public concerns and prepared clearer answers in advance.
Following the publication of OpenAI's blog post, Chan said on Sunday on X that the post "covers most" of his concerns. "Thanks to the team for being super thoughtful about the approach to this," he added.
Mohammad Bavarian
Mohammad Bavarian, a research scientist at OpenAI, said in an X post on Monday that he doesn't think there is an "un-crossable gap between what Anthropic wants and DoW's demands," adding that "with cooler heads it should be possible to cross the divide."
Bavarian also wrote on Monday that the Pentagon's designation of Anthropic as a supply chain risk is "unfair, unwise, and an extreme overreaction."
"Designating an organization which has contributed so much to pushing AI forward and with so much integrity does not serve the country or humanity well," he added.
Noam Brown
Noam Brown, a researcher at OpenAI, said in an X post on Tuesday that the original language in the company's agreement with the Department of War left "legitimate questions unanswered" — particularly around new ways AI could potentially enable lawful surveillance.
After OpenAI updated its blog post on Monday evening, Brown said "the language is now updated to address this," but added that he strongly believes "the world should not have to rely on trust in AI labs or intelligence agencies for their safety and security."
Brown added that deployment to the NSA and other Department of War intelligence agencies would be paused to allow time to address the potential loopholes "through the democratic process before deployment."
"I know that legislation can sometimes be slow, but I'm afraid of a slippery slope where we become accustomed to circumventing the democratic process for important policy decisions," he wrote.