Scoop: OpenAI, Pentagon add more surveillance protections to AI deal
OpenAI and the Pentagon have agreed to strengthen their recently signed contract, following widespread backlash that domestic mass surveillance was still a real risk under the deal — though the new language has not yet been formally signed, sources familiar with the pact told Axios.
Why it matters: The Pentagon’s deal with Anthropic to use Claude for national security blew up, and the prospect of securing an agreement with OpenAI appeared to be on thin ice if concerns around mass domestic surveillance weren’t addressed.
OpenAI CEO Sam Altman approached Emil Michael, the undersecretary of Defense for research and engineering, to rework the contract, the sources said.
The big picture: As negotiations were deteriorating with Anthropic, the Pentagon and OpenAI began working through an alternative.
Altman said he had the same worries as Anthropic — domestic mass surveillance and autonomous weapons — and critics questioned whether civil liberties and safety would really be upheld.
That prompted the CEO to try to answer thousands of questions on X directly.
The Pentagon also went on a messaging spree, reassuring observers that it, too, cared about civil liberties and had no intention of spying on Americans — and that this was a matter of letting national security be handled by the government, not a company.
“One thing I think I did wrong: we shouldn’t have rushed to get this out on Friday,” Altman said in an internal post to employees earlier on Monday, which he later shared on X.
“The issues are super complex, and demand clear communication. We were genuinely trying to de-escalate things and avoid a much worse outcome, but I think it just looked opportunistic and sloppy. Good learning experience for me as we face higher-stakes decisions in the future.”
The language seen by Axios states:
“Consistent with applicable laws, including the Fourth Amendment to the United States Constitution, National Security Act of 1947, FISA Act of 1978, the AI system shall not be intentionally used for domestic surveillance of U.S. persons and nationals.”
“For the avoidance of doubt, the Department understands this limitation to prohibit deliberate tracking, surveillance, or monitoring of U.S. persons or nationals, including through the procurement or use of commercially acquired personal or identifiable information.”
Additionally, Altman said on X the Pentagon has affirmed OpenAI’s services will not be used by intelligence agencies like the National Security Agency and any services to those agencies would require “a follow-on modification” to the contract.
Between the lines: The amendment to the existing OpenAI-Pentagon contract makes an explicit reference to "commercially acquired" or public information. Previously, the contract named only "private information."
That earlier wording would have left geolocation, web browsing data or personal financial information purchased from data brokers up for grabs.
What they’re saying: “It’s critical to protect the civil liberties of Americans, and there was so much focus on this, that we wanted to make this point especially clear, including around commercially acquired information,” Altman said on X.
“Just like everything we do with iterative deployment, we will continue to learn and refine as we go.”
What we’re watching: The Pentagon as of Monday night has not sent Anthropic a formal notice designating the company a “supply chain risk,” as threatened last week, as Altman continues to push for the same terms to be offered to the rival company.
Editor’s note: This story has been updated with new details throughout.