In short: OpenAI is revising a U.S. Defense Department contract to add stronger guardrails on how its AI can and cannot be used.
In June 2025, the U.S. Defense Department signed a one-year contract with OpenAI, not Microsoft, worth up to $200 million. The deal gave the department access to OpenAI’s AI models for non-military uses such as data analysis and cybersecurity.
In late February 2026, OpenAI said it had begun revising the contract to strengthen protections against prohibited uses. OpenAI published the key changes on March 2, 2026, though the updated agreement was still awaiting formal signing in early March.
Under the revisions, the tools cannot be intentionally used to track or monitor U.S. persons, including through personal data purchased from commercial sources. They are also barred from directing autonomous weapons systems (weapons that can act on their own), conducting mass domestic surveillance, and making high-stakes automated decisions such as “social credit” scoring.
OpenAI also said the AI would be limited to certain cloud networks and would require oversight by OpenAI personnel. Think of it like requiring the maker’s staff to be present for some uses, rather than handing over the keys and walking away.
This update addresses a sensitive fear people often have about government AI: that it could be turned into a large-scale tracking tool. It also pushes back on claims that the Defense Department had been experimenting with Microsoft’s version of OpenAI technology before the policy changes; available reporting on this contract says OpenAI secured the deal directly and provides no evidence of earlier Microsoft-led experiments.
Source: Wired