OpenAI claims its military AI pact includes 'more guardrails' than Anthropic's, but can cloud-only deployment truly prevent weaponization?
The U.S. Department of War (DoW) signed a classified agreement with OpenAI that emphasizes cloud-exclusive deployment and OpenAI's control of the safety stack. OpenAI CEO Sam Altman stated:
We think our agreement has more guardrails than any previous agreement for classified AI deployments, including Anthropic's. It was definitely rushed, and the optics don't look good.
We really wanted to de-escalate things, and we thought the deal on offer was good.
If we are right and this does lead to a de-escalation between the DoW and the industry, we will look like geniuses, and a company that…
— Sam Altman (@sama) March 1, 2026
The contract explicitly prohibits mass domestic surveillance, autonomous weapons, and high-stakes automated decisions. The language, however, carries critical nuances: although OpenAI retains control over the safety stack, the clause governing compliance with DoW Directive 3000.09 permits classified exceptions under national security mandates. The contract's legal obligations are also pegged to current policy rather than to any future ethical framework.
Cloud-only deployment creates a technical barrier against direct weaponization, but it does not prevent adversarial attacks on the underlying infrastructure. The contract requires cleared personnel in deployment loops, yet its termination clauses for violations remain vague, raising questions about how enforcement would work in covert operations.