Quick Read
- OpenAI CEO Sam Altman revised the company’s Pentagon AI deal after backlash, closing off domestic surveillance applications and restricting intelligence agency use.
- The revisions followed President Trump’s order directing federal agencies to stop using rival AI firm Anthropic’s technology after the company refused to relax its AI safeguards.
- Employees at Google and OpenAI are jointly demanding stricter limits on military AI, citing ethical concerns and recent military operations in Iran.
- Anthropic faces potential designation as a national security ‘supply chain risk’ for its stance, a move it plans to challenge in court.
- The controversy highlights global tensions between AI safety principles and national defense needs, with Europe also navigating similar challenges.
WASHINGTON (Azat TV) – OpenAI CEO Sam Altman announced significant revisions to the company’s recently struck deal with the U.S. Department of Defense (DoD) this week, following widespread backlash and accusations of opportunism. The move came just hours after President Donald Trump ordered all federal agencies to stop using technology from rival AI firm Anthropic, which had refused to relax its AI safeguards for defense applications. The controversy has ignited a broader debate over the ethical boundaries of artificial intelligence in military operations, prompting a coordinated push from tech workers at both Google and OpenAI for stricter limits on military AI.
The initial deal between OpenAI and the DoD, announced last Friday, drew immediate criticism for its timing and perceived lack of clear safeguards. Critics argued that the agreement appeared to capitalize on Anthropic’s public standoff with the Department of War (DoW), which had sought ‘any lawful purpose’ use of Anthropic’s Claude AI models. Anthropic had maintained its commitment to prohibit using Claude for mass domestic surveillance or fully autonomous weapons, leading to the DoW’s threat to designate the company a national security ‘supply chain risk.’
OpenAI Amends Controversial Defense Agreement
Responding to the swift backlash, Altman conceded that the announcement of the Pentagon deal had been ‘rushed and sloppy.’ In an internal communication to staff, later confirmed by CNBC, he outlined key revisions to the contract language. The new version explicitly closes off domestic surveillance applications and mandates that any intelligence agency work be governed by a separate agreement. Altman also affirmed that OpenAI’s tools would not be used by intelligence agencies such as the National Security Agency (NSA).
The CEO defended the company’s decision to work with the Pentagon, describing the negative public relations as ‘really painful’ but asserting that the deal was the ‘right decision’ in the long term, according to a report by The Wall Street Journal. OpenAI had previously amended its policies in January to allow certain government AI applications, a shift that had already sparked internal dissent.
Anthropic’s Standoff and Presidential Intervention
The backdrop to OpenAI’s revised deal is Anthropic’s dramatic confrontation with the Department of War. The DoW, rebranded from the Department of Defense, sought to amend contract language for Anthropic’s Claude system to allow its use for ‘any lawful purpose.’ Anthropic, however, steadfastly refused to strip out protections that prohibit using its AI for mass domestic surveillance or fully autonomous weapons. The company emphasized that Claude was already deployed within classified government networks, supporting intelligence analysis and operational planning, but that ethical guardrails were non-negotiable.
The dispute escalated publicly last week, culminating in President Trump’s directive via Truth Social: ‘I am directing every federal agency in the United States government to immediately cease all use of Anthropic’s technology. We don’t need it, we don’t want it, and will not do business with them again!’ Defense Secretary Pete Hegseth had indicated that Anthropic could be designated a ‘supply-chain threat,’ a label typically reserved for foreign adversaries. Anthropic has signaled its intent to challenge any such move in court, according to Resilience Media.
Employee Activism and the Future of Military AI
The intensifying debate has spurred an unprecedented wave of activism within the tech industry. Employees at Google and OpenAI are mounting a coordinated push for stricter limits on military AI applications, marking the first joint worker action between the rival tech giants. This movement, reported by TechBuzz AI, gained urgency amid escalating military operations in Iran, which have reportedly relied on AI-enhanced surveillance and strike coordination.
One OpenAI researcher stated in internal messages, ‘We’re not anti-defense. We’re anti-unaccountable AI in life-or-death scenarios.’ Activists are demanding quarterly transparency reports on all government AI deployments, third-party ethics audits, and employee seats on internal review boards that evaluate military contracts. Anthropic’s blacklisting looms large over these demands: the company stands to lose access to a government AI market expected to reach $50 billion annually by 2028, a sign that ethical branding may no longer coexist with unrestricted pursuit of defense work.
Global Implications and European Perspectives on AI Safety
The tensions unfolding in Washington are reverberating globally, particularly in Europe. While there has been no equivalent public rupture between European defense ministries and frontier AI providers, questions are emerging about the region’s alignment with AI safety principles. Some observers suggest that Anthropic’s refusal to dilute safeguards aligns more closely with Europe’s stricter posture on surveillance and high-risk AI under the EU AI Act.
European governments are actively pursuing AI integration, including in defense. In January 2026, Mistral AI announced an agreement with France’s Ministry of Defence to strengthen the nation’s defense capabilities through advanced AI solutions, though details remain scarce. The UK also signed a memorandum of understanding with OpenAI in 2025 to explore AI opportunities, and OpenAI recently announced plans to make London its largest R&D hub outside the US. However, the scope of these European deals regarding surveillance or lethality remains largely undisclosed, leaving open the same ‘kill chain’ questions at the heart of the US debate. The precedent set by the Pentagon’s actions and contract negotiations could ultimately shape decisions across Europe and within NATO systems.
The ongoing dispute between leading AI firms and governments highlights a fundamental tension: the imperative for national security to leverage advanced AI capabilities versus the ethical concerns surrounding autonomous weapons and unchecked surveillance. As these powerful general-purpose systems become more deeply embedded in intelligence and planning, the ‘fine print’ of contractual boundaries is transforming into a critical battleground for defining AI’s role in society.

