OpenAI’s Pentagon AI Contract Under Scrutiny for Surveillance Loopholes

Quick Read

  • OpenAI CEO Sam Altman announced new terms with the Pentagon, claiming adherence to ethical AI principles.
  • Experts and sources challenge OpenAI’s claims, citing the contract’s ‘any lawful use’ clause as a loophole for mass surveillance.
  • OpenAI’s stance on autonomous weapons is criticized for relying on existing DoD policy rather than explicit contractual bans.
  • Rival AI firm Anthropic was blacklisted by the Pentagon for refusing to compromise on similar ethical red lines.
  • OpenAI’s technical safeguards are deemed insufficient by critics to prevent surveillance or ensure human oversight in AI-driven military operations.

WASHINGTON (Azat TV) – OpenAI’s recent contract with the Pentagon, initially presented by CEO Sam Altman as upholding strict ethical red lines against mass surveillance and autonomous weapons, is now facing intense scrutiny from industry experts and former employees. This comes on the heels of rival AI firm Anthropic being blacklisted by the Department of Defense (DoD) for refusing to compromise on these very issues, raising questions about the true nature of OpenAI’s agreement and its implications for military AI ethics.

On Friday evening, following Anthropic’s standoff with the DoD, Sam Altman announced that OpenAI had successfully negotiated new terms with the Pentagon. He claimed the agreement incorporated OpenAI’s ‘most important safety principles,’ specifically prohibiting domestic mass surveillance and ensuring human responsibility for the use of force, including autonomous weapon systems. Altman stated that the DoD, which he referred to by the Trump Administration’s preferred name, the Department of War, had agreed with these principles, saying they were already reflected in law and policy and had now been written into OpenAI’s contract.

OpenAI’s Contract: ‘Human Responsibility’ and ‘No Surveillance’ Claims

Altman’s claims were immediately met with skepticism across social media and the AI industry. Critics questioned why the Pentagon would suddenly agree to such red lines when it had previously indicated it would not. OpenAI spokesperson Kate Waters maintained that the Pentagon had not sought mass surveillance powers and denied that the agreement permitted the crossing of ethical boundaries, stating, “The system cannot be used to collect or analyze Americans’ data in a bulk, open-ended, or generalized way.”

OpenAI’s contract specifies that for intelligence activities, any handling of private information will comply with the Fourth Amendment, the National Security Act of 1947, the Foreign Intelligence Surveillance Act of 1978, Executive Order 12333, and applicable DoD directives. For autonomous weapons, the contract states that OpenAI’s technology “will not be used to independently direct autonomous weapons in any case where law, regulation, or Department policy requires human control.”

Legal Loopholes: The ‘Any Lawful Use’ Clause

Sources familiar with the Pentagon’s negotiations, however, told The Verge that OpenAI’s deal is significantly less stringent than Anthropic’s proposed terms, largely due to the inclusion of three crucial words: “any lawful use.” This clause effectively means that if a use is technically legal, the U.S. military can employ OpenAI’s technology to carry it out. Over the past several decades, the U.S. government has interpreted “technically legal” broadly enough to justify extensive mass surveillance programs.

Miles Brundage, OpenAI’s former head of policy research, expressed concern on X, suggesting that OpenAI likely “caved + framed it as not caving, and screwed Anthropic while framing it as helping them.” Dave Kasten of Palisade Research noted that the intelligence law section of OpenAI’s agreement is misleading, as “every bad intelligence scandal in the last 30 years had a legal memo saying it complied with those authorities.” Sarah Shoker, a senior research scholar at the University of California, Berkeley and former lead of OpenAI’s geopolitics team, pointed out the vagueness of terms like “unconstrained,” “generalized,” and “open-ended” in OpenAI’s statements, arguing they are “language that’s designed to allow optionality for the leadership.”

Experts suggest that under current legal constraints, the Pentagon could legally use OpenAI’s technology to search foreign intelligence databases for information on Americans at scale, purchase bulk location data from brokers, and build comprehensive profiles of citizens from publicly available and purchased data, all without violating the letter of OpenAI’s agreement.

Autonomous Weapons: Ambiguity in Oversight

OpenAI’s “red line” on lethal autonomous weapons faces similar criticism. The contract leans on existing DoD directives from 2023, mandating human control only “where law, regulation, or Department policy requires,” and adds no prohibitions of its own. This contrasts sharply with Anthropic’s demand for a ban on unsupervised lethal autonomous weapons until the technology is deemed sufficiently reliable.

The distinction between OpenAI’s commitment to “human responsibility for the use of force” and Anthropic’s insistence on “proper [human] oversight” is critical. Sources indicate that “human responsibility” could imply accountability after an AI system’s decision, whereas “oversight” would require human involvement before or during an AI-driven strike, ensuring a human-in-the-loop mechanism.
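
To make that distinction concrete, consider a toy sketch. The Python below is purely illustrative, with hypothetical function names that mirror the two contract phrases rather than any real military or vendor system: under “oversight,” a human gate sits before the action executes; under “responsibility,” the action executes on its own and a person is merely answerable afterward.

```python
# Purely illustrative: the two contract phrases rendered as control
# flow. All names are hypothetical, not drawn from any real system.
from typing import Callable, Dict, List

def strike_with_oversight(target: str,
                          human_approves: Callable[[str], bool]) -> str:
    # "Oversight": a human must approve before anything executes,
    # keeping a human in the loop for every use of force.
    if not human_approves(target):
        return f"aborted: no human approval for {target}"
    return f"executed: {target}"

def strike_with_responsibility(target: str,
                               audit_log: List[Dict[str, str]]) -> str:
    # "Responsibility": the system acts autonomously; a human is
    # assigned accountability only after the fact, via the log.
    result = f"executed: {target}"
    audit_log.append({"action": result, "accountable": "named post hoc"})
    return result
```

In the first pattern the human is in the loop; in the second, the human is only on the hook.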

Technical Safeguards and Industry Fallout

OpenAI also cited technical safeguards, such as employees receiving security clearances, and the implementation of classifiers—small models designed to monitor and tag large models—to prevent red line violations. However, a source told The Verge that such safeguards are not new to AI companies working with the Pentagon and have limited impact. Classifiers, for instance, cannot confirm if a human reviewed an AI’s decision to attack or differentiate between a single query and a mass surveillance program. Furthermore, if the government deems an action legal, OpenAI’s classifiers would not be allowed to prohibit the technology from carrying it out.
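
The limitation sources describe is easy to see in a toy example. The Python below is a hypothetical sketch, not OpenAI’s implementation: a per-query classifier can flag a request that names surveillance outright, yet it waves the same program through when it arrives as a stream of individually unremarkable lookups.

```python
# Purely illustrative: a toy per-query classifier gate. The blocklist
# and queries are hypothetical, meant to show a structural limit of
# per-query screening, not to depict OpenAI's actual classifiers.
BLOCKED_PHRASES = {"bulk collection", "mass surveillance"}

def classify(query: str) -> str:
    # The classifier sees one query at a time, with no memory of the
    # millions of similar queries that may surround it.
    lowered = query.lower()
    return "block" if any(p in lowered for p in BLOCKED_PHRASES) else "allow"

print(classify("Run bulk collection on purchased location data"))  # block

# The same program, issued as individually innocuous requests,
# passes the per-query check every time.
for i in range(3):
    print(classify(f"Summarize location history for subject {i}"))  # allow
```

A classifier of this shape also has no way to verify whether a human reviewed any given output before it was acted on, which is the other gap critics raised.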

Amidst this controversy, Defense Secretary Pete Hegseth and President Trump publicly asserted that the U.S. military’s use of technology would not be dictated by private tech companies. Jeremy Lewin, an undersecretary in the Trump administration, stated that OpenAI’s deal was a “compromise that Anthropic was offered, and rejected.”

Anthropic’s refusal to accept these terms led to its classification as a “supply-chain risk” by the Pentagon, a designation typically reserved for foreign companies with cybersecurity concerns and rarely applied to a U.S. firm. This move sparked widespread support for Anthropic within the tech industry, with many workers and public figures, including pop star Katy Perry, lauding the company’s ethical stand. Anthropic’s Claude chatbot even briefly surpassed ChatGPT as the most-downloaded app on Apple’s App Store. Despite this public perception, Anthropic CEO Dario Amodei has clarified that his company does not rule out lethal autonomous weapons in the future; his objection is that current frontier AI systems are not yet reliable enough for unsupervised use.

The diverging paths of OpenAI and Anthropic underscore a critical tension in the rapidly evolving landscape of military AI: the balance between national security interests and ethical safeguards. OpenAI’s reliance on existing legal frameworks, while seemingly pragmatic, risks enabling interpretations that could erode privacy and human control, suggesting that current laws may be insufficient to govern advanced AI capabilities without explicit, unambiguous contractual prohibitions.