Trump Halts Claude AI Use Amid Pentagon Ethics Clash


Quick Read

  • US President Donald Trump ordered federal agencies to stop using Anthropic’s AI systems, including Claude.
  • The order escalates a dispute over Anthropic’s refusal to grant the Pentagon unrestricted use rights for its AI.
  • Anthropic maintains strict ethical guardrails against mass domestic surveillance and fully autonomous weapons.
  • The Pentagon threatened to invoke the Defense Production Act and designate Anthropic as a “supply chain risk.”
  • Over 500 employees from Google DeepMind and OpenAI signed a letter supporting ethical AI safeguards.

WASHINGTON (Azat TV) – US President Donald Trump has ordered all federal agencies to immediately cease using Anthropic’s advanced AI systems, including its flagship Claude model. This directive marks a significant escalation in a weeks-long confrontation between the White House, the Pentagon, and the San Francisco-based AI startup over the ethical deployment of artificial intelligence in national security operations, particularly concerning autonomous weapons and mass surveillance.

The core of the dispute centers on Anthropic’s refusal to grant the Department of Defense unrestricted rights to use Claude for “all lawful purposes.” As a self-proclaimed “safety-first” AI lab, Anthropic, founded in 2021 by former OpenAI researchers including CEO Dario Amodei, has built its reputation on strict ethical guardrails, explicitly blocking mass domestic surveillance and fully autonomous weapons that select and engage targets without human oversight. This clash highlights a critical turning point in defining a practical “codex” for AI, determining whether private tech firms can maintain independent ethical restrictions when contracting with government entities.

Pentagon Demands Unrestricted AI Deployment

Anthropic, which secured a $200 million Pentagon contract last July to provide AI for national security work, saw Claude become one of the first frontier AI models cleared for classified US government networks. However, tensions escalated when the Department of Defense insisted on a new agreement that would allow the military to use Claude without any oversight or ability for Anthropic to review specific deployments, even in classified environments.

Defense Secretary Pete Hegseth reportedly likened Claude to military hardware, arguing that contractors should not dictate how their products are deployed once a defense contract is signed. The Pentagon’s demand would effectively strip Anthropic of its right to block specific military applications, with the department insisting that all uses would remain lawful. While Anthropic has collaborated on defense-related projects, including missile defense systems, its chief executive, Dario Amodei, has consistently warned that current AI models are not reliable enough for lethal autonomous weapons and that AI-driven mass surveillance poses severe risks to civil liberties.

Trump’s Executive Order and Federal Backlash

The dispute intensified following a January operation that led to the capture of Venezuelan President Nicolas Maduro, where Claude was reportedly deployed via a platform operated by defense tech firm Palantir Technologies. Internal questions within Anthropic regarding the model’s usage allegedly alarmed defense officials, deepening mistrust. Signaling the Pentagon’s firm stance, Secretary Hegseth publicly stated that the department would not work with AI systems that restrict wartime deployment.

President Trump subsequently announced on Truth Social that he was directing every federal agency to “IMMEDIATELY CEASE” using Anthropic’s technology. He warned that non-cooperation during the phase-out period could lead to the use of the “full power of the presidency,” including potential civil and criminal consequences. Following this, the General Services Administration (GSA) suspended Anthropic from its USAi chatbot platform and initiated its removal from federal procurement systems. Despite the immediate order, the Pentagon has been granted up to six months to phase out Claude, underscoring how deeply embedded the technology has become within its operations.

Threats and Silicon Valley’s Divided Stance

The Pentagon set a firm deadline for compliance, threatening two major actions: the invocation of the Defense Production Act, a Cold War-era law allowing the government to compel private companies to prioritize defense contracts, and the designation of Anthropic as a “supply chain risk.” Such a designation, typically reserved for firms from adversarial nations, could bar other contractors from commercial activity with Anthropic. Anthropic has vowed to challenge any supply chain risk label in court, calling it intimidation and a dangerous precedent.

This confrontation has sparked rare public unity within Silicon Valley. More than 500 employees from Google DeepMind and OpenAI signed an open letter urging companies not to yield to demands for domestic mass surveillance or autonomous killing. OpenAI chief Sam Altman also confirmed his company’s opposition to mass surveillance and autonomous lethal weapons, indicating exploration of Pentagon contracts that would preserve similar safeguards. This episode exposes a growing divide within the tech sector, with some investors advocating for unrestricted military use of AI, while others warn of eroding democratic safeguards.

The outcome of this high-stakes dispute will likely set a significant precedent, determining not only Anthropic’s future but also who ultimately controls the ethical boundaries and deployment guidelines for advanced AI in warfare and surveillance, thereby shaping the de facto “codex” governing this transformative technology.
