Anthropic Sues U.S. Government Over National Security Risk Designation

Quick Read

  • Anthropic has filed a federal lawsuit against the Trump administration challenging its designation as a supply-chain risk.
  • The AI firm alleges the government’s actions violate its First Amendment rights and bypass proper legal procedures.
  • The dispute centers on the Pentagon’s use of Anthropic’s AI model, Claude, and the company’s restrictions on its application.

WASHINGTON (Azat TV) – Artificial intelligence company Anthropic has filed a federal lawsuit against the Trump administration, challenging the unprecedented designation of the firm as a risk to the Defense Department’s supply chain. The AI company alleges the move violates its First Amendment rights, exceeds the scope of the relevant statutes, and bypasses proper procedures for canceling government contracts.

Anthropic Alleges Unlawful Retaliation and Rights Violations

In its filing in the U.S. District Court for the Northern District of California, Anthropic’s legal team stated that the company is turning to the judiciary “as a last resort to vindicate its rights and halt the Executive’s unlawful campaign of retaliation.” The lawsuit names several federal agencies and cabinet officials, including the Defense Department and Defense Secretary Pete Hegseth, as defendants. Anthropic also indicated plans to file a separate suit in the U.S. Court of Appeals for the D.C. Circuit.

The dispute stems from a disagreement over the Pentagon’s use of Anthropic’s AI model, Claude. Anthropic CEO Dario Amodei had previously informed Secretary Hegseth that the company would not permit Claude’s use for surveilling American citizens or for autonomous weapons. In response, Hegseth reportedly threatened to label Anthropic a supply-chain risk, a designation typically reserved for entities with ties to U.S. adversaries.

The Pentagon formally designated Anthropic a supply-chain risk on Wednesday, following a social media post from President Donald Trump directing federal agencies to cease using Claude, which has reportedly been employed by the Pentagon in military operations, including in Iran.

Legal Challenges to Supply-Chain Designation

Anthropic’s lawsuit argues that statements made by President Trump and Secretary Hegseth indicate the government’s intent to suppress constitutionally protected speech. “The Constitution confers on Anthropic the right to express its views — both publicly and to the government — about the limitations of its own AI services and important issues of AI safety,” the company’s lawyers asserted.

Furthermore, the company contends that the government has overstepped the legal boundaries of the supply-chain risk designation statute. Anthropic argues that this law is intended for situations where foreign adversaries might sabotage national security systems, and that the government has not established such a risk concerning Anthropic. The firm also claims that Trump and Hegseth exceeded their authority by attempting to terminate government contracts without adhering to established procurement procedures.

Government and Company Responses

A Pentagon spokesperson stated that the department does not comment on ongoing litigation. White House spokesperson Liz Huston released a statement asserting that the president “will never allow a radical left, woke company” to dictate military operations, adding, “Under the Trump Administration, our military will obey the United States Constitution – not any woke AI company’s terms of service.”

Anthropic stated it wishes to continue negotiations with the government, with a spokesperson noting, “We will continue to pursue every path toward resolution, including dialogue with the government.” However, the company also alleged in its suit that the punitive actions are “harming Anthropic irreparably.” This statement appears to contrast with earlier remarks by CEO Dario Amodei, who reportedly told CBS News that the impact of the designation was “fairly small” and the company would “be fine.” Claude has been integrated into the Department of Defense over the past year and was reportedly the only AI model approved for classified systems, with extensive use in military operations.

The legal battle highlights the growing tension between national security concerns, the ethical development of artificial intelligence, and the government’s ability to regulate or restrict the use of powerful AI technologies by domestic companies.