Anthropic Files Lawsuit Against U.S. Government Over AI Risk Claim
Anthropic, a leading AI developer, has launched a groundbreaking lawsuit against the U.S. government, challenging its designation as a “supply chain risk.”
This legal battle stems from disputes over AI usage restrictions in military contracts. The case highlights tensions between AI ethics and national security demands.
The conflict erupted when Defense Secretary Pete Hegseth demanded Anthropic remove safeguards from its AI tools, like Claude, for military applications.
Anthropic refused, citing risks of “lethal autonomous warfare” and “mass surveillance of Americans.” These limits have been standard in its government contracts since Claude’s deployment in classified networks in 2024.
Public clashes intensified as President Donald Trump called Anthropic’s leaders “left-wing nut jobs” and ordered agencies to halt use of the company’s tools. Hegseth then labeled the firm a “supply chain risk,” a designation typically reserved for foreign adversaries, banning its technology from defense work and restricting contractors who use it.
Filed Monday in California federal court, the suit names Trump’s executive office, Hegseth, Secretary of State Marco Rubio, Commerce Secretary Howard Lutnick, and 16 agencies, including the Department of War (formerly Defense), Homeland Security, and Energy, as reported by the BBC.
Anthropic calls the actions “unprecedented and unlawful,” arguing they punish protected speech without a statutory basis. The complaint details failed negotiations: Anthropic offered compromises on contract language, but talks collapsed amid public criticism.
The suit seeks no monetary damages; instead, it asks the court to rule that the directive exceeds presidential authority and violates the Constitution, and to remove the risk designation.
Claude powers intelligence analysis, targeting, and simulations in classified settings, often alongside partners such as Palantir. Anthropic stresses that current frontier AI is not reliable enough for autonomous weapons, putting warfighters and civilians at risk, and opposes mass surveillance as a violation of civil rights.
The risk label threatens “irreparable harm,” Anthropic argues, jeopardizing millions of dollars in contracts as well as its reputation. Major firms such as Google, Meta, Amazon, and Microsoft use Claude Code extensively and plan to continue non-defense adoption despite the bans.
A White House spokesperson dismissed Anthropic as a “radical left, woke company” defying military needs it is constitutionally obliged to support. Nearly 40 Google and OpenAI employees filed a brief supporting Anthropic, a rare cross-political consensus that AI risks require guardrails.
Legal expert Carl Tobias predicts a “scorched earth” defense from the government, one that could reach the Supreme Court. The case could set precedents for AI governance, free speech in government contracting, and supply chain rules rarely applied to U.S. firms.
This feud underscores AI’s dual-use challenge: enabling defense applications while curbing potential harms. The outcome could reshape federal AI procurement and the ethics of AI deployment.