Photo: a courtroom document labeled 'Not Guilty' beside a gavel (KATRIN BOLOVTSOVA / Pexels)
虎嗅 (Huxiu), 2026-03-11

Anthropic sues the Pentagon after threats of seizure and a “supply‑chain risk” label — a clash over AI safety, national security and who draws the line

Lawsuit and immediate fallout

Anthropic has taken the U.S. Department of Defense (the Pentagon) to court after reports that senior Pentagon officials threatened to invoke the Defense Production Act to force the company's cooperation and to formally designate it a "supply‑chain risk." The filing frames the moves as coercive and legally dubious; Anthropic argues that the twin threats, seizing its technology on the one hand and ejecting it from the defense supply chain on the other, are logically inconsistent. Some 38 leading AI scientists and engineers, reportedly including Google's chief scientist Jeff Dean, have submitted a friend‑of‑the‑court letter backing Anthropic's position.

The dispute burst into the open after a highly charged meeting in Washington and a public post from President Trump on Truth Social ordering federal agencies to stop using Anthropic's technology. Pentagon officials reportedly used blunt internal language, with one senior official describing the meeting as "shit‑or‑get‑off‑the‑pot," and the secretary of defense is said to have instructed contracting offices to make an "all lawful uses" clause a standard term in upcoming AI contracts.

How Claude got inside the classified perimeter

Anthropic was founded in 2021 by former OpenAI researchers and incorporated as a public benefit corporation, a structure that embeds safety and societal impact into its governance. Its flagship model Claude and a classified variant, Claude Gov, earned FedRAMP authorization and were reportedly the first external generative models tested inside the Department of Energy's highest classified environments. Through partnerships with Palantir and AWS, including integration with Palantir's Maven platform, Anthropic's models became among the most widely deployed advanced AI systems on U.S. classified networks, and Anthropic negotiated direct agreements with the Defense Department's Chief Digital and Artificial Intelligence Office, with contract ceilings around $200 million.

Tensions grew during expansion talks. The Pentagon pressed for the right to use Claude for "all lawful purposes" across its GenAI.mil infrastructure. Anthropic agreed to most terms but preserved two technical policy limits: the model must not autonomously make lethal decisions, and its deployment must account for the civil‑liberties risks tied to mass information collection and error amplification. Anthropic's suit stresses that these limits are grounded in technical realities (hallucinations, unreliability in adversarial settings, and the limits of current model assurance), not political ideology.

Bigger questions: state power, supply‑chain blacklist and precedent

This clash raises a broader governance question: in life‑and‑death national security contexts, who decides the acceptable boundaries for AI? Private developers, civilian regulators, or military commanders? Labeling Anthropic a domestic "supply‑chain risk" would set an unusual precedent; the Pentagon has reportedly never before publicly applied that label to an American company. The case also illuminates a geopolitical backdrop familiar to Western readers: over the past half decade, governments have used export controls, sanctions and procurement policy to reshape technology supply chains. Could the same levers now be turned inward to compel or exclude AI firms?

For investors and markets, the stakes are concrete: blacklisting or seizure could impose multibillion‑dollar costs and complicate any IPO plans. For policymakers, the litigation will test the limits of executive tools like the Defense Production Act and force courts to reconcile national‑security imperatives with private‑sector autonomy and formal safety commitments. Who draws the red lines in wartime AI? That question is now before a court.
