虎嗅 2026-03-28

Anthropic's guardrails: no VIP lanes — not even the Pentagon is exempt

Standoff over guardrails

It has been reported that President Trump ordered all federal agencies to immediately stop using Anthropic following a public dispute over the company's refusal to carve out a safety exemption for U.S. defense uses. The dispute centers on Anthropic's decision to apply the same safety restrictions to every customer (commercial, academic and military alike) and to decline demands from the U.S. defense establishment to remove those limits for its contracts. Anthropic had reportedly signed a prototype transition agreement with the Department of Defense in July 2025 worth up to $200 million and, the company says, was the first major Silicon Valley AI provider to deploy its Claude model inside classified government networks for tasks such as intelligence analysis and operational planning.

Legal and political escalation

The Department of Defense (reportedly styled the "Department of War" in some memos) demanded that all defense AI contracts include an "Any Lawful Use" clause, which would bar contractors from unilaterally restricting military deployments. Anthropic's CEO Dario Amodei publicly argued that large-scale domestic surveillance and the delegation of lethal authority to unreliable systems are incompatible with democratic norms and would pose unacceptable risks. In response, Defense Secretary Pete Hegseth labeled Anthropic a "national security supply-chain risk," banned defense contractors from working with the firm, and threatened further escalation, up to invoking the Cold War-era Defense Production Act. Trump reportedly branded Anthropic a "radical left" company; hours later, OpenAI announced it had secured a written agreement with the Pentagon and a new contract, highlighting the divergent strategies among AI firms.

Why it matters

This showdown is more than a contract dispute. It exposes a structural governance dilemma: unlike traditional hardware, AI policy is enforced in code, which makes private companies de facto gatekeepers of what sovereign militaries can do, a role that is neither democratically mandated nor legally settled. Anthropic says it will challenge its supply-chain designation in court, arguing that such measures have historically targeted foreign entities like Huawei and are unprecedented for a domestic vendor. If OpenAI can commit to written safety principles and still win Pentagon business, was Anthropic's refusal a principled stand or a strategic misstep? The answer will matter for future bids, export controls, and how democracies decide who gets the last say over AI's use in war and surveillance.
