虎嗅 (Huxiu) · 2026-03-28

The Silicon Valley–Washington consensus is breaking apart

Government bans a homegrown AI lab after a fight over military uses

The U.S. federal government has moved to effectively cut Anthropic out of the national security supply chain, an unprecedented break with a leading domestic AI firm. President Trump ordered all federal agencies to stop using Anthropic's Claude models and gave the company six months to exit government service, while Defense Secretary Hegseth designated Anthropic a "national security supply-chain risk." Pentagon contractors have reportedly been told to sever commercial ties immediately.

A safety-first lab at odds with Washington’s demand for “all lawful uses”

Anthropic was founded in 2021 by former OpenAI researchers Dario Amodei and Daniela Amodei on a platform of safety and "Constitutional AI," an approach that trains models to internalize a written set of principles rather than relying solely on human-labeled feedback. Anthropic reportedly raised about $30 billion recently and commands a valuation near $380 billion. Its Claude model has reportedly been deployed in some classified military networks and used to support operations such as the alleged attempt to capture Venezuelan President Nicolás Maduro. Those reports, together with Anthropic's insistence that Claude not be used for mass domestic surveillance or integrated into fully autonomous lethal weapons, are at the heart of the rupture.

Ideology, procurement, and the politics of safety collide

The dispute is more than a contract fight. It splits an ecosystem in which Silicon Valley's safety-oriented researchers and boards (influenced by longtermist and Effective Altruism networks) increasingly clash with a Washington that demands broader operational access and fewer limits on national-security deployments. Critics in the administration have framed Anthropic's stance as political or "woke" obstruction; Anthropic's leadership frames it as engineering discipline and democratic stewardship. Observers compare the showdown to the 2023 governance crisis at OpenAI, but this time the executive branch has deployed supply-chain tools typically reserved for foreign adversaries.

What’s at stake for U.S. AI strategy?

Will this schism fragment the U.S. approach to frontier AI, pushing companies to choose patrons — commercial markets, state contracts, or international buyers — based on political alignment? Or will the government’s hard line reassert control over how powerful models are used in security contexts? Either way, the episode signals that the Silicon Valley–Washington equilibrium that shaped the first wave of generative AI is under real strain, with implications for procurement, export policy, and the global tech race.
