An older man engages in a strategic chess game with a robotic arm, illustrating the blend of tradition and technology.
Photo by Pavel Danilyuk on Pexels
凤凰科技 (Phoenix Tech) 2026-03-12

A Silicon Valley split: Anthropic vs. the Pentagon raises stakes for AI governance

A rare, public showdown

A U.S. Department of Defense move to label AI startup Anthropic as a “supply‑chain risk” has fractured Silicon Valley, forcing tech giants to pick sides in a dispute over when and how powerful AI should be used by the military. Microsoft publicly sided with Anthropic in court, warning that the Pentagon’s “extreme” action could have “broad negative effects” on the U.S. tech sector. Anthropic, the target of the designation, has sued the Pentagon, calling the move “unprecedented and unlawful” and saying it has caused irreparable harm.

Rivalry and opportunity in the defense market

The conflict has opened immediate business and political opportunities for competitors. Google has reportedly accelerated plans to deploy its new AI agents across the Pentagon's non‑classified offices and is said to be negotiating an expansion into more sensitive environments. OpenAI also moved quickly to announce a Department of Defense deal, claiming stronger safety guards than in previous deployments, an announcement that drew backlash and, reportedly, a spike in ChatGPT uninstall rates. The designation and the ensuing exclusions are especially costly for Anthropic, which was founded by ex‑OpenAI executives and was, until late February, the only vendor operating inside DoD classified cloud environments; its valuation has been reported in the hundreds of billions of dollars.

National security label, political signal

Labeling a domestic AI firm a “supply‑chain risk” is striking because that designation has historically been reserved for foreign adversaries. Analysts warn the move could politicize procurement and chill companies from building ethical safeguards if doing so risks exclusion from lucrative government contracts. An internal Pentagon memo reportedly now allows limited exemptions for units that deem certain AI tools critical to national security, underscoring the practical difficulty of removing Anthropic from defense systems entirely.

Governance questions remain

Experts say the episode exposes a broader governance gap: who decides which military uses of AI are acceptable, and how will commercial contract terms, corporate ethics, and national security be reconciled? Some employees at OpenAI and Google have reportedly joined public opposition to the Pentagon's tactics, arguing the government is sowing fear to split the industry. The dispute will matter far beyond one lawsuit, shaping AI research incentives, the form of civil‑military partnerships, and the global race over responsible AI deployment. Who gets to “tame” AI when it is both commercially vital and a matter of national security? The answer remains contested.
