Photo by Lucas Vinícius Pontes on Pexels (black and white photo of Palácio do Planalto, Brazil's government building)
凤凰科技 2026-04-13

Restricting with one hand, promoting with the other: Washington's split-personality approach to Anthropic

Violence, protests and a shrinking margin for error

It has been reported that in April a 20-year-old man, identified in open sources as Moreno‑Gama, hurled a homemade Molotov cocktail at the San Francisco home of OpenAI chief Sam Altman, narrowly missing the occupants and scorching only a doorframe. Within 48 hours, a separate drive‑by shooting was recorded in the same neighbourhood; police later arrested two suspects. Campaigners and commentators have framed the incidents as the first violent spillovers of a larger and increasingly organised global anti‑AI movement, one that ranges from disciplined lobbying groups such as PauseAI to hardline offshoots that openly call for escalation.

Protest actions have also targeted other major labs: demonstrators staged prolonged vigils and even hunger strikes outside Anthropic’s headquarters, and activists have pushed for international moratoria and an IAEA‑style global regulator for advanced AI. At the same time, author and artist unions, environmental groups worried about the energy demands of data centres, and grassroots “quitGPT” consumer campaigns are pursuing non‑violent pressure through lawsuits, strikes, and local zoning fights. Who is responsible for calming this swirl of fear and anger — industry, courts, or government?

Split federal posture complicates corporate and public safety

Washington’s response has been fractured. On one hand, federal agencies and lawmakers have tightened export controls and debated curbs on certain technologies, signalling an appetite to limit what flows to geopolitical competitors. On the other, it has been reported that U.S. procurement, venture funding and regulatory carve‑outs continue to favour domestic AI champions such as Anthropic, a posture that looks like encouragement even as policymakers promise guardrails. The result is a confusing message: regulate and restrict some parts of the market while accelerating others.

That split matters beyond policy coherence. It raises a practical question about protection and accountability: if the state tacitly supports rapid domestic development, does it also assume greater responsibility for shielding company executives and staff from politicised threats? Activists argue for stronger oversight and a global safety regime; companies warn that heavy‑handed curbs could push innovation offshore. For Western readers trying to make sense of U.S.–China tech tensions, this dispute plays out alongside trade and sanctions regimes: geopolitics has turned AI into both an economic prize and a public‑safety headache, at home and abroad.
