Close-up of toy soldiers in military uniform standing in a simulated environment with blurred background.
Photo by Prakash Chavda on Pexels
Huxiu (虎嗅), 2026-04-06

Everyone Believes AI Fought in the War, and Everyone Wants a 'Lobster' of Their Own

The viral narratives

The key story is not new technology but a pair of competing myths: that large language models secretly ran battlefield decisions, and that a cute open-source agent — nicknamed "小龙虾" (literally "little lobster", the Chinese word for crayfish) — will solve everyone's work and life problems if you simply install it. A sensational article claiming "Claude and Palantir killed Khamenei" reportedly swept Chinese social media, spawning thousands of reshared posts and bite-sized video explainers. The piece reads like a techno-thriller: petabytes processed in 90 minutes, drones switching AI "brains" mid-flight, zero casualties. It is compelling. But many of its technical assertions do not withstand basic scrutiny.

Cloud outages, intelligence work and causal leaps

A related viral chain linked an attack on an AWS Middle East facility to a global outage of Anthropic's Claude and presented this as proof that the war had directly felled the model. Claude has reportedly been used in certain government intelligence workflows, and there are documented instances of models assisting analysts — but engineering realities matter. Cloud services run on distributed, redundant architectures; outages are far more often caused by authentication bugs, routing rules, or application-level failures than by a strike on a single regional data center. Likewise, modern military targeting combines human intelligence (HUMINT), signals intelligence, and multi-source verification; attributing a strike to a single model is a gross oversimplification that feeds public fear more than it clarifies policy.

The 'lobster' craze and the cost of FOMO

On the consumer side, an open‑source AI agent framework called OpenClaw — popularly called "小龙虾" in China — vaulted to hundreds of thousands of GitHub stars and into offline install parties from Shanghai to provincial towns. What began as an enthusiast project promising new interaction paradigms has morphed into a mass installation frenzy: free setup booths, influencer demos, even reported local government promotion. The problem? Many users treat installation as an end in itself. Without technical literacy, configuration and security hygiene, these agents can leak data, delete files, or be abandoned as noisy, ineffective tools. Who benefits from the spectacle — curious citizens, speculators, platforms, or policymakers distracted from real risks?

Why this matters beyond clicks

This is more than an internet meme. When sensational narratives replace sober explanation, public debate on regulation, export controls and the real military use of AI becomes distorted — at a time when sanctions, tech‑transfer rules and tech rivalry between the U.S., China and other powers are already reshaping supply chains. The mix of marketing, FOMO and genuine capability creates both inflated fears and misplaced trust. The remedy is simple but hard: slow down, demand sources, and separate what AI actually did from what people say it did. Otherwise, the lobster will remain just that — a flamboyant garnish that hides the meal.
