Sun-dried shrimps on a blue mat by the water in Kampot, Cambodia.
Photo by Sokmeas UY on Pexels
虎嗅 (Huxiu), 2026-03-09

Everyone Raising Shrimp: Why OpenClaw Has China Hooked

A grassroots rush meets corporate strategy

OpenClaw, an open‑source agent framework, has ignited an unlikely movement in China: kids bringing MacBooks to queue at Tencent (腾讯) engineer booths, viral short videos on Douyin. Appointment slots for hands‑on installs reportedly sold out by mid‑morning, and OpenClaw’s GitHub repo reportedly gathered roughly 250,000 stars in three weeks. This is not just a hacker party. Alibaba (阿里巴巴) is offering “one‑click cloud” deployment, Xiaomi (小米) is embedding agents across phones and home devices, and ByteDance (字节跳动) and Tencent are actively trialing ways to push agent workloads into their clouds. Why the urgency? Because OpenClaw promises persistent, automated workloads that finally turn idle datacenter GPUs into recurring revenue.

Token economics and the prize of task data

For Western readers, China’s AI ecosystem runs on a different cost logic: domestic inference APIs are far cheaper, and local electricity and hardware lower per‑call costs. Heavy OpenClaw deployments reportedly consume millions to hundreds of millions of tokens per day, a bill that dwarfs a $20/month chat subscription but creates continuous cloud demand. Analysts say the real attractor is not per‑call revenue alone but the task‑trajectory data agents produce: step‑by‑step records of searches, tool calls and corrections that are gold for training next‑generation agent models. OpenRouter data reportedly shows Chinese models’ share of token consumption jumping sharply into the high‑30s percentage range, a sign that today’s “token outflow” looks less like export and more like a global service model: compute stays in China while distilled model capability is exported.
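To give a rough sense of the scale involved, here is a back‑of‑envelope calculation. The per‑token price is an illustrative assumption, not a figure from the article; real domestic Chinese API pricing varies and is often lower.

```python
# Back-of-envelope monthly cost of a heavy agent deployment.
# ASSUMED price: $0.50 per million tokens (illustrative only).
PRICE_PER_MILLION_TOKENS = 0.50  # USD, assumed

def monthly_cost(tokens_per_day: float, days: int = 30) -> float:
    """Monthly spend in USD for a given daily token burn."""
    return tokens_per_day / 1_000_000 * PRICE_PER_MILLION_TOKENS * days

# An agent burning 100 million tokens/day at the assumed price:
print(f"${monthly_cost(100_000_000):,.0f}/month")  # → $1,500/month
```

Even at this conservative assumed price, a single always‑on deployment spends in a month what a chat subscriber spends in years, which is why clouds care more about agent workloads than about $20/month plans.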

Chips, control and the new entrance war

The shift from occasional chat to sustained agent workloads breaks the old GPU‑centric calculus. Agentic AI means high‑concurrency, low‑batch, always‑on inference: a different engineering problem that favors inference‑optimized chips, KV‑cache management and new roles for CPUs. NVIDIA has reportedly moved aggressively, acquiring Groq assets for roughly $20 billion and pushing Vera CPUs and other stack changes to close the efficiency gap. Against a backdrop of US export controls on advanced chips, China’s low‑cost inference stack and rapidly growing agent deployments carry geopolitical as well as commercial weight. Whoever controls the agent on your phone, and thus the routing of intent and commerce, decides which platforms get paid. So the question is no longer whether agents will arrive, but which companies will own the intent‑distribution layer when they do.
