OpenClaw effect: explosion in AI token use adds fuel to Chinese AI development
OpenClaw effect and the numbers
China’s sudden rush to adopt OpenClaw, an open‑source AI agent, has driven a seismic jump in what the industry calls token consumption, a count of the basic units of text that AI models process. The country’s National Data Administration (国家数据局) told delegates at the Zhongguancun Forum that daily token use surged from roughly 100 billion in early 2024 to more than 140 trillion by March, a rise of more than 1,400‑fold. Why does that matter? Tokens are a direct proxy for computing demand and cost: more tokens mean more infrastructure and more spending.
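The reported figures can be sanity-checked with simple arithmetic. The daily token volumes below come from the article; the per‑million‑token price is purely an illustrative assumption, not a figure from any source:

```python
# Back-of-the-envelope check of the reported token-growth figures.
# Daily volumes are from the article; the price per million tokens
# is an ASSUMPTION for illustration only, not a reported number.

early_2024_daily_tokens = 100e9   # ~100 billion tokens/day (reported)
march_daily_tokens = 140e12       # >140 trillion tokens/day (reported)

growth_factor = march_daily_tokens / early_2024_daily_tokens
print(f"growth: {growth_factor:,.0f}x")  # → growth: 1,400x

# Hypothetical cost illustration: at an assumed $0.10 per million
# tokens, 140 trillion tokens/day would imply this daily spend.
assumed_price_per_million_usd = 0.10
daily_cost_usd = march_daily_tokens / 1e6 * assumed_price_per_million_usd
print(f"illustrative daily spend: ${daily_cost_usd:,.0f}")  # → $14,000,000
```

The second figure shows why token counts translate directly into infrastructure pressure: even at a low assumed unit price, volumes at this scale imply spending in the millions of dollars per day.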
Industry executives say the spike is concentrated in agents that perform real tasks for users rather than just chat. Xia Lixue, co‑founder and CEO of a Beijing AI computing provider, said token consumption on his platform had been doubling every two weeks and had grown roughly tenfold since late January. Zhipu AI (智谱AI)’s CEO has reportedly argued that the surge supports price increases that will, in turn, sustain investment in bigger and better models.
Capacity, prices and geopolitics
The boom is a mixed blessing. Reportedly, the OpenClaw frenzy is accelerating model development and commercialisation, but it is also straining datacentres, GPUs and software stacks that were provisioned for far lower loads. Will hardware keep up? That is the key question. Domestic suppliers and cloud providers face pressure to scale quickly while keeping costs under control, and higher token prices could make that scaling economically feasible.
Geopolitics shades the technical story. Amid U.S. export controls on advanced chips and broader technology competition, China has been incentivised to build more of its AI stack in‑country — from models to edge agents and supporting infrastructure. The token boom underlines both the scale of domestic demand and the urgency of that push. For Western observers unfamiliar with China’s tech ecosystem: this is not just a software fad, but part of a strategic race to convert user adoption into sustained, homegrown compute and model capability.
