虎嗅 2026-03-27

Senior AI figures praise OpenClaw as agents shift China’s AI stack — but compute pain looms

Event and panel

A high‑level roundtable convened by Yang Zhilin (杨植麟) brought together voices from across China’s AI chain to discuss OpenClaw, agents and the infrastructure question. Guests included Zhang Peng (张鹏) of Zhipu (智谱), Xia Lixue (夏立雪) of Wuwen Xinqun (无问芯穹), Luo Fuli (罗福莉) of Xiaomi MiMo (小米MiMo) and Huang Chao (黄超) from the University of Hong Kong (港大). The session sat alongside announcements of an open‑source alliance, a “sovereign” large‑model white paper and the launch of the Beijing AI Association — signalling Beijing’s twin push for community development and domestic capability.

Why OpenClaw matters

Panelists portrayed OpenClaw as more than a tool: an open‑source agent scaffold that makes advanced model capabilities accessible to non‑programmers and accelerates real‑world workflows. Zhang described it as a “scaffold” that turns chat into actionable work; Luo argued its open‑source design and Skills/Harness model raise both the floor and the ceiling for domestic agents; Huang said the IM‑style interface gives users an “alive” interaction; and Xia warned that agents bring much heavier system demands. It has been reported that OpenClaw is currently the most talked‑about product in China’s agent space — and the panelists’ comments underscored why: it widens participation and shifts attention from model research to application design and ecosystems.

Costs, models and geopolitics

The clearest friction? Compute and cost. Zhipu’s recent GLM5 Turbo update, Zhang said, targets long‑horizon tasking and efficiency for persistent agent loops; to reflect that workload, Zhipu has raised inference prices — a move Zhang framed as a market correction, because complex tasks can consume ten to a hundred times more tokens than simple Q&A. It has been reported that some infrastructure providers saw token use doubling every two weeks and rising tenfold over a short period, forcing urgent efficiency work. Xia explained that her company is stitching together dozens of domestic clusters and chip types to squeeze more output from limited hardware. Why does this matter beyond product teams? Because China’s twin goals of open ecosystems and “sovereign” models are unfolding against a backdrop of export controls and U.S. chip restrictions, making domestic compute strategy both a technical and a geopolitical imperative.

The panel closed bluntly and practically: one‑word forecasts ranged from “ecosystem” to “self‑evolution” to “sustainable tokens” — and Zhang simply said “compute.” OpenClaw and similar frameworks are redirecting the conversation from model size to agent design and infrastructure economics; the question now is operational: who funds the inference boom, and can China’s domestic stack scale fast enough?
