Huxiu (虎嗅) · 2026-03-27

Five AI leaders unpack OpenClaw: lobsters, tokens and the next phase for models

OpenClaw’s moment

A fast‑spreading open‑source agent framework called OpenClaw is forcing a rethink of what large language models can do — and who gets to use them. At an open‑source panel at the Zhongguancun Forum (中关村论坛), Moonshot AI (月之暗面) founder Yang Zhilin moderated a wide‑ranging discussion with Zhipu AI (智谱) founder Zhang Peng, Wuwen Xinqun (无问芯穹) founder Xia Lixue, Xiaomi’s (小米) MiMo large‑model lead Luo Fuli, and University of Hong Kong (香港大学) assistant professor Huang Chao. What began as a conversation about “what a lobster can do” — a playful reference to the agent’s mascot — quickly turned into a debate about product form, pricing and infrastructure.

From “JARVIS” fantasies to scaffolding for everyone

Panelists described OpenClaw variously as a personal JARVIS, a flexible scaffolding, and a lightweight operating system for models. Zhang said the real breakthrough is the democratization of top‑tier model capabilities — especially coding and agentic skills — so that non‑programmers can turn ideas into working workflows through simple dialogue. Xia argued that OpenClaw changes both the cost and the shape of what is imaginable: agentic loops and task orchestration demand far more sustained context and compute than conversational models do, and that shift is already visible in usage patterns. Wuwen Xinqun’s token consumption has reportedly doubled roughly every two weeks since January and has risen about tenfold overall — growth Xia likened to the early days of mobile data.

Pricing, GLM‑5‑Turbo and the inference squeeze

Zhipu AI’s GLM‑5‑Turbo and the price adjustments that came with it featured prominently. Zhang framed the model update as a deliberate move from “dialogue” toward “doing”: models must now plan, compress long contexts, debug, and handle multimodal inputs, all of which raises inference cost. He argued that higher prices reflect longer, more complex reasoning chains and are necessary to sustain investment. Panelists converged on a tension familiar to Western readers watching China’s AI scene: exploding demand for inference at a time of constrained access to top‑end chips and tightening export controls. The result is a business and infrastructure bottleneck — who pays, and who builds the more efficient inference stack?

Open source, ecosystem and what comes next

OpenClaw’s open‑source status was repeatedly flagged as pivotal: it lets the community push weaker but practical domestic models upward by building richer harnesses and skills, and it invites contributions from non‑researchers. The big strategic question remains open: does the industry need a single all‑in‑one super‑agent, or an OS‑like scaffolding that composes many specialized tools? Panelists leaned toward the latter, arguing that a modular, community‑driven layer will unlock creative use cases even as the industry wrestles with pricing and compute. Against the backdrop of geopolitical pressure on chips and cloud services, the conversation made clear that China’s next twelve months will be shaped as much by open ecosystems and inference efficiency as by model size.
