AI-Agent Competition: U.S. Giants Lose Momentum as China’s 'Longxia' Craze Builds New Barriers
The new contest over AI agents
A quiet shift is under way in the contest to dominate AI agents. U.S. firms still lead on chips, cloud infrastructure and cutting‑edge models. But momentum is cooling in some quarters as a raft of China‑focused agent initiatives — colloquially dubbed the "Longxia" (龙虾, "lobster") craze — builds localized defensive moats that could blunt foreign entrants. Whichever players define the agent runtime, tool APIs and orchestration rules may end up capturing the most commercial value.
Nvidia founder Jensen Huang has framed the change as a realignment of the physical stack: energy, chips, infrastructure, models and applications form a chain in which lower‑level constraints propagate upward. Nvidia has reportedly been trying to reposition itself from pure GPU vendor to "agent infrastructure" provider by open‑sourcing a platform called NemoClaw, with an accompanying large model reportedly planned for 2026–27. At the same time, domestic Chinese projects such as DeepSeek‑V4 are reported to be training increasingly on Huawei (华为) Ascend chips, reflecting Beijing's push to raise domestic compute capability despite international supply‑chain limits on high‑end HBM and advanced semiconductors.
Local moats, global consequences
China’s Longxia wave is not just an engineering story. It combines localized data access, deep integrations with dominant domestic platforms, tighter regulatory alignment and niche agent ecosystems in which "good enough" models, paired with sophisticated workflow tooling, win commercial adoption. Some domestic open‑source projects have reportedly spawned dozens of proprietary forks that quickly closed off their ecosystems and monetized their own token economies, creating network effects that are hard for outsiders to penetrate. For many Chinese enterprises, model parity with global leaders matters less than reliable, cost‑effective automation that conforms to local compliance rules and product habits.
Geopolitics matters. U.S. export controls and sanctions have tightened access to certain advanced chips and HBM memory, constraining some aspects of China's training capacity even as China retains strengths in electricity infrastructure and data center scale. The net result may be a bifurcated landscape: Western firms remain strong on top‑end research and hardware‑heavy workloads, while Chinese players optimize the inference stack, tooling, and platform linkages that run real business processes at scale.
Reliability and the next phase
Production reliability, not raw model IQ, is emerging as the decisive battleground. DeepLearning.AI founder Andrew Ng (吴恩达) has reportedly emphasized the large gap between experimental models and production‑grade reliability in high‑risk enterprise uses. That observation points to where vendors can still win: robust workflow orchestration, verification at scale, and well‑defined APIs for tool calling. If control over scheduling and orchestration confers upstream bargaining power, then the firms that own the agent runtime and the enterprise integrations will shape AI agent economics — and national tech strategies will shape who gets there first.
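To make "well‑defined APIs for tool calling" concrete, here is a minimal Python sketch of the pattern: a runtime advertises a machine‑readable schema of available tools to a model, then validates and dispatches the tool calls the model emits. Every name here (`ToolRegistry`, `get_invoice_status`, the call format) is illustrative, not the API of any vendor mentioned in the article.

```python
# Illustrative sketch of an agent tool-calling contract; names are hypothetical.
import json
from dataclasses import dataclass
from typing import Any, Callable

@dataclass
class Tool:
    name: str
    description: str
    parameters: dict          # JSON-Schema-style parameter spec shown to the model
    fn: Callable[..., Any]    # the actual implementation the runtime executes

class ToolRegistry:
    """Registers tools and dispatches model-issued tool calls."""

    def __init__(self) -> None:
        self._tools: dict[str, Tool] = {}

    def register(self, tool: Tool) -> None:
        self._tools[tool.name] = tool

    def spec(self) -> list[dict]:
        """Schema advertised to the model so it knows what it may call."""
        return [{"name": t.name, "description": t.description,
                 "parameters": t.parameters} for t in self._tools.values()]

    def dispatch(self, call: dict) -> str:
        """Execute a model-issued call shaped like {"name": ..., "arguments": {...}}."""
        tool = self._tools[call["name"]]          # unknown tools raise KeyError
        result = tool.fn(**call["arguments"])
        return json.dumps({"tool": tool.name, "result": result})

# Example: register one tool and handle a call a model might emit.
registry = ToolRegistry()
registry.register(Tool(
    name="get_invoice_status",
    description="Look up the status of an invoice by ID.",
    parameters={"type": "object",
                "properties": {"invoice_id": {"type": "string"}},
                "required": ["invoice_id"]},
    fn=lambda invoice_id: {"invoice_id": invoice_id, "status": "paid"},
))

print(registry.dispatch({"name": "get_invoice_status",
                         "arguments": {"invoice_id": "INV-1001"}}))
```

Whoever controls this contract — the schema format, the dispatch semantics, the error handling — sits between models and enterprise systems, which is why the article treats the runtime layer as a source of bargaining power.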
