虎嗅 (Huxiu) · 2026-03-17

On the cusp: The excitement and hidden concerns of 'raising lobsters'

Breakthrough and buzz

A new class of open-source AI agents exemplified by OpenClaw has ignited a wave of enthusiasm in China — what the media call the “raising lobsters” (养龙虾) craze. These tools promise to turn large language models from chatty assistants into autonomous “digital employees” that can manage files, send emails and run workflows on a user’s local PC. The payoff is clear: more capable one‑person companies and faster, individualized productivity. But at what cost? Security experts and lawyers are already sounding alarms.

What OpenClaw does and why it matters

OpenClaw is an orchestration layer that integrates communication apps and LLMs to perform complex tasks on a user's local machine, turning tacit knowledge into reusable "skills." Social attention has exploded: Chinese social-media indices tracking OpenClaw mentions jumped from near zero to millions within weeks, and local governments in Shenzhen, Wuxi and Hefei have issued supportive policies to incubate "AI + super-individual/one-person company" models. Corporations and incubators such as iFlytek (科大讯飞) and regional branches of China Unicom have begun offering deployment and security support as the technology moves from online demos into real workplaces. All of this unfolds against a backdrop of intensified global competition over AI and export controls on advanced chips, which is nudging China to accelerate its domestic AI ecosystems and open-source tooling.

Rising risks and official alerts

The flip side is a string of high-severity warnings. The National Cyber and Information Security Information Notification Center (国家网络与信息安全信息通报中心) issued a safety alert citing weak default configurations, plugin risks and opaque privilege models. The NVDB, hosted by the Ministry of Industry and Information Technology (工业和信息化部), and the National Internet Emergency Center (国家互联网应急中心) published precautionary guidance and flagged the potential for attackers to seize full control of affected systems. Some independent security audits have reportedly found overall safety pass rates below 60%, and users have reported stolen keys, exposed private data and accidental deletions. Lawyers warn that misconfigured agents running on corporate networks could create legal exposure, from data-breach liability to criminal charges if production systems are disrupted. Fraudsters are also reportedly repackaging "AI agent" narratives as investment or subsidy scams.

Experts urge measured momentum

Researchers call OpenClaw-style agents a meaningful technological step — they make implicit expertise explicit and could reshape how work is done — but stress that enthusiasm must be matched by governance. Academics and industry veterans urge standardized deployment practices, stronger default security, algorithm filing and clearer legal frameworks (including the use of existing laws such as the Cybersecurity Law) to manage systemic risk. The policy lesson is blunt: don't ban the lobster pot, but don't leave it unattended. If China's nascent "digital employee" economy is to scale, safety, auditability and regulation will have to catch up quickly.
