Phoenix Tech (凤凰科技) 2026-03-10

Stop fiddling with OpenClaw — the weekend you spent only soothed your AI anxiety

What the advisory says

China’s National Internet Emergency Response Center (国家互联网应急中心) has issued a risk advisory about the AI agent OpenClaw (formerly Clawdbot, Moltbot), warning that the tool carries “serious security hazards.” The app, which reportedly gained traction because major domestic cloud vendors offered one‑click deployment, requires elevated system privileges — including access to local file systems and external APIs — and its default security configuration is weak. Was the weekend of tinkering worth it? Security authorities say the convenience comes with real danger.

The risks in plain terms

The advisory breaks the threat into four buckets. First, it has been reported that attackers can exploit configuration flaws to gain full system control. Second, misinterpretation of user intent by the agent can result in catastrophic accidental actions, such as deleting emails or core production data. Third, plugin poisoning is already a reality: multiple malicious extensions have reportedly been found that can steal credentials or deploy backdoors, turning devices into botnets. Finally, a number of medium‑to‑high severity vulnerabilities have been publicly disclosed, threatening payment accounts, private documents and even the confidentiality and availability of critical‑sector code repositories and business systems. The advisory notes that OpenClaw’s one‑click availability on mainstream domestic clouds such as Alibaba Cloud (阿里云), Tencent Cloud (腾讯云) and Huawei Cloud (华为云) has amplified its attack surface.

Recommendations and geopolitical context

Security experts recommend immediate mitigation: do not expose default management ports to the public internet; run the agent in isolated containers with least privilege; never store plaintext keys in environment variables; implement comprehensive operation logs and audits; strictly vet plugin sources, disable automatic updates and install only signed extensions; and apply official patches as they are released. Beyond operational guidance, there is a bigger point: with Western export controls and geopolitical tensions accelerating China’s push for an independent AI stack, domestically developed agents are becoming infrastructure, not just experiments. That raises stakes — insecure tooling now poses commercial, privacy and even national‑security risks.
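To make the isolation and secret‑handling advice concrete, here is a minimal sketch of a hardened container launch. The image name (`openclaw/agent`), management port (8080) and file paths are illustrative placeholders, not details from the advisory:

```shell
# Hypothetical hardened launch for an OpenClaw-style agent.
# Image name, port, and paths are placeholders for illustration.
#
# --read-only                 immutable root filesystem
# --cap-drop ALL              drop all Linux capabilities (least privilege)
# --security-opt ...          block privilege escalation inside the container
# --user 1000:1000            run as an unprivileged user
# -p 127.0.0.1:8080:8080      management port bound to loopback, not the public internet
# -v .../api_key:ro + -e      key mounted as a read-only file instead of a
#                             plaintext environment variable
docker run -d --name openclaw-agent \
  --read-only \
  --cap-drop ALL \
  --security-opt no-new-privileges \
  --user 1000:1000 \
  -p 127.0.0.1:8080:8080 \
  -v "$PWD/secrets/api_key:/run/secrets/api_key:ro" \
  -e API_KEY_FILE=/run/secrets/api_key \
  openclaw/agent:latest
```

These flags cover only the isolation and secret‑handling points; operation logging, audit trails and plugin signature vetting would still need to be configured within the agent itself.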
