Close-up view of a computer displaying cybersecurity and data protection interfaces in green tones.
Photo by Tima Miroshnichenko on Pexels
ifeng Tech (凤凰科技) 2026-03-07

Vercel CEO warns AI “hallucinations” pose security risk after Claude allegedly invents breach

An urgent alarm over fabricated claims

Vercel CEO Guillermo Rauch has issued a stark warning about the risks of generative AI “hallucinations” after Anthropic’s Claude reportedly fabricated security-related claims about his company and staff. In posts amplified by Chinese media outlet ifeng (凤凰网), Rauch cautioned that convincingly written falsehoods generated by large language models could prove “more terrifying” to organizations than human intruders. If an AI can confidently invent a breach, what does that mean for incident response?

What reportedly happened

Rauch reportedly shared examples on X (formerly Twitter) suggesting Claude had produced detailed but baseless statements about Vercel or its employees. The incident underscores a growing concern: advanced chatbots can generate authoritative-sounding misinformation about real people and companies without any factual basis. Anthropic’s Claude 3 models, launched in 2024 and marketed on safety and reliability, are not alone in this problem: hallucinations have been documented across the industry, from OpenAI’s ChatGPT to Google’s Gemini.

Why it matters for the industry

Vercel, which hosts front-end deployments and maintains the popular Next.js framework used by millions of developers, sits in a critical part of the web stack. False AI-generated claims about security incidents or employee conduct can spur internal panic, damage reputations, and even be weaponized for social engineering. The episode echoes prior high-profile AI defamation mishaps and adds urgency to calls for tighter guardrails, provenance tools, and clearer accountability around general-purpose AI.

The regulatory backdrop, and China’s lens

While the United States lacks a comprehensive AI law, the EU AI Act is set to impose obligations on general-purpose models, including transparency and risk management. China, meanwhile, has rolled out interim rules for generative AI that emphasize accuracy, watermarking, and provider responsibility as local champions like Baidu (百度), Alibaba (阿里巴巴), and Tencent (腾讯) scale their own systems. Beijing’s focus on curbing misinformation helps explain why Rauch’s warning is resonating in Chinese tech media. The broader question remains: who bears responsibility when an AI confidently invents a lie that moves markets—or triggers a security scramble?

AI · Space · Policy · E-Commerce