凤凰科技 (Ifeng Tech) · 2026-04-01

Leaked Claude Code reportedly reveals 87 unreleased features, exposing Anthropic's ambitions

What leaked and what it shows

A leaked copy of Claude Code’s source reportedly contains 87 unannounced features, giving an unusually detailed view into Anthropic’s roadmap and system architecture. The disclosure, reportedly caused by an npm packaging mistake, is said to include roughly 512,000 lines of code across 4,756 source files, more than 40 tool modules, and artifacts revealing prompt engineering, tool-calling mechanisms, and internal components labeled Kairos and an “undercover mode” (卧底模式). Anthropic said the leak included no model weights or customer data and attributed the release to human error rather than a security vulnerability.
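For context on the failure mode: npm publishes every file in a package directory that is not excluded via .npmignore or the “files” allowlist in package.json, so a misconfigured manifest can sweep raw source into a release tarball. The script below is a minimal, hypothetical sketch of a pre-publish guard, not anything from Anthropic’s pipeline; it shells out to `npm pack --dry-run --json`, which lists the tarball contents without publishing, and aborts if files outside an expected allowlist would ship.

```ts
// prepublish-guard.ts — hypothetical pre-publish check, not Anthropic's tooling.
import { execFileSync } from "node:child_process";

// Paths that are expected to ship; everything else fails the check.
// These allowlist entries are placeholders for illustration.
const ALLOWED = ["dist/", "package.json", "README.md", "LICENSE"];

// `npm pack --dry-run --json` prints a JSON array with one entry per
// package; its `files` field lists every path that would enter the tarball.
const report = JSON.parse(
  execFileSync("npm", ["pack", "--dry-run", "--json"], { encoding: "utf8" })
);

const unexpected: string[] = report[0].files
  .map((f: { path: string }) => f.path)
  .filter((p: string) => !ALLOWED.some((a) => p === a || p.startsWith(a)));

if (unexpected.length > 0) {
  console.error("Refusing to publish: unexpected files in tarball:");
  for (const p of unexpected) console.error("  " + p);
  process.exit(1);
}
console.log("Tarball contents match the allowlist.");
```

Wired into a package’s `prepublishOnly` lifecycle script, a check like this turns a silent packaging mistake into a failed release.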

Spread and company response

Within hours the leaked code had reportedly been starred and forked widely on GitHub, spawning backups and mirrors that multiplied the exposure. Anthropic told reporters it was “rolling out measures to prevent similar incidents” and reiterated that no sensitive customer credentials were exposed. The incident is reportedly Anthropic’s second major data mishap within a week, reigniting questions about operational controls at high-profile AI startups.

Why it matters

For Western readers less familiar with Chinese tech media, the leak was widely amplified by Ifeng (凤凰网) and other outlets, reflecting global interest in the race to build safer, more capable generative AI. The real risk here is not stolen model weights but strategic leakage: roadmaps, prompt libraries, and tooling semantics can hand competitors and adversaries a manual to product positioning and future features. What does this reveal about Anthropic’s ambitions? Quite a bit, and it raises immediate questions about IP protection, competitive surveillance, and internal governance in an industry under intense regulatory and geopolitical scrutiny.

What comes next

Anthropic says it will harden release processes and tighten controls; regulators and enterprise customers will be watching. Rivals are reportedly already combing the exposed code for ideas. Can fast-moving AI companies maintain both rapid iteration and airtight operational security? That tension looks set to define the next phase of competition between Anthropic, OpenAI, and other large-model developers.
