ifeng Tech (凤凰科技) · 2026-03-31

Anthropic's Claude Code reportedly exposed after npm source-map leak

Anthropic's internal Claude Code implementation has reportedly been leaked online after a misconfigured npm package exposed a source map, a development that Chinese outlet ifeng (凤凰网) reportedly first highlighted. The exposed repository allegedly contains more than 1,900 files and roughly 512,000 lines of TypeScript, a near-complete reveal of the CLI and engine wiring that runs Claude-like developer tools. The revelation comes days after a surge of interest in Claude Mythos, and it has prompted fresh questions about how major AI labs protect their engineering assets.

What was exposed

The leak reportedly originated from an @anthropic-ai/claude-code package whose cli.js.map file, said to be about 59.8 MB, contained full mappings back to the original source. The leaked tree is described as a React + Ink terminal UI running on the Bun runtime, with a 46,000-line QueryEngine.ts at its core, a "Tools" suite of 40+ modules (file I/O, shell execution, LSP integration, sub-agents), and orchestration pieces such as a coordinator and bridges into VS Code and JetBrains. The codebase is also reported to include long-running daemon support for persistent sessions and memory, an auto-approval "Auto Mode", a "Coordinator Mode" for parallel sub-agent scheduling, an "Undercover Mode" that reportedly scrubs AI traces from public commits, and even an internal "Buddy System", an electronic pet feature bundled into the engineering build.
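The mechanics behind such a leak are mundane. A revision-3 source map may embed the full text of every original file in its optional "sourcesContent" array, parallel to the "sources" path list, so anyone holding the .map file can rebuild the source tree with a few lines of code. A minimal sketch of that recovery step (the file names and demo map below are hypothetical, not taken from the leaked package):

```python
import json
from pathlib import Path

def recover_sources(source_map: dict, out_dir: Path) -> list[Path]:
    """Write every embedded original file from a source map into out_dir.

    Revision-3 source maps may carry full original source text in the
    optional "sourcesContent" array, parallel to "sources".
    """
    written = []
    sources = source_map.get("sources", [])
    contents = source_map.get("sourcesContent") or []
    for rel_path, text in zip(sources, contents):
        if text is None:
            continue  # this entry only names a path; no source text embedded
        # Drop ".." segments so recovered paths stay inside out_dir
        safe = Path(*[p for p in Path(rel_path).parts if p not in ("..", "/")])
        target = out_dir / safe
        target.parent.mkdir(parents=True, exist_ok=True)
        target.write_text(text, encoding="utf-8")
        written.append(target)
    return written

# Hypothetical two-entry map, standing in for a real cli.js.map
demo_map = {
    "version": 3,
    "sources": ["src/index.ts", "src/QueryEngine.ts"],
    "sourcesContent": ["export const x = 1;\n", "export class QueryEngine {}\n"],
    "mappings": "AAAA",
}
files = recover_sources(demo_map, Path("recovered"))
print([str(f) for f in files])
```

With "sourcesContent" populated, no decompilation is needed at all: the map simply contains the original files, directory structure included.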

Why this matters

For Western readers unfamiliar with the Chinese coverage: this is not a regional incident but a global IP and security story. Source-map leaks reveal design patterns, inference logic, safety mitigations and tooling that competitors or attackers could study. It also plays into broader geopolitical concerns: advanced AI tooling and data-centric engineering are now strategic assets amid export controls on chips and heightened regulatory scrutiny of AI firms. Who gains from a full read-through — third-party developers, rivals, or malicious actors — remains an open question. Anthropic reportedly has not yet provided a public statement; meanwhile the episode revives the open-source vs closed-source debate and will likely trigger audits of packaging and release pipelines across the industry.
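For teams auditing their own release pipelines, one concrete control is to keep build artifacts such as .map files out of the published tarball. npm's "files" field in package.json acts as an allowlist for what gets packed; a hedged sketch with hypothetical package and path names:

```json
{
  "name": "example-cli",
  "version": "1.0.0",
  "bin": { "example-cli": "dist/cli.js" },
  "files": [
    "dist/**/*.js",
    "!dist/**/*.js.map"
  ]
}
```

Running `npm pack --dry-run` before publishing prints the exact file list that will ship, which makes a stray source map visible before it ever reaches the registry.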
