ClaudeCode’s Naked Launch: How a Sourcemap Blew Up Anthropic’s “Safety” Story
What happened
It has been reported that in the early hours of March 31 a security researcher known as ChaofanShou posted on X a link to a zip archive showing that a ClaudeCode sourcemap had been mistakenly published to npm. Within hours the post neared ten million views and GitHub filled with mirror repositories, one of which gathered tens of thousands of stars and forks. This was not a hack; reportedly an automated build pipeline shipped a 59.8 MB .map file from which the client-side TypeScript for ClaudeCode could effectively be reconstructed.
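The reason a single .map file can expose an entire client codebase is generic to the Source Map v3 format: it is plain JSON, and its optional `sourcesContent` array embeds the original source of every listed file verbatim. The sketch below illustrates the mechanics with a hypothetical file name and snippet (nothing here is taken from the actual leaked map):

```typescript
// Minimal sketch of a Source Map v3 payload and why shipping one
// leaks source: `sourcesContent[i]` holds the full original text
// of `sources[i]` whenever the bundler embedded it.

interface SourceMapV3 {
  version: number;
  sources: string[];
  sourcesContent?: (string | null)[];
  mappings: string;
}

// Recover every embedded original file from a parsed sourcemap.
function extractSources(map: SourceMapV3): Map<string, string> {
  const out = new Map<string, string>();
  map.sources.forEach((path, i) => {
    const content = map.sourcesContent?.[i];
    if (typeof content === "string") out.set(path, content);
  });
  return out;
}

// Hypothetical example shaped like a published bundle's .map file:
const leaked: SourceMapV3 = {
  version: 3,
  sources: ["src/agent/harness.ts"],
  sourcesContent: ["export const orchestrate = () => { /* ... */ };"],
  mappings: "AAAA",
};

const recovered = extractSources(leaked);
console.log(recovered.get("src/agent/harness.ts"));
```

In other words, no decompilation is needed: anyone who downloads the npm tarball can parse the JSON and write the `sourcesContent` entries back out as files, which is what the mirror repositories reportedly did at scale.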
What leaked
Reportedly the map file contained thousands of source files and roughly 512,000 lines of strict-mode TypeScript, including the Agentic Harness that orchestrates tool calls, permission gates, context stitching, an IDE bridge, and an extensive internal product roadmap. Names that have drawn attention in the leaked tree include an always-on daemon called "KAIROS," a "BUDDY" virtual pet, a planner called "ULTRAPLAN," and an "UndercoverMode" designed to erase provenance when Anthropic staff operate on public repos, a feature made painfully ironic by the leak itself. It has been reported that many of these are finished but unreleased features, and that the incident repeated a similar sourcemap error Anthropic made in 2025.
Why it matters
The immediate technical impact is constrained: no model weights or proprietary training data were exposed, only the orchestration layer, which shows how to use a large model in production rather than how to build the model itself. The strategic impact is larger. The leak hands a practical blueprint for building production-grade multi-agent systems to any well-resourced competitor, potentially accelerating engineering cycles across the industry. Community reaction was near-jubilant: mirror stars, architecture writeups, and forks proliferated, underscoring how much pent-up demand exists for real-world agent designs under closed-source regimes.
Geopolitics and industry fallout
For China's fast-growing AI sector, which includes companies such as Baidu (百度) and Zhipu AI (智谱), the leak is less a transfer of core model IP than a tactical advantage: teams can study production orchestration and compress their time-to-market. But geopolitical constraints temper that windfall. Export controls, sanctions, and restricted access to top-tier accelerators mean that copying orchestration does not substitute for compute and model-training capacity. Still, the episode weakens a key narrative in the West, namely that closed systems are necessary for "AI safety," and raises a question Anthropic must now face: can a company that built its brand on safety survive repeated, avoidable governance failures?
