Anthropic npm package accidentally exposed Claude Code source; “Kairos” assistant mode revealed
What happened
It has been reported that Anthropic accidentally shipped a source map with the official npm package for Claude Code, and that the map included a full sourcesContent array. Because sourcesContent embeds the original text of every file the map references, anyone who downloaded the official package could reconstruct large parts of Claude Code’s client-side source. The disclosure was first flagged in Chinese tech coverage and has since circulated widely among developers and security researchers. Reportedly, the leak is unrelated to the publicly visible anthropics/claude-code repository on GitHub, which is a separate, official project; the controversy stems from the npm artifact that carried the embedded source map.
What the leak shows
Inside the reconstructed code, researchers found a feature flag named KAIROS that activates an “assistant mode”: custom system prompts, a simplified “brief” view for non-developer interactions, scheduled check‑ins, cron-like tasks, and persistent sessions that can be resumed (claude assistant [sessionId]). It has been reported that KAIROS hooks into channels, webhook listeners (KAIROS_GITHUB_WEBHOOKS), remote-control modules, and notifications, suggesting Claude Code was being built not merely as a terminal aid for programmers but as a continuously running agent that reacts to external signals. In short, this reads like a roadmap from CLI aid to full agent platform. The discovery reportedly also exposed integration and telemetry design choices that were previously opaque.
Why it matters
This is more than a developer embarrassment. For a company selling enterprise-grade AI products, inadvertent disclosure of client-side source through a published artifact raises serious supply‑chain and process-control questions. It has been reported that Claude Code accounts for about 18% of Anthropic’s annualized revenue (up from 15% in January), that revenue for the product doubled in early 2026, and that it reached a scale reportedly 2.5x comparable OpenAI offerings; numbers like these make governance and artifact hygiene material to investors and customers alike. More broadly, the incident feeds growing scrutiny of AI vendors’ operational security and release practices at a time when governments and enterprises are tightening controls on software supply chains. Anthropic will need to tighten release reviews and artifact auditing quickly to restore confidence.
