Unbelievable: Claude Code source code leaked and completely dissected within hours
It has been reported that the full source code for Claude Code, Anthropic's command-line coding agent, was leaked and rapidly analyzed by researchers and hobbyists. Chinese outlet ifeng (凤凰网) published a summary of the teardown, which found 87 compile-time feature flags, most of them switched off, that appear to point to a raft of unreleased capabilities. The clearest surprise: a finished but gate-locked "assistant mode" called KAIROS that would let Claude run continuously and act proactively, not just respond on demand.
What the leak shows
Reportedly, the flags include names such as KAIROS, AGENT_MEMORY_SNAPSHOT, PROACTIVE, VOICE_MODE, WEB_BROWSER_TOOL and WORKFLOW_SCRIPTS. Snippets quoted by ifeng suggest KAIROS runs as an "assistant/colleague" that receives periodic heartbeat "ticks," evaluates on each tick whether there is useful work to do, and calls a Sleep tool when idle. The code reportedly even uses terminal focus as a signal: if your window is unfocused, the assistant is allowed to act more boldly; if you are watching, it defers and negotiates. The KAIROS runtime is said to sit behind an internal gate named tengu_kairos and to be available only to Anthropic employees for now.
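To make the reported behavior concrete, here is a purely hypothetical sketch of such a heartbeat loop. Nothing below is from the leaked code: the `Tick` structure, `handle_tick` function, and the `act`/`sleep` callbacks are invented names illustrating the described pattern of evaluating work on each tick, sleeping when idle, and letting terminal focus modulate how boldly the agent acts.

```python
# Hypothetical illustration of the reported KAIROS tick loop.
# None of these names are taken from the leaked source.
from dataclasses import dataclass, field


@dataclass
class Tick:
    """State delivered with each heartbeat (invented for illustration)."""
    terminal_focused: bool            # is the user currently watching?
    pending_work: list = field(default_factory=list)  # candidate tasks


def handle_tick(tick, act, sleep):
    """Process one heartbeat: sleep if idle, otherwise act.

    `act(task, ask_first)` and `sleep()` stand in for the reported
    tool calls; their real names and signatures are unknown.
    """
    if not tick.pending_work:
        # Nothing useful to do: the report says an idle agent
        # calls a Sleep tool rather than spinning.
        return sleep()
    task = tick.pending_work[0]
    if tick.terminal_focused:
        # User is watching: defer and negotiate before acting.
        return act(task, ask_first=True)
    # Window unfocused: the agent may act more autonomously.
    return act(task, ask_first=False)
```

A driver would call `handle_tick` on every heartbeat; the interesting design point, per the report, is that the same task can route through a "confirm first" or "just do it" path depending on whether the user appears to be present.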
Why it matters
An always-on, proactive assistant changes the product and safety calculus. Autonomy, push notifications to phones, remote actions and long-term memory features could speed productivity, but they also raise privacy, security and abuse risks. In the broader geopolitical context, where U.S.-China competition in AI is intensifying and governments are tightening controls on advanced models and chips, such leaks are also a form of competitive intelligence. Who benefits if these dormant features are reimplemented or weaponized? Who is liable if an always-on agent acts in an unsafe way?
There is no public confirmation yet from Anthropic about the leak or the specific features described. For now, the leak offers a rare window into how a major AI developer is architecting autonomy and orchestration, and it prompts fresh questions for regulators, enterprise customers and anyone worried about machines that can decide to act without being explicitly asked.
