虎嗅 (Huxiu) 2026-03-27

Anthropic’s most powerful model reportedly leaked after CMS misconfiguration

Leak and what was revealed

According to reporting by Fortune, Anthropic suffered a major accidental disclosure after a third‑party content management system was left with files set to public by default. Nearly 3,000 unpublished assets, including draft blog posts, were reportedly indexed in a publicly searchable cache, and one draft claims Anthropic's next model substantially surpasses the capabilities of Opus 4.6. The draft uses two names, Mythos and Capybara, for what appears to be the same model; many observers believe Mythos will be the public product name and Capybara an internal codename. Anthropic has since closed public access to the exposed data, though a copy of the draft was reportedly saved by a third party before it was taken down.

The leaked material reportedly describes notable gains on software engineering, academic reasoning, and cybersecurity benchmarks compared with Opus 4.6. That last point is especially sensitive: Opus 4.6 has already demonstrated an ability to find previously unknown vulnerabilities in production code. The cache also reportedly contained mundane but private items, such as an employee's parental‑leave paperwork, along with detailed plans for a closed‑door European CEO summit featuring co‑founder Dario Amodei.

Why it matters

Why does this leak matter beyond embarrassment? Powerful models with improved code‑audit and vulnerability‑finding abilities are double‑edged: the same capabilities that let defenders harden code can let attackers discover exploits. Anthropic reportedly plans to give early access to cybersecurity defenders so organizations can patch their codebases before malicious actors can exploit newly discoverable flaws. That cautious rollout echoes broader industry conversations about staged releases, red‑team testing, and third‑party vetting for high‑capability models.

For Western readers less familiar with China's tech press, the incident was quickly amplified by outlets such as Huxiu and by developer communities across Greater China, underscoring how quickly leaks travel through global AI ecosystems. More broadly, the episode arrives amid intensifying geopolitical scrutiny of advanced AI: export controls, national security reviews, and tech‑competition politics (notably between the U.S. and China) are shaping how firms disclose and distribute frontier models. Incidents like this one reportedly sharpen regulators' and corporate customers' appetite for stricter governance and audited release pathways.
