凤凰科技 (ifeng Tech) · 2026-04-08

Anthropic Launches Cybersecurity AI Model Days After Reported Source‑Code Leak

Reported launch amid online claims

It has been reported that Anthropic unveiled a new cybersecurity-focused AI model days after a purported leak of some of its source code surfaced online. The claim appeared on Chinese social media and was reposted on ifeng (凤凰网). A user post there described the new system as "Claude Mythos," hailing it as a "mythic-level" model with powerful offensive and defensive hacking capabilities — reportedly stronger than "opus4.6" and not open to the public.

Unverified claims, limited confirmation

The core details remain unverified. It has been reported that the social-post author positioned the release as a rapid response to the leak, but Anthropic has not publicly confirmed the existence of a product named Claude Mythos or the timeline described in those posts. Readers should treat the more sensational technical claims — especially about offensive cyber capabilities — with caution until independent verification or an official company statement appears.

Why this matters beyond the headlines

Why should Western readers care? Because the close succession of a source-code exposure and the claimed release of a hardened cybersecurity model raises acute questions about dual-use risks, disclosure practices and corporate responsibility. In the past year, governments have tightened scrutiny of advanced models, citing national security and export-control concerns. A model that is both powerful and undisclosed invites regulatory interest and public debate about oversight.

Next steps and the broader context

This episode highlights persistent tensions in the AI ecosystem: innovation racing ahead of verification, and private companies balancing competitive secrecy against public safety. It has been reported that the post was hosted on a user channel of ifeng (Dafeng Hao, 大风号), underscoring how quickly unvetted claims can spread across platforms. Independent audits and clear statements from Anthropic will be necessary to separate marketing or rumor from operational reality — and to clarify whether this is a defensive cybersecurity tool or something with broader, potentially concerning capabilities.
