Anthropic’s takedown blunder removes thousands of GitHub repos while chasing a Claude Code leak
What happened
Anthropic reportedly tried to halt the spread of leaked source code for its Claude Code command‑line tool and instead triggered a sweeping takedown that affected roughly 8,100 GitHub repositories. The sequence reportedly began when Anthropic accidentally made Claude Code's source code publicly accessible; AI enthusiasts quickly mirrored the files to GitHub to inspect the tool's implementation. To stem the spread, Anthropic issued copyright takedown requests to GitHub, but the requests were overbroad and swept up thousands of repositories, including legitimate branches of its own open projects.
Boris Cherni, the Claude Code lead, reportedly acknowledged the misstep as human error. GitHub, the Microsoft‑owned code platform, has restored most of the affected repositories after Anthropic narrowed its requests; according to reports, the company now maintains a takedown against only one repository and its 96 related branches.
Why it matters
This episode lands at a sensitive moment: Anthropic is preparing for an IPO, and its handling of leaks, intellectual property and public communications all factor into investor confidence. The incident exposes the legal and reputational risks facing AI firms that must balance urgent IP protection against the norms of open‑source software and developer ecosystems. How do you stop a leak without breaking the internet?
For readers unfamiliar with the background, this is part of a broader fault line in the global AI race. U.S. companies face intense scrutiny over IP controls, export rules and regulatory oversight, and a high‑profile error like this can compound those political and commercial pressures. Reportedly, GitHub has restored most access, but the episode raises fresh questions about takedown processes, transparency and how fast‑moving AI firms should respond when code escapes into the wild.
