Anthropic’s "Grand Leak": Half a million lines of Claude code exposed in packaging blunder
- Marijan Hassan - Tech Journalist
- 14 hours ago
- 2 min read
In what is being described as the most significant proprietary leak in the history of generative AI, Anthropic accidentally published the near-complete source code for Claude Code, the agentic harness behind its flagship AI coding tool. The leak, which occurred on March 31, 2026, was not the result of a sophisticated hack but of a "packaging error" that left a massive 60MB source map file in a public software update.

Within hours of the discovery, the codebase, comprising over 512,000 lines of TypeScript across 1,906 files, was mirrored to GitHub, where it became the fastest repository in history to reach 50,000 stars.
While Anthropic has issued thousands of DMCA takedown requests, security experts warn that the "blueprint" for their agentic AI is now permanently in the wild.
How the leak happened: A "Bun" in the oven
The root cause was traced back to a misconfiguration in Bun, the JavaScript runtime Anthropic acquired in late 2025. A known bug in the runtime caused production builds to include full source maps by default. When Anthropic pushed version 2.1.88 of the @anthropic-ai/claude-code package to the npm registry, it inadvertently included a link to a ZIP archive of the entire raw source tree hosted on the company's Cloudflare storage.
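Bundlers such as Bun control source-map emission with a build option (for `bun build`, a `--sourcemap` flag), and a release pipeline can guard against exactly this class of mistake with a pre-publish check. The sketch below is purely illustrative and is not Anthropic's tooling; the function name and directory layout are our own assumptions. It flags any file in a build output directory that would leak a source map, either as a raw `.map` artifact or as a `sourceMappingURL` reference.

```typescript
// Hypothetical pre-publish guard (illustrative only, not Anthropic's build system):
// scan a build output directory and report any file that would leak a source map.
import { readdirSync, readFileSync } from "node:fs";
import { join } from "node:path";

function findSourceMapLeaks(distDir: string): string[] {
  const leaks: string[] = [];
  for (const name of readdirSync(distDir)) {
    if (name.endsWith(".map")) {
      // a raw source map was packaged alongside the bundle
      leaks.push(name);
      continue;
    }
    if (name.endsWith(".js") || name.endsWith(".mjs")) {
      const text = readFileSync(join(distDir, name), "utf8");
      // an inline or linked map reference also exposes the source tree
      if (text.includes("sourceMappingURL=")) leaks.push(name);
    }
  }
  return leaks;
}
```

Run in CI before `npm publish`, a check like this fails the release the moment a stray map shows up in the artifact, regardless of which runtime bug or default produced it.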
"This was a release packaging issue caused by human error, not a security breach," an Anthropic spokesperson stated. "No sensitive customer data or credentials were involved or exposed."
What was inside: The secrets of the "Harness"
The leaked files do not contain the "weights" (the brain) of the Claude model itself, but they do reveal the agentic harness, the sophisticated plumbing that allows Claude to "think" before acting, use terminal commands, and manage files.
Key discoveries within the code include:
- "Strict write discipline": A previously unknown logic gate that prevents the AI from "hallucinating" successful actions. The agent is hard-coded to update its memory only after a confirmed filesystem write.
- Anti-distillation logic: A feature called ANTI_DISTILLATION_CC that injects "decoy" tool definitions into API requests to corrupt the training data of any competitor attempting to "scrape" Claude’s logic.
- The "Capybara" model: References to an unreleased, highly efficient model internally codenamed "Capybara," designed for background autonomous tasks.
- The "Buddy" system: A surprising, Tamagotchi-style collectible game found in a hidden subdirectory, where users can earn "Legendary" digital pets based on their coding frequency and style.
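The "strict write discipline" gate can be pictured as a read-back confirmation step. The sketch below is a loose illustration under assumed names (`confirmedWrite` and `MemoryEntry` are hypothetical, not identifiers from the leaked code): the agent's memory is updated only after the bytes on disk are verified to match what it intended to write.

```typescript
// Illustrative sketch of a "strict write discipline" gate; all names here are
// hypothetical and not taken from the leaked codebase.
import { writeFileSync, readFileSync } from "node:fs";

interface MemoryEntry {
  path: string;
  content: string;
}

function confirmedWrite(memory: MemoryEntry[], path: string, content: string): boolean {
  writeFileSync(path, content, "utf8");
  // Read the file back: memory is updated only on a confirmed, byte-identical
  // write, so the agent cannot "remember" an edit that never landed on disk.
  const onDisk = readFileSync(path, "utf8");
  if (onDisk !== content) return false;
  memory.push({ path, content });
  return true;
}
```

The design choice this illustrates is simple: treat the filesystem, not the model's own output, as the source of truth for what the agent has done.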
A "perfect storm" for security
The timing of the leak created a nightmare scenario for developers. At the exact moment the source code went viral, an unrelated supply-chain attack hit the Axios library on npm. Developers who rushed to update or "build" the leaked Claude code between 00:21 and 03:29 UTC on March 31 may have inadvertently installed a Remote Access Trojan (RAT).
Additionally, cybersecurity firm Zscaler has identified dozens of "official-looking" GitHub forks of the leak that have been trojanized with Vidar Stealer malware, targeting enthusiasts eager to run a local, "unlocked" version of Claude.
Long-term fallout
For Anthropic, the leak is a massive blow to its competitive "moat." While rivals like OpenAI and Google cannot legally copy the code, they now have a clear view of how Anthropic handles complex multi-step reasoning and tool orchestration.