Claude Code Leak 2026: Supply Chain Attack, and What It Means for Developers



On March 31, 2026, Anthropic made a mistake that will be talked about for a long time. The complete source code of Claude Code — their AI coding tool pulling in over $2.5 billion a year — ended up on npm for anyone to grab. No hackers involved. No insider drama. Just a misconfigured build file and a very bad day for someone on the release team.

Here is what happened, what was found inside, why it matters, and what you should do if you use Claude Code.

How It Happened: A Story About Source Maps and .npmignore

If you work with TypeScript or JavaScript, you know about source maps. When you compile code for production, your build tool (Webpack, esbuild, Bun, etc.) can generate .map files that bridge the gap between minified production code and the original readable source. They exist for debugging and should never reach end users.

Version 2.1.88 of the @anthropic-ai/claude-code package was published to npm with a 59.8 MB JavaScript source map accidentally bundled in. That source map contained a direct reference to a zip archive hosted on one of Anthropic's Cloudflare R2 storage buckets — a bucket that happened to be publicly accessible. Anyone who downloaded the npm package could follow the link and grab the entire source code. No authentication required. No hacking needed. The file was just sitting there.
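
A common guard against exactly this failure mode is an explicit files whitelist in package.json, so only intended build output ever reaches npm. The snippet below is a generic illustration (the package name and paths are invented, not Anthropic's actual configuration); running npm pack --dry-run before publishing prints the exact file list that would ship, stray .map files included.

```json
{
  "name": "example-cli",
  "version": "1.0.0",
  "files": [
    "dist/**/*.js"
  ],
  "bin": {
    "example-cli": "dist/cli.js"
  }
}
```

Because files is a whitelist, a stray dist/cli.js.map would be excluded automatically — the pattern only admits .js files. An .npmignore entry for *.map achieves the same effect, but whitelists fail safer than blacklists.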

Security researcher Chaofan Shou, an intern at blockchain security firm Fuzzland, was the first to publicly flag the issue on X (formerly Twitter). His post racked up over 28.8 million views. Within hours, the codebase was mirrored on GitHub, where it gathered over 84,000 stars and 82,000 forks before Anthropic started issuing DMCA takedown notices.

The irony? This is not the first time it has happened. A nearly identical source-map leak occurred with an earlier version of Claude Code back in February 2025. That means Anthropic made the same mistake twice in roughly 13 months.

Anthropic's Official Response

Anthropic confirmed the incident fairly quickly. A spokesperson told multiple outlets that a Claude Code release had included some internal source code, that no sensitive customer data or credentials were involved or exposed, and that it was a release packaging issue caused by human error, not a security breach. They added that measures were being rolled out to prevent it from happening again.

The key takeaway: the AI model weights were not exposed. User data was not compromised. API keys were not leaked. What got out was the agentic harness — the TypeScript code that wraps the Claude model and gives it the ability to use tools, manage files, execute bash commands, and orchestrate multi-agent workflows.

What Was Inside: 512,000 Lines of TypeScript

The downloadable archive contained the src/ directory of Claude Code: roughly 1,906 TypeScript files totaling over 512,000 lines of code. Here is what researchers and developers found when they dug in.

Overall Architecture

Claude Code is built around several core components. The main orchestration loop manages conversation turns, decides when to invoke tools, and handles the agent execution flow. A query engine handles LLM API calls and orchestration. The context window manager decides what information fits in the conversation and handles automatic compression when approaching token limits.

Each tool (file reading, bash execution, code editing, etc.) lives in its own directory under tools/ with a separate implementation file, description, and parameter schema. There is also a custom terminal rendering engine built on Ink and React with Yoga layout, handling text rendering, ANSI output, focus management, scrolling, selection, and hit testing.
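
The per-tool layout described above lends itself to a small sketch. Everything below is hypothetical — invented names and a toy schema, not the leaked types — but it shows the shape: one implementation function paired with the description and parameter schema the model is shown.

```typescript
// Hypothetical sketch of a per-tool definition, mirroring the described
// layout: implementation + description + parameter schema in one unit.
interface ToolDefinition<P> {
  name: string;
  description: string;
  // JSON-Schema-style parameter descriptions shown to the model.
  parameters: Record<string, { type: string; description: string }>;
  // The implementation the orchestration loop invokes.
  run(params: P): string;
}

const readFileTool: ToolDefinition<{ path: string }> = {
  name: "read_file",
  description: "Read a file from the workspace and return its contents.",
  parameters: {
    path: { type: "string", description: "Path of the file to read" },
  },
  run: ({ path }) => `contents of ${path}`, // stub for illustration
};
```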

A bidirectional communication layer connects IDE extensions (like VS Code) to the CLI, enabling the seamless integration developers have come to expect.

44 Hidden Feature Flags — 20 Unshipped Features

Perhaps the most interesting discovery was the presence of 44 feature flags scattered throughout the codebase. Claude Code uses bun:bundle's feature() mechanism for compile-time dead code elimination. The flags compile to false when Anthropic generates the external build, but the underlying code is fully functional and ready to ship.

Of these 44 flags, 20 represent features that are completely built but not yet released. Industry observers noted that Anthropic has been shipping a new feature roughly every two weeks — likely because everything is already done and they are just flipping flags on a schedule.
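
I cannot verify the exact bun:bundle feature() API, so here is a generic sketch of the underlying pattern: a compile-time constant that the bundler inlines as a literal, after which dead code elimination strips the disabled branch from the shipped bundle entirely.

```typescript
// Generic compile-time feature-flag pattern (illustrative — not the
// leaked implementation). With the object declared `as const`, a
// bundler can inline the literal value and eliminate the dead branch.
const FEATURES = {
  KAIROS: false, // external builds ship with unreleased flags off
} as const;

function startAgent(): "daemon mode" | "interactive mode" {
  if (FEATURES.KAIROS) {
    // Unshipped path: stripped from the external build by DCE.
    return "daemon mode";
  }
  return "interactive mode";
}
```

The payoff of this pattern is that the unreleased code never appears in the distributed artifact at all — in theory. The leak happened one layer up, in the source map, where both branches still exist.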

KAIROS: The Autonomous Daemon Mode

One of the most discussed findings is a feature internally called KAIROS — named after the ancient Greek concept meaning 'at the right time.' The KAIROS flag is mentioned over 150 times in the source code and represents a fundamental shift in how the tool operates.

KAIROS enables Claude Code to run as a persistent background agent, a daemon that keeps working even when the user is idle. It employs a process called autoDream, where the agent performs memory consolidation while the developer is away. During autoDream, Claude merges disparate observations, removes logical contradictions, and converts vague insights into concrete facts. Essentially, Claude Code could study and improve its understanding of your codebase while you sleep.

Self-Healing Memory Architecture

The leaked code revealed how Anthropic tackles what they call 'context entropy' — the tendency for AI agents to become confused or hallucinatory as long-running sessions accumulate more and more context. Their solution is a self-healing memory architecture.

A fascinating detail: agents are internally instructed to treat their own memory as a 'hint' rather than absolute truth. The model must verify information from memory against the actual codebase before taking action. It is a system that is skeptical by design, and that design choice alone sets it apart from many competing approaches.
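
The memory-as-hint idea can be sketched in a few lines. All names below are invented — this is my reading of the described design, not the leaked code: a remembered fact about the codebase is re-checked against the actual files before the agent acts on it.

```typescript
import { existsSync, readFileSync } from "node:fs";

// Hedged sketch of "memory is a hint, not truth" (names invented).
interface MemoryHint {
  claim: string;       // e.g. "config lives in app.config.ts"
  path: string;        // file the claim is about
  mustContain: string; // substring that should hold if the claim is true
}

function verifyHint(hint: MemoryHint): boolean {
  // Never trust memory blindly: confirm the file still exists and
  // still contains what the agent remembers about it.
  if (!existsSync(hint.path)) return false;
  return readFileSync(hint.path, "utf8").includes(hint.mustContain);
}
```

A hint that fails verification would be discarded or corrected rather than acted upon — which is exactly how the skepticism-by-design described above would bottom out in practice.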

Developers also found a revealing internal comment in the autoCompact.ts file (lines 68 through 70): 1,279 sessions had 50 or more consecutive auto-compaction failures (up to 3,272 failures in a single session), wasting approximately 250,000 API calls per day globally. The fix was three lines of code: MAX_CONSECUTIVE_AUTOCOMPACT_FAILURES = 3. After 3 consecutive failures, the system simply stops trying. Sometimes good engineering is knowing when to give up.
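
The fix is simple enough to sketch directly. The constant name and the give-up-after-3 behavior come from the reported leaked comment; the surrounding helper is my own illustration.

```typescript
// Sketch of the described cutoff: stop retrying auto-compaction after
// a fixed number of consecutive failures instead of looping forever.
const MAX_CONSECUTIVE_AUTOCOMPACT_FAILURES = 3;

// Returns the number of consecutive failures when giving up (0 if a
// compaction attempt succeeded).
function runAutoCompact(tryCompact: () => boolean): number {
  let consecutiveFailures = 0;
  while (consecutiveFailures < MAX_CONSECUTIVE_AUTOCOMPACT_FAILURES) {
    if (tryCompact()) {
      return 0; // success resets the failure streak
    }
    consecutiveFailures++;
  }
  // After 3 consecutive failures, stop burning API calls on a
  // session that will not compact.
  return consecutiveFailures;
}
```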

Multi-Agent Orchestration and Sub-Agents

The code confirms that Claude Code can spawn sub-agents or swarms of agents for complex tasks. There is a complete multi-agent orchestration system that allows task delegation across multiple agents working simultaneously, enabling parallel processing of complex development workflows.

References to the Next Model: Capybara / Mythos

The source code contains multiple references to a new AI model family from Anthropic, internally codenamed Capybara (also referred to as Mythos in a separate leaked document from a few days earlier). Beta flags in the code reference specific API version strings for Capybara, suggesting development is well beyond the concept phase. Security researcher Roy Paz from LayerX Security, who reviewed the code for Fortune, indicated the model will likely ship in fast and slow variants with a significantly larger context window than anything currently available on the market.

The axios Supply Chain Attack: A Separate But Critical Incident

The timing made everything worse. Just hours before the leak, in a completely unrelated event, a real supply chain attack hit the axios npm package — one of the most widely used HTTP client libraries in the JavaScript ecosystem.

Malicious versions 1.14.1 and 0.30.4 of axios were published to npm and contained a cross-platform Remote Access Trojan (RAT) hidden under a dependency called plain-crypto-js. The exposure window was between 00:21 UTC and 03:29 UTC on March 31, 2026.

How to Check If You Were Affected

If you installed or updated Claude Code via npm during that time window, you need to check immediately. Search your lockfiles (package-lock.json, yarn.lock, or bun.lockb) for axios versions 1.14.1 or 0.30.4, or the dependency plain-crypto-js. You can run this in your terminal:

grep -E '1\.14\.1|0\.30\.4|plain-crypto-js' package-lock.json yarn.lock bun.lockb 2>/dev/null

If you find any of these references, the situation is serious. Treat the host machine as fully compromised, rotate all secrets and credentials, and perform a clean OS reinstallation.
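
For those who prefer a script over the one-liner, here is a small cross-platform sketch (helper names are mine; only the three indicator strings come from the incident reports). Note that a bare '1.14.1' can also match an unrelated package, so treat a hit as a prompt to inspect the lockfile, not as proof of compromise.

```typescript
import { existsSync, readFileSync } from "node:fs";

// Indicator strings from the reported axios compromise.
const INDICATORS = ["1.14.1", "0.30.4", "plain-crypto-js"];

// Returns which indicators appear in a lockfile's raw text.
function findIndicators(lockfileText: string): string[] {
  return INDICATORS.filter((needle) => lockfileText.includes(needle));
}

// Scans a list of lockfile paths, skipping any that do not exist,
// and maps each affected path to the indicators found in it.
function scanLockfiles(paths: string[]): Map<string, string[]> {
  const hits = new Map<string, string[]>();
  for (const p of paths) {
    if (!existsSync(p)) continue;
    const found = findIndicators(readFileSync(p, "utf8"));
    if (found.length > 0) hits.set(p, found);
  }
  return hits;
}
```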

The Official Recommendation: Move Away from npm

Anthropic now officially recommends the Native Installer as the preferred installation method. The command is: curl -fsSL https://claude.ai/install.sh | bash. This installer uses a standalone binary that does not rely on the npm dependency chain, eliminating the risk of supply chain attacks through compromised packages.

Typosquatting and Dependency Confusion Attacks

As if that were not enough, attackers quickly capitalized on the leak by typosquatting internal Anthropic npm package names. A user called 'pacifier136' published packages with names nearly identical to internal ones. For now, they are empty stubs (module.exports = {}), but as security researcher Clement Dumas warned, that is exactly how these attacks work: squat the name, wait for downloads, then push a malicious update that hits everyone who installed it.

The Bigger Picture: Second Major Incident in One Week

The Claude Code leak did not happen in isolation. Just five days earlier, on March 26, a CMS misconfiguration at Anthropic exposed roughly 3,000 internal files containing details about their unreleased 'Claude Mythos' model, also attributed to human error. Two significant accidental disclosures in a single week raise legitimate questions about operational security at a company that markets itself as the safety-first AI lab.

As one analyst summed it up, the leak will not sink Anthropic, but it gives every competitor a free engineering education on how to build a production-grade AI coding agent and what tools to focus on.

Claude Code by the Numbers: Why This Matters Financially

Claude Code is not a side project. At the time of the leak, the tool was generating an annualized recurring revenue (ARR) of approximately $2.5 billion, a figure that had more than doubled since the beginning of 2026. Anthropic as a whole reports an annualized revenue run-rate of $19 billion, with enterprise adoption accounting for 80% of Claude Code's revenue.

The company is preparing for an IPO, which makes the timing of this leak particularly bad. Competitors — from established tech giants to agile rivals like Cursor — now have a literal blueprint for building a similar AI coding agent.

Technical Implications for Developers

System Prompts Shipped in the CLI — Why That Is Surprising

One of the most talked-about findings is that Claude Code's complete system prompts were included directly in the distributed CLI package. Researchers expected prompts to be served from the server side, not bundled locally. This means the exact instructions through which Claude reasons about tasks, plans its actions, and evaluates its own outputs are now public knowledge.

The Bash Tool — The Crown Jewel

Analysts described the Bash tool inside Claude Code as the crown jewel of the architecture. The ability to execute terminal commands, along with all the associated safety mechanisms and sandboxing, is arguably the most critical component from both a functionality and a security perspective.

Undercover Mode and Frustration Regexes

Other entertaining discoveries include an undercover mode and what the community dubbed 'frustration regexes' — regex patterns that detect signs of frustration in user messages to adapt the agent's behavior accordingly. Someone at Anthropic also apparently had fun writing 187 different spinner verb animations for the loading states.

What You Should Do Right Now If You Use Claude Code

If you are a Claude Code user, here are the concrete steps you should take:

  • Check your installation method. If you installed via npm, migrate to the Native Installer (curl -fsSL https://claude.ai/install.sh | bash).
  • Audit your lockfiles. Search for malicious axios versions 1.14.1, 0.30.4, or the dependency plain-crypto-js.
  • If you were exposed to the axios attack (npm install between 00:21 and 03:29 UTC on March 31): rotate ALL secrets, API keys, and credentials. Consider the machine compromised.
  • Do not install npm packages claiming to be Claude Code source — they are likely typosquatting or dependency confusion attacks.
  • Continue using Claude Code normally — the service itself was not affected. The AI models work the same, and your data is safe.

Final Thoughts

The Claude Code leak of March 2026 is a case study in how fragile release processes can be at a tech company, no matter how advanced the technology they are building. A single misconfigured .npmignore file or a wrong entry in package.json can expose everything.

For the AI industry, the leak provides unexpected transparency: we now see how a production-grade coding agent is actually built, from its memory architecture to multi-agent orchestration to how context window limits are managed. For competitors, it is a free engineering education. For Anthropic, it is a painful reminder that safety-first needs to apply to internal processes too, not just to AI models.

And for those of us who use these tools every day? It is a reminder to pay attention to our own supply chain, to verify what we install and where it comes from, and to never take for granted the packages we pull from registries — even from companies we trust the most.

Sources and References

  • VentureBeat — 'Claude Code's source code appears to have leaked: here's what we know' (March 31, 2026)
  • The Hacker News — 'Claude Code Source Leaked via npm Packaging Error, Anthropic Confirms' (April 1, 2026)
  • Axios — 'Anthropic leaked its own Claude source code' (March 31, 2026)
  • CNBC — 'Anthropic leaks part of Claude Code's internal source code' (March 31, 2026)
  • The Register — 'Anthropic accidentally exposes Claude Code source code' (March 31, 2026)
  • The AI Corner — 'Claude Code Source Code Leaked: What's Inside (2026)' (March 31, 2026)
  • DEV Community — 'The Great Claude Code Leak of 2026' (March 31, 2026)
  • Bitcoin News — 'Anthropic Source Code Leak 2026: Claude Code CLI Exposed via npm Source Map Error' (March 31, 2026)