ND.Builds


Claude Code Leak: What Actually Happened

April 1, 2026

This wasn’t a hack. It was a simple mistake that snowballed fast. Anthropic pushed a new version of their Claude Code CLI tool to npm, and it accidentally included a massive source map file.

Backstory from "Instructkr" on GitHub: "At 4 AM on March 31, 2026, I woke up to my phone blowing up with notifications. The Claude Code source had been exposed, and the entire dev community was in a frenzy. My girlfriend in Korea was genuinely worried I might face legal action from Anthropic just for having the code on my machine — so I did what any engineer would do under pressure: I sat down, ported the core features to Python from scratch, and pushed it before the sun came up. The whole thing was orchestrated end-to-end using oh-my-codex (OmX) by @bellman_ych — a workflow layer built on top of OpenAI's Codex (@OpenAIDevs). I used $team mode for parallel code review and $ralph mode for persistent execution loops with architect-level verification. The entire porting session — from reading the original harness structure to producing a working Python tree with tests — was driven through OmX orchestration. The result is a clean-room Python rewrite that captures the architectural patterns of Claude Code's agent harness without copying any proprietary source. I'm now actively collaborating with @bellman_ych — the creator of OmX himself — to push this further. The basic Python foundation is already in place and functional, but we're just getting started. Stay tuned — a much more capable version is on the way. https://github.com/instructkr/claw-code"

What Exactly Happened

Anthropic released version 2.1.88 of their @anthropic-ai/claude-code npm package (a popular terminal-based AI coding agent/CLI tool). The package accidentally included a large JavaScript source map file (.map, about 59.8 MB). Source maps are debugging artifacts that map minified/bundled production code back to the original readable source (in this case, unobfuscated TypeScript). They are normally stripped or kept internal, not shipped to public registries like npm.
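To see why a shipped .map file is so revealing: a source map is just a JSON document whose `sources` and `sourcesContent` fields can embed every original file verbatim. A minimal sketch in Python (the file names and contents below are hypothetical, not from the leak):

```python
import json

# A tiny source map of the kind bundlers emit alongside minified output.
# Real maps for a large CLI can reach tens of megabytes because
# "sourcesContent" carries the original, readable source verbatim.
source_map = json.loads("""
{
  "version": 3,
  "file": "cli.min.js",
  "sources": ["src/agent/orchestrator.ts", "src/tools/registry.ts"],
  "sourcesContent": [
    "export const loop = () => { /* ... */ };",
    "export const tools = [];"
  ],
  "mappings": "AAAA"
}
""")

# Recovering the readable originals is a two-line loop, no exploit needed.
for path, content in zip(source_map["sources"], source_map["sourcesContent"]):
    print(f"--- {path} ({len(content)} chars)")
```

In this case the map reportedly pointed at an external ZIP archive rather than embedding everything inline, but the effect is the same: anyone holding the .map file can walk straight back to the original TypeScript.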
The .map file contained references pointing to a full ZIP archive of the source code hosted on Anthropic's own Cloudflare R2 storage bucket. Anyone who spotted the map file could download and decompress the archive, exposing:

- Roughly 1,900 files
- Over 512,000 lines of TypeScript code

This included the CLI's architecture, tool registry, agent orchestration, feature flags, internal prompts/behaviors, and various unreleased/hidden features. (fortune.com)

Discovery and Spread

Security researcher Chaofan Shou (@Fried_rice on X) spotted the exposed map file early on March 31 and publicly shared details, including download links. The code was quickly downloaded, mirrored to multiple GitHub repos (some reaching tens of thousands of stars and forks before takedowns), and analyzed by developers worldwide. Anthropic later removed the problematic npm version, but the leak had already propagated widely. (gizmodo.com)

Anthropic's Response

Anthropic confirmed it was a "release packaging issue caused by human error," not a hack, breach, or external attack. No customer data, credentials, or model weights/training data were exposed; only the client-side CLI/tooling code was. They described it as an accidental inclusion of internal source code in the package. (inc.com) This is Anthropic's second public mishap in about a week: just days earlier, a separate misconfiguration exposed internal files, including details on an unreleased model codenamed "Mythos" (also called "Capybara"). (fortune.com)

Why It Was Embarrassing (and Fixable)

It's a classic DevOps oversight: forgetting to exclude .map files (or to properly configure .npmignore or the build settings) when publishing to a public registry. Many called it a "5-minute CI fix" that should have been caught in the release pipeline. The irony is high because Claude Code itself is an AI-powered coding tool, and Anthropic has heavily promoted using their own AI for internal development.
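The "5-minute CI fix" is a standard packaging guard. Here is one possible sketch, not Anthropic's actual pipeline: `npm pack` writes a gzipped tarball of exactly what `npm publish` would upload, so a release job can scan that tarball and fail the build if any .map file slipped in.

```python
import tarfile


def find_map_files(tarball_path: str) -> list[str]:
    """Return the paths of any .map files inside an npm package tarball.

    Run against the output of `npm pack` in CI, before `npm publish`,
    so a stray source map fails the build instead of reaching the registry.
    """
    with tarfile.open(tarball_path, "r:gz") as tar:
        return [m.name for m in tar.getmembers() if m.name.endswith(".map")]
```

The simpler long-term guard is npm's `files` allowlist in package.json, which whitelists exactly what ships; `npm pack --dry-run` also prints the file list for a quick manual check.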
The leaked code has given the community a detailed look at how Anthropic builds advanced agentic coding tools (e.g., persistent memory, background agents, UI behaviors, prompt-handling tricks), but it doesn't reveal core model secrets. Open-source forks and analyses are already popping up.

In short: pure human/config error in the build and release process, not sophisticated hacking. These kinds of supply-chain and packaging slips happen more often than companies like to admit, especially with fast-moving AI products.