In what is being described as a major strategic blow, AI powerhouse Anthropic accidentally leaked the blueprints of its highly profitable coding tool, Claude Code. A simple mistake during a routine update has handed competitors a detailed map of how the company’s most successful technology actually works.

The trouble began on 31 March 2026, when a standard release of version 2.1.88 on the public npm registry accidentally included a 59.8 MB source map file. A source map doesn’t translate code into plain English; it maps the minified, machine-optimised code shipped to users back to the original, human-readable source, and it can even embed complete copies of the original files, which any developer can read.

By 4:23 am ET, the mistake had been spotted by Chaofan Shou, an intern at Solayer Labs, who posted the discovery on X; the post acted as a digital flare, alerting thousands of developers and rivals. Within hours, the tool’s 512,000 lines of code had been copied and mirrored across the internet.

Claude code source code has been leaked via a map file in their npm registry!

Code: https://t.co/jBiMoOzt8G pic.twitter.com/rYo5hbvEj8

— Chaofan Shou (@Fried_rice) March 31, 2026

A High-Stakes Financial Disaster

Anthropic quickly went into damage control. According to a company spokesperson, the incident was a “packaging issue caused by human error” rather than a hack, and no sensitive customer data was exposed.

While customer data may be safe, the leak is a strategic nightmare. According to data shared by Anthropic, Claude Code generates $2.5 billion in yearly revenue, a figure that has doubled since the start of 2026. This single tool now accounts for a significant portion of Anthropic’s estimated $19 billion total revenue.

What the Leaked Code Reveals

A closer look at the leaked files shows why the tool is so valuable. Anthropic appears to have solved a major problem called context entropy, a kind of “brain fog” in which an AI grows increasingly confused during long tasks. The fix is a three-layer memory system that acts like a skeptical librarian, constantly double-checking the model’s claims against real files to catch hallucinations.

The leak also revealed several internal projects. These include KAIROS, an “always-on” mode that fixes logic errors while the user is away, and a controversial Undercover Mode that lets the AI work on public projects without leaving AI fingerprints. The code even mentions upcoming models codenamed Capybara (Claude 4.6), Fennec, and a terminal pet called “Buddy” with stats like CHAOS and SNARK.

It is worth noting that the leak coincided with a separate supply-chain attack involving the popular JavaScript library Axios. Anyone who updated via npm between 00:21 and 03:20 UTC on 31 March may have pulled down a trojanised package. Anthropic is now urging users to switch to the Native Installer available directly from its website, which is currently the safest way to install Claude Code.
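For readers trying to work out whether they were exposed, the risky period reduces to a simple timestamp comparison against the reported window (the helper below is a hypothetical illustration, not an official Anthropic tool):

```javascript
// Hypothetical helper: flag an npm install that happened inside the
// compromised window reported for 31 March 2026 (00:21-03:20 UTC).
function inCompromisedWindow(installTime) {
  const start = Date.parse("2026-03-31T00:21:00Z");
  const end = Date.parse("2026-03-31T03:20:00Z");
  const t = installTime.getTime();
  return t >= start && t <= end;
}

console.log(inCompromisedWindow(new Date("2026-03-31T01:00:00Z"))); // true
console.log(inCompromisedWindow(new Date("2026-03-31T12:00:00Z"))); // false
```

Note the window is specified in UTC, so local clock times must be converted before comparing.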

Deeba is a veteran cybersecurity reporter at Hackread.com with over a decade of experience covering cybercrime, vulnerabilities, and security events. Her expertise and in-depth analysis make her a key contributor to the platform’s trusted coverage.