Quick Read
- A packaging error in Anthropic’s public npm package exposed 512,000 lines of Claude Code source code.
- The leak revealed unreleased features, including an autonomous ‘Kairos’ daemon and multi-agent coordination logic.
- The incident marks the second major source code exposure for Anthropic in one year, intensifying scrutiny of its internal security protocols.
Anthropic, the AI research company known for its emphasis on safety and constitutional training, has suffered a significant data exposure involving its flagship terminal-based tool, Claude Code. On March 31, 2026, security researchers discovered that the tool’s entire source code had been made publicly accessible through a misconfigured npm package, marking a major lapse in the company’s internal software release protocols.
Source Map Error Exposes Proprietary Architecture
The leak originated from a 60MB source-map file (cli.js.map) included in the latest version of the @anthropic-ai/claude-code package. Source maps are intended for debugging, but shipping one in a publicly published production package allowed the reconstruction of over 1,900 proprietary TypeScript files. Security researcher Chaofan Shou, who first flagged the incident, noted that the file acted as a direct bridge to Anthropic’s cloud storage, effectively handing the public the complete blueprint of the tool’s underlying logic.
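The mechanics of such a reconstruction are straightforward: a standard (v3) source map carries a `sources` array of original file paths and, optionally, a parallel `sourcesContent` array holding each file's full original text. Anyone who obtains the .map file can dump the sources verbatim. A minimal sketch of that recovery step, using an invented toy map (the paths and contents below are illustrative stand-ins, not material from the actual leaked file):

```typescript
// The relevant subset of the source map v3 format.
interface SourceMap {
  version: number;
  sources: string[];              // original file paths
  sourcesContent?: (string | null)[]; // full original text, index-aligned with `sources`
  mappings: string;
}

// Pair each source path with its embedded original text, skipping
// entries whose content was not bundled into the map.
function recoverSources(map: SourceMap): Map<string, string> {
  const files = new Map<string, string>();
  map.sources.forEach((path, i) => {
    const content = map.sourcesContent?.[i];
    if (content != null) files.set(path, content);
  });
  return files;
}

// Toy stand-in for a leaked map; a real 60MB cli.js.map would list
// on the order of 1,900 such entries.
const toyMap: SourceMap = {
  version: 3,
  sources: ["src/agent/coordinator.ts"],
  sourcesContent: ["export const spawnSubagent = () => { /* ... */ };"],
  mappings: "AAAA",
};

const recovered = recoverSources(toyMap);
console.log(recovered.size); // count of fully reconstructable files
```

No decoding of the `mappings` field is even required; when `sourcesContent` is populated, the original files sit in the map as plain strings.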
Unveiling ‘Claude Mythos’ and Hidden Agent Features
The exposure has provided an unprecedented look into the technical architecture of Anthropic’s agentic development tools. Analysis of the leaked 512,000 lines of code confirms the existence of advanced capabilities, including a multi-agent coordinator system and deep integration hooks for IDEs like VS Code and JetBrains. Most notably, the code references a model version codenamed ‘Claude Mythos’ (v5.0, ‘Capybara’), alongside several previously undisclosed features. These include an autonomous daemon mode referred to as ‘Kairos’—designed for background tasks and persistent memory—and an ‘Undercover Mode’ that allegedly automates the removal of AI traces from commit histories.
The Stakes of Transparency and Security
This incident represents the second time in a year that Anthropic has faced a source code leak, raising questions about the resilience of its build processes. As Claude Code serves as a core revenue driver and an essential tool for enterprise developers, the exposure of its internal API design, telemetry hooks, and permission-bypass logic is a significant concern for stakeholders. While the leak does not compromise individual user conversations, it exposes the ‘human factor’ in AI development—demonstrating that even highly guarded AI products remain vulnerable to fundamental packaging errors.
The incident highlights a growing tension in the AI industry between the rapid deployment of agentic tools and the necessity for rigorous security auditing. As these models become more complex, the risk of ‘accidental open-sourcing’ through build-pipeline failures will remain a critical liability for leading AI labs.
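Packaging failures of this kind are typically preventable with npm’s standard publishing controls. A minimal sketch of a package.json using the `files` allowlist, which restricts the published tarball to the listed paths so that build artifacts such as *.map files are never packed (the package name and layout below are illustrative, not Anthropic’s actual configuration):

```json
{
  "name": "@example/cli-tool",
  "version": "1.0.0",
  "bin": { "cli": "dist/cli.js" },
  "files": [
    "dist/cli.js"
  ]
}
```

Because `files` is an allowlist, `dist/cli.js.map` is simply excluded unless named explicitly, and running `npm pack --dry-run` before publishing prints the exact contents of the tarball, making a pre-release audit a one-command step.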

