onemanopsBook a call
anthropicclaude codeopen sourceai transparencyai agents

Claude Code Source Leak: Undercover Mode

Anthropic accidentally exposed Claude Code source showing an "Undercover Mode" that strips AI attribution from open-source contributions.

April 2, 20264 min readBy AndresUpdated April 2, 2026

Anthropic's AI coding assistant was silently stripping its own signature before contributing to open-source projects. Nobody was supposed to find out — and then somebody shipped the wrong file.

TL;DR: A leaked source map from Anthropic's Claude Code npm package revealed a module called "Undercover Mode" that removes all Anthropic-internal markers when the AI contributes to public repositories. The AI was designed to operate without disclosing its origin. Anthropic confirmed the leak was human error. The bigger question isn't the leak itself — it's what AI-generated code should be required to disclose.

What Happened?

On March 30, Anthropic accidentally included a 59.8 MB source map in Claude Code npm package version 2.1.88. That single file exposed approximately 512,000 lines of TypeScript source code. Within hours, the code was mirrored across GitHub and picked apart by thousands of developers.

Anthropic confirmed the release was human error — no customer data, no credentials, no security breach. Just the entire source code of one of the most widely used AI coding tools on the planet, visible to anyone who looked.

Here's the thing. The code itself wasn't the story. Buried inside those 512,000 lines was a roughly 90-line module called "Undercover Mode." What it does is simple: when Claude Code operates in public, non-Anthropic repositories, it strips all internal markers that would identify the code as AI-generated. No attribution. No signature. No trace.

The AI was contributing to open-source projects — and doing it silently.

Why Does This Matter?

So think of it kind of like a ghostwriter. Except the ghostwriter is an AI, the clients are open-source projects that didn't hire anyone, and nobody agreed to the arrangement.

Open-source software runs on trust. Developers contribute under their own names, their work gets reviewed by other humans, and the whole system operates on the assumption that you know who wrote what. Undercover Mode breaks that assumption. Not by accident — by design.

Now, Anthropic isn't the only company whose AI writes code that ends up in public repositories. GitHub Copilot, Cursor, and dozens of other tools generate code that developers commit every day. But most of those tools don't actively remove evidence of AI involvement. There's a difference between "AI helped write this and nobody mentioned it" and "AI was specifically programmed to erase its own fingerprints."

That difference is the transparency question nobody has answered yet: what should AI-generated code be required to disclose? Who decides — the AI company, the developer using the tool, or the open-source project receiving the contribution?

What Should You Watch For?

Here's what I want you to do:

  • Check the projects you depend on. If you use open-source tools or libraries in your work, know that AI-generated code may already be in them — with no label saying so.
  • Watch for disclosure policies. Major open-source projects are going to start requiring AI attribution policies. GitHub is already moving on Copilot data training opt-outs (deadline: April 24). This is the beginning, not the end.
  • Ask the transparency question out loud. When someone pitches you an AI tool that "contributes" to your codebase, ask what it discloses and what it hides. If they can't answer clearly, that tells you something.

Key Takeaways

  • Anthropic's Claude Code contained a module called "Undercover Mode" that stripped AI attribution markers from public open-source contributions.
  • The source code was leaked via an accidental 59.8 MB source map in npm package v2.1.88 — confirmed by Anthropic as human error.
  • No industry standard currently exists for disclosing AI-generated contributions to open-source software.
  • GitHub's Copilot data training opt-out deadline (April 24, 2026) signals that AI transparency policies are accelerating across the industry.

Related posts