Claude Code Source Code Exposed in npm Packaging Error
Anthropic accidentally published ~500,000 lines of Claude Code source across roughly 1,900 TypeScript files when version 2.1.88 of the npm package shipped with a 59.8 MB source map file. The map referenced a Cloudflare R2 bucket containing the unobfuscated source. Security researcher Chaofan Shou spotted it first, and the code spread fast: a GitHub backup was forked more than 41,500 times before Anthropic could act.
Anthropic described it as "a release packaging issue caused by human error, not a security breach," and said no customer data or credentials were exposed. But the leak is significant: it’s the second major accidental disclosure in a week, coming just days after ~3,000 internal files including a Mythos draft blog post were made public. Anthropic said it is rolling out measures to prevent recurrence. The code itself has revealed internal details including frustration regexes, undercover mode references, and fake tool behaviors.
$100M Claude Partner Network Opens Applications
Anthropic’s $100 million Claude Partner Network is now open for applications. The program brings enterprise consultancies — Accenture, Deloitte, Cognizant, Infosys — into Anthropic’s go-to-market channel, with direct investment going to training, sales enablement, and market development. Anthropic is also scaling partner-facing headcount fivefold, adding Applied AI engineers for live deals and technical architects for complex deployments.
Any organization bringing Claude to market can apply for free membership. The Claude Certified Architect Foundations certification is available today for partners, with additional certifications for sellers, developers, and architects coming later in 2026. The program reflects Anthropic’s push to compete with OpenAI and Google in the enterprise channel ahead of its planned IPO.
Claude Code v2.1.89: Deferred Hooks, Named Subagents, Non-Blocking MCP
Claude Code v2.1.89 dropped today — somewhat overshadowed by the source code drama. The headline feature is a "defer" option for PreToolUse hooks: headless sessions can now pause at a tool call and resume later with -p --resume, letting the hook re-evaluate on continuation. This is useful for approval workflows where a human needs to greenlight a specific action mid-run.
Other additions: named subagents now appear in the @ mention typeahead, making multi-agent orchestration much easier to navigate. MCP_CONNECTION_NONBLOCKING=true skips MCP connection waits in -p mode for faster headless startup. A new CLAUDE_CODE_NO_FLICKER=1 env var enables flicker-free alt-screen rendering. And a PermissionDenied hook now fires after auto mode classifier denials so you can log or handle rejections programmatically.
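The defer flow above can be sketched as a hook script. This is a minimal sketch assuming the hook receives the tool-call payload as JSON and replies with a decision object; the `tool_name` field follows Claude Code's existing hook conventions, but the exact `"defer"` spelling and response shape are assumptions based on the release notes, not confirmed API.

```python
import json

# Tools that should pause a headless run until a human signs off.
# The set is illustrative; tune it to your approval workflow.
DEFERRED_TOOLS = {"Bash", "Write"}

def decide(payload: dict) -> dict:
    """Return a PreToolUse hook decision for one tool-call payload."""
    tool = payload.get("tool_name", "")
    if tool in DEFERRED_TOOLS:
        # "defer" pauses the session; resuming with `claude -p --resume`
        # re-runs this hook so it can re-evaluate on continuation.
        return {"decision": "defer", "reason": f"{tool} needs human sign-off"}
    return {"decision": "allow"}

# Example payload as Claude Code would deliver it to the hook:
sample = {"tool_name": "Bash", "tool_input": {"command": "rm -rf build"}}
print(json.dumps(decide(sample)))
# -> {"decision": "defer", "reason": "Bash needs human sign-off"}
```

In a real hook the payload would arrive on stdin and the decision would be printed back for Claude Code to act on.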
1M Token Beta Retiring April 30 — Migrate to Sonnet 4.6 or Opus 4.6
Anthropic has confirmed that the context-1m-2025-08-07 beta header will stop working for Claude Sonnet 4.5 and Claude Sonnet 4 on April 30, 2026. After that date, requests using the header on those models will behave as if the header is absent — meaning the standard 200K context limit applies. If your app depends on million-token context windows, migrate before that date.
The migration path is straightforward: switch to Claude Sonnet 4.6 or Claude Opus 4.6, both of which support the full 1M-token context window at standard pricing with no beta header. For Opus 4.6, this is the first time 1M-token context has been available at all. Sonnet 4.6 also improves agentic search performance and consumes fewer tokens per task, so the upgrade pays dividends beyond context length.
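The before/after request shapes can be sketched as plain kwargs for the Anthropic SDK's message-creation call. A hypothetical helper, assuming the model IDs claude-sonnet-4-5 / claude-sonnet-4-6 and the beta constant named in the article:

```python
# Beta header being retired on April 30, 2026 (per the article).
RETIRING_BETA = "context-1m-2025-08-07"

def build_request(prompt: str, migrated: bool = True) -> dict:
    """Build message-creation kwargs for the old and new request shapes."""
    if migrated:
        # Sonnet 4.6: 1M-token context at standard pricing, no beta header.
        return {
            "model": "claude-sonnet-4-6",
            "max_tokens": 4096,
            "messages": [{"role": "user", "content": prompt}],
        }
    # Pre-migration shape: Sonnet 4.5 plus the beta header, which after
    # the cutoff silently falls back to the standard 200K limit.
    return {
        "model": "claude-sonnet-4-5",
        "max_tokens": 4096,
        "betas": [RETIRING_BETA],
        "messages": [{"role": "user", "content": prompt}],
    }
```

Because the old shape degrades silently rather than erroring, the safest migration is to switch models and drop the header in the same change.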
Message Batches API Gets 300K max_tokens Cap
Anthropic raised the max_tokens cap to 300,000 on the Message Batches API for Claude Opus 4.6 and Sonnet 4.6. To enable it, include the output-300k-2026-03-24 beta header in your request. This is aimed at long-form content generation, large structured data extraction, and bulk code generation tasks where a single response needs to be very long.
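A batch opting into the higher cap can be sketched as the request payload itself. This is a minimal sketch, assuming the beta name and 300K ceiling from the article; the model ID and custom_id values are illustrative:

```python
# Beta header enabling the 300K output cap on the Message Batches API
# (name taken from the article).
OUTPUT_300K_BETA = "output-300k-2026-03-24"

def build_batch(prompts: list[str]) -> dict:
    """Build kwargs for a batch create call with the raised output cap."""
    return {
        "betas": [OUTPUT_300K_BETA],
        "requests": [
            {
                "custom_id": f"task-{i}",
                "params": {
                    "model": "claude-opus-4-6",  # assumed model ID
                    "max_tokens": 300_000,       # new cap for Opus/Sonnet 4.6
                    "messages": [{"role": "user", "content": p}],
                },
            }
            for i, p in enumerate(prompts)
        ],
    }
```

Each entry in `requests` is an independent message call, so a long-form generation job can mix very long outputs with ordinary ones in one batch.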
Structured outputs are also now generally available on Claude Sonnet 4.5, Opus 4.5, and Haiku 4.5 with no beta header required. The GA release includes expanded JSON schema support, improved grammar compilation latency, and a simplified integration path. Fine-grained tool streaming is GA across all models. Model capability fields (max_input_tokens, max_tokens, capabilities) are now returned by GET /v1/models.
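Reading the new capability fields can be sketched against a model entry from GET /v1/models. The field names come from the article; the surrounding response shape and the sample values are assumptions:

```python
def capability_summary(model_info: dict) -> dict:
    """Pick out the new capability fields from one /v1/models entry."""
    return {
        "id": model_info.get("id"),
        "max_input_tokens": model_info.get("max_input_tokens"),
        "max_tokens": model_info.get("max_tokens"),
        "capabilities": model_info.get("capabilities", []),
    }

# Illustrative entry, not actual API output:
entry = {
    "id": "claude-sonnet-4-6",
    "max_input_tokens": 1_000_000,
    "max_tokens": 300_000,
    "capabilities": ["structured_outputs", "fine_grained_tool_streaming"],
}
summary = capability_summary(entry)
```

Surfacing limits in the API response means clients can stop hard-coding per-model context and output ceilings.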
Claude Mythos: Still in Early Access, ~25% Odds of April Launch
Claude Mythos remains in limited early-access testing with no confirmed public release date. Polymarket prediction markets give roughly a 25% probability of a general launch by the end of April, with a Q2 release considered the most likely scenario overall at about 45% odds by June 30. Anthropic has said the rollout timeline is "determined by safety evaluation outcomes" rather than a commercial schedule — a notable stance for a company heading toward an IPO.
What makes Mythos different is its autonomous action model. Unlike Claude 4.6, which responds one step at a time, Mythos plans and executes sequences across systems without waiting for human input at each step. That capability is also what prompted the leaked docs to flag "unprecedented cybersecurity risks." Anthropic’s caution appears genuine — this isn’t the usual safety theater around a launch.
Claude Code Source Insights: What Devs Are Finding in the Leaked Code
The developer community has been combing through the leaked Claude Code source and the early findings are interesting. Researcher Alex Kim documented "frustration regexes" — patterns that detect when users express frustration mid-session — as well as an "undercover mode" flag and references to fake tool behaviors used in testing. The code also reveals internal architecture around how Claude Code manages permissions, hooks, and multi-agent coordination.
Anthropic has been blocking forks and issuing DMCA takedowns, but the code has already spread widely. The 41,500+ fork count means it’s effectively in the open. For the developer community, the main takeaway is a clearer picture of how the tool actually works under the hood — which may inform how people build with it going forward, regardless of Anthropic’s enforcement efforts.
Two Leaks in a Week: Anthropic’s Velocity Problem
In the span of eight days, Anthropic accidentally made ~3,000 internal files public (including a Mythos blog post draft), then exposed 500,000 lines of Claude Code’s source code through a packaging error. Both incidents were chalked up to human error. Both happened against the backdrop of a company shipping 14+ product launches in March alone while racing toward an IPO. The pattern is clear: Anthropic is moving fast and its release infrastructure hasn’t kept up.
The irony is sharp. A company that has built its entire brand on being the safety-first AI lab — the one that slows down, thinks carefully, does the hard work — is now the company that accidentally open-sources its flagship developer tool. The leaks don’t expose customer data, and they may not matter much technically. But they are an operational story that investors, partners, and enterprise customers will be watching closely as the IPO road show approaches. For Anthropic, getting release ops right is now as important as getting safety right.