Friday, April 10, 2026

Claude AI Daily Brief — April 10, 2026

Covering the last 24 hours · Edition #42

TL;DR — Today’s Top 3 Takeaways
1. Anthropic Signs Multi-Year CoreWeave Deal to Scale GPU Infrastructure — Bloomberg broke the story: Anthropic has agreed to rent data center capacity from CoreWeave across multiple Nvidia chip architectures in US facilities, securing the compute headroom it needs to keep Claude at production scale.
2. Anthropic’s Revenue Hits $30B Annualized Run Rate — Up more than 3x from the $9B run rate recorded at the end of 2025, cementing Anthropic as one of the fastest-growing software companies in history. Enterprise makes up roughly 80% of the business.
3. Advisor Tool Launches in Public Beta — The new API feature pairs a fast executor model with a high-intelligence advisor model that provides mid-generation strategic guidance — designed for long-horizon agentic tasks where quality matters as much as speed.
🚀 Official Updates
Breaking

Anthropic Signs Multi-Year GPU Deal With CoreWeave to Power Claude at Scale

Bloomberg reported today that Anthropic has agreed to rent data center capacity from CoreWeave under a multi-year agreement. The deal spans a range of Nvidia chip architectures at US-based CoreWeave facilities, giving Anthropic a dedicated GPU supply outside of its existing cloud commitments with Google and Amazon. CoreWeave confirmed that nine of the ten largest AI model providers now use its platform.

The timing is notable: Anthropic just crossed $30B in annualized revenue, launched Claude Managed Agents in public beta, and is running Claude Mythos Preview for Project Glasswing — all workloads that demand a significant and growing compute footprint. The CoreWeave deal buys headroom and negotiating leverage as Anthropic prepares to scale beyond what any single hyperscaler can absorb.

Official

Anthropic Revenue Surpasses $30B Annualized Run Rate — Up 3x From Year-End 2025

Anthropic’s annualized revenue run rate surpassed $30 billion in early April 2026, up more than 3x from the $9 billion figure recorded at the end of 2025. Enterprise contracts account for roughly 80% of total revenue per recent disclosures, driven by expanded deals with Snowflake, Accenture, AWS, and Google — all announced in the past week. The number underscores how quickly the frontier model market has shifted from API experimentation to mission-critical production deployment.

Official

Anthropic Exploring Custom AI Chip Design to Reduce Compute Dependency

Anthropic is considering designing its own AI chips to address ongoing GPU supply constraints, according to three sources familiar with the matter. No timeline has been confirmed and the effort appears to be in early exploration, but the move would follow a well-worn path: Google, Amazon, and Meta all developed internal silicon to reduce reliance on Nvidia and gain cost advantages at scale. For Anthropic, custom silicon would also allow tighter hardware-software co-design for Claude’s inference and training workloads.

💻 Developer & API
Developer

Advisor Tool Launches in Public Beta — Smarter Agents Without the Cost of Full Opus

Anthropic’s new advisor tool is now in public beta. It pairs a fast executor model (typically Sonnet or Haiku) with a high-intelligence advisor model (Opus 4.6) that provides strategic mid-generation guidance for long-horizon agentic tasks — all within a single API call. The idea: get near-Opus quality on the decisions that matter while keeping latency and cost down for routine steps. To use it, include the beta header advisor-tool-2026-03-01 and add an advisor tool with type advisor_20260301 and model claude-opus-4-6. Advisor tokens are billed at Opus 4.6 rates; executor tokens at Sonnet or Haiku rates.
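A minimal sketch of what such a request might look like, using only the identifiers named above (advisor-tool-2026-03-01, advisor_20260301, claude-opus-4-6). The overall request schema follows the familiar Messages API shape, but the exact advisor-tool fields are an assumption — check the official docs before relying on this:

```python
# Hypothetical advisor-tool request sketch. Beta header, tool type, and
# model names come from the brief; everything else is assumed structure.
import json

headers = {
    "x-api-key": "YOUR_API_KEY",                   # placeholder
    "anthropic-version": "2023-06-01",
    "anthropic-beta": "advisor-tool-2026-03-01",   # beta header from the brief
    "content-type": "application/json",
}

body = {
    "model": "claude-sonnet-4-6",      # fast executor model (assumed name)
    "max_tokens": 4096,
    "tools": [
        {
            "type": "advisor_20260301",   # advisor tool type from the brief
            "name": "advisor",            # assumed field
            "model": "claude-opus-4-6",   # high-intelligence advisor model
        }
    ],
    "messages": [
        {"role": "user", "content": "Plan and execute a multi-step refactor."}
    ],
}

payload = json.dumps(body)  # body of the POST to the Messages endpoint
```

The point of the pattern: routine token generation stays on the cheap executor, and only the strategic checkpoints are billed at Opus 4.6 rates.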

Developer

Message Batches API Now Supports 300k max_tokens for Opus 4.6 and Sonnet 4.6

Anthropic raised the max_tokens cap to 300,000 on the Message Batches API for Claude Opus 4.6 and Claude Sonnet 4.6. Add the beta header output-300k-2026-03-24 to unlock long-form outputs in batch jobs — useful for generating full documents, large codebases, structured data exports, and other tasks that hit the previous 8K-64K ceiling. Standard Batches API pricing applies, with the 50% cost reduction versus synchronous calls still in effect.
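A sketch of a batch request using the raised cap. The beta header and model name come from the brief, and the requests/custom_id/params envelope follows the standard Message Batches API pattern; treat the details as an assumption, not a verified schema:

```python
# Hypothetical Message Batches request with the 300k output cap.
import json

headers = {
    "x-api-key": "YOUR_API_KEY",                    # placeholder
    "anthropic-version": "2023-06-01",
    "anthropic-beta": "output-300k-2026-03-24",     # beta header from the brief
    "content-type": "application/json",
}

batch = {
    "requests": [
        {
            "custom_id": "doc-export-001",   # your own ID for matching results
            "params": {
                "model": "claude-opus-4-6",
                "max_tokens": 300_000,       # raised cap for Opus/Sonnet 4.6
                "messages": [
                    {
                        "role": "user",
                        "content": "Generate the full API reference as Markdown.",
                    }
                ],
            },
        }
    ]
}

payload = json.dumps(batch)  # body of the POST to the batches endpoint
```

Because batch jobs already run at a 50% discount versus synchronous calls, this is the cheapest path to very long single outputs.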

Developer

Claude Code: Vertex AI Setup Wizard, Linux Subprocess Sandboxing, and Monitor Tool

The latest Claude Code release adds three notable features. An interactive Google Vertex AI setup wizard is now accessible directly from the login screen, making it faster to point Claude Code at a GCP project. Subprocess sandboxing now uses PID namespace isolation on Linux, hardening the execution environment against privilege escalation. And a new Monitor tool lets scripts stream events from background processes in real time, which is useful for long-running build, test, or data pipeline workflows running alongside active Claude Code sessions.

🌎 Community & Ecosystem
Community

Anthropic Tightens Claude Subscription Rules for Third-Party Agents

Anthropic updated its usage policies to restrict how third-party agents can access Claude through consumer subscription plans. Per PYMNTS reporting, agents built by external developers will need to use the API directly under their own credentials rather than routing through end-user Claude.ai subscriptions. The change closes a grey area in how automation tools and agent frameworks have historically piggybacked on personal plans. Developers running agents on behalf of users should review the updated usage policies and migrate to API keys if they haven’t already.

Community

Claude Cowork Gets Enterprise Features Alongside Managed Agents Push

Alongside this week’s Managed Agents launch, Anthropic quietly shipped enterprise-grade updates to Claude Cowork, the desktop AI tool for non-developers. Per 9to5Mac, the updates include centralized admin controls, audit logging, and expanded file-type support — features that enterprise IT teams have requested since Cowork launched. The move signals Anthropic is positioning Cowork as a serious enterprise productivity tool, not just a consumer companion app, and aligns with the broader enterprise platform push that defined this week’s announcements.

🧠 Analysis
Analysis

Anthropic’s Compute Strategy Is Now as Important as Its Model Strategy

A CoreWeave deal, custom chip exploration, a Google partnership, deep AWS integration: Anthropic is quietly assembling a multi-cloud, multi-vendor compute stack. That’s the right play for a company at $30B ARR with ambitions to go higher. A single cloud dependency is both a margin problem and a reliability risk, as anyone who watched the April 6-8 outages can attest. Diversifying compute is how you build the kind of SLA guarantees that enterprise contracts actually require.

But there is a real tension here. Anthropic’s safety mission has always been about moving carefully. Infrastructure races don’t reward careful. Every data center deal, every chip partnership, every managed service layer is a commitment to scale fast and keep scaling. At some point the question isn’t whether Anthropic can build a reliable platform — it’s whether the pace of that build is compatible with the cautious deployment culture that made Claude trustworthy in the first place.