Code with Claude SF Goes Live Today — 8am to 8pm PT, Free Livestream Open, Sonnet 4.8 Watch Window Now Inside the Keynote
Code with Claude SF opens this morning at 8:00 AM PT and runs through 8:00 PM PT. In-person seats closed weeks ago; livestream registration remains open for everyone else. The published agenda spine is agentic AI inside the SDLC: production-grade agents on the Claude Platform, Claude Code at scale across long-horizon tasks, multi-repo work, parallel agents, and the operational infrastructure around them. Named on-stage leads include Ami Vora (Head of Product), Boris Cherny (Head of Claude Code), and Angela Jiang (Product Lead for the Claude API and SDKs). Code with Claude: Extended picks up the independent-developer and early-stage-founder track on Thursday, May 7. London follows May 19, then Tokyo June 10.
The Sonnet 4.8 watch window is now inside the conference. Anthropic’s typical pattern puts the next Sonnet generation 1–4 weeks after the corresponding Opus release, and Opus 4.7 shipped April 16, which puts the May 6–13 corridor squarely in scope. Independent testers caught a model labeled claude-jupiter-v1-p in Claude Console red-teaming over the weekend. Leaked source-map references from March named KAIROS persistent agents and Undercover Mode in the same package; both surfaces would land alongside or just after the Sonnet ship. Expected pricing, if the standard Sonnet pattern holds: $3 input / $15 output per MTok, unchanged from Sonnet 4.6. Whether today produces a generation jump or a measured 4.8 rollout, the keynote is the named inflection point of the week.
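At those rates, per-request cost is simple arithmetic. A minimal sketch, assuming the $3 input / $15 output per-MTok figures above hold (they are expected pricing, not confirmed):

```python
def request_cost_usd(input_tokens: int, output_tokens: int,
                     input_per_mtok: float = 3.00,
                     output_per_mtok: float = 15.00) -> float:
    """Estimate one request's cost at per-million-token rates."""
    return (input_tokens / 1_000_000) * input_per_mtok + \
           (output_tokens / 1_000_000) * output_per_mtok

# A 20k-token prompt with a 2k-token reply:
print(round(request_cost_usd(20_000, 2_000), 4))  # 0.09
```

The same function prices out a month of traffic by summing over requests, which is how the "unchanged from Sonnet 4.6" claim would be budget-neutral for existing estates.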
$200B Google Cloud Commitment Confirmed — Anthropic Is >40% of Google’s Disclosed Revenue Backlog, Alphabet Up 2% After-Hours
The Information reported Tuesday afternoon that Anthropic has committed to spend roughly $200 billion with Google Cloud over five years, with the spend including both cloud capacity and Broadcom-built TPU chips. The commitment implies Anthropic alone accounts for more than 40% of Google’s revenue backlog disclosed to investors last week. The April Google–Broadcom deal already locked in multiple gigawatts of TPU capacity for Anthropic, with the new silicon coming online starting in 2027. Alphabet’s separate up-to-$40 billion investment into Anthropic completes the loop — capital flows in one direction, compute commitments flow back the other. Alphabet shares were up about 2% in extended trading on the disclosure.
Read for the IPO clock and the cap-stack picture: the Google line is now structurally the largest enterprise contract in cloud history, and it sits on the same balance sheet as the Wall Street JV ($1.5B with Blackstone, Hellman & Friedman, Goldman, Apollo, GIC, General Atlantic, Sequoia, Leonard Green) that landed Monday and the FIS Financial Crimes Agent partnership that landed alongside it. The $40–50B preemptive round at $850–900B sitting on the board’s desk this month gets a new line in the prospectus narrative: a five-year, $200B compute floor anchored to a strategic equity partner. The bear case still flags that same sentence as concentration risk; the bull case reads it as a regulatory moat.
Briefing FS Recap — Ten Finance Agents Ship, Microsoft 365 Add-ins Live, Amodei + Dimon on the Same Stage, ‘Moment of Danger’ Frame Lands
Yesterday’s 11am ET livestream produced the densest financial-services product drop Anthropic has put out in a single sitting. Ten ready-to-run finance agent templates went live, covering the highest-volume bank back-office and front-office work: pitchbook construction, KYC file screening, month-end close, audit statement assembly, credit memo drafting, plus regulatory-reporting and reconciliation lines. The Claude add-ins for Microsoft 365 went generally available across Excel, PowerPoint, and Word with shared conversation context across all three apps; Outlook is positioned as a “chief of staff” and is in development. New first-party connectors landed for Moody’s, FactSet, S&P Capital IQ, MSCI, PitchBook, IBISWorld, Dun & Bradstreet, and Verisk — the back-end data spine the agents need to be useful. The first-ever shared-stage appearance of Dario Amodei and JPMorganChase Chairman/CEO Jamie Dimon framed the message: AI in capital markets, the workforce question, and the cyber risk window. Amodei said Anthropic’s Q1 revenue grew 80x on an annualized basis.
The cyber frame from the same stage is now its own narrative line. Amodei called Mythos a “moment of danger” and put the patch window at six to twelve months before Chinese AI catches up. The earlier Mythos run found ~20 vulnerabilities in Firefox; the current Mythos found ~300 in the same browser; the cumulative count across all major software now runs into the tens of thousands, most still undisclosed because they remain unpatched. Dimon called the cyber risk “very heightened” but bounded it as “a transitory period.” Read structurally: the Briefing FS event made three layers of the stack visible at once: ten agents on the application layer, M365 add-ins on the productivity layer, and the Mythos cyber-window framing on the regulator layer that the Eurogroup and MAS tracks will now move on.
Both JVs Now in Talks to Acquire AI Services Firms — OpenAI’s Deployment Company Has Three Deals in Advanced Stages, Anthropic’s Capital Targets Engineering and Consulting Tuck-Ins
Reuters confirmed Tuesday that both the Anthropic and OpenAI joint ventures are in active talks to buy AI services firms — the engineering-and-consulting shops that put frontier models into production inside enterprise customers. OpenAI’s vehicle, “The Deployment Company,” is raising about $4 billion from 19 investors including TPG, Bain Capital, and Brookfield Asset Management, and is reportedly in advanced stages on three acquisitions; the formal launch is expected later this week. Anthropic’s $1.5 billion Wall Street JV with Blackstone, Hellman & Friedman, and Goldman is structured the same way: most of the capital is earmarked for tuck-in acquisitions of engineering services and consulting firms, with the goal of folding hundreds of engineers and consultants under the JV banner.
Read structurally: this is the moment the model-vendor “forward-deployed engineer” idea graduates to a buy-not-build motion. Both labs are doing the same arithmetic: deploying frontier models inside regulated industries needs a delivery layer they don’t have today and can’t staff fast enough through organic hiring. Buying named services shops is the fastest path. Watch for the named acquisition target lists to surface inside the next two weeks; the structural read is that within a quarter the AI-services consulting industry will look meaningfully consolidated under the two largest US frontier labs.
Claude Code 2.1.129 Ships — --plugin-url Pulls ZIPs from URLs, Monitor Tool Streams Background Events, Linux Subprocess Sandboxing Lands
The keynote-day release wave starts here. --plugin-url <url> now fetches a plugin .zip directly from a URL for the current session, the natural pair to the 2.1.128 zip-from-disk support. The new Monitor tool gives the agent first-class streaming of events from background scripts, which makes long-running build/test/deploy loops observable inside a session without polling. On Homebrew or WinGet installs, CLAUDE_CODE_PACKAGE_MANAGER_AUTO_UPDATE runs the upgrade in the background and prompts to restart, closing the “version drift across teams” complaint that surfaced after the 2.1.x feature flurry. CLAUDE_CODE_FORCE_SYNC_OUTPUT=1 covers terminals where auto-detection misses synchronized output (Emacs eat was the named offender).
Security and reliability: subprocess sandboxing with PID namespace isolation lands on Linux when CLAUDE_CODE_SUBPROCESS_ENV_SCRUB is set, and a new CLAUDE_CODE_SCRIPT_CAPS env var caps per-session script invocations — the right primitives for shops running Claude Code in regulated environments. Bug-fix highlights: /context stopped dumping its rendered ASCII visualization grid into the conversation (saving ~1.6k tokens per call), /agents Library list arrow-key navigation keeps the highlighted agent visible past the viewport, plain-CLI OAuth sessions now refresh tokens reactively on 401 instead of dying mid-session, and /branch success messages now include the new branch session id for /resume. Production estates pinning for the conference week should still hold the current build through Friday May 8 to absorb the post-keynote feature wave.
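For estates that do want the new sandbox primitives, the environment wiring is straightforward. A minimal sketch using the variable names from the release notes above; the helper function, its defaults, and the invocation shape are illustrative assumptions, not an official interface:

```python
import os

def pinned_env(script_cap: int = 8, force_sync: bool = False) -> dict:
    """Build a process environment that pins the 2.1.129 sandbox and
    cap settings onto a Claude Code invocation (illustrative helper)."""
    env = dict(os.environ)
    # Linux subprocess sandboxing with PID namespace isolation:
    env["CLAUDE_CODE_SUBPROCESS_ENV_SCRUB"] = "1"
    # Cap per-session script invocations at a deliberate ceiling:
    env["CLAUDE_CODE_SCRIPT_CAPS"] = str(script_cap)
    if force_sync:
        # Terminals where synchronized-output auto-detection misses:
        env["CLAUDE_CODE_FORCE_SYNC_OUTPUT"] = "1"
    return env

# Usage (not executed here):
# subprocess.run(["claude", "--plugin-url", "<url>"], env=pinned_env(script_cap=4))
```

Centralizing the pin in one helper is what makes the "hold the current build through Friday" posture enforceable across a team rather than per-laptop.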
Claude Code Auto Mode Holds — Long-Running Permissions With Human Approval Gates, Team Research Preview Today, Enterprise and API Next
Auto Mode for Claude Code is the operational primitive going into the keynote. The mode sits between today’s explicit-confirm flow and a fully autonomous loop: Claude takes long-running actions on its own, but with safeguards and approval gates that surface only the decisions that materially change risk. It is available today in research preview for Team users on Sonnet 4.6 and Opus 4.6, with Enterprise and API rollout queued behind it. The framing on the published page is “safer long-running permissions” rather than “autonomous,” consistent with the Anthropic posture that has run through the Managed Agents launch, the Glasswing testing program, and the Claude Security GA approach.
Read with the conference: Auto Mode is the Claude Code shape of the same idea Cowork puts on the knowledge-worker side: long-horizon execution with tunable human-in-the-loop gates. The shop-floor pattern that is working: scope Auto Mode to a single repo or a single project at first, raise the script cap from 2.1.129 to a deliberate ceiling, instrument the OTLP feed, then expand. Shops that skipped the gate-design step have fallen back to the explicit-confirm flow inside two weeks; the shops that did the gate design are running Auto Mode on production code today.
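The gate-design step can be made concrete as data. A hypothetical staged-rollout plan plus validator, sketching the single-repo-first, deliberate-cap, telemetry-on pattern; the schema is invented for illustration and is not an Anthropic format:

```python
# Hypothetical staged rollout: start narrow, widen scope and caps
# monotonically, never expand with telemetry off.
ROLLOUT = [
    {"stage": 1, "scope": ["billing-service"], "script_cap": 4, "otlp": True},
    {"stage": 2, "scope": ["billing-service", "payments-api"], "script_cap": 8, "otlp": True},
]

def gates_designed(plan: list) -> bool:
    """A plan passes only if every stage keeps OTLP telemetry on and
    scope/caps only ever widen -- the gate-design discipline above."""
    telemetry_on = all(stage["otlp"] for stage in plan)
    caps = [stage["script_cap"] for stage in plan]
    scopes = [len(stage["scope"]) for stage in plan]
    return telemetry_on and caps == sorted(caps) and scopes == sorted(scopes)
```

Encoding the plan as reviewable data is the point: a shop can reject an expansion in code review before Auto Mode ever sees the wider scope.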
Managed Agents and Rate Limits API Hold the Operational Spine — Claude Status Clean Going Into the Conference
The Managed Agents public beta on the Claude Platform (live since April 8) hands teams a hosted harness with secure sandboxing, authentication, and tool execution handled for them; pricing is the standard Claude model usage plus eight cents per agent runtime hour, with web search billed at $10 per 1,000 calls. The Rate Limits API (shipped April 25) lets administrators programmatically query the rate limits configured for their org and workspaces, the right instrument for the rolling-7-day cumulative outage-budget conversation that emerged after the late-April incident cluster. Notion, Rakuten, Asana, Sentry, and Vibecode are still the named early adopters on Managed Agents.
For shops planning conference-day load, the operational stack to pin: Managed Agents on the harness layer, the Rate Limits API on the budget layer, and flex tier on Bedrock with secondary failover to Vertex on the inference layer. The Claude status page is clean going into Wednesday morning: seven consecutive incident-free days for Claude.ai, the Anthropic API, Claude Code, and the Bedrock and Vertex tiers. No postmortem has yet shipped for the 78-minute multi-surface event of April 28; the typical inside-ten-business-days cadence puts publication around May 8 to May 11, the same window as Code with Claude SF and Extended.
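Conference-day budgeting on the Managed Agents layer is simple arithmetic from the published beta pricing (standard model usage, plus $0.08 per agent runtime hour, plus $10 per 1,000 web-search calls). A minimal estimator sketch; the example workload figures are invented:

```python
def managed_agent_cost(runtime_hours: float, web_searches: int,
                       model_usage_usd: float) -> float:
    """Estimate total Managed Agents spend from the public-beta pricing:
    model usage + $0.08/agent-runtime-hour + $10 per 1,000 web searches."""
    runtime = runtime_hours * 0.08
    search = web_searches * 10.0 / 1000
    return round(model_usage_usd + runtime + search, 2)

# Example: 500 agent-hours, 20k web searches, $1,200 of model usage.
print(managed_agent_cost(500, 20_000, 1_200.0))  # 1440.0
```

Note how the harness fee is small relative to model usage at this scale; web search is the line item most likely to surprise a shop that leaves it uncapped.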
Claude Cowork Enterprise Capabilities Roll Out — “In 2026, Claude Will Do for Knowledge Work What It Did for Developers in 2025”
The Briefing FS keynote yesterday also landed the Claude Cowork enterprise rollout. Scott White (Head of Product, Claude Enterprise) framed the ambition: Cowork makes it possible for Claude to deliver “polished, near-final work” — not just drafts and chat. The new enterprise capabilities tighten the surface that has been in research preview since January: organization controls including role-based access, group spend limits, expanded OpenTelemetry observability, and usage analytics for admins. Spotify and Epic took the on-stage anchor case-study slots: Spotify integrated Claude into the system its engineers use every day so any engineer can “kick off a large-scale migration just by describing what they need in plain English”; Epic noted that more than half of its Claude Code usage is now from non-developer roles across the company — a pattern that pushed into support and implementation in ways the company didn’t plan for.
Read for the channel: the line that “in 2025 Claude transformed how developers work, in 2026 it will do the same for knowledge work” is the Cowork pitch in one sentence. MCP is now the connective tissue that lets a Cowork session reach across the customer’s actual stack — finance, HR, project management, the cloud control plane. The Spotify and Epic case studies are the proof points the Briefing FS keynote needed. The Cowork narrative now stacks with the financial-services agents underneath and the M365 add-ins on top, all on the same Claude API line.
Fed’s Bowman Outlines Three Steps Banks Should Take on Mythos Risk — The US Regulator Side Joins the EU and MAS Posture
Federal Reserve Vice Chair for Supervision Michelle Bowman put a marker down on the Mythos discussion this week, calling out three near-term actions banks should take to harden against AI-discovered vulnerabilities. The substance of the guidance maps to the same reference architecture the European Banking Authority and Singapore’s MAS have been working from: review and strengthen cyber safeguards, identify and patch high-impact exposures rapidly, and assume that AI advances will continue to accelerate both flaw discovery and exploit attempts. The Bowman comments line up with Amodei’s Briefing FS “moment of danger” framing and Dimon’s public posture that Mythos is “very heightened risk” but the cyber-defense window is “a transitory period.”
Three jurisdictions are now formally engaged on the cyber-window question on the same day: the US Federal Reserve added its three-step playbook, the EU is in formal access talks with Anthropic post-Eurogroup, and Singapore’s MAS continues the private CEO circuit on cyber posture without local Mythos access. The reference architecture being held up everywhere is still the British framework: AISI on the testing side, FCA and NCSC on the regulator side, and named bank participants under Glasswing-equivalent terms. UK banks are reportedly in line to gain Mythos access in the coming weeks.
Higher-Ed Channel Holds — Harvard FAS Claude Migration On Track, Creative-Curriculum Wins Stack Behind It
The Harvard Crimson reporting from late April still sits on the running list: the Faculty of Arts and Sciences will add Claude to the suite of AI platforms available to affiliates while discontinuing access to ChatGPT Edu. The pattern continues with Anthropic’s named curriculum partnerships in art and design programs — Art and Computation at Rhode Island School of Design, Fundamentals of AI for Creatives at Ringling College of Art and Design, and the MA/MFA Computational Arts program at Goldsmiths, University of London. The cumulative effect is Anthropic moving from research-lab brand to institutional stack at the front door of higher education, in parallel with the financial-services and creative-tools channel pushes.
The Higher-Ed track now overlaps the IPO calendar in three ways: an education-vertical revenue line for the S-1, named-institution wins for the channel-momentum chart, and Claude familiarity inside the cohort whose hiring patterns will set the next decade of enterprise procurement. Q2 enterprise-vendor calls in May and June will likely pick up the Harvard FAS transition as a marker.
Wall Street Week, Day Three: The Compute Floor, the Application Layer, and the Keynote All Land Inside 72 Hours
Step back from the wire feed and the picture going into today’s keynote is unusually clean. Inside 72 hours, Anthropic has put down a $200 billion five-year compute floor with Google and Broadcom that lands ahead of the IPO clock; closed the $1.5B Wall Street JV with Blackstone, Hellman & Friedman, and Goldman with Apollo, GIC, General Atlantic, Sequoia, and Leonard Green rounding the cap table; shipped ten ready-to-run finance agents and the Microsoft 365 add-ins on the application layer; and put Amodei and Dimon on the same stage for the first time. The Briefing FS event also re-anchored the Mythos cyber-window narrative as “moment of danger,” and the Fed’s Bowman took the regulator-side cue inside 24 hours. Today’s Code with Claude SF keynote sits on top of the same 72-hour stack with Sonnet 4.8 chatter, Jupiter-v1-p in console testing, the Cowork enterprise rollout, and Claude Code 2.1.129 shipped this morning.
The strategic point is that those four layers stack on each other economically in a way OpenAI’s parallel motion cannot yet match: a confirmed compute floor, a JV-as-channel, a vertical agent suite, and a developer-platform inflection all in the same week. The IPO clock now reads forward from a private-market revenue narrative dense enough that the Pentagon-blacklist drag on the federal-procurement page looks like a separate accounting line. The bear case is still that an October S-1 with an active Pentagon lawsuit pending and $200B of concentrated cloud-vendor commitment is harder to underwrite than a calendar without those lines. The bull case is that the private-sector acceleration is now structurally separable from the federal track and that the compute commitment is the moat. Today’s keynote is the first venue where all of those threads are likely to be referenced from the same stage. Watch the model surface, watch the Cowork enterprise GA timing, watch any named JV acquisition target.