TIME Magazine: “How Anthropic Became the Most Disruptive Company in the World”
TIME published a sweeping profile of Anthropic on March 11 that doubles as the clearest accounting yet of the company’s business impact. The numbers are striking: Claude Code revenue alone hit $2.5 billion by February 2026, up from $1 billion at year-end 2025. The company is on track to surpass OpenAI’s total revenue before 2027. When Anthropic launched non-coder plugins targeting sales, finance, marketing, and legal teams, $300 billion evaporated from the market cap of enterprise software companies in a single session. When Anthropic published a blog post about Claude Code translating legacy COBOL into modern languages, IBM lost roughly $40 billion in market cap in a single day.
The article frames the Pentagon dispute as the inflection point that turned Anthropic from a respected AI safety company into a cultural phenomenon. Refusing blanket permission for autonomous weapons systems and mass surveillance, the move the Trump administration branded a "supply chain risk," turned out to be a remarkably effective consumer marketing event. Every negative government headline has produced a wave of sign-ups and enterprise subscriptions. The irony TIME identifies: the administration's attempt to sideline Anthropic commercially has accelerated its rise.
March 11 Outage Resolved — What Enterprise Teams Need to Know
Claude experienced a significant international outage on Wednesday, March 11, with reports on Down Detector peaking above 1,400 at the height of the incident. The outage affected Claude.ai login, web access, and specific model endpoints, though the Claude API remained largely operational. Anthropic acknowledged the issue promptly and marked it resolved within roughly two hours of confirmation. The Anthropic status page at status.claude.com tracked the incident in real time.
For enterprise teams building on Claude, the pattern is worth noting: this was the second notable outage in two weeks, following a March 2 disruption that affected Claude.ai shortly after the app hit the top of the App Store. Rapid user growth is stress-testing infrastructure in real time. Anthropic’s API held up both times, making API-based deployments more resilient than web-tier access during incidents.
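The resilience takeaway generalizes to any API-tier deployment: wrap calls in retry logic so transient failures during an incident degrade gracefully instead of surfacing as errors. A minimal, SDK-agnostic sketch of exponential backoff with jitter (the function name and defaults here are illustrative, not part of any Anthropic client):

```python
import random
import time

def with_retries(call, max_attempts=4, base_delay=1.0,
                 transient=(ConnectionError, TimeoutError)):
    """Retry a zero-argument callable on transient errors, with backoff."""
    for attempt in range(max_attempts):
        try:
            return call()
        except transient:
            if attempt == max_attempts - 1:
                raise  # out of attempts: surface the last transient error
            # Delay doubles each attempt (1s, 2s, 4s, ...) with +/-25% jitter
            delay = base_delay * (2 ** attempt)
            time.sleep(delay * random.uniform(0.75, 1.25))
```

Wrapping the actual SDK call in a lambda, e.g. `with_retries(lambda: client.messages.create(...))`, keeps short-lived network blips during incidents like the March 11 outage from becoming user-facing failures.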
Claude Code Auto Mode: No More Permission Prompts for Every Action
Anthropic is rolling out Auto Mode for Claude Code as a research preview starting today, March 12. The feature hands permission decisions to Claude itself: the agent reasons about whether each action (file edit, shell command, external network call) needs developer sign-off, rather than surfacing an approval prompt for every step. Read-only operations on project files are generally auto-approved; commands with broad filesystem access or external calls are more likely to surface a prompt. The result is longer autonomous coding sessions with far fewer interruptions on routine tasks.
The feature ships with prompt injection safeguards — protections against malicious content in files or command outputs that could hijack Claude’s actions. IT and security teams can disable Auto Mode entirely via MDM or file-based OS policies before it reaches their users. One tradeoff to flag: the per-action reasoning adds overhead, so token consumption, latency, and cost all increase when Auto Mode is active. The research preview label means Anthropic is actively soliciting feedback before a full rollout.
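For admin teams planning to evaluate the disable path, a file-based policy would take roughly the following shape. The keys below are hypothetical: the announcement names the MDM and file-based mechanisms but does not publish the schema.

```json
{
  "managedSettings": {
    "autoMode": "disabled"
  }
}
```

Whatever the final key names, the point is that the control sits at the OS-policy layer, so it can be enforced centrally before Auto Mode reaches end users.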
Claude Code Release: Actionable Context Suggestions, autoMemoryDirectory, Memory Leak Fixes
A new Claude Code release shipped today with a focus on reliability and developer workflow. The headline new feature: actionable suggestions in the /context command. Rather than just reporting context usage, Claude Code now identifies context-heavy tools, memory bloat, and capacity warnings and offers specific optimization tips. A new autoMemoryDirectory setting lets teams configure a custom directory for auto-memory storage, replacing the default path.
The release also includes a significant round of bug fixes: a startup UI freeze triggered when many claude.ai proxy connectors refreshed an expired OAuth token simultaneously; a bug where forked conversations shared the same plan file, so edits in one fork could overwrite another; memory leaks in the git root detection cache and the JSON parsing cache that could grow unbounded over long sessions; and a streaming API response buffer leak that caused RSS growth on Node.js. Also new: a modelOverrides setting to map model picker entries to custom provider model IDs, plus actionable SSL guidance when OAuth login fails due to corporate proxy certificate issues.
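As a sketch of how the two new settings might sit together in a settings file: the release notes name autoMemoryDirectory and modelOverrides but not their exact schema, so the nesting, path, and model-ID mapping below are illustrative assumptions.

```json
{
  "autoMemoryDirectory": "~/.claude/team-memory",
  "modelOverrides": {
    "Claude Opus": "my-gateway/custom-opus-model-id"
  }
}
```

The modelOverrides mapping is aimed at teams fronting Claude with a gateway: the picker keeps its familiar labels while requests resolve to the provider-specific model IDs the gateway expects.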
Claude for Excel & PowerPoint Get Shared Context — One Session Across Both Apps
Anthropic shipped a major update to its Office add-ins on March 11: Claude for Excel and Claude for PowerPoint now share full conversation context. That means a single Claude session can read live cell values, write formulas, and edit slides without the user switching contexts or re-explaining their data. The practical use case is quarterly reporting: pull actuals into Excel, build the variance analysis, then push the findings directly into a PowerPoint deck — all from one conversation.
The update also ships a Skills feature: teams can save standardized workflows as one-click actions inside both sidebars. Instead of re-uploading reference documents or re-prompting instructions every session, orgs can lock in approved templates and analyses that anyone can trigger without knowing the underlying prompt. Both add-ins can now route through existing LLM gateways on Bedrock, Vertex AI, or Microsoft Foundry — so enterprise customers don’t need a separate Claude account. Available now for Mac and Windows users on paid Claude plans.
Microsoft Copilot Cowork: Claude’s Agentic Engine Inside Microsoft 365
Microsoft announced Copilot Cowork, a new agentic tool built on Anthropic's Claude Cowork technology that embeds multi-step task automation across Outlook, Teams, Excel, and the broader Microsoft 365 suite. The product is currently in private preview for select customers, with a broader rollout through Microsoft's Frontier program expected later in March. Ad Age and Thurrott both reported on the announcement this week, with Thurrott noting the integration leans heavily on Claude's file-handling and scheduling capabilities from the original Cowork product.
This is a significant distribution play for Anthropic. Microsoft 365 has over 400 million commercial seats. Even a fraction of those activating Claude-powered agentic features would represent a massive expansion of Claude’s enterprise footprint — entirely separate from the direct Anthropic subscription base. The product is distinct from the earlier announcement that Claude Sonnet models are available as selectable backends inside Copilot; Cowork adds agentic, multi-app task execution on top of the base model access.
Anthropic Is Now Everywhere: The Distribution Strategy Behind the Headlines
Take today’s news together and a clear pattern emerges. Claude Code Auto Mode removes the last major friction point for developers using Claude in long autonomous sessions. The Excel/PowerPoint shared context update locks Claude deeper into the Office workflow most enterprise employees live in. Microsoft Copilot Cowork puts a Claude-powered agent in front of 400 million M365 seats. The Claude Marketplace (launched last week) gives enterprise procurement teams a single place to buy Claude-powered apps without a separate contract. None of these are flashy model announcements — they’re all distribution moves.
This is the part of the TIME story that deserves more attention than the revenue numbers: Anthropic has figured out that being the safest AI isn’t just a values statement, it’s a product strategy. Every enterprise that blanched at giving a less cautious model access to its Outlook inbox or its COBOL codebase is now a potential Claude customer. Auto Mode with prompt injection safeguards, Shared Context with gateway routing through existing infrastructure, Cowork embedded in the tool employees already use — these are answers to the exact objections enterprise security teams raise. The “safety” and “distribution” stories are the same story.