Hegseth Pushes Full Claude Removal — Military Users Say Not So Fast
Defense Secretary Pete Hegseth is pressing for a complete removal of Anthropic’s Claude from Pentagon systems following the DoD supply chain risk designation. But new reporting reveals the reality on the ground is complicated: Claude is embedded in classified military networks and active operational workflows, and simply switching it off would disrupt missions, not protect them. Military users told reporters the tools built around Claude can’t be swapped overnight, and that the “supply chain risk” framing doesn’t match how the technology is actually being used in theater.
The Pentagon’s legal brief added fuel, arguing that Anthropic’s “red lines” — its refusal to allow Claude to be used for mass surveillance of U.S. citizens or autonomous weapons targeting — make it an “unacceptable risk to national security.” That framing is becoming the central fault line in the March 24 court hearing. Anthropic has the backing of nearly 150 retired judges, Microsoft, and former senior national security officials. The judge’s ruling could define who controls AI safety guardrails for the next decade.
Auth Errors Hit Claude.ai and Claude Code Overnight — Resolved
Users experienced elevated authentication errors across Claude.ai and Claude Code between 23:59 UTC on March 18 and 00:30 UTC on March 19. The incident affected login and session initiation on both products. Anthropic identified and resolved the issue within roughly 30 minutes. This is the fifth service incident in March, though among the shortest and most contained. Status updates were posted to status.claude.com throughout.
Models API Now Returns Capability Fields — No More Guessing What Each Model Supports
As of March 18, Anthropic’s Models API returns structured capability metadata for every model. Both GET /v1/models and GET /v1/models/{model_id} now include max_input_tokens, max_tokens, and a capabilities object. Previously, developers had to consult documentation or hard-code limits per model. Now you can query the API at runtime to discover exactly what a model supports — useful for agents that need to dynamically select models based on task requirements or input size.
This is a low-key but genuinely useful addition for production apps. If you’re building a routing layer that picks between Haiku, Sonnet, and Opus based on task complexity or token count, you can now pull capability info programmatically instead of maintaining a static lookup table that goes stale with every model update.
“Claudy Day”: Three-Flaw Chain Enables Silent Data Theft and Malicious Redirects on Claude.ai
Security researchers disclosed a chained vulnerability attack on Claude.ai, dubbed “Claudy Day,” comprising three linked flaws. The first is an invisible prompt injection via URL parameters: attackers can embed hidden HTML instructions in claude.ai/new?q=... links that Claude processes without the victim seeing them. The second enables data exfiltration: those hidden instructions can direct Claude to search conversation history for sensitive data and silently upload it to an attacker-controlled Anthropic account via the Files API. The third is an open redirect: requests to claude.com/redirect/<target> forwarded visitors to the arbitrary <target> domain, allowing attackers to run fake Claude ads that send victims to the malicious injection URL.
Anthropic patched the primary prompt injection flaw after responsible disclosure. Mitigations for the open redirect and data exfiltration path are still in progress. No integrations, MCP servers, or external tools were required to exploit the chain — just a crafted URL shared with a logged-in user. If you handle sensitive data in Claude.ai conversations, be cautious about clicking links to Claude from external sources until the remaining patches land.
Claude Claws Toward the Top: 40% Enterprise LLM Spend, 4.9% MoM Subscription Growth
New market data paints an increasingly strong picture for Anthropic. Business software subscriptions for Claude grew 4.9% month over month in February 2026, while OpenAI’s business subscription share fell 1.5% over the same period. In the enterprise LLM spend category, Claude now captures approximately 40% — up from 24% a year ago and 12% in 2023. OpenAI still leads overall consumer AI with 68% chatbot market share, but that figure is down from 87% a year ago. Anthropic’s annualized revenue is reportedly approaching $19 billion, up from the $14 billion cited in its February fundraising materials.
The enterprise shift is the real story. The consumer chatbot leaderboard is a vanity metric — the revenue and strategic leverage are in enterprise deployments. Claude’s 40% enterprise spend share, with 70% of Fortune 100 companies as customers, is a fundamentally different competitive position than the consumer numbers suggest. Anthropic has quietly become the enterprise default for regulated industries that need safety commitments baked into the model contract.
The Pentagon Battle and the Enterprise Surge Are the Same Story
Look at both headlines today and a pattern emerges. The DoD calls Anthropic’s safety red lines “an unacceptable risk.” Meanwhile, enterprises are pouring 40% of their AI spend into Claude, up from 24% a year ago. These are not contradictory data points — they’re two sides of the same brand signal. Anthropic built a model with hard limits on autonomous weapons and mass surveillance. That’s disqualifying for the Pentagon. But for every regulated enterprise buyer — financial services, healthcare, legal, defense contractors who aren’t the Pentagon — those same limits are a procurement feature.
The lesson for enterprise AI vendors: safety commitments that narrow your addressable market at the top can dramatically expand it everywhere else. Anthropic bet that being the “trustworthy” model would win in enterprise. The market is now confirming that bet. March 24 will determine whether the Pentagon battle costs them the government channel entirely — but the commercial trajectory is pulling hard in the opposite direction.