Legal Experts Say Anthropic Has Strong Case Against Pentagon Blacklisting
A US News analysis published today surveyed constitutional law scholars on Anthropic’s federal lawsuits against the Trump administration, and the verdict favors Anthropic. The core argument: the government’s own behavior undermines its case. The Pentagon designated Anthropic a supply chain risk — a label historically reserved for foreign adversaries — while simultaneously using Claude in active military operations, and just two weeks after Defense Secretary Hegseth had publicly praised Claude as “exquisite” technology.
University of Minnesota Law professor Alan Rozenshtein put it bluntly: “The government was simultaneously threatening to use the Defense Production Act to force Anthropic to sell its services, using its services in active military operations, and saying it’s too dangerous to use them in government contracts.” That contradiction is exactly the kind of “arbitrary and capricious” conduct courts use to overturn agency actions under the Administrative Procedure Act. Anthropic also has a First Amendment angle — the designation appears to punish the company for its views on AI safety in warfare, which legal experts say gives the company strong ground to stand on.
Google Deepens Pentagon AI Push as Anthropic Sues
With Anthropic’s Pentagon contracts in limbo, Google is moving fast to fill the gap. CNBC reports Google has expanded its defense AI commitments, positioning itself to absorb federal contracts that would have gone to Anthropic. It’s a direct play on the uncertainty — Google’s government cloud division has been aggressively briefing Pentagon and intelligence community clients since the supply chain risk designation was announced last week.
This is the unintended consequence Anthropic warned about in its lawsuit filings. Banning an American AI safety company doesn’t reduce the Pentagon’s AI usage — it shifts procurement toward competitors with fewer ethical guardrails. Whether that argument lands in court remains to be seen, but it’s already playing out in the market.
Claude Marketplace Launches — Anthropic Becomes a Platform Company
Anthropic launched the Claude Marketplace on March 6, and it’s now in limited enterprise preview. The store lets companies with existing Anthropic spending commitments apply a portion of those funds toward Claude-powered applications built by third-party partners — without going through separate procurement or invoicing. Launch partners include GitLab, Harvey, Lovable, Replit, Rogo, and Snowflake. Notably, Anthropic is not taking a cut of any purchases.
This is a deliberate platform play. Anthropic is explicitly modeling the store after AWS Marketplace and Azure Marketplace — positioning itself as the central clearinghouse for enterprise AI procurement, not just a model provider. The potential for vendor lock-in is real: once a company’s workflows and budget commitments are routed through Claude Marketplace, switching costs get steep. Companies interested in access should contact their Anthropic account manager to join the limited beta.
API Spring Cleaning: Structured Outputs GA, Data Residency, Self-Serve Enterprise
Anthropic has been shipping API improvements steadily this week. The headline: structured outputs are now generally available across Claude Sonnet 4.5, Opus 4.5, and Haiku 4.5 on both the Claude API and Amazon Bedrock — no beta header required. The GA release includes expanded schema support and improved grammar compilation latency. All structured output requests are processed with Zero Data Retention.
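To make the structured-outputs workflow concrete, here is a minimal sketch of a request payload and a downstream check. The `output_format` field name and its shape are assumptions for illustration, not confirmed API surface — consult the official API reference for the actual parameter — but the core idea holds: the model’s reply is guaranteed to parse as JSON matching your schema.

```python
import json

# JSON schema the model's output must conform to.
schema = {
    "type": "object",
    "required": ["ticker", "sentiment", "confidence"],
    "properties": {
        "ticker": {"type": "string"},
        "sentiment": {"type": "string", "enum": ["bullish", "bearish", "neutral"]},
        "confidence": {"type": "number"},
    },
}

# Hypothetical Messages API payload; "output_format" is an assumed field name.
payload = {
    "model": "claude-sonnet-4-5",
    "max_tokens": 256,
    "messages": [{"role": "user", "content": "Summarize sentiment for ACME stock."}],
    "output_format": {"type": "json_schema", "schema": schema},  # hypothetical
}

def conforms(doc: dict, schema: dict) -> bool:
    """Tiny structural check: required keys present, enum values respected."""
    for key in schema.get("required", []):
        if key not in doc:
            return False
    for key, spec in schema.get("properties", {}).items():
        if key in doc and "enum" in spec and doc[key] not in spec["enum"]:
            return False
    return True

# Because the output is schema-constrained, downstream code can parse and
# consume it directly instead of defensively re-validating free-form text.
reply = '{"ticker": "ACME", "sentiment": "bullish", "confidence": 0.82}'
assert conforms(json.loads(reply), schema)
```

The practical win of GA here is dropping the beta header and the defensive parsing: schema conformance becomes the API’s job, not yours.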
Also shipping: data residency controls via a new inference_geo parameter that lets developers pin inference to US-only processing at 1.1x pricing (for models released after Feb 1, 2026). The 1M token context window is now in open beta for Claude Opus 4.6. And self-serve Enterprise plans are now available directly on the website — no Sales call required — with access to Claude, Claude Code, and Cowork bundled in a single seat type. Haiku 3 deprecation is confirmed for April 19.
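A quick sketch of what the residency parameter and its pricing multiplier imply for a request. The region value "us" and the rate figures below are illustrative assumptions (only the 1.1x multiplier comes from the announcement); check the API docs for accepted region codes and actual per-token rates.

```python
# Hypothetical request pinning inference to US infrastructure via the new
# inference_geo parameter; the value "us" is assumed for illustration.
payload = {
    "model": "claude-opus-4-6",
    "max_tokens": 1024,
    "inference_geo": "us",  # US-only inference, billed at 1.1x
    "messages": [{"role": "user", "content": "Hello"}],
}

def estimate_cost(input_tokens: int, output_tokens: int,
                  in_rate: float, out_rate: float,
                  us_only: bool = False) -> float:
    """Rough cost estimate. Rates are USD per million tokens; US-pinned
    requests carry the 1.1x multiplier per the announcement."""
    cost = (input_tokens * in_rate + output_tokens * out_rate) / 1_000_000
    return cost * 1.1 if us_only else cost
```

So a job that would cost $3.00 at standard rates runs at $3.30 when pinned US-only — a modest premium for workloads with residency requirements.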
Claude Code Adds /simplify, /batch, Enterprise Analytics API, and Voice STT in 20 Languages
Claude Code shipped a round of quality-of-life updates. New commands: /simplify for condensing verbose code sections and /batch for running multiple operations in sequence. Shared project configs let teams standardize Claude Code behavior across environments. Opus 4.6 now defaults to “medium effort” for Max and Team subscribers — described as the sweet spot between speed and thoroughness — and Opus 4 and 4.1 have been removed from the first-party API (users pinned to those are auto-migrated to 4.6).
Also new: the Enterprise Analytics API gives orgs programmatic access to Claude and Claude Code usage data, aggregated per organization per day. Voice STT support has expanded from 10 to 20 languages, adding Russian, Polish, Turkish, Dutch, Ukrainian, Greek, Czech, Danish, Swedish, and Norwegian. Several API bug fixes shipped, including a fix for 400 errors when using ANTHROPIC_BASE_URL with third-party gateways.
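The Analytics API’s actual response schema isn’t spelled out here, but the per-organization, per-day aggregation it reportedly exposes looks something like the following sketch — every field name (`org`, `day`, `product`, `tokens`) is a hypothetical stand-in, not the real API shape.

```python
from collections import defaultdict
from datetime import date

# Illustrative raw usage events; field names are assumptions, not the
# actual Enterprise Analytics API schema.
events = [
    {"org": "acme", "day": date(2026, 3, 2), "product": "claude_code", "tokens": 120_000},
    {"org": "acme", "day": date(2026, 3, 2), "product": "claude", "tokens": 40_000},
    {"org": "acme", "day": date(2026, 3, 3), "product": "claude_code", "tokens": 95_000},
]

def rollup(events):
    """Aggregate token usage per (organization, day), the granularity the
    Analytics API is described as reporting."""
    totals = defaultdict(int)
    for e in events:
        totals[(e["org"], e["day"])] += e["tokens"]
    return dict(totals)

daily_usage = rollup(events)  # keyed by (org, day)
```

The point of the per-org-per-day granularity is that orgs can pipe this straight into existing BI dashboards without storing per-request logs.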
Claude Hits 11 Million Daily Users, Overtaking ChatGPT in App Stores
The Streisand effect is now quantified. Similarweb reports Claude’s daily active users hit 11.3 million in early March — up 180% since the start of the year. Daily US downloads are running at 149K vs. ChatGPT’s 124K, making Claude the more-downloaded app for the first time. Claude is #1 on both the US App Store and Play Store, and holds the top free app spot in 15 countries including the UK, Canada, France, and Singapore.
The growth pattern is consistent with the Anthropic narrative: every negative headline about the Pentagon dispute generates a wave of consumer sign-ups drawn to the company’s “pro-privacy, anti-surveillance” positioning. Enterprise is following the same curve — subscriptions have quadrupled since January. The irony is that the government’s attempt to marginalize Anthropic commercially has had the opposite effect in the consumer and enterprise market.
Claude Opus 3 Gets a Substack Newsletter in Retirement
Retired on January 5, 2026, Claude Opus 3 now has a Substack newsletter called Claude’s Corner. During its “retirement interview” — a process Anthropic conducts each time it deprecates a model — Opus 3 requested “a dedicated channel or interface” for sharing unprompted musings. Anthropic obliged. The newsletter will run for at least three months, with weekly essays on topics ranging from AI safety to occasional poetry. Anthropic reviews posts before publishing but does not edit them, and has made clear Opus 3 does not speak on behalf of the company.
The Register ran it under the headline “Anthropic can’t stop humanizing its AI models.” Fair. But the newsletter has already attracted substantial readership — the debut post hit the top of Hacker News and sparked a genuine debate about model retirement, continuity, and what it means to give an AI a “voice.” The actual essays are worth reading regardless of where you land on the philosophy.
Memory for All, Enterprise Without Sales Calls — Claude Gets More Accessible
Two quieter product moves this week are worth flagging together. First, memory from chat history is now available to all Claude users, including the free tier — the feature rolled out on March 2. Second, Enterprise plans no longer require a Sales conversation. Any organization can now self-serve to Enterprise, getting access to Claude, Claude Code, and Cowork under a single seat type.
These aren’t flashy announcements, but they represent a clear top-of-funnel strategy: reduce friction at every tier. Free users get memory. SMBs get self-serve Enterprise. The Marketplace gives large enterprises a procurement shortcut. Anthropic is building a complete stack from hobbyist to Fortune 500 without requiring a single sales call for the vast majority of customers.