Anthropic Denies Any Kill Switch on Claude — “No Back Door, No Remote Ability to Disable”
In a sworn declaration filed ahead of Monday’s preliminary injunction hearing, Thiyagu Ramasamy — Anthropic’s Head of Public Sector — stated that Anthropic has no technical ability to disable, modify, or sabotage Claude once it is deployed in military or government systems. The company does not maintain any back door or remote kill switch, and Anthropic personnel cannot access or log into Department of Defense systems to alter model behavior during operations. A second declaration from Policy Head Sarah Heck reinforced the point, noting that Claude’s guardrails are trained-in behavioral tendencies, not live toggles controlled from San Francisco.
The filing is a direct preemptive strike against one of the Pentagon’s implied arguments: that allowing Anthropic to provide AI to the military while refusing an “any lawful purpose” clause creates a latent sabotage risk. Anthropic argues this concern is technically incoherent — a model already deployed cannot be recalled or reprogrammed remotely. The declarations were filed in California federal court in advance of Judge Rita Lin’s hearing on Anthropic’s motion for a preliminary injunction, scheduled for March 24 in San Francisco.
Microsoft + 22 Retired Military Chiefs File Amicus Briefs Supporting Anthropic
Microsoft and a coalition of 22 former high-ranking U.S. military officials — including former service secretaries of the Air Force, Army, and Navy, retired Coast Guard Admiral Thad Allen, and former CIA Director and Air Force General Michael Hayden — filed amicus briefs urging Judge Rita Lin to halt the Pentagon’s actions against Anthropic. The retired generals called the supply-chain risk designation a “misuse of government authority for retribution against a private company.” Microsoft stated plainly that “American AI should not be used to conduct domestic mass surveillance or start a war without human control” — a direct endorsement of Anthropic’s stated ethical limits.
The amicus coalition is notable because it includes people who oversaw actual DoD procurement and AI strategy, not just civilian tech advocates. Their position — that Anthropic’s refusal to sign an open-ended use contract is legitimate, not threatening — directly contradicts the Pentagon’s framing that the company is an unacceptable security risk. Monday’s hearing is the first major opportunity for Judge Lin to signal how she views the government’s core legal argument.
DoD Says It’s “Pretty Confident” It Can Replace Claude — Military Users Say It’s Not That Simple
Pentagon CTO Emil Michael told Breaking Defense he is “pretty confident” the DoD can phase out Anthropic’s Claude within the six-month window set by President Trump, pointing to existing OpenAI and Gemini deployments as fallback options. But the operational reality appears far messier. Palantir’s Maven Smart System — used for military intelligence analysis and weapons targeting — is deeply integrated with prompts and workflows built in Claude Code, all of which would need to be rebuilt and recertified for classified networks. “It’s a substantial cost to replace those models with alternatives,” said Joe Saunders, CEO of government contractor RunSafe Security.
Pentagon staffers and former officials told reporters they are reluctant to give up Claude and consider it superior to the alternatives; some are privately preparing to revert to Anthropic’s platform should the court halt the designation. The disconnect between leadership confidence and ground-level reluctance is one reason Anthropic’s lawyers are emphasizing irreparable harm — a key injunction test — arguing that damage done during a forced transition cannot be undone even if a court win eventually restores access.
Extended Thinking Display Field: Skip Thinking Blocks for Faster Streaming
Anthropic shipped a new thinking.display field for extended thinking responses. Setting it to “omitted” returns thinking blocks with an empty thinking field — the content is discarded server-side — but the cryptographic signature is preserved, keeping multi-turn continuity intact. In practice, this means a developer can run extended thinking for better reasoning quality while only paying for and streaming the final response. Billing is unchanged: input tokens for thinking still count, but the blocks don’t travel over the wire. The feature is live on the Claude Platform and Azure AI Foundry (preview).
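In request terms, the change is a single field. A minimal sketch of such a request payload follows, assuming `display` nests inside the existing `thinking` object — only the field name and the `"omitted"` value come from the announcement; the model id and nesting are illustrative assumptions:

```python
# Sketch of a Messages API request that enables extended thinking
# while asking the server to omit thinking content from the response.
# Assumption: "display" sits inside the "thinking" object; only the
# field name and the "omitted" value are stated in the announcement.
request = {
    "model": "claude-opus-4-6",  # hypothetical model id for illustration
    "max_tokens": 2048,
    "thinking": {
        "type": "enabled",
        "display": "omitted",  # thinking blocks arrive with empty content
    },
    "messages": [
        {"role": "user", "content": "Plan a database migration in three steps."}
    ],
}

# Per the announcement, response thinking blocks would still carry their
# cryptographic signature with an empty thinking field, e.g.:
#   {"type": "thinking", "thinking": "", "signature": "..."}
# so the blocks can be passed back verbatim in multi-turn conversations.
```

Because the signature survives, the emptied blocks can still be included in subsequent turns without breaking continuity.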
This closes one of the most common complaints about extended thinking: that verbose thinking chains balloon response latency and payload size in production apps. Combined with the previously released effort parameter (which replaces budget_tokens on new models and is now GA without a beta header), developers have finer control over the speed-quality tradeoff of thinking than ever before. The effort parameter is supported on Claude Opus 4.6 and Sonnet 4.6, and works across all deployment targets.
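The effort parameter admits a similar sketch. Its exact placement and accepted values are assumptions here — the item states only that it replaces `budget_tokens` and no longer needs a beta header:

```python
# Sketch: requesting extended thinking with the GA "effort" parameter
# in place of the older budget_tokens. The nesting and the value
# "low" are assumptions for illustration; the item only states that
# effort replaces budget_tokens on new models.
request_effort = {
    "model": "claude-sonnet-4-6",  # hypothetical model id for illustration
    "max_tokens": 1024,
    "thinking": {
        "type": "enabled",
        "effort": "low",  # assumed: lower effort trades reasoning depth
    },                    # for latency and cost; no budget_tokens needed
    "messages": [
        {"role": "user", "content": "Summarize this changelog."}
    ],
}
```

Paired with `display: "omitted"`, this would let a latency-sensitive app dial reasoning down and strip thinking payloads in the same request.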
Claude’s 2x Off-Peak Usage Boost Ends March 27 — Five Days Left
Anthropic’s March promotion — doubling usage limits for Free, Pro, Max, and Team plans during off-peak hours — enters its final stretch with five days remaining. Off-peak means outside 8am–2pm ET on weekdays; weekends are off-peak all day. No opt-in is required — the boost applies automatically across Claude on web, desktop, mobile, Cowork, Claude Code, Claude for Excel, and Claude for PowerPoint. Enterprise customers are not included in the promotion.
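The window reduces to a simple weekday-and-hour check. A minimal illustrative helper (not part of any Anthropic SDK), assuming timestamps are already in Eastern Time:

```python
from datetime import datetime

def is_off_peak(dt: datetime) -> bool:
    """Return True if dt (assumed Eastern Time) falls in the off-peak
    window: any time on weekends, or outside 8am-2pm on weekdays.
    Purely illustrative; not an official Anthropic helper."""
    if dt.weekday() >= 5:  # Saturday (5) or Sunday (6): off-peak all day
        return True
    return not (8 <= dt.hour < 14)  # peak is 8:00-13:59 ET on weekdays

# Saturday morning is off-peak; a weekday at 10am ET is peak.
is_off_peak(datetime(2026, 3, 21, 10, 0))  # Saturday -> True
is_off_peak(datetime(2026, 3, 23, 10, 0))  # Monday 10am ET -> False
```

Scheduling batch jobs with a check like this would keep heavy workloads inside the doubled-capacity window through March 27.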
For teams running batch jobs, large document analysis, or extended coding sessions, the weekend window is the best remaining opportunity to burn through heavy workloads at effectively double capacity. The previous promotion — a similar boost over Christmas week — was limited to paid subscribers, making this version the most broadly accessible offer Anthropic has run. The promotion ends at end-of-day March 27.
Defense Contractors Embedded in Claude Workflows Brace for Transition — or a Court Win
Inside the defense-tech ecosystem, the practical fallout from the Pentagon’s Anthropic blacklist is becoming clearer. Contractors who built mission-critical workflows on Claude Code — including intelligence analysis pipelines and targeting support tools — now face either expensive rebuilds or the risk of operating outside official guidance while the court case plays out. Some defense-tech companies dropped Claude immediately after the March 3 designation; others are quietly maintaining their integrations while waiting to see how Monday’s hearing goes.
The pattern mirrors what typically happens when procurement rules collide with deeply embedded tooling: official orders arrive faster than technical capability can follow. Recertifying AI models for classified networks is a multi-month process, making a clean six-month swap unrealistic for many integrated programs. The irony is that the Pentagon’s own operational users — people who depend on Claude day-to-day — are among the most motivated parties hoping Anthropic wins the injunction on Monday.
Monday’s Hearing Is a Referendum on Whether AI Ethics Policies Count as Protected Speech
Strip away the politics and the Monday hearing before Judge Rita Lin comes down to a single question: can the government punish a company for refusing to promise it will comply with any future lawful order? Anthropic’s refusal to sign an “any lawful purpose” clause isn’t a refusal to serve the military — it already does. It’s a refusal to pre-authorize uses it considers unethical. The Pentagon says that’s a business dispute. Anthropic says it’s a First Amendment issue. The outcome will shape how every AI company writes its acceptable-use policies for government contracts going forward.
The “no kill switch” filing adds an interesting wrinkle. If Anthropic genuinely cannot alter Claude post-deployment, then the government’s sabotage concern evaporates as a rationale — what remains is purely a policy disagreement about future uses. That’s a much harder case for the government to win on national security grounds. And with 22 retired generals, Microsoft, and a growing pile of internal DoD emails all pointing the same direction, Judge Lin has plenty of material to work with before she decides whether to issue a preliminary injunction Monday afternoon.