Claude Suffers 3.5-Hour Outage — Over 5,300 Reports at Peak
Claude experienced widespread service disruptions on March 25, starting in the early afternoon UTC. Downdetector reports surged past 5,300 at peak, with users on both web and mobile reporting failures across chat, Claude Code, and Cowork. Anthropic’s status page flagged elevated error rates on Claude Opus 4.6 and connection reset errors in Cowork.
Anthropic confirmed the root cause was identified and a fix was deployed. The outage lasted 3 hours and 34 minutes before full recovery. The company did not share technical details about what went wrong, but said its engineers are working to prevent a recurrence. This is the second notable outage in March, following the widespread disruption on March 2 that also affected thousands of users.
Pentagon Ruling Watch: Anthropic Had Requested Decision by Today
The preliminary injunction ruling from Judge Rita Lin could land at any moment. After Tuesday’s hearing — where she called the Pentagon’s Anthropic ban “troubling” and said it looked like punishment — Lin said she expected to rule within days. Anthropic had formally requested a decision by March 26 (today), though the court is not bound by that date.
The stakes are enormous. If Lin grants the injunction, the Pentagon’s supply-chain risk designation would be suspended while the full lawsuit proceeds — effectively restoring Anthropic’s ability to do government business and removing the chilling effect on commercial partners. If she denies it, Anthropic’s IPO timeline and commercial momentum face significant headwinds. Either way, the ruling will be the first judicial test of whether the administration’s actions against an AI company constitute unlawful retaliation.
Pentagon CIO Confirms Claude Used in Operation Epic Fury Against Iran
Pentagon Chief Information Officer Kirsten A. Davies confirmed that the U.S. military is using Anthropic’s Claude AI as part of Operation Epic Fury, the ongoing military campaign against Iran. The confirmation adds a striking layer to the Pentagon ban saga: the same government that designated Anthropic a supply-chain risk is actively using Claude in a live military operation.
The revelation undercuts the Pentagon’s argument that Claude poses a national security threat. If the AI is trusted enough for wartime operations, the supply-chain risk designation looks even more like the retaliatory move Judge Lin suggested it might be. Anthropic has not commented on the specific military use case, consistent with its stated policy of prohibiting Claude’s use for autonomous weapons or citizen surveillance — the very stance that triggered the dispute.
Anthropic Economic Index: Experienced Users Are 10% More Successful, Tackle Harder Work
Anthropic published its March Economic Index report, analyzing one million conversations from February 2026. The headline finding: users with six or more months on the platform see roughly a 10% higher success rate in their conversations, a gap that persists after controlling for task type, geography, and model selection. Anthropic attributes this to “learning-by-doing” — users get better at prompting, structuring tasks, and iterating with the AI over time.
The more striking data point: each additional year of Claude usage correlates with users tackling tasks that require approximately one additional year of formal education to understand. Experienced users are not just doing the same work faster — they’re doing fundamentally harder work. They also show more “task iteration” (back-and-forth refinement), while newer users tend toward single-shot directive interactions. The implication for businesses: the ROI from AI tools compounds with experience, and the learning curve is real but rewarding.
Computer Use in Cowork Now Rolling Out — Control Your Mac While Away
Computer use in Cowork and Claude Code continues its rollout to Pro and Max subscribers on macOS. Claude can now point, click, and navigate your desktop autonomously — opening apps, using the browser, filling spreadsheets, and running dev tools without setup. When a dedicated connector exists (Google Workspace, Slack), Claude uses it first; when none does, it falls back to direct mouse and keyboard control.
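For the curious, the connector-first routing described above can be sketched in a few lines. This is purely illustrative logic under assumed names (`CONNECTORS`, `run_task`) — not Anthropic's implementation:

```python
# Illustrative sketch of connector-first routing; the connector table
# and function names are assumptions, not Anthropic's actual code.
CONNECTORS = {
    "google_workspace": lambda task: f"connector: {task}",
    "slack": lambda task: f"connector: {task}",
}

def run_task(app: str, task: str) -> str:
    """Prefer a dedicated connector when one exists; otherwise fall back
    to direct mouse-and-keyboard control of the app's GUI."""
    handler = CONNECTORS.get(app)
    if handler is not None:
        return handler(task)        # structured connector path
    return f"gui-fallback: {task}"  # simulated click/type path
```

The design choice mirrors a common agent pattern: structured APIs are faster and less error-prone than screen control, so GUI automation is reserved for apps with no integration.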
The capability pairs with Dispatch, released last week, which lets you assign tasks from your iPhone. The workflow: message Claude from your phone while commuting, and return to finished work on your desktop. Anthropic says safeguards are built in — Claude requests permission before accessing new apps. The feature is still in research preview and being tuned based on user feedback. This is the most tangible step yet toward Claude as a persistent desktop agent, not just a chat window.
2x Usage Boost Ends Tomorrow — Last Day to Maximize Off-Peak Limits
Anthropic’s March off-peak promotion expires at the end of March 27 — that’s tomorrow. Free, Pro, Max, and Team subscribers get double usage limits outside 8am–2pm ET on weekdays, and all day on weekends. The boost applies across Claude on web, desktop, mobile, Cowork, Claude Code, Claude for Excel, and Claude for PowerPoint. Enterprise customers are excluded. Unused capacity doesn’t roll over, so tonight and tomorrow are the last windows.
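For anyone scheduling heavy workloads around the window, the boost schedule reduces to a small predicate. This is a hypothetical sketch based on the rules above; the `is_boosted` function and tier names are illustrative, not part of any Anthropic API:

```python
from datetime import datetime
from zoneinfo import ZoneInfo

ET = ZoneInfo("America/New_York")

def is_boosted(ts: datetime, tier: str) -> bool:
    """Return True if the 2x off-peak boost applies at `ts` for `tier`.

    Rules as described: Enterprise is excluded; weekends are boosted
    all day; weekdays are boosted outside the 8am-2pm ET peak window.
    """
    if tier == "enterprise":
        return False
    local = ts.astimezone(ET)
    if local.weekday() >= 5:           # Saturday/Sunday: boosted all day
        return True
    return not (8 <= local.hour < 14)  # weekday: boosted outside 8am-2pm ET
```

Using `zoneinfo` keeps the check correct across daylight saving transitions, since the window is pegged to Eastern Time rather than UTC.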
India Leads in Claude Coding and Job-Related Tasks Despite Low Overall Usage
A secondary finding from the Economic Index report: India ranks lower in overall Claude usage compared to the U.S. and Europe, but Indian users disproportionately use Claude for coding and job-related tasks. This pattern suggests Claude’s early adoption in India is concentrated among developers and professionals using it as a productivity tool rather than for general conversation. The data highlights regional differences in how AI tools get adopted — technical use cases lead in markets where AI is still gaining mainstream traction.
The Pentagon Paradox: Banning the AI You’re Using in a War
The confirmation that Claude is actively being used in Operation Epic Fury creates a contradiction that’s hard to explain away. The Pentagon designated Anthropic a supply-chain risk — a label normally reserved for foreign adversaries — while simultaneously relying on Claude in a live military campaign. If the AI is trustworthy enough for wartime ops, the national security argument for the ban falls apart.
This is likely why Judge Lin was so skeptical on Tuesday. The government’s position was already shaky: punishing a company for refusing to build autonomous weapons is a tough sell in federal court. Now add the fact that the military is using the product it claims threatens national security, and the case looks even more like retaliation than policy. If the ruling comes today — as Anthropic requested — this dynamic will be hard for the government to overcome. Meanwhile, the outage yesterday is a reminder that Anthropic’s more immediate challenge might be reliability. Two notable outages in one month, at a time when the company is pitching itself as enterprise-grade infrastructure, is not ideal timing.