Bessent and Powell Summon Bank CEOs to Emergency Meeting Over Mythos Cyber Risks
Treasury Secretary Scott Bessent and Federal Reserve Chair Jerome Powell convened the heads of America’s largest banks at Treasury headquarters on Tuesday to discuss the cybersecurity implications of Anthropic’s Claude Mythos model. The meeting was designed to ensure banks are aware of the risks Mythos-class models pose and are taking precautions to defend their systems. JPMorgan’s Jamie Dimon was the only major CEO who could not attend.
The urgency stems from Mythos Preview’s demonstrated ability to autonomously identify thousands of high-severity zero-day vulnerabilities across every major OS and web browser. Financial institutions represent some of the highest-value targets for adversaries wielding such capabilities, and regulators want banks patching and hardening now — not after the first incident.
D.C. Circuit Denies Anthropic Emergency Stay — Pentagon Blacklist Remains in Force
A D.C. Circuit appeals panel denied Anthropic’s emergency bid to block the Pentagon’s supply-chain-risk designation on April 8, keeping the ban on Claude in military procurement firmly in place. The DOD declared Anthropic a supply chain risk in early March over two clauses in Anthropic’s terms of service: a ban on fully autonomous weapons systems and a prohibition on mass surveillance of US citizens.
The ruling creates a split with a San Francisco federal court that previously granted Anthropic a preliminary injunction in a separate case. Expedited oral arguments are set for May 19, and the outcome could reshape US government AI procurement policy for years. Anthropic continues to serve non-DOD federal agencies under the San Francisco ruling.
Security Experts React to Project Glasswing — Cautious Optimism With Open Questions
In the days since Anthropic launched Project Glasswing and revealed Mythos Preview’s vulnerability-finding capabilities, cybersecurity experts have weighed in with a mix of cautious optimism and concern. Simon Willison called the restricted-access approach “necessary,” noting that limiting Mythos to defensive use by vetted partners is the responsible move given the model’s capabilities. Security Magazine surveyed practitioners who praised the transparency of disclosing what Mythos can do before wide release.
The open question: how long can restricted access hold? Mythos-class capabilities are an emergent property of stronger coding and reasoning, not a feature Anthropic bolted on. That means competing models will eventually reach similar thresholds, and the industry needs the safeguards Anthropic is building — whether or not Mythos itself ever ships publicly.
Claude Haiku 3 Retires April 19 — Migrate to Haiku 4.5 Now
Anthropic’s original budget model claude-3-haiku-20240307 hits end-of-life on April 19, 2026 — eight days from now. After that date, API calls to the model will fail. The recommended migration path is Claude Haiku 4.5 (claude-haiku-4-5-20251001), which is faster, more capable, and priced at $0.80/$4.00 per million input/output tokens. Teams still running Haiku 3 in production should test against Haiku 4.5 this weekend and update model strings before the deadline.
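For teams that hard-code the model ID in config files, the swap itself can be as mechanical as a find-and-replace. A minimal sketch — the demo file is illustrative, and the GNU sed invocation differs slightly on macOS/BSD:

```shell
# Illustrative config file still pinned to the deprecated model
printf 'model: claude-3-haiku-20240307\n' > app-config.yaml

# Swap in the Haiku 4.5 model string (GNU sed; on macOS/BSD use: sed -i '')
sed -i 's/claude-3-haiku-20240307/claude-haiku-4-5-20251001/g' app-config.yaml

cat app-config.yaml   # model: claude-haiku-4-5-20251001
```

The string swap is the easy part; run your integration tests against the new model before the 19th, since faster and cheaper does not mean byte-identical output.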
Anthropic Launches ant CLI — A Typed Command-Line Client for the Claude API
Anthropic shipped ant, a new CLI tool for interacting with the Claude API directly from the terminal. Unlike raw curl calls, ant builds request bodies from typed flags or piped YAML, inlines file contents with @path references, and extracts response fields with a built-in --transform query. It ships with completion scripts for bash, zsh, fish, and PowerShell. Claude Code can shell out to ant natively, parsing structured output without custom integration code.
Install with go install 'github.com/anthropics/anthropic-cli/cmd/ant@latest'. The tool reads your API key from ANTHROPIC_API_KEY.
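The launch notes name @path inlining, piped-YAML input, and the --transform query, but not the full flag surface, so treat the following as a hypothetical sketch: every subcommand and flag name here other than --transform and @path is an assumption.

```shell
# Hypothetical invocation -- subcommand and flag names other than
# --transform and @path references are assumptions, not documented syntax
ant messages create \
  --model claude-haiku-4-5-20251001 \
  --max-tokens 1024 \
  --user 'Summarize @notes.md in three bullets' \
  --transform '.content[0].text'

# Or build the request body from piped YAML (request shape assumed)
cat request.yaml | ant --transform '.content[0].text'
```

Check ant --help once installed for the actual syntax.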
1M Token Context Window Beta Ends April 30 for Sonnet 4.5 and Sonnet 4
A reminder that the 1M token context window beta on Claude Sonnet 4.5 and Claude Sonnet 4 sunsets on April 30, 2026. Teams using these models with extended context need to migrate to Claude Sonnet 4.6 or Claude Opus 4.6, both of which support the full 1M window at standard pricing with no beta header. This is a two-week heads-up — update your model strings and test against the new versions before the cutoff.
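In practice the migration is two edits per call site: change the model string and drop the beta header. A minimal sketch — the demo file is illustrative, and both the claude-sonnet-4-6 model string and the exact beta header name are assumptions to verify against the current docs:

```shell
# Illustrative request config still on the Sonnet 4.5 1M-context beta
cat > request-headers.txt <<'EOF'
model: claude-sonnet-4-5
anthropic-beta: context-1m-2025-08-07
EOF

# Move to Sonnet 4.6 and drop the beta header, which is no longer needed
# (GNU sed; model string and header name are assumptions -- verify in the docs)
sed -i -e 's/claude-sonnet-4-5/claude-sonnet-4-6/' \
       -e '/^anthropic-beta: context-1m/d' request-headers.txt

cat request-headers.txt   # model: claude-sonnet-4-6
```

As with any model bump, test long-context workloads against the new version before the cutoff rather than after.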
Software Stocks Slide 2.6% as Mythos Reignites AI Disruption Fears
The S&P 500 Software and Services Index fell 2.6% on Thursday, extending the sector’s brutal 2026 decline to 25.5% year-to-date. Cybersecurity names took the hardest hit: Cloudflare, Okta, CrowdStrike, and SentinelOne posted declines of between 4.9% and 6.5%. The catalyst was Anthropic’s Mythos announcement earlier in the week, which demonstrated that AI models can now find and exploit software vulnerabilities at a scale that unsettles investors who had bet on traditional security vendors.
The sell-off underscores a broader market anxiety: if AI can automate vulnerability discovery at this level, what does that mean for the business models of companies that charge humans to do the same work?
Anthropic Expands Google Cloud and TPU Partnership
Anthropic announced an expansion of its use of Google Cloud services and TPU chips on April 6, scaling its infrastructure for foundation model training, agents, and enterprise applications. The expanded partnership deepens Anthropic’s multi-cloud compute strategy alongside the CoreWeave deal announced this week and its existing AWS commitment. Together, the moves give Anthropic access to three distinct GPU/TPU supply chains — critical redundancy for a company at $30B ARR that cannot afford a single-vendor bottleneck.
Mythos Broke the Assumption That AI Safety and AI Capability Are Separate Conversations
Before this week, policymakers treated AI safety as a future-tense problem — something to regulate before it mattered. Mythos collapsed that timeline. When the Treasury Secretary and the Fed Chair are pulling bank CEOs into emergency meetings about a model that found zero-days in every major OS, you are no longer debating hypotheticals. You are doing incident response for a capability that already exists.
The Pentagon blacklist adds another layer. The same company building the most capable defensive cyber tool on the planet is locked out of military procurement because its terms of service prohibit autonomous weapons. That is not an irony you can resolve with a policy memo. It is a structural tension between the kind of company Anthropic wants to be and the kind of company the US government needs its AI vendors to be. The May 19 oral arguments will not settle that tension, but they will tell us which direction the courts are leaning. Everyone building on Claude should be watching.