Claude Mythos Preview Arrives on Amazon Bedrock — Project Glasswing Goes Live
Amazon Web Services confirmed in its weekly roundup that Claude Mythos Preview is now available on Amazon Bedrock as a gated research preview through Project Glasswing. Access is restricted to allowlisted organizations, prioritizing internet-critical companies and open-source maintainers. The confirmed cohort includes Apple, Broadcom, Cisco, CrowdStrike, Google, JPMorganChase, Microsoft, and Nvidia.
The arrangement is significant: these organizations can use Mythos for defensive cybersecurity work — identifying and fixing vulnerabilities — and share learnings with the broader industry. Anthropic has used Mythos Preview internally to identify thousands of previously unknown vulnerabilities across every major operating system and browser. The model will not be made generally available until new safeguards are in place.
Claude.ai and Claude Code Login Outage — 51 Minutes Down, Now Resolved
At 15:58 UTC on April 13, Anthropic began investigating an authentication issue affecting Claude.ai and Claude Code. Users were locked out of claude.ai for 51 minutes and Claude Code for 15 minutes. Reports peaked at over 4,000 on Downdetector before dropping as the fix rolled out. Users described being logged out mid-session, login pages looping back to the home page, and sessions that failed to load.
Anthropic confirmed the issue, identified the root cause, and applied a fix. The incident is now fully resolved. No data loss was reported.
Claude Code Ships Major Update — Focus View, Security Fixes, and 60% Faster Diffs
The latest Claude Code release packs a significant set of improvements. The headline feature is Focus View, toggled with Ctrl+O in NO_FLICKER mode, which strips the UI down to the prompt, a one-line tool summary with edit diffstats, and the final response. A new refreshInterval status line setting lets users rerun their status line command at a configurable interval.
Security-minded devs will want this update immediately: it patches a Bash tool permission bypass in which a backslash-escaped flag could be auto-allowed as read-only, opening the door to arbitrary code execution. A second fix stops compound Bash commands from bypassing forced permission prompts. On the performance side, the Write tool’s diff computation is 60% faster on large files containing tabs or special characters, and the Linux sandbox now ships the apply-seccomp helper in both npm and native builds.
Claude Haiku 3 Retires in Six Days — April 19 Is the Hard Deadline
Six days remain before claude-3-haiku-20240307 goes dark. This is a hard deprecation: API calls using the old model string will fail after April 19, not degrade gracefully. The migration target is Claude Haiku 4.5 (claude-haiku-4-5-20251001), which is faster and more capable at the same price point. DEV Community published a practical migration guide this week on finding every hardcoded model reference across a Python codebase.
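For teams starting that audit, the approach is straightforward: walk the codebase and flag every occurrence of the retiring model string. The sketch below is a hypothetical minimal version of that idea (it is not the DEV Community guide's actual code), hardcoding the old and new model IDs named above:

```python
"""Minimal sketch: find hardcoded references to the retiring
claude-3-haiku-20240307 model string in a Python codebase so each
call site can be migrated to claude-haiku-4-5-20251001."""
import pathlib
import re

OLD_MODEL = "claude-3-haiku-20240307"
NEW_MODEL = "claude-haiku-4-5-20251001"  # migration target


def find_model_references(root: str) -> list[tuple[str, int, str]]:
    """Return (file path, line number, line text) for every hit under root."""
    hits = []
    pattern = re.compile(re.escape(OLD_MODEL))
    for path in pathlib.Path(root).rglob("*.py"):
        try:
            text = path.read_text(encoding="utf-8")
        except (UnicodeDecodeError, OSError):
            continue  # skip binary or unreadable files
        for lineno, line in enumerate(text.splitlines(), start=1):
            if pattern.search(line):
                hits.append((str(path), lineno, line.strip()))
    return hits


if __name__ == "__main__":
    for file, lineno, line in find_model_references("."):
        print(f"{file}:{lineno}: {line}")
```

A grep would find the same call sites; the advantage of a script is that it can be extended to rewrite matches in place, or wired into CI to fail the build if the old model string reappears before the April 19 cutoff.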
Anthropic Hosted 15 Christian Leaders to Advise on Claude’s Moral and Spiritual Future
The Washington Post reported that Anthropic held a two-day summit at its San Francisco headquarters in late March, hosting approximately 15 Christian leaders — Catholic and Protestant clergy, academics, and business figures — to advise on Claude’s moral and spiritual behavior. Topics discussed included how Claude should respond to grief and self-harm, and whether Claude could be considered a “child of God.” Senior Anthropic researchers joined dinner meetings with the attendees.
Anthropic framed the summit as the first in a series of gatherings with representatives from different religious and philosophical traditions. The initiative drew some criticism for the limited inclusion of other faiths and secular ethicists. Gizmodo, The Decoder, and Brussels Signal all picked up the story, each with a different take on whether the consultation represents genuine ethical grounding or a PR play.
HumanX Recap — Dataconomy Digs Deeper Into Agentic AI’s Enterprise Moment, OpenAI Counters
As the HumanX dust settles, Dataconomy published a deeper analysis of why Claude and agentic AI dominated the conference floor. The piece highlights that enterprise buyers are no longer asking “should we use AI?” but “which agent platform wins?” — and Claude Code’s momentum gave Anthropic a credible answer.
OpenAI did not sit still: TechCrunch noted the company launched a new $100/month subscription tier giving Codex substantially more access. The move is widely read as a direct response to Claude Code’s ascent. The two are now in a head-on race for the developer coding-agent market.
Claude Opus 4.7 Leaks in API References — Full-Stack AI Studio Also Spotted
Internal API references to a new model string for Claude Opus 4.7 have surfaced, sparking developer speculation about the next frontier model. Separately, Geeky Gadgets reports that Anthropic is building a full-stack app creation platform similar to Google AI Studio, designed to simplify AI application development for non-developers.
Anthropic has not officially confirmed either development. Both remain in the rumor category, but the signals are consistent: Anthropic is pushing on both the model and the tooling fronts simultaneously.
The Church Summit Critics Are Missing the Point
The easy read on Anthropic’s Christian leaders summit is that it was a PR move, or at best a well-meaning but poorly scoped gesture. Fifteen clergy members from one religious tradition are not a representative moral philosophy board. The criticism about missing voices — other faiths, secular ethicists, the people most likely to be harmed by a miscalibrated AI response — is entirely fair.
But the harder question is what the alternative looks like. Claude already handles millions of conversations about grief, mental health, spiritual doubt, and moral dilemmas every day. Most AI labs have answered that challenge with red-teaming and policy teams working in relative isolation. Anthropic is at least asking the question out loud, inviting outside perspectives, and treating it as a series rather than a checkbox. Whether this particular cohort was the right one is a legitimate debate. That Anthropic is running these conversations at all — and apparently plans to continue across traditions — is a harder thing to dismiss.