Anthropic Offers Mythos Cyber Model to UK Banks

Anthropic is moving to provide controlled access to its Claude Mythos Preview cybersecurity model to major British banks within days. The model can autonomously discover and, according to reports, exploit software vulnerabilities across major operating systems and web browsers, prompting emergency engagement from the Bank of England, the Financial Conduct Authority, HM Treasury, and the National Cyber Security Centre. US regulators including the Treasury and Federal Reserve have already convened systemically important banks to assess systemic risk. Anthropic is distributing Mythos under Project Glasswing, and says it has paused wider release to allow expert testing and to build safeguards. The development forces banks and regulators to balance defensive uses against acute dual-use risk.
What happened
Anthropic is set to grant controlled access to its Claude Mythos Preview model to major UK banks within days under its access program Project Glasswing. The company says the model can autonomously identify software weaknesses and has been used in testing to find thousands of alleged zero-day vulnerabilities across major web browsers and operating systems. The model has already triggered emergency meetings in Washington and Ottawa; US Treasury Secretary Scott Bessent and Federal Reserve Chair Jerome Powell convened systemically important banks last week. UK bodies including the Bank of England, Financial Conduct Authority, HM Treasury, and the National Cyber Security Centre will brief senior bank executives through CMORG channels.
Technical details
Claude Mythos Preview is described as an advanced code-understanding and modification model that can take a target description and autonomously search for software vulnerabilities. Anthropic temporarily paused a broader release to allow external testing and to mitigate misuse risks. The company claims internal tests uncovered thousands of previously unknown vulnerabilities, a finding that raises both defensive opportunity and exploitation risk. Project Glasswing provides early access to select partners and reportedly includes major cloud and banking partners as launch participants.
- Major stakeholders engaged: central banks, financial regulators, systemically important banks, Anthropic launch partners
- Defensive use case: automated vulnerability scanning and remediation prioritization at scale
- Dual-use risk: the same capabilities can be repurposed to generate exploit code autonomously
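The defensive use case above, remediation prioritization at scale, can be sketched in a few lines. Everything below is an illustrative assumption, not Anthropic's or any bank's actual pipeline: the `Finding` fields, the scoring weights, and the identifiers are all hypothetical.

```python
from dataclasses import dataclass

@dataclass
class Finding:
    finding_id: str           # hypothetical identifier for a discovered weakness
    severity: float           # 0-10, CVSS-style base score
    asset_criticality: float  # 0-1, importance of the affected system
    exploit_available: bool   # whether working exploit code is known to exist

def priority(f: Finding) -> float:
    """Illustrative scoring: severity weighted by asset criticality,
    boosted when an exploit already exists."""
    score = f.severity * f.asset_criticality
    return score * 1.5 if f.exploit_available else score

findings = [
    Finding("FND-001", 9.8, 1.0, True),
    Finding("FND-002", 7.5, 0.4, False),
    Finding("FND-003", 5.3, 0.9, True),
]

# Highest-priority findings first: the remediation queue
for f in sorted(findings, key=priority, reverse=True):
    print(f.finding_id, round(priority(f), 2))
```

The point of the sketch is the shape of the problem, not the formula: when a model can surface thousands of findings, the bottleneck shifts from discovery to triage, and some explicit, auditable ranking function becomes necessary.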
Context and significance
This is a rare instance where a single AI model has escalated to the level of central-bank attention because of potential systemic cyber risk. Regulators are treating the issue as a financial stability concern, as signaled by Powell's participation in US briefings; that reframes an AI safety conversation as an operational resilience conversation for critical infrastructure. For practitioners, the event crystallizes a few ongoing trends: frontier models are increasingly capable of code-level reasoning; dual-use harms now manifest as tangible national-security and systemic-financial risks; and access control, rather than model capability alone, is becoming the primary governance lever.
Why it matters for practitioners
Security teams now face a decision matrix where models like Claude Mythos Preview can compress the vulnerability discovery lifecycle from months to minutes. That raises trade-offs between rapid automated defense and the need for strict access controls, explainability, and audit trails. Organizations adopting such tools must plan for hardened sandboxes, privileged-access management, structured prompt templates, deterministic logging, and legal/regulatory coordination for vulnerability disclosure.
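The deterministic-logging requirement above can be illustrated with a minimal, hypothetical sketch: a wrapper that records a hash-chained audit entry for every model call. The `run_model` stand-in and the log fields are assumptions for illustration, not a real Anthropic API.

```python
import hashlib
import json
import time

def run_model(prompt: str) -> str:
    """Stand-in for a real model call; hypothetical, for illustration only."""
    return f"analysis of: {prompt}"

def audited_call(prompt: str, operator: str, log: list) -> str:
    """Wrap a model call with an append-only, tamper-evident audit record."""
    response = run_model(prompt)
    record = {
        "ts": time.time(),
        "operator": operator,
        "prompt_sha256": hashlib.sha256(prompt.encode()).hexdigest(),
        "response_sha256": hashlib.sha256(response.encode()).hexdigest(),
        # chain each record to the previous one so deletions are detectable
        "prev": log[-1]["digest"] if log else None,
    }
    record["digest"] = hashlib.sha256(
        json.dumps(record, sort_keys=True).encode()
    ).hexdigest()
    log.append(record)
    return response

audit_log: list = []
audited_call("scan build 1042 for memory-safety issues", "analyst-7", audit_log)
audited_call("prioritize findings for remediation", "analyst-7", audit_log)
```

Hashing prompts and responses rather than storing them in plaintext keeps the trail verifiable without leaking vulnerability details into the log itself, one plausible way to reconcile auditability with disclosure constraints.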
What to watch
Will regulators mandate operational conditions for access, such as audited sandboxes, enumerated allowed use cases, or formal disclosure protocols? Will Anthropic deliver capability-limiting controls, robust usage monitoring, or a certification program for enterprise adopters? Also monitor whether other labs release similar forensic-capable models, which would increase the urgency for industry-wide standards and rapid red-team sharing.
"The engagement that I've had from CEOs in the past week in the UK has been significant," said Pip White, Anthropic's head of EMEA, describing demand for controlled access. Anthropic has stated, "We are putting our own safeguards and our own limitations around this product because we know how powerful it can be." These statements underscore a hybrid approach: distribute powerful tooling to defenders under strict guardrails while coordinating with regulators and major institutions on systemic risk mitigation.
Scoring Rationale
The story combines a high-capability model with systemic cyber risk that has drawn central-bank and regulator attention, making it a major security event for the financial sector and AI practitioners. It is not a paradigm shift in model architecture, but its operational and regulatory consequences are material and immediate.
