Anthropic Model Finds 271 Vulnerabilities in Firefox

According to Mozilla, an early preview of Anthropic's Claude Mythos helped the Firefox team identify and patch 271 vulnerabilities in the Firefox 150 release. Mozilla says the team previously used Anthropic's Opus 4.6 to find 22 bugs in Firefox 148; Mozilla CTO Bobby Holley said the Mythos results produced "vertigo" and concluded, "Defenders finally have a chance to win, decisively" (Mozilla blog). SecurityWeek reports that more than 40 CVEs were addressed in Firefox 150, only three of which are officially credited to Claude, and that access to Mythos remains restricted under Anthropic's Project Glasswing program. Industry context: the episode validates AI-assisted, large-scale code auditing as operationally powerful while raising immediate triage, tooling, and controls questions for security teams.
What happened
According to Mozilla, the Firefox security team used an early preview of Anthropic's Claude Mythos to scan Firefox, and the Firefox 150 release includes fixes for 271 vulnerabilities identified during that initial evaluation (Mozilla blog, April 21, 2026). Mozilla also reports that the team previously used Anthropic's Opus 4.6 to find 22 security-sensitive bugs in Firefox 148 (Mozilla blog). Mozilla CTO Bobby Holley is quoted as saying the findings produced "vertigo" and that "Defenders finally have a chance to win, decisively" (Mozilla blog). SecurityWeek reports that Firefox 150 addresses more than 40 CVEs, but only three CVEs are officially credited to Claude: CVE-2026-6746, CVE-2026-6757, and CVE-2026-6758 (SecurityWeek, April 22, 2026). SecurityWeek and Anthropic public materials indicate that access to Claude Mythos remains limited to a set of major partners under Anthropic's Project Glasswing program (SecurityWeek; Anthropic red team post).
Technical details
Anthropic's public red-team commentary on Claude Mythos Preview describes the model as purpose-built for cybersecurity tasks and claims strong capabilities in semantic code reasoning and multi-step vulnerability discovery (Anthropic red team). Security reporting highlights two technical patterns in the Mozilla engagement: Claude Mythos scaled reasoning over codebases to propose many candidate issues, and it integrated with fuzzing-style workflows and repro tooling to accelerate verification and patching (Mozilla blog; SecurityWeek). Mozilla stated that the vulnerabilities identified did not include new classes of bugs that human experts could not find, saying the issues "could have also been found by an elite human researcher" (SecurityWeek quoting Mozilla).
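Neither Mozilla nor Anthropic has published the verification tooling behind this workflow, but the fuzzing-style repro pattern described above can be sketched as a loop that replays each model-proposed candidate against an instrumented harness and records whether it reproduces. All names here (the `Candidate` fields, the harness command) are hypothetical illustrations, not Mozilla's actual pipeline:

```python
import subprocess
from dataclasses import dataclass


@dataclass
class Candidate:
    """A model-proposed finding: an identifier, the component it points
    at, and a candidate input intended to reproduce the issue."""
    finding_id: str
    component: str
    repro_input: bytes


def verify(candidate: Candidate, harness_cmd: list[str], timeout_s: int = 30) -> str:
    """Replay the candidate's repro input against an instrumented harness.

    Returns 'confirmed' if the harness exits nonzero (e.g. a sanitizer
    abort), 'not-reproduced' on a clean exit, and 'timeout' on a hang.
    """
    try:
        result = subprocess.run(
            harness_cmd,
            input=candidate.repro_input,
            capture_output=True,
            timeout=timeout_s,
        )
    except subprocess.TimeoutExpired:
        return "timeout"
    return "confirmed" if result.returncode != 0 else "not-reproduced"
```

In practice a harness like this would be an ASan/UBSan-instrumented build of the target component; the point of the pattern is that verification is mechanical, so a high volume of model-generated candidates can be filtered before any human looks at them.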
Editorial analysis - technical context: Companies and teams that apply large, reasoning-capable foundation models to codebases and fuzzing pipelines typically see a sharp increase in candidate findings. This often shifts work from discovery to triage, repro automation, and patch pipeline throughput. Observed patterns in prior deployments show heightened demand for automated verification, confidence scoring, and developer tooling to convert candidates into actionable fixes without overwhelming maintainers.
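The confidence-scoring step mentioned above can be illustrated with a minimal triage queue that ranks candidates by model confidence weighted by estimated severity, so maintainers see the most credible, highest-impact items first. The field names (`confidence`, `severity`) are hypothetical, chosen for illustration:

```python
import heapq
from dataclasses import dataclass, field


@dataclass(order=True)
class TriageItem:
    # Negated score, so heapq (a min-heap) pops the highest-priority item first.
    priority: float
    finding_id: str = field(compare=False)


def rank_findings(findings: list[dict]) -> list[str]:
    """Order candidate findings for human review.

    Each finding carries a model 'confidence' in [0, 1] and an estimated
    'severity' in [0, 10]; priority is their product.
    """
    heap = [TriageItem(-(f["confidence"] * f["severity"]), f["id"]) for f in findings]
    heapq.heapify(heap)
    return [heapq.heappop(heap).finding_id for _ in range(len(heap))]
```

A real deployment would fold in more signals (reachability, exploit primitives, repro status from the verification stage), but even this simple product turns an undifferentiated flood of candidates into an ordered work queue.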
Context and significance
Industry context
The Mozilla-Anthropic episode is an early production-scale demonstration that frontier models can materially increase vulnerability discovery velocity for well-resourced defenders. Reporting by Mozilla and SecurityWeek frames the result as operationally disruptive rather than existential; Mozilla emphasizes that the bugs were finite and in principle discoverable by skilled humans, while Anthropic materials and partner reports emphasize the model's ability to chain findings and accelerate coverage. Access controls matter: SecurityWeek notes Anthropic has restricted Mythos access to select partners through Project Glasswing, limiting immediate widespread use.
What to watch
- Indicators of operational strain: audit teams reporting backlog, increased patch churn, or expanded hiring for triage and repro automation. These are observable signals that follow high-volume AI-assisted scans.
- Disclosure patterns: whether future browser or critical-infrastructure releases include similar volumes of AI-attributed findings and how many of those become public CVEs.
- Controls and misuse mitigation: public documentation from Anthropic and other model vendors on guardrails, unintended-reuse protections, and partner vetting for security-focused models.
For practitioners: Expect an increased focus on engineering around repro, automated exploit validation, and patch deployment pipelines if AI-assisted discovery becomes common. Industry observers should track both tooling investments and disclosure norms as AI changes vulnerability throughput.
Scoring Rationale
This is a notable security development showing frontier models materially increase vulnerability discovery velocity, which matters for defenders and tooling builders. The story is significant but not a model-release paradigm shift, and access restrictions limit immediate ecosystem-wide impact.