Jamie Dimon Warns Mythos Exposes Banking Cyber Vulnerabilities

Jamie Dimon, CEO of JPMorgan Chase, said Anthropic's new Mythos model surfaced additional software vulnerabilities when JPMorgan tested its preview release. Dimon called AI a double-edged sword, saying "AI's made it worse, it's made it harder," because models can reveal new attack vectors even as they ultimately help defenders. He emphasized that banks invest heavily in cybersecurity and coordinate with government, but that risks extend to exchanges and the broader financial plumbing. For practitioners, the takeaway is immediate: treat LLMs as both a threat-actor capability and an asset-discovery tool, update threat models, expand red teaming, and harden patching and incident-response pipelines.
What happened
Jamie Dimon, CEO of JPMorgan Chase, said testing of Anthropic's model Mythos shows "a lot more vulnerabilities need to be fixed," after the bank evaluated the Mythos preview. Dimon framed AI as a double-edged sword, saying "AI's made it worse, it's made it harder," because large models can surface previously unknown weaknesses even as they may eventually aid defense. He stressed banks run continuous, costly cybersecurity programs and coordinate with the U.S. Treasury and government partners.
Technical details
Anthropic's Mythos preview reportedly found vulnerabilities in corporate software that may be exploitable when combined with prompt engineering and automated probing. Key technical implications for practitioners:
- LLMs can act as automated reconnaissance tools that map attack surfaces at scale, accelerating vulnerability discovery.
- Model outputs may suggest exploit chains or misconfigurations when prompted to analyze software behavior, increasing attack fidelity.
- Defensive uses of the same models require strict guardrails, deterministic testing, and controls around model access and prompt logging.
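The guardrail controls in the last bullet can be sketched as a thin wrapper that gates model access behind an allowlist of approved analysis tasks and logs a hash of every prompt and response for audit. This is a minimal illustration, not any bank's or vendor's actual control: the `query_model` client, the task names, and the policy are all invented for this sketch.

```python
import hashlib
import json
import logging
from datetime import datetime, timezone

# Hypothetical guardrail wrapper: every model query is checked against an
# allowlist of approved task types and logged with content hashes, so that
# a security team can audit what the model was asked to analyze without
# storing raw prompts in the log stream.

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("model-audit")

APPROVED_TASKS = {"dependency-audit", "config-review", "log-triage"}

def guarded_query(task_type: str, prompt: str, query_model) -> str:
    """Reject unapproved task types, then log and forward the prompt."""
    if task_type not in APPROVED_TASKS:
        raise PermissionError(f"task type {task_type!r} is not approved")
    record = {
        "ts": datetime.now(timezone.utc).isoformat(),
        "task": task_type,
        "prompt_sha256": hashlib.sha256(prompt.encode()).hexdigest(),
    }
    log.info("model query: %s", json.dumps(record))
    response = query_model(prompt)  # any callable wrapping the model API
    record["response_sha256"] = hashlib.sha256(response.encode()).hexdigest()
    log.info("model response: %s", json.dumps(record))
    return response
```

In practice the same chokepoint is where deterministic testing hooks in: because every call passes through one function, replaying logged prompts against a pinned model version becomes straightforward.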
Context and significance
The comment matters because it moves AI-caused cyber risk from an academic concern into mainstream financial risk management. Major banks already perform continuous scanning, red teaming, and information sharing, but a high-capacity LLM that systematically enumerates vulnerabilities raises the bar for defensive tooling and governance. Regulators and the Treasury have heightened attention, and exchanges and fintech platforms with thin operational security are particularly exposed. This dynamic also reframes vendor risk management: evaluating an LLM provider now includes assessing how models are trained, their safety layers, and disclosure practices when models surface third-party vulnerabilities.
What to watch
Expect immediate operational changes across the financial sector: expanded adversarial red teams, stricter vendor model access controls, mandatory logging of model queries, and government-industry coordination on disclosure and mitigation timelines. Monitor Anthropic's follow-up guidance, any coordinated vulnerability disclosures, and whether regulators propose mandatory controls for LLM testing in critical infrastructure.
Scoring Rationale
High relevance to enterprise security and financial infrastructure because a major bank CEO flagged model-driven vulnerability discovery. Not a paradigm shift, but a notable operational and regulatory inflection point for defenders.