Bank of America Expands Use of Anthropic AI

ITSecurityNews and TheStreet report that Bank of America is expanding its use of Anthropic's Claude Mythos Preview model even as U.S. regulators and central bankers issue cybersecurity warnings. According to ITSecurityNews, Mythos has identified thousands of high-severity vulnerabilities in major operating systems and browsers, and Anthropic has limited access to a private review group of tech and banking experts. The outlet also reports that Bank of America has allocated a $13.5 billion technology budget, including $4 billion for AI initiatives, and that more than 90% of its 200,000+ employees use AI tools daily, with a client-facing assistant logging three billion interactions in 2025. Editorial analysis: industry observers should view this as an example of a large financial institution accelerating AI deployment despite elevated operational and regulatory risk.
What happened
ITSecurityNews and TheStreet report that Bank of America is expanding its use of Anthropic's Claude Mythos Preview model even after U.S. officials raised security concerns. According to ITSecurityNews, Treasury Secretary Scott Bessent and Federal Reserve Chair Jerome Powell met with bank CEOs in early April to flag risks tied to the model, and Anthropic has restricted access to Mythos to a private group of tech and banking experts. The outlet further reports that Mythos has detected thousands of high-severity vulnerabilities across major operating systems and browsers, a finding cited by multiple regulators and central bankers.
Technical details
ITSecurityNews reports that Anthropic has publicly cautioned that rapidly advancing model capabilities could put powerful vulnerability-detection tools in unsafe hands. Per TheStreet and ITSecurityNews, the reporting frames Claude Mythos Preview as a model whose vulnerability-discovery capabilities have drawn scrutiny from the Bank of England's Andrew Bailey and ECB President Christine Lagarde.
Context and significance
Industry context
Financial institutions have been accelerating AI adoption; ITSecurityNews reports nearly 70% of banks now integrate AI into operations. Industry observers note that when models demonstrate powerful capability for automated vulnerability discovery, tradeoffs emerge between using the models to harden systems and the risk of amplifying exploit knowledge at scale. For practitioners, this raises operational questions about access controls, model auditing, and secure evaluation environments when handling aggressive red-teaming or vulnerability-discovery workloads.
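The access-control and auditing questions raised above can be sketched minimally as a gated model endpoint that logs every request. This is an illustrative assumption only: the class name, allowlist scheme, and hash-chained log format below are hypothetical and do not represent any vendor's actual interface.

```python
# Hypothetical sketch: an access-controlled, audit-logged gateway for
# vulnerability-discovery model calls. All names here (ModelGateway,
# the allowlist, the record fields) are illustrative assumptions.
import hashlib
import json
import time


class ModelGateway:
    """Wraps a model call with an access allowlist and a tamper-evident audit log."""

    def __init__(self, allowlist, model_fn):
        self.allowlist = set(allowlist)   # approved reviewer IDs
        self.model_fn = model_fn          # underlying model call (stubbed below)
        self.audit_log = []               # append-only audit records
        self._prev_hash = "0" * 64        # hash chain for tamper evidence

    def query(self, user_id, prompt):
        allowed = user_id in self.allowlist
        record = {
            "ts": time.time(),
            "user": user_id,
            "prompt_sha256": hashlib.sha256(prompt.encode()).hexdigest(),
            "allowed": allowed,
            "prev": self._prev_hash,
        }
        # Chain each record to the previous one so deletions are detectable.
        self._prev_hash = hashlib.sha256(
            json.dumps(record, sort_keys=True).encode()
        ).hexdigest()
        self.audit_log.append(record)
        if not allowed:
            raise PermissionError(f"{user_id} is not on the review allowlist")
        return self.model_fn(prompt)


# Usage: a stub model function stands in for a real endpoint.
gateway = ModelGateway(
    allowlist=["analyst-01"],
    model_fn=lambda prompt: f"analysis of: {prompt}",
)
print(gateway.query("analyst-01", "scan build 42"))  # allowed, logged
try:
    gateway.query("intern-99", "scan build 42")      # denied, still logged
except PermissionError as exc:
    print(exc)
```

Denied requests are recorded before the exception is raised, so the audit trail captures attempted as well as successful access, which is the property supervisors typically ask about for high-risk workloads.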
What to watch
Watch for regulator guidance or supervisory statements following the April meetings, disclosures about how banks isolate high-risk models in air-gapped or zero-data environments, and public incident reports tied to model-assisted discovery. Observers should also track vendor controls from Anthropic and competing providers on access, logging, and provenance for vulnerability-related outputs.
Reported company detail
ITSecurityNews reports that Bank of America has a $13.5 billion technology budget with $4 billion earmarked for AI initiatives. At industry conferences, CTO Hari Gopalkrishnan has emphasized balancing scale and governance: "If you overdo it, you stall innovation. If you underdo it, you introduce a lot of risk."
Scoring Rationale
The story links a major bank's enterprise AI expansion to a model that regulators and central bankers have publicly flagged for security risk. That combination matters to practitioners focused on model governance, red-teaming, and secure deployment, but it is not a frontier-model or industry-wide regulatory action.