Banks Reinforce Defenses as AI Drives Cyberattacks

Banks are stepping up cybersecurity as AI materially changes the threat landscape. Research from Kroll shows 76% of companies experienced an incident involving AI applications or models in the last two years, and the financial and insurance sector accounted for 27% of incidents. Executives warn that automated tooling compresses the time to find and exploit vulnerabilities, creating a higher-volume, faster-moving adversary. Major banks are accelerating adoption of proven controls such as multi-factor authentication, tighter supply-chain scrutiny, and advanced detection, while also investing in AI-aware defenses: red teaming LLM-generated attacks, bolstering observability, and integrating threat intelligence that recognizes AI-crafted phishing, social engineering, and malware. For practitioners, this raises immediate priorities around model-risk controls, vendor assurance, telemetry coverage, and tabletop exercises that simulate AI-enabled attacker tactics.
What happened
Banks are increasing cybersecurity defenses as AI materially alters attacker capabilities. Research commissioned by Kroll found that 76% of companies experienced an incident involving AI applications or models in the last two years, and that the financial and insurance sector accounted for 27% of incidents. "More is changing now, and faster, than we have seen in a long time; the time to find and exploit vulnerabilities is drastically decreasing," said the chief information security officer at JPMorgan, speaking to the Financial Times.
Technical details
AI is lowering the cost, speed, and scale of attack tooling, enabling rapid generation of convincing phishing content, automated reconnaissance, and tailored social-engineering campaigns. Detection and response now require AI-aware signals and analytics rather than rules tuned to human-crafted attacks. Recommended defensive measures include:
- Expand telemetry and logging to capture user interaction patterns, chain-of-event context, and model-related API usage
- Harden model and vendor risk management with contractual security SLAs, supply-chain audits, and red-team exercises that simulate AI-generated threats
- Deploy anomaly detection and behavioral baselining to spot high-velocity or mass-targeting campaigns
- Increase automation in IR playbooks to contain fast-spreading campaigns and update detection artifacts quickly
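To make the behavioral-baselining recommendation concrete, here is a minimal sketch of velocity-based anomaly detection: learn a per-source baseline from historical event counts (e.g. login attempts per minute) and flag counts that deviate by more than a z-score threshold. The function names, thresholds, and sample data are illustrative assumptions, not part of any specific bank's tooling.

```python
from statistics import mean, stdev

def build_baseline(history):
    """Learn a baseline (mean, stdev) from historical per-interval event counts."""
    return mean(history), stdev(history)

def is_anomalous(count, baseline, z_threshold=3.0):
    """Flag a count that exceeds the baseline by more than z_threshold std devs."""
    mu, sigma = baseline
    if sigma == 0:
        # Degenerate baseline: any deviation from the constant is anomalous.
        return count != mu
    return (count - mu) / sigma > z_threshold

# Example: a steady baseline of ~10 logins/minute, then a burst of 80 —
# the kind of high-velocity, mass-targeting spike AI tooling enables.
history = [9, 11, 10, 12, 8, 10, 11, 9, 10, 10]
baseline = build_baseline(history)
print(is_anomalous(12, baseline))  # False: normal fluctuation
print(is_anomalous(80, baseline))  # True: high-velocity spike
```

Production systems would replace the static threshold with rolling windows and seasonality-aware baselines, but the core signal, deviation from learned per-entity behavior, is the same.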
Context and significance
Financial institutions have long been early adopters of security controls because they hold high-value data and offer attackers clear monetization paths. The Kroll statistic crystallizes a broader trend: AI is not just a new attack surface; it amplifies existing threat economics by reducing attacker time-to-exploit and lowering skill barriers. That change makes traditional, slow-moving governance and periodic pen tests insufficient. Security programs must integrate continuous validation and model-specific threat intelligence.
What to watch
Expect banks to accelerate investments in AI-aware threat detection, vendor assurance for model providers, and industry information-sharing on AI-enabled TTPs. Practitioners should prioritize closing telemetry gaps, conducting model-risk assessments for third-party AI services, and running realistic drills that include AI-generated social engineering scenarios.
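One practical way to start closing the telemetry gap for third-party AI services is to emit a structured audit record for every outbound model API call. The sketch below uses an illustrative schema (the field names and vendor/model values are assumptions, not a standard) suitable for SIEM ingestion as JSON lines.

```python
import json
import time
import uuid
from dataclasses import dataclass, asdict, field

@dataclass
class ModelApiEvent:
    """Hypothetical audit record for one outbound call to a third-party model API."""
    user_id: str
    vendor: str        # third-party model provider
    model: str         # model identifier reported by the vendor
    action: str        # e.g. "completion", "embedding"
    prompt_chars: int  # log size only; raw prompts may contain sensitive data
    event_id: str = field(default_factory=lambda: str(uuid.uuid4()))
    timestamp: float = field(default_factory=time.time)

    def to_json(self) -> str:
        return json.dumps(asdict(self), sort_keys=True)

# Example: one record per model call, written to a log pipeline.
event = ModelApiEvent(user_id="u-123", vendor="example-ai", model="example-model",
                      action="completion", prompt_chars=512)
print(event.to_json())
```

Records like this give detection analytics the model-related API usage the report calls for, without persisting prompt contents.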
Scoring Rationale
The report quantifies a clear, near-term shift: AI materially increases attack velocity and lowers skill barriers. That directly affects practitioners in financial services and security operations, requiring new tooling and governance. The story is notable but not paradigm-shifting on its own.