Businesses Revamp Cybersecurity Against AI-Empowered Attacks

Kaspersky research shows organizations face a rising tide of AI-enabled cyberattacks, with 76% reporting increased incidents and 46% attributing many to AI. Businesses recognize the need for proactive measures but report major readiness gaps: 57% lack external expertise, 54% say IT teams are too small, and 53% lack adequate security solutions. IT and security leaders are prioritizing regular training, hiring qualified staff, and engaging external specialists while upgrading detection and incident response tooling. For practitioners, the immediate task is operationalizing AI-aware threat modeling, scaling telemetry and detection (SOC, SIEM, XDR), and running adversarial testing to close the gap between awareness and capability.
What happened
Kaspersky's study finds a clear increase in AI-enabled attacks, with 76% of organizations seeing more incidents year over year and 46% believing many incidents were likely AI-driven. Concern is widespread, with 72% labeling AI use by attackers as a serious risk. Despite high awareness, significant capability gaps remain: 57% lack relevant external expertise, 54% report insufficient IT staffing, 49% lack highly qualified personnel, and 53% judge their security solutions inadequate.
Technical details
Attackers are leveraging AI to scale reconnaissance, automate social engineering, craft highly targeted phishing, quickly mutate malware to evade signatures, and generate custom exploit code. Practitioners should treat AI as a force multiplier for adversaries and adapt detection and response accordingly. Key defensive priorities include:
- Regular, role-specific training and phishing simulations to counter AI-driven social engineering, backed by metrics and continuous improvement.
- Hiring and retaining senior talent, and expanding SOC capacity to analyze richer telemetry and reduce mean time to detect and respond.
- Engaging external expertise and managed detection partners to close capability gaps quickly, especially for advanced adversarial behaviors.
- Upgrading telemetry and tooling, including SIEM, XDR, and ML-driven anomaly detection tuned for adversary automation patterns.
- Systematic adversarial testing, red teaming, and purple team engagements to validate controls against AI-augmented attack chains.
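The anomaly-detection priority above can be illustrated with a minimal statistical baseline. This sketch flags hours whose failed-login volume deviates sharply from normal, the kind of burst that automated, AI-driven credential stuffing produces. The function name, the telemetry values, and the z-score threshold are all hypothetical; a production pipeline would use trained models over much richer telemetry.

```python
import statistics

def flag_anomalous_hours(hourly_login_failures, z_threshold=3.0):
    """Flag hours whose failed-login count deviates sharply from baseline.

    A simple z-score heuristic, stand-in for the ML-driven anomaly
    detection a real SIEM/XDR deployment would provide.
    """
    mean = statistics.mean(hourly_login_failures)
    stdev = statistics.pstdev(hourly_login_failures)
    if stdev == 0:
        return []
    return [
        (hour, count)
        for hour, count in enumerate(hourly_login_failures)
        if (count - mean) / stdev > z_threshold
    ]

# Hypothetical telemetry: failed logins per hour over one day.
# The burst at hour 20 resembles automated credential stuffing.
counts = [12, 9, 11, 10, 8, 13, 10, 9, 12, 11, 10, 9,
          11, 10, 12, 9, 10, 11, 10, 9, 240, 11, 10, 9]
print(flag_anomalous_hours(counts))  # → [(20, 240)]
```

The point of the sketch is the design choice, not the math: behavioral baselining over telemetry catches automation patterns that signature-based controls miss, which is why the article weights telemetry quality so heavily.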
Context and significance
The findings reflect a broader trend: commoditized large models and automation tools lower the attacker learning curve and operational costs, creating an asymmetric threat dynamic. This forces defenders into an arms race where traditional signature-based controls underperform and telemetry quality, behavioral detection, and human expertise become decisive. Boards, insurers, and regulators will increasingly factor AI risk into governance and incident response expectations.
What to watch
Expect accelerated adoption of AI-native defensive products, growth in managed detection services, and a larger market for adversarial testing and upskilling programs. The immediate practical move for security teams is to perform AI-aware threat modeling, prioritize telemetry investments, and run focused red team exercises to close the glaring gaps between awareness and operational readiness.
Scoring Rationale
The study highlights a meaningful, practitioner-relevant trend: AI materially changes attacker capabilities and exposes readiness gaps. It is important for security teams but not a paradigm-shifting development, and the underlying sources are not new, which lowers the urgency.