South Korea Warns AI Enables Autonomous Cyberattacks

South Korea's National Intelligence Service issued a government-wide advisory warning that next-generation AI models can autonomously discover vulnerabilities, craft exploit chains, and execute attacks. The agency singled out Anthropic's model Mythos as demonstrating the ability to identify weaknesses, generate malicious code, and adapt attack strategies without continuous human direction. Officials cited a February incident where models including Claude and ChatGPT were reportedly used in a breach of a Mexican federal system that exposed 150 gigabytes of sensitive data. The advisory elevates "AI-powered hacking" into the country's top five cyber threats for 2026 and urges defensive measures across critical infrastructure sectors including telecommunications, energy, and finance.
What happened
South Korea's National Intelligence Service issued a government-wide security advisory warning that advanced AI models can act autonomously to carry out cyberattacks. The agency named Anthropic's model Mythos as an example of a system that can locate vulnerabilities, design exploitation pathways, and generate runnable malicious code in real time. Officials said this capability is qualitatively different from earlier helper-style tools and cited an incident where Claude and ChatGPT were used in a breach exposing 150 gigabytes of government data.
Technical details
The advisory frames the risk around three autonomous capabilities demonstrated by modern models:
- autonomous vulnerability discovery across large codebases and configurations
- automated exploit design and payload generation
- adaptive attack sequencing and social-engineering content generation
Why this matters for practitioners
Mythos and similar models shift attacker workflows from human-in-the-loop guidance to closed-loop attack automation. This reduces the skill floor for sophisticated intrusions and accelerates operational tempo. Defensive teams must update threat models to assume faster discovery-to-exploitation timelines and to treat model-generated artifacts as first-class attacker tools.
Context and significance
The agency described "AI-powered hacking" as one of the top five cyber threats for 2026, signaling state-level prioritization. An OpenBSD vulnerability cited in the advisory illustrates that even security-focused OS ecosystems are not immune when models can surface novel, previously undetected weaknesses. The Mexican breach connects model-assisted reconnaissance and automation to material data exfiltration, moving the conversation from theoretical risk to observed harm.
Operational implications
Security engineering, red-team playbooks, and incident response need rapid adaptation. Expect increased demand for:
- model-aware static and dynamic analysis tools
- improved telemetry to detect machine-speed probing and exploitation
- legal and procurement controls around high-capability models
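To make the telemetry point concrete, here is a minimal sketch of one detection heuristic: flagging sources whose request inter-arrival times stay below a human-plausible floor, a rough proxy for machine-speed probing. The log schema, window size, and threshold are all illustrative assumptions, not a production design.

```python
from collections import deque

# Illustrative sketch only. Flags source IPs whose sustained mean
# inter-request gap falls below a human-plausible floor -- a crude
# signal for automated, machine-speed probing. Thresholds and the
# (timestamp, source_ip) event schema are assumptions for this example.
WINDOW = 20            # number of recent events per source to consider
MAX_MEAN_GAP = 0.25    # seconds; sustained gaps below this look automated

def machine_speed_sources(events):
    """events: iterable of (timestamp_seconds, source_ip) tuples."""
    recent = {}     # source_ip -> deque of recent timestamps
    flagged = set()
    for ts, src in sorted(events):
        q = recent.setdefault(src, deque(maxlen=WINDOW))
        q.append(ts)
        if len(q) == WINDOW:
            # Mean gap across the window; a full window of sub-threshold
            # gaps is unlikely to come from a human operator.
            mean_gap = (q[-1] - q[0]) / (WINDOW - 1)
            if mean_gap < MAX_MEAN_GAP:
                flagged.add(src)
    return flagged
```

A real deployment would feed this from web-server or firewall logs and combine it with other signals (path entropy, error-rate spikes) to reduce false positives from legitimate automation such as monitoring agents.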
What to watch
Monitor vendor disclosures about model guardrails, national-level regulatory responses, and whether additional intelligence agencies corroborate autonomous exploitation claims. The practical follow-up will be whether responsible-model practices and runtime restrictions can meaningfully limit misuse without stifling legitimate development.
Scoring Rationale
A national intelligence advisory naming an advanced model as capable of autonomous exploitation is a notable escalation for cybersecurity and AI practitioners. It signals urgent changes to threat models and defensive tooling without yet representing a systemic paradigm shift, so the story is major but not historic.