Hackers Discuss and Experiment With AI for Crime

Researchers analyzed more than 160 cybercrime forum conversations collected over seven months and found growing curiosity about using AI to automate and scale attacks. The forums show two parallel currents: active experimentation, ranging from repurposing legitimate AI tools to early work on bespoke illicit models, and persistent skepticism about AI's reliability, cost, and operational security impact. The study applies the diffusion of innovation framework with thematic analysis to map how ideas about AI move through criminal communities. The findings highlight areas where defenders and policymakers can intervene, including monitoring tool misuse, improving attribution signals, and prioritizing defense-in-depth against AI-enabled social engineering and malware automation.
What happened
Researchers examined more than 160 cybercrime forum conversations collected over seven months to document how malicious actors are thinking about and experimenting with AI. The study identifies interest in both repurposing legitimate AI services and developing bespoke models for illicit use, while also recording doubts about effectiveness, cost, and operational security.
Technical details
The research combines the diffusion of innovation framework with thematic qualitative analysis to map idea adoption stages in forum conversations. Key practitioner-relevant observations include:
- Growing attempts to misuse legitimate AI tools for phishing, social engineering, content generation, and automated reconnaissance.
- Early efforts to design tailored models and workflows that evade simple detection or scale existing criminal services.
- Concerns voiced by operators about AI hallucinations, reliability, cost, and how automation changes business models and OPSEC.
Context and significance
The findings place criminal interest in AI at an early but consequential phase. Democratized access to powerful models and low-cost compute lower the barrier for automation, making routine tasks like phishing and scam content generation cheaper and faster. At the same time, the recorded skepticism shows that AI is not an immediate panacea for sophisticated operators; accuracy, traceability, and integration costs limit wholesale adoption. This nuance matters for defenders: attackers will likely combine AI components with human-in-the-loop processes rather than replace skilled operators overnight.
Practical implications: Law enforcement, SOC teams, and threat intelligence providers should prioritize detection signals for AI-assisted attacks, track marketplaces offering illicit fine-tuning or hosted inference, and update attribution playbooks to account for synthetic content. Policy and platform responses can target abuse vectors where legitimate APIs are repurposed.
What to watch
Monitor whether forum chatter transitions into operational tooling, whether mentions of specific model names or hosted services increase, and whether a commercial market for illicit AI models emerges. Those indicators would mark the shift from curiosity to scalable abuse.
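One lightweight way to operationalize this kind of monitoring is simple term-frequency tracking over collected forum text. The sketch below is illustrative only: the watchlist terms and sample posts are hypothetical assumptions, not data or indicators from the study.

```python
from collections import Counter
import re

# Hypothetical watchlist of AI-related terms a threat-intel team might track.
# These terms are illustrative assumptions, not indicators from the study.
WATCHLIST = ["fine-tuning", "hosted inference", "jailbreak", "wormgpt"]

def count_mentions(posts, watchlist=WATCHLIST):
    """Count case-insensitive mentions of watchlist terms across forum posts."""
    counts = Counter({term: 0 for term in watchlist})
    for post in posts:
        text = post.lower()
        for term in watchlist:
            counts[term] += len(re.findall(re.escape(term), text))
    return counts

# Example usage with made-up forum posts.
posts = [
    "Anyone tried fine-tuning a model for phishing templates?",
    "Hosted inference is too traceable; fine-tuning locally is safer.",
]
print(count_mentions(posts))
```

In practice a real pipeline would add stemming, alias lists for model names, and per-week bucketing so that sustained increases (rather than one-off mentions) trigger analyst review.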
Scoring Rationale
The study documents early but meaningful adoption signals of AI among cybercriminals, which has notable implications for detection, attribution, and policy. It is important for defenders but not yet a systemic industry-shifting event.