FBI Flags AI-Driven Scams Costing $893 Million

The FBI’s Internet Crime Complaint Center (IC3) logged 22,364 internet crime complaints referencing artificial intelligence in 2025, with $893 million in losses tied to AI-enabled fraud. For the first time in IC3’s 25-year history, the agency broke out AI-related complaints as a distinct category in its annual report, released April 6, 2026. The report highlights the use of AI to create social media profiles, personalized conversations, and synthetic audio and video at scale, enabling business email compromise, romance and confidence scams, and employment and investment fraud. IC3 also recorded 1,008,597 total complaints and $20.9 billion in losses in 2025, up from 859,532 complaints and $16.6 billion in 2024. The report warns that “AI-enabled synthetic content is becoming increasingly difficult to detect and easier to make,” pushing businesses and regulators to accelerate defenses while financial institutions invest in AI, behavioral analytics and cloud infrastructure to mitigate evolving threats.
What happened
The FBI’s Internet Crime Complaint Center (IC3) reported that 22,364 internet crime complaints in 2025 referenced artificial intelligence, accounting for $893 million in reported losses. IC3’s annual report, released April 6, 2026, marked the first time the agency broke out AI-related complaints as a separate category in its 25-year reporting history.
Technical context
Attackers are weaponizing generative AI and synthetic media to scale and personalize fraud. The report calls out AI-driven creation of social media profiles, personalized conversations, and synthetic audio and video. Those capabilities make existing vectors such as business email compromise (BEC), romance and confidence scams, and employment and investment fraud more convincing and more automated, increasing both the reach and the plausibility of scams.
Key details
IC3’s 2025 dataset shows AI-related complaints accounted for $893 million of the $20.9 billion in total reported losses across 1,008,597 complaints (up from 859,532 complaints and $16.6 billion in losses the prior year). The FBI emphasizes that “People have manipulated video and audio similarly for decades, but the widespread availability of this developing technology makes it possible to create high-quality content,” and warns that “AI-enabled synthetic content is becoming increasingly difficult to detect and easier to make, which allows criminal actors to potentially conduct successful fraud schemes against individuals, businesses and financial institutions.” The coverage also cites PYMNTS Intelligence findings that AI-generated content can deceive both humans and automated systems, and that financial institutions are increasing investments in AI, behavioral analytics and cloud infrastructure to counter more sophisticated fraud.
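For a quick sense of scale, here is a minimal sketch in plain Python that derives the year-over-year changes and the AI-related share of losses from the figures quoted above; all inputs are taken directly from the report as summarized here.

```python
# Year-over-year deltas computed from the IC3 figures quoted above.
complaints_2024, complaints_2025 = 859_532, 1_008_597
losses_2024, losses_2025 = 16.6e9, 20.9e9  # USD
ai_losses_2025 = 893e6                      # USD, AI-referenced complaints

complaint_growth = (complaints_2025 - complaints_2024) / complaints_2024
loss_growth = (losses_2025 - losses_2024) / losses_2024
ai_share = ai_losses_2025 / losses_2025

print(f"Complaints up {complaint_growth:.1%}")         # ~17.3%
print(f"Losses up {loss_growth:.1%}")                  # ~25.9%
print(f"AI-related share of losses: {ai_share:.1%}")   # ~4.3%
```

In other words, AI-referenced complaints are still a small slice of total reported losses (roughly 4%), but they sit on top of a total that grew about 26% in a single year.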
Why practitioners should care
This is not a novelty statistic; it is a signal that generative models and synthetic media are materially changing threat surfaces. Data scientists, ML engineers and fraud teams should reassess detection baselines, adversarial test suites and telemetry collection to catch higher-quality synthetic artifacts and scaled social engineering. Behavioral analytics, anomaly detection on communication patterns, provenance tracing and multi-factor authentication all gain renewed priority; a sketch of the anomaly-detection idea follows below.
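To make "anomaly detection on communication patterns" concrete, here is a minimal sketch, not a production detector: it baselines per-sender message cadence and flags senders whose median inter-message gap is implausibly short, one crude signature of scaled, scripted outreach. The event data, the function name `flag_bursty_senders`, and the thresholds are illustrative assumptions, not anything specified by the report.

```python
import statistics
from collections import defaultdict

# Hypothetical telemetry: (sender, unix_timestamp) pairs. In a real pipeline
# these would come from email or chat logs; here they are placeholders.
events = [
    ("alice@example.com", t) for t in range(0, 86_400, 3_600)   # steady hourly cadence
] + [
    ("newacct@example.com", t) for t in range(0, 600, 30)       # 20 messages in 10 minutes
]

def flag_bursty_senders(events, min_messages=5, max_median_gap_s=120):
    """Flag senders whose median inter-message gap is suspiciously short.

    A short, regular gap sustained across many messages is one crude
    signature of automated (possibly AI-scripted) outreach. Thresholds
    here are illustrative, not tuned.
    """
    by_sender = defaultdict(list)
    for sender, ts in events:
        by_sender[sender].append(ts)

    flagged = []
    for sender, times in by_sender.items():
        if len(times) < min_messages:
            continue
        times.sort()
        gaps = [b - a for a, b in zip(times, times[1:])]
        if statistics.median(gaps) <= max_median_gap_s:
            flagged.append(sender)
    return flagged

print(flag_bursty_senders(events))  # ['newacct@example.com']
```

A real deployment would combine cadence features like this with content, identity and device signals rather than rely on any single heuristic.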
What to watch
Look for vendor and open-source improvements in deepfake detection, maturing provenance metadata standards such as C2PA, wider adoption of behavioral baselines, and regulatory or compliance guidance targeting synthetic content in financial workflows.
Scoring Rationale
The FBI isolating AI-related complaints for the first time and reporting $893M in losses makes this a notable operational signal for practitioners. It directly affects fraud detection, model robustness, telemetry design and defense investments. Freshness is same-week, so the score is reduced slightly.