Google Deploys AI to Block Ads and Scams

Generative AI has lowered the cost and increased the scale of online spam and scams, producing a surge in fraudulent ads, fake business listings, and deceptive search summaries. Google is responding by embedding its own AI, notably the Gemini family and Gemini Nano for on-device protection, across Search, Chrome, Maps, and Android to detect and block malicious content at scale. Google reports catching over 99% of policy-violating ads before they reach users, blocking or removing 8.3 billion ads in 2025 and suspending 24.9 million advertiser accounts. The story highlights an escalating arms race: AI enables more convincing fraud while also powering faster, large-scale defenses that raise technical, privacy, and adversarial-resilience questions for practitioners.
What happened
Generative AI has dramatically increased the volume and sophistication of online spam and scams, with the FBI receiving over 22,000 AI-related complaints and losses exceeding $893 million last year. Google has expanded its AI-powered defenses across Search, Chrome, Maps, and Android, deploying the Gemini family and on-device Gemini Nano models to detect and block malicious content. The company reports blocking or removing 8.3 billion ads in 2025, including 602 million scam-related policy violations, and catching over 99% of policy-violating ads before they reached audiences.
Technical details
Google combines large-scale indexing, improved classifiers, and on-device inference to intercept scams earlier in the pipeline. Key components include:
- Gemini and Gemini Nano models powering both server-side classifiers and on-device protections in Chrome Enhanced Safe Browsing and Android scam detection.
- Cross-product telemetry linking Search results, ad delivery, Maps listings, and Messages/Phone signals to identify coordinated scam campaigns and fake business profiles.
- Policy enforcement at scale, including automated advertiser account suspension and removal systems that suspended 24.9 million advertiser accounts in 2025.
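The architecture described above, a cheap on-device first pass that escalates to a larger server-side classifier, can be sketched in a few lines. This is a purely illustrative toy, assuming hypothetical names (`on_device_score`, `server_score`, `decide`) and a keyword heuristic standing in for a real model; it is not Google's implementation or API.

```python
# Hypothetical sketch of a hybrid scam-detection pipeline: a cheap,
# private on-device pass escalates suspicious content to a server-side
# classifier that can also use cross-product signals. All names and
# thresholds are illustrative, not Google's.
from dataclasses import dataclass

SCAM_KEYWORDS = {"wire transfer", "gift card", "urgent", "verify your account"}

def on_device_score(text: str) -> float:
    """Keyword heuristic standing in for a small on-device model
    (the role Gemini Nano plays in Chrome/Android protections)."""
    text = text.lower()
    hits = sum(1 for kw in SCAM_KEYWORDS if kw in text)
    return min(1.0, hits / 2)

def server_score(text: str, advertiser_risk: float) -> float:
    """Server-side pass standing in for a large classifier that blends
    content signals with telemetry (here, a prior advertiser-risk score)."""
    return 0.7 * on_device_score(text) + 0.3 * advertiser_risk

@dataclass
class AdDecision:
    blocked: bool
    score: float

def decide(text: str, advertiser_risk: float, threshold: float = 0.5) -> AdDecision:
    # Only escalate to the expensive server model when the cheap,
    # on-device pass looks suspicious.
    local = on_device_score(text)
    score = server_score(text, advertiser_risk) if local > 0.2 else local
    return AdDecision(blocked=score >= threshold, score=score)
```

The design choice worth noting is the escalation gate: most benign content never leaves the device, which is what makes the on-device tier attractive for both latency and privacy.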
Context and limitations
The defensive gains are significant but not absolute. Google's filters now catch far more scammy pages and ads, yet attackers have adapted by planting fake phone numbers, using synthetic voices, and manipulating low-visibility web pages that AI crawlers later surface. Recent reporting highlights a concrete failure mode: AI Overviews and synthesized search summaries can surface fraudulent contact details scraped from obscure sites, creating end-to-end social engineering vectors.
Why it matters
This is an accelerating arms race between generative content creation and automated detection. For ML practitioners, the operational lessons are clear: detection systems must be trained on both synthetic and real-world adversarial examples, models should support on-device inference for latency and privacy-sensitive signals, and telemetry aggregation across products is critical to detect coordination. Google's adoption of Gemini Nano for on-device pattern recognition shows the practical value of smaller, efficient models when you need real-time user protection.
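One concrete form the "train on adversarial examples" lesson takes is data augmentation with the evasions attackers actually use, such as look-alike character substitution. The sketch below is a toy illustration of that idea under assumed names (`perturb`, `augment`); it is not Google's training pipeline.

```python
# Toy adversarial augmentation: generate perturbed variants of known
# scam strings so a text classifier also sees common evasion tricks
# (look-alike character swaps). Purely illustrative.
import random

SUBSTITUTIONS = {"a": "@", "o": "0", "i": "1", "e": "3"}

def perturb(text: str, rate: float = 0.3, seed: int = 0) -> str:
    """Swap some characters for look-alikes, a common scam evasion."""
    rng = random.Random(seed)  # seeded for reproducible variants
    out = []
    for ch in text:
        if ch.lower() in SUBSTITUTIONS and rng.random() < rate:
            out.append(SUBSTITUTIONS[ch.lower()])
        else:
            out.append(ch)
    return "".join(out)

def augment(examples: list[str], variants: int = 3) -> list[str]:
    """Return the original examples plus perturbed training copies."""
    augmented = list(examples)
    for text in examples:
        for i in range(variants):
            augmented.append(perturb(text, seed=i))
    return augmented
```

In practice the same principle extends beyond character swaps to paraphrases, synthetic voices, and planted web content, which is why the article stresses both synthetic and real-world adversarial data.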
Trade-offs and risks
Increasing automation raises accuracy and trust challenges. High recall against scams risks false positives that can penalize legitimate advertisers and small businesses. On-device scanning improves privacy and speed but complicates model updates and label collection. Attackers will continue to innovate with name impersonation, audio deepfakes, and seeding low-traffic pages to poison extractive systems.
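The recall/false-positive tension above is just classifier threshold tuning at scale. A small sketch with made-up scores (not real ad data) shows how an aggressive blocking threshold catches every scam but flags a legitimate advertiser, while a conservative one protects advertisers but lets a scam through.

```python
# Illustrative recall vs. false-positive trade-off using made-up
# detector scores; labels: 1 = scam, 0 = legitimate advertiser.
def precision_recall(scores, labels, threshold):
    tp = sum(1 for s, y in zip(scores, labels) if s >= threshold and y == 1)
    fp = sum(1 for s, y in zip(scores, labels) if s >= threshold and y == 0)
    fn = sum(1 for s, y in zip(scores, labels) if s < threshold and y == 1)
    precision = tp / (tp + fp) if tp + fp else 1.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall

scores = [0.95, 0.80, 0.60, 0.40, 0.30, 0.10]
labels = [1,    1,    0,    1,    0,    0]

# Aggressive threshold: full recall, but one legitimate ad blocked.
p_low, r_low = precision_recall(scores, labels, 0.35)
# Conservative threshold: no false positives, but one scam missed.
p_hi, r_hi = precision_recall(scores, labels, 0.70)
```

At Google's volumes (billions of ads), even a small false-positive rate translates into large numbers of wrongly suspended accounts, which is why human review for high-risk decisions remains part of the pipeline.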
What to watch
Practitioners should monitor how detection models handle adversarially planted contact details and voice deepfakes, the evolution of policy-enforcement pipelines to reduce false positives, and cross-industry data sharing or standards to flag coordinated scam infrastructure. Expect continued investment in hybrid pipelines that combine server-scale models, efficient on-device models, and human review for high-risk decisions.
Scoring Rationale
This story is notable for practitioners because it documents large-scale, production-grade deployment of AI defenses against generative-AI-enabled fraud, with measurable impact metrics. It is not paradigm-shifting, but it influences operational practices for detection, on-device inference, and policy enforcement.