Google Deploys Gemini to Block Malicious Ads
Google has integrated its advanced AI, Gemini, into its ad-safety pipeline and says the system stopped over 99% of policy-violating ads in 2025. The company reports blocking or removing 8.3 billion ads, suspending 24.9 million accounts, and removing 602 million scam-related ads. Gemini analyzes hundreds of billions of signals such as account age, behavioral cues, and campaign patterns to detect intent-driven malvertising that evades keyword rules. Google also says the AI reduced incorrect advertiser suspensions by 80% and allowed teams to process four times more user reports, improving response speed and human-review prioritization. For advertisers and security teams, this signals AI will increasingly mediate which campaigns run and how enforcement decisions are made.
What happened
Google integrated Gemini-based models into its ad-enforcement pipeline and reports they helped stop over 99% of policy-violating ads in 2025, blocking or removing 8.3 billion ads and suspending 24.9 million accounts. The company highlights 602 million scam-related ads removed and 4 million scam-linked accounts suspended as measurable outcomes. Keerat Sharma, VP & GM, Ads Privacy and Safety, framed the upgrade as a response to threat actors using generative AI to mass-produce deceptive ads.
Technical details
Google says Gemini-powered systems analyze hundreds of billions of signals to detect malicious intent beyond simple keyword matching. Key operational changes include:
- Real-time intent analysis across account age, campaign patterns, and advertiser behavior to flag evasive campaigns
- Automated pre-serve blocking that prevented more than 99% of violating ads from serving
- Faster triage and human-review prioritization, processing roughly 4x more user reports year-over-year
- Precision tuning that cut incorrect advertiser suspensions by 80%, reducing collateral harm to legitimate advertisers
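Google has not published implementation details, but the shift from keyword rules to signal-based intent scoring with pre-serve blocking can be sketched roughly as follows. This is a hypothetical illustration: the signal names, weights, and thresholds are invented for the example and are not Google's actual model.

```python
# Hypothetical sketch: scoring behavioral signals instead of matching
# keywords, then gating ads before they serve. Features and weights are
# illustrative only.
from dataclasses import dataclass
import math

@dataclass
class AdSignals:
    account_age_days: int            # newer accounts tend to be riskier
    landing_page_redirects: int      # cloaking often chains redirects
    creative_variants_per_hour: float  # mass-produced creatives at scale
    payment_method_reuse: int        # accounts sharing payment instruments

def risk_score(s: AdSignals) -> float:
    """Combine behavioral signals into a 0-1 risk score via a logistic link."""
    z = (
        -0.02 * s.account_age_days
        + 0.8 * s.landing_page_redirects
        + 0.5 * s.creative_variants_per_hour
        + 0.6 * s.payment_method_reuse
        - 1.0  # bias term
    )
    return 1.0 / (1.0 + math.exp(-z))

def pre_serve_decision(s: AdSignals,
                       block_threshold: float = 0.9,
                       review_threshold: float = 0.6) -> str:
    """Block high-risk ads before serving; queue mid-risk ones for humans."""
    score = risk_score(s)
    if score >= block_threshold:
        return "block"
    if score >= review_threshold:
        return "human_review"
    return "serve"

# A fresh account churning out creatives behind redirect chains is blocked;
# an established, low-churn advertiser serves normally.
print(pre_serve_decision(AdSignals(1, 5, 10.0, 3)))
print(pre_serve_decision(AdSignals(800, 0, 0.1, 0)))
```

The point of the sketch is the structural change the article describes: a keyword rule can be dodged with a minor creative variation, whereas a score over account- and campaign-level behavior is harder to evade without changing the behavior itself.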
Context and significance
Malvertising has shifted from low-scale scams to highly automated, generative-AI-driven campaigns that craft plausible creatives and landing pages at scale. Google moving Gemini into enforcement closes a technical gap where rule-based systems could be bypassed with minor variations. For the ad ecosystem, this is a material change: enforcement decisions are shifting from static heuristics to learned intent models, which alters both operational workflows for trust-and-safety teams and the compliance surface for advertisers. The numbers are large but plausible given Google Ads volume; blocking 8.3 billion ads and restricting 4.8 billion additional creatives implies a broad sweep of automated action at scale.
Why practitioners should care
Security teams and ML engineers working on abuse detection should treat this as a case study in applying large multimodal models to high-throughput enforcement. The move highlights practical points: the need for high-fidelity signal engineering, tradeoffs in precision-recall tuning, the importance of fast feedback loops to human reviewers, and the operational burden of misclassifications for paying customers. For advertisers and platform integrators, the message is clear: policy-compliant behavior must be machine-detectable across intent and pattern signals, not just keywords.
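The precision-recall tradeoff mentioned above is concrete: Google's reported 80% drop in incorrect suspensions is, in effect, a precision gain, and any threshold choice trades false suspensions against missed violations. A toy illustration with invented scores and labels (not Google data):

```python
# Toy illustration of the precision-recall tradeoff in enforcement:
# raising the suspension threshold cuts false positives (wrongly
# suspended advertisers) at the cost of letting more violations through.
def precision_recall(scores, labels, threshold):
    """Precision/recall of the rule 'suspend if score >= threshold'."""
    tp = sum(1 for s, y in zip(scores, labels) if s >= threshold and y == 1)
    fp = sum(1 for s, y in zip(scores, labels) if s >= threshold and y == 0)
    fn = sum(1 for s, y in zip(scores, labels) if s < threshold and y == 1)
    precision = tp / (tp + fp) if tp + fp else 1.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall

# Invented risk scores; label 1 = actual violation, 0 = legitimate ad.
scores = [0.95, 0.9, 0.8, 0.7, 0.55, 0.4, 0.3]
labels = [1,    1,   0,   1,   0,    0,   0]

low_p, low_r = precision_recall(scores, labels, 0.5)    # aggressive
high_p, high_r = precision_recall(scores, labels, 0.85)  # conservative
print(f"aggressive:   precision={low_p:.2f} recall={low_r:.2f}")
print(f"conservative: precision={high_p:.2f} recall={high_r:.2f}")
```

On this toy data the conservative threshold suspends no legitimate advertiser (precision 1.0) but misses a violation (recall drops), which is exactly the collateral-harm tradeoff trust-and-safety teams tune with human-review feedback loops.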
Limitations and open questions
Google's report focuses on aggregate outcomes but omits detailed evaluation metrics such as precision, recall, per-category false positive rates, or how models handle adversarially generated creatives. It is unclear how often benign ads are restricted before appeal and what mitigation exists for edge cases like novel creative formats. There is also the broader arms race concern: threat actors can iterate on generative pipelines to probe model weaknesses, requiring continuous retraining and new signal sets.
What to watch
Monitor follow-up disclosures about model explainability, appeals outcomes for advertisers, and whether competitors adopt similar intent-based enforcement. Expect adversaries to probe these systems and for Google to publish more granular metrics or tooling for advertisers to validate compliance.
Scoring Rationale
Major tech company deploying large AI models into live security enforcement affects practitioners across ads, security, and ML operational teams. The story is notable because it demonstrates practical, large-scale application of generative-model tooling to threat mitigation rather than a frontier-model research result.