SEO Industry Adapts to Influence AI Responses

AI-powered search from Google and OpenAI has created a new battleground for visibility: getting cited inside model-generated answers. SEO firms and vendors are rolling out tactics and products specifically designed to earn AI citations — from self-authored “comparison” pages that promote the publisher, to structured data and playbooks aimed at optimizing content for AI overviews. The Verge documents examples where vendor roundup pages (Zendesk, Freshworks) appear in AI Mode citations despite clear self-promotion, highlighting risks to result quality and trust. For practitioners, the immediate task is to treat AI citations as a new signal, reassess provenance controls, and prepare for arms-race dynamics between platform models and content manipulators.
What happened
The landscape of web search is shifting: Google, OpenAI and other vendors are embedding large language models into search interfaces that synthesize answers and cite web sources. As these AI overviews become primary entry points for users, the SEO industry has begun creating tactics and products to influence which pages are chosen as citations. The Verge (Mia Sato, Apr 6, 2026) documents concrete examples where vendor-authored “best-of” and comparison pages appear in AI Mode citations and favor the publisher’s own product, illustrating how easily citation heuristics can be gamed.
Technical context
Generative search surfaces synthesized answers built from multiple web sources; models use heuristics and retrieval pipelines to select and weigh inputs. Traditional SEO focused on ranking pages in list-style SERPs; the new problem is getting selected as a high-quality, cited source inside a single synthesized response. That changes optimal content structure (concise factual passages, explicit comparisons, easily extracted metadata) and incentives (publish comparison pages, apply structured markup, or create content tailored for snippet extraction).
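To make the selection problem concrete, source selection can be loosely pictured as relevance scoring combined with source-quality weights. This is a toy sketch under stated assumptions, not any search provider's actual pipeline; every field name, weight, and penalty below is hypothetical:

```python
# Toy sketch of how a retrieval pipeline might weigh candidate sources.
# All scores, weights, and field names are hypothetical illustrations,
# not any search provider's actual implementation.

def citation_score(candidate: dict, relevance: float) -> float:
    """Combine query relevance with simple source-quality signals."""
    score = relevance
    # Hypothetical penalty: the page compares vendors but is hosted
    # by one of the vendors it compares (self-promotion risk).
    if candidate.get("is_comparison_page") and candidate.get("self_hosted_vendor"):
        score *= 0.5
    # Hypothetical bonus for structured metadata, which is easier
    # for a model to extract and attribute.
    if candidate.get("has_structured_data"):
        score *= 1.1
    return score

candidates = [
    {"url": "vendor.example/best-helpdesks", "is_comparison_page": True,
     "self_hosted_vendor": True, "has_structured_data": True},
    {"url": "reviewer.example/helpdesk-roundup", "is_comparison_page": True,
     "self_hosted_vendor": False, "has_structured_data": False},
]
ranked = sorted(candidates, key=lambda c: citation_score(c, relevance=0.8),
                reverse=True)
print([c["url"] for c in ranked])
# → ['reviewer.example/helpdesk-roundup', 'vendor.example/best-helpdesks']
```

The point of the sketch is the incentive structure: because structured data and extraction-friendly formatting raise a page's odds of being cited, publishers are motivated to optimize for exactly those signals.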
Key details from sources
The Verge shows two concrete cases: a Zendesk-branded “comprehensive breakdown” listing multiple vendors but ranking Zendesk first, and a Freshworks comparison page that recommends Freshservice as the top choice. Both were cited by Google’s AI Mode in synthesized answers, despite being self-promotional. The piece ties these examples to a broader industry reaction: agencies and SEO vendors are building new services and playbooks to “earn” AI citations.

Why practitioners should care
For ML engineers and data practitioners, this matters on three fronts:
- Provenance and data quality: models that surface self-promotional or misleading sources will degrade user trust and downstream decision-making
- Metric drift: click and engagement metrics from synthesized answers will require rethinking evaluation and training signals for retrieval and reranking
- Adversarial dynamics: expect rapid iteration as SEO actors adapt to model behaviors, which will force retrieval teams to harden source selection and incorporate stronger source-verification signals
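One intentionally naive example of a source-verification signal, motivated by the Zendesk and Freshworks cases above: flag comparison pages whose publishing domain contains the brand they rank first. The function name and URLs are illustrative assumptions, and a production system would need far more than string matching:

```python
from urllib.parse import urlparse

def flags_self_promotion(page_url: str, top_ranked_brand: str) -> bool:
    """Naive heuristic: does the page's domain contain the brand it ranks #1?
    Real systems would need entity resolution, subsidiary/brand mapping,
    and redirect handling; this is only a sketch of the signal's shape."""
    domain = urlparse(page_url).netloc.lower()
    return top_ranked_brand.lower() in domain

# Modeled loosely on the pattern reported by The Verge:
print(flags_self_promotion("https://www.zendesk.com/blog/best-help-desk",
                           "Zendesk"))   # → True
print(flags_self_promotion("https://independent-reviews.example/roundup",
                           "Zendesk"))   # → False
```

Even this crude check illustrates the arms-race dynamic: once publishers learn a signal like this exists, they can route comparison content through ostensibly independent domains, forcing retrieval teams toward deeper provenance verification.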
What to watch
Platform responses (stricter provenance signals, penalties for self-promotional content), vendor tooling aimed at AI-optimized content, and standards for labeling or certifying authoritative sources. Also monitor whether search providers change retrieval scoring to downweight self-authored comparison pages or require richer provenance metadata.
Scoring Rationale
The phenomenon is widely applicable across the web (scope) and directly concerns AI-driven retrieval and provenance (relevance). It’s credible and actionable for search and ML teams, though not highly novel — tactics mirror historical SEO manipulation adapted for models.