Pew Research Center Flags Threat From AI and Bogus Respondents

According to a Q&A published by Pew Research Center, Courtney Kennedy, the center's vice president of methods and innovation, said Pew does not use so-called "silicon sampling" and "only interview[s] real people." The Q&A says bad actors are using AI to fabricate survey responses and that some human respondents also fail to take surveys in good faith. Pew reports that experimental studies, including work the center conducted for learning purposes, have found that AI-generated estimates tend to stereotype groups, misrepresent Republican viewpoints more often than Democratic ones, and understate disagreement in public opinion. The Q&A frames these findings as scientific and ethical reasons to continue sampling real people rather than replacing interviews with AI.
What happened
According to a Q&A published by Pew Research Center on May 12, 2026, Courtney Kennedy, the center's vice president of methods and innovation, answered common questions about threats to polling from AI and bogus respondents. The Q&A states, "No. We only interview real people. We don't use AI to tell us what the public thinks," and notes that some firms are experimenting with "silicon sampling." The piece also reports that bad actors are using AI to fabricate survey responses and that some real respondents do not take surveys in good faith. The Q&A says Pew has conducted experimental research on AI respondents for learning only, not for reporting.
Editorial analysis - technical context
Studies cited in the Q&A, including Pew's internal experiments, found that AI-generated estimates can "stereotype groups of people," underrepresent certain partisan viewpoints, and "understate the level of disagreement in public opinion." Prior methodological research likewise suggests that replacing human respondents with synthetic or model-generated answers alters the joint distribution of demographics and opinions in ways that complicate weighting, calibration, and variance estimation.
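The weighting problem above can be illustrated with a toy example (the cell names, population shares, and response vectors are all invented for illustration and are not from Pew's research). Post-stratification corrects demographic imbalance, but it cannot detect or correct opinion bias *within* demographic cells, which is exactly the failure mode of model-generated answers that flatten disagreement:

```python
def poststratified_mean(responses, pop_shares):
    """Weight each cell's mean answer by its known population share.

    responses: dict mapping demographic cell -> list of 0/1 answers
    pop_shares: dict mapping cell -> population proportion (sums to 1)
    """
    return sum(
        pop_shares[cell] * (sum(vals) / len(vals))
        for cell, vals in responses.items()
    )

def subgroup_gap(responses):
    """Difference in approval between cells "A" and "B"."""
    mean = lambda vals: sum(vals) / len(vals)
    return mean(responses["A"]) - mean(responses["B"])

# Assumed population benchmark: 50% cell A, 50% cell B.
pop_shares = {"A": 0.5, "B": 0.5}

# Human sample: cell A approves 80%, cell B approves 20%.
human = {"A": [1] * 8 + [0] * 2, "B": [1] * 2 + [0] * 8}

# Synthetic sample with the SAME demographic margins, but the model
# flattens within-cell disagreement toward a uniform 50/50.
synthetic = {"A": [1] * 5 + [0] * 5, "B": [1] * 5 + [0] * 5}

# Toplines agree, so demographic weighting alone cannot flag the problem...
print(poststratified_mean(human, pop_shares))      # 0.5
print(poststratified_mean(synthetic, pop_shares))  # 0.5

# ...but the real disagreement between cells has been erased.
print(round(subgroup_gap(human), 2))      # 0.6
print(round(subgroup_gap(synthetic), 2))  # 0.0
```

The design point: because the synthetic sample matches the demographic margins exactly, calibration to benchmarks passes even though every subgroup estimate is wrong, which is why substantive validation has to supplement weighting.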
Industry context
For survey researchers and practitioners, the issues highlighted by Pew are twofold: first, automated techniques create new attack vectors through which inauthentic responses enter panels and opt-in samples; second, model-based answers introduce systematic bias patterns that are not equivalent to random nonresponse. Experience with similar transitions suggests that detection requires both metadata checks (timing, keystroke patterns, device signatures) and substantive validation (consistency checks, attention items, crosswalks to benchmark distributions).
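A minimal sketch of how metadata and substantive checks might be combined into a screening pass. The field names, thresholds, and flag labels below are illustrative assumptions, not Pew's actual pipeline:

```python
def flag_response(resp, median_seconds=300):
    """Return a list of quality flags for one interview record.

    resp: dict of per-respondent fields (all field names are assumed
    for illustration); median_seconds is the panel's typical duration.
    """
    flags = []
    # Metadata check: implausibly fast completion ("speeding").
    if resp["duration_seconds"] < 0.3 * median_seconds:
        flags.append("speeding")
    # Metadata check: device fingerprint already seen in this wave.
    if resp.get("device_seen_before"):
        flags.append("duplicate_device")
    # Substantive check: failed an explicit attention item.
    if resp.get("attention_check_passed") is False:
        flags.append("failed_attention_check")
    # Substantive check: straightlining (identical answer on a long grid).
    grid = resp.get("grid_answers", [])
    if len(grid) >= 5 and len(set(grid)) == 1:
        flags.append("straightlining")
    return flags

suspect = {
    "duration_seconds": 45,
    "device_seen_before": True,
    "attention_check_passed": False,
    "grid_answers": [3, 3, 3, 3, 3, 3],
}
print(flag_response(suspect))
# ['speeding', 'duplicate_device', 'failed_attention_check', 'straightlining']
```

In practice no single flag is decisive; firms typically score records on several signals and review or drop only those that trip multiple checks, since each check alone also catches some legitimate respondents.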
For practitioners - what to watch
Monitor vendor disclosures about respondent sourcing and any use of synthetic responses. Track methodological research on validating authenticity signals and on quantifying how model-generated answers affect margins of error and subgroup estimates. Watch peer-reviewed replication studies and technical notes from major centers (including Pew) for concrete detection algorithms and recommended transparency practices.
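One concrete quantity to track from the list above is how weighting inflates margins of error, especially for subgroups. A hedged sketch of the standard design-effect adjustment (the sample sizes and the design effect of 1.3 are illustrative assumptions, not figures from the Q&A):

```python
import math

def margin_of_error(p, n, deff=1.0, z=1.96):
    """95% margin of error for a proportion, inflated by a design effect.

    p: estimated proportion; n: sample size; deff: design effect from
    weighting (1.0 = simple random sample); z: critical value.
    """
    return z * math.sqrt(deff * p * (1 - p) / n)

# Full sample of 5,000 vs a 400-person subgroup, p = 0.5 (worst case).
print(round(margin_of_error(0.5, 5000, deff=1.3), 3))  # 0.016
print(round(margin_of_error(0.5, 400, deff=1.3), 3))   # 0.056
```

The point for practitioners: if inauthentic or synthetic responses force heavier weighting, the design effect rises and subgroup intervals widen fastest, so subgroup estimates are the first place degradation becomes visible.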
Scoring rationale
The piece is directly relevant to survey researchers and practitioners who rely on representative public-opinion data. It updates methodological risk around AI and fabricated respondents but does not report a technical breakthrough or new large-scale attack, placing it in the "notable" range.