Generative AI Fuels Deepfakes and Online Disinformation

Generative AI is dramatically increasing the volume and quality of synthetic content, and malicious actors are scaling its use for fraud, impersonation, and social engineering. Shuman Ghosemajumder, a veteran of Google and Shape Security and now CEO of Reken, warns that cybercriminals exploit generative AI because model errors are irrelevant to their goals. The result is an exponential rise in sophisticated deepfakes, fabricated text, and coordinated disinformation campaigns that waste time, erode trust, and cause direct harm. Mitigation will require a combination of technical, policy, and legal measures. Practitioners should prioritize multi-modal detection, provenance signals, and rate-limiting automation while preparing for adversaries that will continually adapt.
What happened
Shuman Ghosemajumder, founder of Google's Trust & Safety product group and former CTO of Shape Security, presented a keynote warning that generative AI is driving an exponential rise in deepfakes, disinformation, and AI-enabled fraud. He argued that cybercriminals are heavy adopters of gen AI because hallucinations and inaccuracies do not hinder their aims; quantity and plausibility matter more than factual precision. Ghosemajumder is now CEO of Reken, a startup focused on defending against AI-enabled fraud, and his talk framed the problem as large-scale, multi-modal abuse rather than isolated incidents.
Technical details
Practitioners should treat the threat as a systems problem, not a single-model failure. Key technical observations:
- Synthetic content spans modalities: images, video deepfakes, and weaponized conversational text.
- Attackers use automation to scale content generation, account creation, and distribution (a minimal rate-limiting sketch follows this list).
- The volume and plausibility of synthetic content make detection and trust verification more challenging than for human-authored abuse.
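One concrete control against the automation problem above is per-client rate limiting. The sketch below is a minimal, illustrative token-bucket limiter; the class name, thresholds, and escalation path are assumptions for illustration, not anything Ghosemajumder or Reken has described.

```python
import time
from collections import defaultdict

class TokenBucket:
    """Per-client token bucket: refills at `rate` tokens/sec, capped at `capacity`."""

    def __init__(self, rate: float, capacity: float):
        self.rate = rate
        self.capacity = capacity
        self.tokens = defaultdict(lambda: capacity)   # new clients start with a full bucket
        self.last_seen = defaultdict(time.monotonic)  # per-client time of last refill

    def allow(self, client_id: str, cost: float = 1.0) -> bool:
        """Return True and deduct `cost` tokens if the client is within budget."""
        now = time.monotonic()
        elapsed = now - self.last_seen[client_id]
        self.last_seen[client_id] = now
        # Refill proportionally to elapsed time, never exceeding capacity.
        self.tokens[client_id] = min(self.capacity,
                                     self.tokens[client_id] + elapsed * self.rate)
        if self.tokens[client_id] >= cost:
            self.tokens[client_id] -= cost
            return True
        return False

# Illustrative thresholds: ~30 submissions/minute sustained, bursts capped at 10.
limiter = TokenBucket(rate=0.5, capacity=10)
if not limiter.allow("account-1234"):
    # Don't hard-block on a single signal: escalate to step-up verification or review.
    pass
```

Rate limiting alone will not stop a distributed botnet, but it directly raises the per-attack cost that, per the keynote, generative AI has driven down.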
Context and significance
The keynote marks a maturation of prior trends in misinformation and fraud rather than a new phenomenon. The shift from manual social engineering to automated, model-driven campaigns amplifies reach and cuts per-attack cost. Because the problem intersects with platform design and verification systems, mitigations will likely require a mix of technical, policy, and societal responses.
What to watch
Expect more investment and productization in AI-native defenses from startups and platform teams, and growing pressure for policy measures and stronger enforcement. Operationally, teams should prioritize improved detection and verification approaches, controls to limit large-scale automated abuse, and red-team exercises that simulate automated adversaries.
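As a sketch of what "improved detection and verification" might look like in practice, the snippet below blends several weak signals (provenance metadata, a detector score, account behavior) into a single risk estimate. Everything here is an illustrative assumption: the field names, the weights, and the 0.6 threshold are hypothetical, and no single synthetic-media detector is reliable enough to act on alone.

```python
from dataclasses import dataclass

@dataclass
class ContentSignals:
    # All fields are illustrative; production systems would use many more signals.
    has_valid_provenance: bool  # e.g. a verified C2PA manifest is present
    detector_score: float       # hypothetical synthetic-media classifier output, 0..1
    account_age_days: float
    posts_last_hour: int

def risk_score(s: ContentSignals) -> float:
    """Blend weak signals into a single 0..1 risk estimate (weights are assumptions)."""
    score = 0.0
    score += 0.0 if s.has_valid_provenance else 0.25   # missing provenance is weak evidence
    score += 0.45 * s.detector_score                   # detectors are noisy; weight, don't trust
    score += 0.15 if s.account_age_days < 7 else 0.0   # fresh accounts correlate with automation
    score += 0.15 if s.posts_last_hour > 20 else 0.0   # high posting velocity suggests scripting
    return min(score, 1.0)

# Route by risk rather than hard-blocking, since false positives are costly.
signals = ContentSignals(False, 0.8, 2.0, 35)
action = "manual_review" if risk_score(signals) > 0.6 else "allow"
```

Combining signals this way reflects the "systems problem" framing above: adversaries will defeat any single detector, so defenses should degrade gracefully when one signal is gamed.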
Scoring Rationale
The keynote highlights an accelerating, high-impact trend for security and trust teams but does not introduce a new technical breakthrough. It is notable for warning practitioners about the scale of the threat and the need for systemic defenses, which is operationally important but not a paradigm shift.
