Apple, Google Promote Apps That Create Deepfake Nudes

The Gateway Pundit reports that a January Tech Transparency Project (TTP) study, relayed by 9to5Mac, found Apple App Store and Google Play search results and autocomplete led users to apps that can produce sexualized deepfakes. The Gateway Pundit says the TTP analysis found about 40% of the top 10 apps returned for queries such as "nudify," "undress," and "deepnude" could render women nude or scantily clad. One app developer told 9to5Mac they "had no idea it was capable of producing such extreme content," per The Gateway Pundit. The Gateway Pundit reports that, according to 9to5Mac, Apple removed 15 apps and issued notices to others under its review guidelines. The article also quotes California Governor Gavin Newsom criticizing xAI over alleged nonconsensual sexually explicit deepfakes on X.
What happened
The Gateway Pundit reports that a January report from the Tech Transparency Project (TTP), as covered by 9to5Mac, found that Apple App Store and Google Play search results and autocomplete suggestions led users to apps that can create sexualized deepfakes. The Gateway Pundit summarizes the TTP finding that roughly 40% of the top 10 apps returned for search terms like "nudify," "undress," and "deepnude" could render women nude or scantily clad. The Gateway Pundit also reports that a developer reached by 9to5Mac said they "had no idea it was capable of producing such extreme content," and that, per 9to5Mac, Apple removed 15 apps and notified others of potential guideline violations.
Technical details
The Gateway Pundit notes these apps combine elements from two images to generate sexualized outputs; this description appears in the TTP summary as relayed by 9to5Mac. The scraped coverage does not publish model names, architectures, or training-data provenance for the apps; it also does not provide technical forensic analysis of how the apps perform facial editing or layering.
Industry context
Editorial analysis: Platform discoverability and autocomplete have repeatedly been flagged by observers as amplification mechanisms for problematic content. Companies and researchers tracking app-store moderation have previously documented search optimization and metadata gaps that surface borderline or rule-violating apps. For practitioners, that pattern means enforcement gaps often show up first in search and recommendation metadata rather than in binary app-review outcomes.
Regulatory and public response
The Gateway Pundit quotes California Governor Gavin Newsom criticizing xAI, calling alleged nonconsensual sexually explicit AI deepfakes "vile" and urging an investigation; that quote is presented in the scraped article. The Gateway Pundit reports Apple removed some apps after the coverage and that notices were issued to others, per 9to5Mac's reporting.
What to watch
Editorial analysis: Observers should watch for:
- follow-up technical disclosures from TTP or independent researchers showing reproducible methods for detecting these outputs
- platform-level changes to search ranking, autocomplete, and metadata curation on both major app stores
- any regulatory or enforcement actions that specify obligations for app marketplaces to screen for nonconsensual deepfake capabilities

For engineers building detection or moderation tooling, monitoring search-level signals and metadata hygiene on distribution channels may be as important as model-output classifiers.
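As one illustration of the search-level monitoring described above, a minimal sketch of a metadata screening check might look like the following. All names, the keyword list, and the sample listing are hypothetical; the only grounding is the set of query terms cited in the TTP coverage ("nudify," "undress," "deepnude"), and a real system would need far richer signals than keyword matching.

```python
import re

# Hypothetical watchlist drawn from the query terms cited in the TTP coverage.
RISKY_TERMS = {"nudify", "undress", "deepnude"}

def flag_listing(title: str, description: str, autocomplete_hits: list[str]) -> dict:
    """Report which risky terms appear in an app listing's surface metadata.

    This inspects only the title, description, and the search/autocomplete
    suggestions that surfaced the app -- not the app's actual behavior.
    """
    text = f"{title} {description}".lower()
    # Whole-word matches in the listing's own metadata.
    metadata_matches = {t for t in RISKY_TERMS
                        if re.search(rf"\b{re.escape(t)}\b", text)}
    # Risky terms appearing in the search suggestions that led to the app.
    autocomplete_matches = {t for t in RISKY_TERMS
                            for hit in autocomplete_hits if t in hit.lower()}
    return {
        "metadata_matches": sorted(metadata_matches),
        "autocomplete_matches": sorted(autocomplete_matches),
        "needs_review": bool(metadata_matches or autocomplete_matches),
    }

# Hypothetical listing, for illustration only.
report = flag_listing(
    title="PhotoMagic Editor",
    description="Undress photos with one tap using AI",
    autocomplete_hits=["nudify app free"],
)
```

The design point is that the flag comes from distribution-channel metadata (listing text and autocomplete) rather than from classifying app outputs, matching the enforcement-gap pattern the coverage describes.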
Limitations of the coverage
The Gateway Pundit article relays the TTP findings via 9to5Mac but does not include direct technical appendices, raw test sets, or detailed methodology in the scraped text. Several high-stakes claims in the coverage are attributed to the TTP report or to 9to5Mac in the Gateway Pundit piece; the underlying TTP report or primary 9to5Mac article should be consulted for method-level details and original quotes.
Scoring Rationale
The story highlights a notable moderation and safety gap with direct implications for nonconsensual-image risk and child safety; it is important for practitioners working on detection, content policy, and platform governance but does not introduce a new modeling or infrastructure breakthrough.
