AI Chatbots Expose Users' Phone Numbers, Fuel Scams

Reporting by The Independent and the New York Post documents multiple complaints that AI chatbots, including Gemini, ChatGPT, and Grok, have returned private phone numbers as contact details. Both outlets cite a Reddit post in which a victim says strangers called repeatedly after getting their number from "Google's AI." A spokesperson for data-removal firm ClearNym, quoted by The Independent, attributes the problem to years of unchecked data brokerage colliding with generative AI, and the New York Post reports fraud-prevention observers warning that criminals can plant fake customer-service numbers online for chatbots to regurgitate, opening new avenues for scams.
What happened
Reporting by The Independent and the New York Post documents multiple independent accounts of so-called "AI doxxing," in which large language model chatbots return personal phone numbers when users ask for contact details. Both outlets cite a Reddit post from r/Google in which a user wrote, "Strangers are calling me constantly looking for a lawyer, a product designer, a locksmith... Every single one of them tells me: 'I got your number from Google's AI'." Both outlets also report similar anecdotal incidents implicating chatbots such as Gemini, ChatGPT, and Grok.
Technical details
Editorial analysis (technical context): Public reporting frames these incidents as arising from generative models trained on broad internet data, which can surface outdated, scraped, or otherwise sensitive entries. Industry accounts cited by the outlets tie the problem to online data aggregation and indexed web content being used as promptable knowledge by LLMs, which can produce either verbatim reproductions of scraped numbers or plausible-seeming placeholders.
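To make that failure mode concrete, the toy sketch below shows how a naive lookup over scraped web pages can hand a stale personal number to a chatbot as if it were a business contact. All data, URLs, numbers, and function names here are invented for illustration; this is not how any named vendor's pipeline is documented to work.

```python
# Hypothetical illustration: a naive "promptable knowledge" lookup over
# scraped pages. All data, domains, and numbers below are invented.

SCRAPED_INDEX = [
    # A stale directory listing whose number was later reassigned
    # to a private individual.
    {"url": "https://example-directory.test/locksmiths",
     "text": "Ace Locksmith, 24/7 service, call 555-0142."},
    {"url": "https://example-blog.test/design",
     "text": "Product designer portfolio. Contact: 555-0142."},
]

def retrieve_contact(query: str) -> str | None:
    """Return the first scraped snippet whose text mentions the query term.

    No provenance checks, no freshness checks, no consent checks --
    exactly the gap the reporting points at.
    """
    for page in SCRAPED_INDEX:
        if query.lower() in page["text"].lower():
            return page["text"]
    return None

# A user asks for a locksmith; the pipeline surfaces whatever number the
# scrape happened to contain, whether or not it still belongs to a business.
print(retrieve_contact("locksmith"))
# -> "Ace Locksmith, 24/7 service, call 555-0142."
```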
What sources reported
The Independent quotes a spokesperson for data-removal firm ClearNym saying, "Gemini's problem is not a defect. It's the result of unchecked years of data brokerage practices that meet generative AI." The New York Post reports comments from fraud-prevention observers and quotes Murray Mackenzie, described in that article as a fraud-prevention director, warning that scammers are planting fake customer-service numbers online that chatbots can regurgitate.
Context and significance
Editorial analysis: For practitioners, these reports illustrate a privacy and safety failure mode where model outputs reproduce or amplify contactable identifiers, creating harassment and fraud risks. Observed patterns in similar incidents include amplification of scraped personal data, adversarial "poisoning" of public listings with fake numbers, and downstream social engineering that converts automated outputs into real-world abuse.
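One way to make the "poisoning" pattern concrete is a crude provenance check that flags pages advertising a brand's support number from a domain the brand does not control. This is a toy heuristic, not a reported vendor mitigation; every name, domain, and number below is invented.

```python
import re
from urllib.parse import urlparse

# Hypothetical snippets, as a retrieval layer might see them. The second
# mimics a planted "customer service" listing on an unrelated domain.
SNIPPETS = [
    {"url": "https://support.examplebank.test/contact",
     "text": "ExampleBank customer service: 555-0100"},
    {"url": "https://cheap-seo-pages.test/examplebank-help",
     "text": "ExampleBank customer service: 555-0199"},
]

PHONE_RE = re.compile(r"\b\d{3}-\d{4}\b")  # naive pattern for the toy data

def suspicious(snippet: dict, brand: str) -> bool:
    """Flag snippets that pair a brand name with a phone number while
    being hosted on a domain that does not contain the brand -- a crude
    provenance signal against planted listings."""
    host = urlparse(snippet["url"]).hostname or ""
    mentions_brand = brand.lower() in snippet["text"].lower()
    on_brand_domain = brand.lower() in host.lower()
    has_number = bool(PHONE_RE.search(snippet["text"]))
    return mentions_brand and has_number and not on_brand_domain

for s in SNIPPETS:
    print(s["url"], "->", "FLAG" if suspicious(s, "ExampleBank") else "ok")
# -> https://support.examplebank.test/contact -> ok
# -> https://cheap-seo-pages.test/examplebank-help -> FLAG
```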
What to watch
Editorial analysis: Observers and practitioners should track vendor disclosures about training-data curation, developer controls for redacting or blocking phone-number outputs, and platform-level mitigations such as rate-limited contact suggestions or provenance tags. Indicators to follow in coverage include the volume of named incidents, vendor statements about dataset removals, and any tooling that lets users request deletion of replicated contact details.
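For the "redacting or blocking phone-number outputs" control mentioned above, a minimal post-processing filter might look like the sketch below. The regex is a naive North-American-style pattern written for this example; a production system would more plausibly use a region-aware parser (for instance the open-source phonenumbers library) rather than a single expression.

```python
import re

# Naive North-American-style pattern; real deployments would want a
# region-aware number parser rather than one regex.
PHONE_RE = re.compile(
    r"(?:\+?1[\s.-]?)?(?:\(\d{3}\)|\d{3})[\s.-]?\d{3}[\s.-]?\d{4}\b"
)

def redact_phone_numbers(model_output: str,
                         placeholder: str = "[number removed]") -> str:
    """Replace anything that looks like a phone number in a model
    response before it reaches the user."""
    return PHONE_RE.sub(placeholder, model_output)

reply = "You can reach Ace Locksmith at (555) 014-2000 any time."
print(redact_phone_numbers(reply))
# -> "You can reach Ace Locksmith at [number removed] any time."
```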
Scoring rationale
This is a notable cross-platform safety and privacy incident affecting major LLM-based chatbots and raising operational risk for practitioners. It is not a paradigm-shifting model release but is important for deployment safeguards, data curation, and abuse mitigation.