Google's AI Allegedly Finds Pattern in Bigfoot Sightings

BroBible reports that a viral TikTok clip, reposted by @irish.demon5 and originating from the YouTube channel "AI Discovery," claims that Google's DeepMind analyzed alleged Bigfoot sightings. The video, titled "Google's AI Was Fed Every Bigfoot Sighting Since 1958, The Pattern It Found Is Unexplainable," claims that in 2024 researchers processed "10,000 pieces of Bigfoot evidence" spanning "66 years" and that "each sighting report was coded with over 150 data points," per BroBible. The TikTok repost has over 63,500 views, according to BroBible. BroBible presents no independent verification of the analysis and cites no official Google statement.
What happened
The clip originated on the YouTube channel "AI Discovery" and was reposted on TikTok by @irish.demon5, where it has drawn over 63,500 views, according to BroBible. The narrator claims that in 2024 researchers ran machine learning over "10,000 pieces of Bigfoot evidence" covering 66 years and that "each sighting report was coded with over 150 data points," per BroBible. BroBible does not provide independent verification or a cited response from Google.
Editorial analysis - technical context
Projects that claim pattern discovery in long-running anecdotal databases typically face serious data-quality challenges. Common issues include inconsistent reporting standards across decades, coarse or incorrect geocoding, high label noise in eyewitness accounts, selection and survivorship bias, and unclear inclusion criteria. These factors complicate reproducibility and make it difficult to separate spurious correlations from robust signals.
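The kinds of data-quality problems described above can be screened for mechanically before any modeling begins. As a minimal sketch, assuming a hypothetical sighting-report dataset (the field names and records here are invented for illustration, not drawn from any actual Bigfoot database):

```python
from datetime import date

# Hypothetical sighting reports; all field names and values are assumed.
reports = [
    {"id": 1, "date": date(1958, 10, 5), "lat": 41.2, "lon": -123.7, "text": "large figure seen at dusk"},
    {"id": 2, "date": date(1958, 10, 5), "lat": 41.2, "lon": -123.7, "text": "large figure seen at dusk"},  # duplicate
    {"id": 3, "date": date(2024, 6, 1), "lat": None, "lon": None, "text": "footprints near creek"},  # no geocode
]

def quality_report(rows):
    """Flag two common issues in anecdotal databases:
    exact duplicates and records with missing geocoding."""
    seen, duplicates, ungeocoded = set(), 0, 0
    for r in rows:
        key = (r["date"], r["lat"], r["lon"], r["text"])
        if key in seen:
            duplicates += 1
        seen.add(key)
        if r["lat"] is None or r["lon"] is None:
            ungeocoded += 1
    return {"total": len(rows), "duplicates": duplicates, "ungeocoded": ungeocoded}

print(quality_report(reports))  # {'total': 3, 'duplicates': 1, 'ungeocoded': 1}
```

A real audit would go further (reporting-standard drift across decades, ambiguous inclusion criteria), but even simple counts like these determine whether downstream "patterns" are worth taking seriously.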
Industry context
For practitioners, viral claims like this highlight two recurring themes in applied ML: first, model outputs are only as reliable as the underlying data and curation; second, extraordinary claims without released code, data, or methodology invite skepticism. Industry reporting often emphasizes the need for transparent provenance, evaluation against baseline models, and independent replication before accepting surprising results.
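The baseline-evaluation point can be made concrete with a small sketch: before calling a result a discovered pattern, compare the model against a trivial majority-class predictor. The labels and predictions below are invented for illustration:

```python
from collections import Counter

# Hypothetical labels: whether a sighting cluster was "anomalous" (1) or not (0).
y_true = [0, 0, 0, 0, 0, 0, 0, 0, 1, 1]
y_model = [0, 0, 0, 0, 0, 0, 0, 1, 1, 0]  # assumed model predictions

def accuracy(truth, pred):
    return sum(t == p for t, p in zip(truth, pred)) / len(truth)

# Majority-class baseline: always predict the most common label.
majority = Counter(y_true).most_common(1)[0][0]
baseline_acc = accuracy(y_true, [majority] * len(y_true))
model_acc = accuracy(y_true, y_model)

print(f"baseline={baseline_acc:.2f} model={model_acc:.2f}")  # baseline=0.80 model=0.80
```

Here the model merely matches the trivial baseline, so its "pattern" carries no predictive value; a claim like the one in the viral video would need to clear exactly this kind of bar, with released data and code, before it counts as a finding.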
What to watch
Indicators that would increase the claim's credibility include a public dataset release, a methodological writeup or preprint, released code and model checkpoints, independent replications by third parties, or an official statement from Google or DeepMind. Absent those artifacts, the claim remains an unverified viral report, per the available coverage.
Scoring rationale
The story concerns a viral, unverified claim about DeepMind analyzing folklore data. It has low technical impact for practitioners until data, code, or methodology are published for scrutiny.

