Researchers Reveal AI Misrepresents Neanderthal Appearance
Researchers Matthew Magnani and Jon Clindaniel recently published a study in Advances in Archaeological Practice showing that ChatGPT and DALL-E produce grossly inaccurate images and text when prompted about Neanderthals. The generated outputs exaggerated body hair and apelike posture and oversimplified cultural behaviors, echoing older, freely available sources; the 100 DALL-E images examined displayed these stereotypes. The findings suggest that training-data access and prompt design materially shape AI reconstructions of the past.