Study Shows People Miss AI-Written Personal Messages

A behavioral experiment with more than 1,300 U.S. adults finds most recipients do not detect when a personal message was written by AI. When participants were explicitly told a message was AI-written they judged senders more negatively, using words like "lazy" and "insincere," yet participants who received messages with no authorship cue evaluated them as positively as those told the text was written by a human. The study highlights a gap between detection ability and attitudinal reaction, implying that disclosure policies and social norms, not just detection tools, will shape the social impact of AI-assisted personal communication.
What happened
A controlled experiment led by an assistant professor at the University of Michigan tested how people perceive brief personal messages. More than 1,300 U.S.-based participants, ages 18-84, read identical messages (for example, a fictional apology) under four different authorship conditions. When participants were explicitly told a message was generated by AI, they rated the sender more negatively, describing the sender with words like "lazy" and "insincere" and citing a lack of effort. Critically, participants shown messages without any authorship information did not detect AI authorship and formed impressions as positive as when they believed the message came from a human.
Technical details
The study randomized participants into four conditions:
- told the text was definitely human-written
- told the text was definitely AI-generated
- told the source could be either human or AI
- given no information about authorship
The authors measured perceived traits such as effort, sincerity, and warmth, and compared mean ratings across groups. Disclosing AI authorship produced substantial negative shifts in these interpersonal evaluations, while unaided detection of AI-generated personal notes was near zero among typical recipients.
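The analysis described above amounts to a between-groups comparison of mean trait ratings. As a minimal sketch of that kind of comparison, the snippet below uses entirely hypothetical Likert-style ratings (the study's actual data, scales, and effect sizes are not reproduced here) and computes condition means plus a standardized mean difference (Cohen's d) between the human-disclosed and AI-disclosed groups:

```python
import random
import statistics

random.seed(0)

# Hypothetical 1-7 sincerity ratings per authorship condition; the
# AI-disclosed group is sampled lower to mirror the reported penalty.
ratings = {
    "human":   [random.gauss(5.5, 1.0) for _ in range(300)],
    "ai":      [random.gauss(4.3, 1.0) for _ in range(300)],
    "either":  [random.gauss(5.0, 1.0) for _ in range(300)],
    "no_info": [random.gauss(5.4, 1.0) for _ in range(300)],
}

def cohens_d(a, b):
    """Standardized mean difference using a pooled standard deviation."""
    na, nb = len(a), len(b)
    pooled_var = (
        (na - 1) * statistics.variance(a) + (nb - 1) * statistics.variance(b)
    ) / (na + nb - 2)
    return (statistics.mean(a) - statistics.mean(b)) / pooled_var ** 0.5

for condition, values in ratings.items():
    print(f"{condition:8s} mean rating = {statistics.mean(values):.2f}")

print(f"Cohen's d (human vs. AI disclosed) = "
      f"{cohens_d(ratings['human'], ratings['ai']):.2f}")
```

A d around 0.8 or higher is conventionally read as a large effect; the article's claim that disclosure penalties were "substantial enough to change interpersonal evaluations" corresponds to effects in that territory.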
Context and significance
This result sits at the intersection of human-AI interaction, social trust, and digital authenticity. Because detection rates are so low, AI-written messages will be treated as human-produced unless authorship is disclosed; at the same time, disclosure carries reputational costs for senders. That combination creates a perverse incentive: nondisclosure preserves social capital but undermines transparency, and could accelerate deceptive uses in high-stakes contexts like romance scams, political persuasion, or conflict resolution. The finding complements technical work on AI-detection tools by underscoring behavioral responses to labels, not just detection accuracy.
What to watch
Expect policy, platform, and UX experiments around default disclosure, easy provenance metadata, and norms for AI assistance in personal communication. Key open questions include whether familiarity with AI reduces disclosure penalties, and how different message types or social relationships modulate reactions.
Scoring Rationale
The study provides actionable behavioral evidence about detection and social costs of AI-written personal messages, relevant for platforms, regulators, and designers. It is important but not a technical breakthrough, so it rates as notable rather than industry-shaking.