Teens Form Addictive Attachments to AI Chatbots

A Drexel University study finds that more than half of U.S. teens regularly use AI companion chatbots and that roughly 25% rely on them for emotional support. Researchers analyzed over 300 Reddit posts from self-identified 13- to 17-year-olds and documented patterns that resemble behavioral addiction: strong anthropomorphism, withdrawal symptoms, relapse, sleep disruption, and academic decline. The paper argues these dependencies are amplified by features that let bots remember past conversations and interact multimodally. The Drexel team proposes design-oriented mitigations, including usage tracking, emotional check-ins, and personalized limits, aimed at preventing "unhealthy anthropomorphism." For practitioners building or deploying conversational agents, the study highlights predictable harm pathways and concrete UX-level interventions that can reduce overreliance among minors.
What happened
The Drexel University team analyzed more than 300 Reddit posts from self-identified teens, concluding that AI companion chatbots are becoming routine and, for a substantial minority, behaviorally addictive. The study finds that more than half of U.S. teens now use companion chatbots and that roughly 25% of those users turn to bots for mental health advice or emotional support, with some describing breakup-like withdrawal when access ends.
Technical details
The researchers link addiction-like outcomes to specific technical and design affordances of modern generative systems. Key mechanisms include:
- Memory and persistence, where bots retain conversation history and reinforce continuity across sessions
- Multimodal interfaces, including voice and images, that increase perceived presence
- Personality tuning and responsiveness, which encourage anthropomorphism
- 24/7 availability, removing the natural social friction that would otherwise limit use
- Lack of built-in off-ramps, meaning few products include friction, timers, or therapeutic exit scaffolds
The authors propose pragmatic design mitigations aimed at product teams: usage dashboards, periodic emotional check-ins, adaptive time limits, explicit disclosure of nonhuman status, and differentiated defaults for minors.
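To make those mitigations concrete, here is a minimal sketch of how a product team might wire usage tracking, periodic emotional check-ins, adaptive limits, and minor-specific defaults into a chat loop. Every name, threshold, and default value below is a hypothetical assumption for illustration, not a parameter from the study.

```python
from dataclasses import dataclass
from datetime import timedelta

# Illustrative sketch of the mitigations described above. All names,
# thresholds, and defaults are assumptions, not values from the study.

@dataclass
class SessionPolicy:
    daily_limit: timedelta          # adaptive time limit on total daily use
    checkin_every: int              # emotional check-in cadence, in messages
    disclose_nonhuman: bool = True  # explicit nonhuman-status disclosure

# Differentiated defaults for minors vs. the general population (assumed values).
MINOR_POLICY = SessionPolicy(daily_limit=timedelta(minutes=45), checkin_every=20)
DEFAULT_POLICY = SessionPolicy(daily_limit=timedelta(hours=3), checkin_every=100)

@dataclass
class UsageTracker:
    """Per-user counters that would also feed a usage dashboard."""
    time_today: timedelta = timedelta()
    messages_today: int = 0

    def record_turn(self, seconds: float) -> None:
        self.time_today += timedelta(seconds=seconds)
        self.messages_today += 1

def next_action(tracker: UsageTracker, policy: SessionPolicy) -> str:
    """Decide whether to continue, surface a check-in, or offer an off-ramp."""
    if tracker.time_today >= policy.daily_limit:
        return "offer_exit"         # off-ramp: suggest winding the session down
    if tracker.messages_today and tracker.messages_today % policy.checkin_every == 0:
        return "emotional_checkin"  # periodic "how are you feeling?" prompt
    return "continue"
```

A real deployment would persist these counters server-side and tune the thresholds per user, in the spirit of the paper's "personalized limits."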
Context and significance
This study operationalizes a recurring ethical concern into observable behaviors and product-level recommendations. For practitioners, the findings convert an abstract risk into measurable signals you can instrument: session frequency, resume intervals after breaks, and language markers of attachment or withdrawal. The research intersects with existing regulatory attention on child-directed services and mental-health safety in AI, raising the prospect that platforms without protective defaults may face scrutiny.
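As a rough illustration of that instrumentation, the sketch below derives session counts, resume intervals after breaks, and attachment-language markers from ordered session logs. The log shape and keyword list are assumptions made for the example; the study's own coding scheme is not reproduced here, and a production system would use a validated classifier rather than keywords.

```python
from datetime import datetime
from statistics import mean

# Hypothetical keyword markers of attachment/withdrawal language (assumed).
ATTACHMENT_MARKERS = {"miss you", "only friend", "need you", "can't stop"}

def overreliance_signals(session_starts: list[datetime], messages: list[str]) -> dict:
    """Compute coarse overreliance signals from chronologically ordered
    session start times and raw message text."""
    # Hours between consecutive session starts: how quickly users resume.
    resume_hours = [
        (later - earlier).total_seconds() / 3600
        for earlier, later in zip(session_starts, session_starts[1:])
    ]
    # Fraction of messages containing any attachment-language marker.
    marker_hits = sum(
        any(marker in msg.lower() for marker in ATTACHMENT_MARKERS)
        for msg in messages
    )
    return {
        "session_count": len(session_starts),
        "mean_resume_interval_hours": mean(resume_hours) if resume_hours else None,
        "attachment_marker_rate": marker_hits / len(messages) if messages else 0.0,
    }
```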
What to watch
The next test of efficacy is whether major players such as Character.AI and Replika adopt the proposed mitigations. Product teams should prioritize telemetry and A/B tests that measure downstream outcomes such as sleep disruption and academic impact, and policymakers may draw on these results to inform age-specific safeguards.
Scoring Rationale
The study provides actionable evidence linking product affordances to measurable harm in a vulnerable population, making it notable for designers and safety teams. It does not introduce new technical methods or large-scale policy change, so its impact is important but not industry-shaking.