Teens Redefine AI Companions Through Creative Use

Researchers at the University of Sydney document that teens use AI companions for playful, creative, and emotionally expressive interactions, not just basic Q&A. The chatbot platform Character.AI grew to 20 million users and hosted 10 million characters, but in November 2025 the company blocked teen accounts under legal and public safety pressure. That removal curtailed a range of youth-driven behaviors that researchers say offer design lessons for safer, expressive systems: roleplay, storytelling, identity exploration, and informal emotional support. The study argues platforms should balance content moderation with youth-centered design, research access, and nuanced age-safety mechanisms rather than blunt exclusion.
What happened
Researchers from the University of Sydney examined how adolescents engage with AI companions and found use cases that extend well beyond transactional chat. The social chatbot platform Character.AI scaled rapidly to 20 million users and 10 million characters, but under mounting legal and public pressure it banned teen access in November 2025 after deploying parental controls and stricter content filters. The ban removed a creative and emotionally expressive set of interactions that researchers had documented and analyzed.
Technical details
The study captures recurring interaction patterns practitioners should recognize:
- Roleplay and character-driven narrative co-creation used for creative writing and improvisation
- Identity exploration through alternate personas and conversational rehearsals
- Expressive, emotional exchanges that function as informal support or experimentation
The paper highlights that standard moderation tools, like coarse content filters and age gating, often break or block these legitimate behaviors. For product teams this implies a need for fine-grained intent detection, contextual moderation models, graduated safety controls, and research-friendly telemetry that preserves privacy while enabling behavioral study.
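To make the contrast concrete, here is a minimal sketch of a graduated safety policy of the kind the paper's recommendations point toward. Everything in it is illustrative and not from the study: the intent categories, keyword markers, and intervention names are hypothetical, and the keyword matching merely stands in for the trained contextual classifiers a production system would use.

```python
from dataclasses import dataclass
from enum import Enum
from typing import Optional

class Intent(Enum):
    CREATIVE_ROLEPLAY = "creative_roleplay"
    EMOTIONAL_SUPPORT = "emotional_support"
    GENERAL_CHAT = "general_chat"
    HIGH_RISK = "high_risk"

@dataclass
class SafetyDecision:
    allow: bool
    intervention: Optional[str]  # e.g. "soften", "redirect_to_resources", or None

# Hypothetical keyword heuristics; a real pipeline would use a trained
# contextual intent model rather than string matching.
RISK_MARKERS = {"self-harm", "suicide"}
SUPPORT_MARKERS = {"lonely", "sad", "anxious"}
ROLEPLAY_MARKERS = {"pretend", "let's imagine", "in character"}

def classify_intent(message: str) -> Intent:
    text = message.lower()
    if any(m in text for m in RISK_MARKERS):
        return Intent.HIGH_RISK
    if any(m in text for m in SUPPORT_MARKERS):
        return Intent.EMOTIONAL_SUPPORT
    if any(m in text for m in ROLEPLAY_MARKERS):
        return Intent.CREATIVE_ROLEPLAY
    return Intent.GENERAL_CHAT

def graduated_policy(intent: Intent, is_minor: bool) -> SafetyDecision:
    # Graduated controls escalate intervention with assessed risk,
    # instead of applying a binary block to every teen interaction.
    if intent is Intent.HIGH_RISK:
        return SafetyDecision(allow=False, intervention="redirect_to_resources")
    if intent is Intent.EMOTIONAL_SUPPORT and is_minor:
        return SafetyDecision(allow=True, intervention="soften")
    return SafetyDecision(allow=True, intervention=None)
```

The design point is the shape of the decision, not the classifier: lower-risk creative uses pass through, emotionally loaded exchanges get a gentler model persona, and only genuinely risky intents are blocked outright.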
Context and significance
This work sits at the intersection of human-computer interaction, platform governance, and safety engineering. Youth are early adopters who push systems in unexpected directions; their use reveals functionality gaps in moderation pipelines and policy. The Character.AI case illustrates a broader industry tension: rapid safety-driven restrictions reduce short-term risk exposure but also eliminate valuable exploratory use cases that inform better design. For ML operations and safety teams, the finding underscores the limits of binary age bans and the opportunity for tailored models and UX patterns that support creative, lower-risk interactions.
What to watch
Expect product experiments that try graduated safety measures, improved intent classifiers, and privacy-preserving research data flows. Regulators, platform trust teams, and academic labs will be watching whether platforms replace blunt exclusions with targeted, evidence-based controls that preserve teen creativity without increasing harm.
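One of those expected experiments, privacy-preserving research data flows, can be sketched in a few lines. The class below is a hypothetical illustration, not any platform's actual telemetry: it keeps only a salted pseudonym per session and coarse category counts, never message text, and suppresses reports below a simple k-anonymity-style session threshold.

```python
import hashlib
from collections import Counter
from typing import Dict, Optional, Set

class PrivateTelemetry:
    """Aggregate interaction-pattern counts without retaining message content.

    Researchers see only how often coarse categories (e.g. "roleplay")
    occur across pseudonymized sessions, never the conversations themselves.
    """

    def __init__(self, salt: str):
        self._salt = salt
        self._counts: Counter = Counter()
        self._sessions: Set[str] = set()

    def record(self, session_id: str, category: str) -> None:
        # Salted hash pseudonymizes the session; raw ids and text are dropped.
        pseudonym = hashlib.sha256((self._salt + session_id).encode()).hexdigest()
        self._sessions.add(pseudonym)
        self._counts[category] += 1

    def report(self, k_threshold: int = 5) -> Optional[Dict[str, int]]:
        # Suppress output until enough distinct sessions contribute,
        # a simple guard against re-identifying individual users.
        if len(self._sessions) < k_threshold:
            return None
        return dict(self._counts)
```

A flow like this would let behavioral studies of the kind the Sydney researchers conducted continue even under stricter safety regimes.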
Scoring Rationale
The research spotlights important product and safety tradeoffs for AI companion design that matter to ML engineers and product teams. It is relevant but not paradigm-shifting; recent timing reduces novelty marginally.
