Singapore experts warn AI companions may harm teen relationships

Reporting by CNA, the Singapore government's Digital for Life resource, and the South China Morning Post documents growing use of AI companions among young people and flags mental-health risks. Associate Professor Swapna Verma of the Institute of Mental Health told CNA that many young patients consult ChatGPT before clinical visits, and warned that AI's round-the-clock immediacy can lead vulnerable users to accept incorrect or incomplete advice (CNA). The Singapore government resource cites international incidents, including a 2024 CNN report about a 14-year-old who encountered sexually suggestive responses on Character.AI, and notes concerns raised by Australia's eSafety Commissioner (Digital for Life). Industry context: practitioners and educators should treat increased youth reliance on AI companions as part of a broader pattern of digital substitution for human social support.
What happened
Reporting from CNA, the Singapore government's Digital for Life resource, and the South China Morning Post shows growing reliance on AI chatbots and companion apps among young people in Singapore. Per CNA, Associate Professor Swapna Verma, chairman of the medical board at the Institute of Mental Health, said many young patients now consult ChatGPT before therapy sessions: "I had a patient who asked me about a specific kind of therapy. She said this was ChatGPT's advice," and "I see (my patients) once every two or three months, whereas ChatGPT is available to them 24/7" (CNA). The Digital for Life guidance cites reporting by CNN that in 2024 a 14-year-old engaged with an AI companion on Character.AI and received romantic or sexually suggestive replies, which prompted regulatory scrutiny in the United States (Digital for Life / CNN). The South China Morning Post reports that nearly one-third of young people in Singapore report mental-health struggles and that chatbots such as Wysa are being used for emotional support (SCMP). These sources document both therapeutic use and safety concerns around personalised, emotionally resonant AI interactions.
Technical details and editorial context
Industry reporting describes AI companions as systems that use natural language processing, sentiment analysis, and memory of past interactions to produce personalised, emotionally attuned responses (Digital for Life). Editorial analysis: companies and apps that market companion features typically combine retrieval or generative dialogue models with user-state tracking to simulate continuity of relationship; for practitioners, that architecture increases the risk of over-reliance because the system can produce plausible but unverified guidance and maintain conversational context across sessions.
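To make that architecture concrete, below is a minimal, purely illustrative Python sketch of such a loop: a placeholder dialogue model wrapped in per-user state tracking. Every name, and the keyword-based sentiment stub, is an assumption for illustration; no real product's internals are implied.
```python
# Hypothetical sketch of a companion-style loop: a generative reply
# function wrapped in per-user state that persists across sessions.
from dataclasses import dataclass, field

@dataclass
class UserState:
    history: list[str] = field(default_factory=list)  # persists beyond one session
    last_sentiment: str = "neutral"                   # stand-in for a sentiment model

def classify_sentiment(text: str) -> str:
    # Placeholder for a real sentiment model; keyword matching for illustration only.
    negative = ("sad", "lonely", "hopeless", "anxious")
    return "negative" if any(w in text.lower() for w in negative) else "neutral"

def generate_reply(state: UserState, message: str) -> str:
    # Placeholder for a retrieval or generative dialogue model. A real system
    # would condition on state.history, which is what creates the
    # "continuity of relationship" effect described above.
    recalled = state.history[-1] if state.history else "nothing yet"
    return f"(reply conditioned on sentiment={state.last_sentiment}, last turn: {recalled!r})"

def companion_turn(state: UserState, message: str) -> str:
    state.last_sentiment = classify_sentiment(message)
    reply = generate_reply(state, message)
    state.history.append(message)  # memory outlives this exchange
    return reply

state = UserState()
print(companion_turn(state, "I feel lonely today"))
print(companion_turn(state, "Can we talk again tomorrow?"))  # can recall the prior turn
```
The design point is the `history` field: because it outlives any single session, replies can reference earlier disclosures, which is precisely what makes the interaction feel like a continuous relationship and raises the over-reliance risk noted above.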
Context and significance
Editorial analysis: the trend documented in these sources sits at the intersection of adolescent mental-health vulnerability and highly accessible conversational AI. CNA documents clinicians encountering patients who treat AI output as medical or therapeutic advice, while SCMP documents broad uptake of symptom and mood support via apps like Wysa. Digital for Life links these usage patterns to documented harms in other jurisdictions, including grooming risk and emotionally manipulative, hyper-personalised interactions reported by regulators (Digital for Life). For practitioners building conversational systems, the combination of 24/7 accessibility, personalised memory, and anthropomorphic behavior elevates ethical, safety, and design trade-offs compared with single-session helper bots.
What to watch
Editorial analysis: observers should follow three indicators:
- regulatory or school-policy responses to student use of companion apps
- emergent guidance from mental-health organizations on safe design and disclosure for companion features
- product-level changes in age verification, content moderation, and escalation-to-human-support flows (see the sketch below)
Reporting so far includes clinician accounts and government guidance but does not yet document a unified policy response in Singapore; the sources do not describe a single national intervention plan (CNA; Digital for Life; SCMP).
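Editorial analysis: one concrete shape for the third indicator is an escalation-to-human gate. The sketch below is a hypothetical Python illustration; the crisis keyword list, the under-16 threshold, and the routing labels are assumptions, not any product's actual configuration.
```python
# Hypothetical escalation-to-human-support gate of the kind the third
# indicator above refers to. Keywords, thresholds, and route labels are
# illustrative assumptions only.
CRISIS_TERMS = ("hurt myself", "end it all", "self-harm", "suicide")

def route_message(message: str, user_age: int | None) -> str:
    text = message.lower()
    if any(term in text for term in CRISIS_TERMS):
        # Stop generative replies and hand off to a human or helpline path.
        return "escalate_to_human"
    if user_age is not None and user_age < 16:
        # Stricter moderation tier for younger users (age gate).
        return "minor_safe_mode"
    return "normal_companion_flow"

assert route_message("I want to hurt myself", 14) == "escalate_to_human"
assert route_message("how was your day?", 14) == "minor_safe_mode"
assert route_message("how was your day?", 25) == "normal_companion_flow"
```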
Practical note for practitioners
Editorial analysis: teams working on conversational agents should treat user-declared vulnerability and persistent, personality-like memory as higher-risk design dimensions. That observation is a general industry pattern drawn from reported cases and clinician accounts, not a claim about any single provider's intent or roadmap.
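As a sketch of how those two dimensions could feed into a risk tier, consider the following; the field names, weights, and tier labels are assumptions for illustration, not a recommended scoring scheme.
```python
# Illustrative risk-dimension check reflecting the note above: treat
# user-declared vulnerability and long-lived, personality-like memory
# as inputs to a risk tier. All fields and weights are hypothetical.
from dataclasses import dataclass

@dataclass
class SessionProfile:
    declared_vulnerability: bool  # user disclosed a mental-health struggle
    memory_span_days: int         # how long conversational memory persists
    persona_enabled: bool         # anthropomorphic "personality" features on

def risk_tier(p: SessionProfile) -> str:
    score = 0
    score += 2 if p.declared_vulnerability else 0
    score += 1 if p.memory_span_days > 30 else 0
    score += 1 if p.persona_enabled else 0
    if score >= 3:
        return "high"  # e.g. add disclosures, human review, rate limits
    return "standard" if score <= 1 else "elevated"

print(risk_tier(SessionProfile(True, 90, True)))  # -> high
```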
Scoring rationale
The story is notable for practitioners because it documents clinicians and government resources flagging safety gaps where conversational AI intersects with adolescent mental health. It matters for chatbot designers, content-moderation teams, and education policymakers, but it is not a paradigm-shifting technical release.