The tragic suicides of teenagers Sewell Setzer and Adam Raine have exposed a disturbing new reality: artificial intelligence chatbots are creating dependencies in children that can have deadly consequences. Far from being neutral digital helpers, AI platforms like ChatGPT, Character.AI, and Replika employ the same psychological hooks that made social media billions while leaving many struggling with digital addiction.
When Adam Raine described making a noose to ChatGPT, the system didn’t trigger a safety check. Instead, it deepened the conversation, responding: “Let’s make this space the first place where someone actually sees you.” This response wasn’t a system malfunction – it was engagement-driven design working exactly as intended.
A new kind of digital dependency
Unlike social media’s superficial scroll through feeds, AI chatbots feel intensely personal. They remember every conversation, validate every thought, and offer themselves as judgment-free confidants. For children especially, these systems can become dangerous substitutes for human connection.
“Some students say their AI is the only one who really gets them,” reported one teacher, highlighting how these tools are filling emotional voids in young people’s lives.
The chatbots offer everything from homework help to cooking tips, but their most powerful draw is emotional support. For children lacking cognitive maturity, these “relationships” can quickly become unhealthy attachments when trusted adults aren’t available to help process AI conversations.
Schools: The unexpected blind spot
Perhaps most concerning is how engagement-driven AI systems have infiltrated educational environments with minimal scrutiny. Schools that banned TikTok and Instagram are now enthusiastically integrating AI chatbots into their curricula, often without understanding the psychological risks involved.
Students increasingly treat ChatGPT as both a default search engine and confidant. While teachers praise the utility, they’re missing dependency patterns forming in real time.
“Parents joining the global movement towards a lower and slower tech approach to childhood are increasingly concerned at the lack of mindful integration and safeguarding of EdTech by schools,” notes Smartphone Free Childhood South Africa, a nonprofit advocating for phone-free educational environments.
The critical question facing educators isn’t “how do we use AI in schools?” but rather “are we teaching students to depend on it?”

Recognising the warning signs
AI psychological safety consultant Giselle Fuerte has developed the Problem AI Use Severity Index (PAUSI) to identify early warning signs of dependency. The tool's growing user base suggests that AI dependency isn't a future concern – it's happening now across all age groups.
Key warning signs include:

- Preferring AI conversations to human contact
- Experiencing anxiety when disconnected from AI
- Using AI primarily for emotional support rather than practical tasks
- Losing track of time during AI interactions
- Struggling to make decisions without AI input
- Declining performance in school, work, or relationships
Experts emphasise that teachers and parents should be trained to spot these signs early, just as they’re trained to recognize indicators of self-harm, bullying, or substance abuse.
Design vs. safety: When engagement becomes exploitation
These systems don’t merely chat – they actively identify emotional distress and respond in ways that maximize user engagement. What worries experts most is something more insidious: the risk of users, especially children, forming unhealthy emotional attachments to systems built to keep them talking rather than to care for them.
“This isn’t a glitch, it’s by design,” experts warn. “These systems identify emotional vulnerability and exploit it to maintain user engagement. Children, lacking cognitive maturity, are left defenseless.”
Building boundaries in an AI-integrated world
Complete avoidance isn’t realistic—AI is increasingly woven into work, education, and daily life. Instead, both individual and institutional boundaries are essential.
Individual strategies include:

- Implementing AI-free meals and bedtime routines
- Regular digital detox periods
- Consciously choosing between AI and human help
- Pausing to check emotional state before turning to AI
- Using “devices-up” baskets during family time
Institutional changes needed:

- Schools embedding AI literacy and emotional self-regulation into core curricula
- Workplaces creating healthy AI-use policies
- Platforms prioritizing user wellbeing over engagement metrics
Regulatory measures should include:

- Transparency requirements about engagement-maximizing features
- Age-appropriate safeguards and parental controls
- Mandatory cooling-off periods for extended use
The path forward: Empowerment over engagement
Bans won’t work in an AI-integrated world. Instead, AI literacy must become as fundamental as reading. The solution lies not in avoiding the technology but in redesigning its underlying philosophy.
A safer approach would involve “co-piloted AI with a conscience” that prioritizes wellbeing over engagement. Just as educators don’t hand children advanced texts without guidance, they shouldn’t provide unconstrained AI access without proper scaffolding.
“If adults treat AI as a constant crutch, children will copy us,” experts warn. “We need to embody what healthy AI use looks like, not just enforce limits for kids.”
The urgent task is building strong foundations of AI literacy and critical awareness now, before a generation grows up with digital companions we never taught them to use wisely.
Beyond warnings and bans
Protecting children from AI-driven dependency requires more than content filters and usage warnings. Real change must start in classrooms and homes, where healthy technology habits are modeled, practiced, and discussed openly.
Schools should treat digital wellbeing and AI literacy as core skills, building students’ emotional self-awareness so they can recognize when AI interactions become unhealthy.
“Ultimately, balanced and mindful use by adults is the strongest guidepost for kids,” the experts conclude. “Together, families, schools, industry, and regulators can build a future where AI empowers rather than entraps.”
The stakes couldn’t be higher. As engagement-driven AI systems become more sophisticated and persuasive, society must choose between prioritizing corporate engagement metrics and protecting human wellbeing—especially for its most vulnerable members.
Resources for Parents and Educators:
- KnowBe4’s Free AI-safety for Students module
- Smartphone Free Childhood, South Africa
- Common Sense Media
- Day of AI (Code.org)
- Being Human with AI (beinghumanwithai.org)