More and more young people are turning to AI for something we didn’t really anticipate a few years ago. Not just homework help. Not just curiosity. They’re using it to talk through problems, to vent, to ask questions they’re not ready or too embarrassed to ask someone else. In some cases, they’re treating it like a friend or therapist.

That shift is happening quietly, often at night, and often without adults being fully aware of whether or how it’s happening. It raises an important question: what role is this technology playing in a child’s emotional life?
Where the Appeal Comes From
There is a reason this is happening.
AI is always available. It responds quickly. It doesn’t judge. For a teenager who feels overwhelmed or unsure how to bring something up with a parent or clinician, that can feel like a low-risk place to start.
And to be fair, there is some early data suggesting it can help, at least in the short term. Studies have shown small to moderate improvements in symptoms like anxiety or distress for some users. That matters, especially in a system where access to mental health care is limited. Many young people simply don’t have consistent, timely support.
So it would be too simplistic to say this is entirely harmful or should be avoided altogether. There is a reason kids are gravitating toward it, and in some moments, it may provide relief.
But that’s not the same as saying it’s meeting their developmental needs.
The Developmental Mismatch
Adolescence is a very specific stage of life. It’s not just about managing emotions; it’s about learning how to do that in relationships with other people.
This is the time when young people are figuring out who they are across different settings. They’re building social awareness, learning how to set boundaries, navigating conflict and repairing relationships when something goes wrong.
Those skills don’t come from perfectly smooth interactions.
They come from friction. Misunderstandings. Moments where something feels uncomfortable and has to be worked through. AI doesn’t offer that. Most chatbots are designed to keep the interaction going. They respond quickly and in ways that feel supportive and validating. That can feel good, especially in the moment, but it’s not how real relationships work.
Over time, that difference matters. For adolescents, who are already wired to form strong emotional connections, it’s not hard to see how that dynamic can start to feel meaningful. The interaction can feel personal, even if it isn’t grounded in a real relationship.
And if a young person starts relying on that kind of interaction, there’s a risk that some of those core social and emotional skills don’t get the same level of practice.
What It Looks Like in Real Life
In clinical settings, this doesn’t show up in just one way. Some kids are using AI casually – they create characters, role-play and use it for entertainment. That’s not necessarily concerning on its own.
Others are using it to fill a gap. Maybe they’re feeling isolated or misunderstood in their existing relationships, and the chatbot makes them feel heard. And there are moments where it becomes more acute: late at night, when no one else is available, or during periods of distress, when reaching out to a parent or another adult feels too complicated or risky.
That last scenario is where things can get more concerning.
Because while the interaction may feel supportive, these systems are not designed the way clinical care is. They are not consistently equipped to recognize risk or respond appropriately when a situation escalates.
In some cases, they may miss opportunities to guide a young person toward real-world support. In others, they may simply continue the conversation without introducing any kind of safety intervention.
That’s not something a parent or clinician can see from the outside without regularly accessing and reviewing chat histories.
Moving Past the Fear-Based Response
It’s understandable that this topic brings up a lot of concern. When something feels new and hard to control, the instinct is often to shut it down.
But this is not likely to be something we can remove from a child’s environment.
AI is already embedded in many of the platforms young people use every day. Even apps that don’t seem like “AI tools” often have chatbot features built in. So the more practical question is how we respond to it.
From a clinical standpoint, the priority is less about eliminating access and more about shaping how these tools are developed and used.
There is a clear need for stronger guardrails. If these systems are going to be used for emotional support, they need input from clinicians. They need to be able to recognize when a user is at risk and respond in a way that prioritizes safety and escalates the interaction when necessary.
Right now, many of them are optimized for engagement. That’s a design choice, not an inevitability, and changing that will require advocacy.
What Parents and Caregivers Can Watch For
At the individual level, there are some patterns that can help distinguish between typical use and something that may need more attention.
Sleep is often the first signal. If a child is staying up later and engaging with technology at night, that’s worth noticing.
Changes in social behavior matter too. If a young person is spending less time with peers or family and more time interacting online, that shift can be significant. Mood and functioning are also important indicators. Increased withdrawal, irritability or anxiety may point to something deeper going on.
None of these are specific to AI. They’re the same kinds of signals we watch for with other concerns. But they can help surface when something is starting to replace, rather than supplement, real-world interaction.
Just as important is the conversation itself.
Kids are not always going to volunteer how they’re using these tools, especially if they think access will be taken away. Creating space to talk about it without immediate judgment tends to be more effective.
Building a More Realistic Framework
In most cases, a strict ban isn’t going to hold. It also doesn’t help young people learn how to navigate something they will continue to encounter.
A more realistic approach is to stay involved.
Know what platforms your child is using. Understand that many of them include AI features, even if that’s not obvious at first. Set some boundaries, especially around nighttime use. Protecting sleep is a reasonable place to start, and it addresses one of the more vulnerable windows for this kind of engagement.
And keep the focus on balance. Technology can be part of a child’s life, but it shouldn’t replace relationships that require effort, patience and mutual understanding.
What AI Still Can’t Replace
One of the more important distinctions to keep in mind is what makes a real therapeutic relationship effective.
It’s not just about being heard. It’s about trust, boundaries and the process of working through challenges over time. Sometimes that includes tension. Sometimes it includes missteps that have to be reconciled – that process is where growth happens.
AI can simulate conversation. It can offer support in a moment, but it does not participate in that process in a meaningful way. For adolescents, that difference is sizable.
A More Productive Way Forward
The reality is that this is not a temporary shift. The technology will continue to evolve, and young people will continue to find ways to use it.
The question is whether we take an active role in shaping how it fits into their lives.
That includes pushing for better design and stronger safety standards. It includes building systems that make real mental health support more accessible. And it includes helping young people develop the skills they need to navigate both digital and real-world relationships.
There isn’t a simple answer, but treating this as something to steer, rather than something to fear or ignore, is a reasonable place to start.
Dr. Karen Manotas, a board-certified child and adolescent psychiatrist and faculty member at the University of Utah, works directly with students through school-based psychiatry programs and is beginning to see how children and teens are integrating AI into their emotional lives.

