When she was younger, my daughter loved reading Warrior Cats, a series of books about the dramatic adventures of wild cat clans full of the things you’d expect in a good fantasy: prophecies, heroes, villains, secrets, truths, and betrayals. Like many die-hard fans, she loved it so much that she wanted to spend more time in the Warrior Cats’ world. So, she wrote fan fiction. I was impressed by her passion and dedication. Then, she asked ChatGPT to help. She crafted some prompts, and ChatGPT quickly produced more chapters, extending the saga of her favorite characters. I was, again, impressed by her creativity and ingenuity in using a chatbot to achieve her literary goals. But still, I wondered, is this a new form of interactive entertainment, or is it something I should think more critically about? I could see some of the benefits, but what about the risks?
Kids are growing up in a world where chatbots are part of everything. Whether text or voice, they are in games, apps, and classroom tools: Google Assistant turns on the lights; Siri makes phone calls. Humans are social animals with a built-in desire to communicate using patterns and cues that help us connect with others. But just because kids can talk to chatbots doesn’t mean they know how to think about those interactions, especially when Snap’s My AI or ChatGPT responds to and reflects back feelings.
Chatbots aren’t inherently good or bad. They are tools—albeit powerful ones. Like any technology, their impact depends on how they’re used. Understanding how our kids think about chatbots, especially those with a conversational interface, can help us understand both the appeal and the pitfalls and provide the guidance kids need to strike a healthy balance between curiosity and creativity on one side and caution on the other.
How Age and Maturity Shape the Experience
A preschooler chatting with a voice assistant about dinosaurs is having a different kind of conversation than a 12-year-old confiding worries to a mental health chatbot. Age and maturity influence how a child interprets the chatbot’s responses, but age doesn’t preclude an emotional bond. Cognitive knowing is not the same as emotional experience.
Kids in early childhood often confuse fantasy and reality. They anthropomorphize easily and may treat chatbots like sentient beings. A voice that responds and remembers feels real (Goldman & Poulin-Dubois, 2024).
Tweens, on the other hand, can understand that AI isn’t conscious, but emotional connections can still develop. The transition from child to teen can introduce new social pressures and confusing emotions, which can sometimes feel overwhelming and isolating. Chatbots are always-available conversational buddies that can provide a sense of belonging. They can feel like safe places to ask weird or embarrassing questions, but that safety can be deceptive without the skills to judge the response.
Adolescents “know” chatbots aren’t real, but that doesn’t stop them from confiding in them or asking for advice when they’re struggling with loneliness, identity, or social anxiety. The emotional relief of being “heard” is powerful, and it’s hard to dismiss that as just an algorithm. This sense of validation mimics genuine human communication, influencing how our brains process the experience. Even when we know a chatbot is virtual, our brains still respond to the interaction emotionally because we don’t easily discriminate between live and virtual interaction when the social cues are in place (Kalantari et al., 2021).
When the Line Between Tool and Companion Blurs
Naming a chatbot and talking to it like a friend can be fun, but it can also reflect an emotional investment. A parasocial relationship is a one-sided emotional bond a person feels with a media figure or celebrity. Chatbots are new territory here. There is no one on the “other side,” yet the relationship doesn’t feel one-sided. Chatbots simulate reciprocity and can create feelings of familiarity, closeness, and attachment. Kids (and adults) can know a chatbot doesn’t have feelings and still experience the exchange as authentic. With chatbots, there is no rejection, no awkward silences, and no need to share. This can be healthy when it allows disclosure and models positive connection dynamics, but it can also be problematic if it hinders real social interactions.
Unfiltered Conversations and Misinformation
AI is only as good as the data it’s trained on, which is largely the internet. A chatbot cannot reliably filter out inaccuracies or inappropriate information. Tech companies continue to build filters, but this is a herculean task, even with the best intentions, given how quickly information proliferates and language, especially slang, evolves (Zhou et al., 2023). The best protection is digital literacy education, so kids have a growing toolbox for dealing with new tools and with AI that will only become more subtle, sophisticated, and invisible. For example, even a well-designed chatbot can:
- Give answers that are inaccurate, out of context, or beyond a child’s understanding.
- Suggest harmful behaviors or validate unhealthy thinking.
- Reduce opportunities for real emotional growth.
- Elicit personal details that are stored, analyzed, or even breached.
Where Kids Find Chatbots
From smart speakers to mobile games, chatbots are part of the digital landscape. It isn’t just ChatGPT. They’re everywhere—and often invisible as “bots.” Supervision isn’t enough. Kids need guidance and training to recognize and think critically about AI interactions.
- Alexa, Siri, and Google Assistant are essentially chatbots with voices—and they’re often a child’s first AI experience.
- Educational games, story apps, and platforms like Roblox frequently include bot-like characters.
- Several storytelling apps use AI to tell stories, but sites also use AI behind the scenes to drive engagement.
- Chatbots built into learning platforms may act as tutors, quiz partners, or motivational coaches.
- Some apps, like Heeyo and Curie, are designed as AI friends for kids, but the age restrictions on many AI companion apps are easy for kids to work around.
Creating a Space for Dialogue
The best way to protect kids from harm and help them benefit from AI tools is to start conversations early and often. Kids try new things because they are curious, so be curious, too. Ask them about chatbots: whether they talk to one, what it’s like, and whether it has a name or personality. When you’re nonjudgmental, kids are more likely to talk openly and think critically about their experiences. More importantly, it builds trust so they will turn to you when it matters most.
This isn’t about policing technology use as much as it’s about making sure that our kids have the skills to use AI in positive ways. These are the tools that will inhabit their world. Understanding where fantasy ends and programming begins, and learning how to balance AI’s speed and responsiveness with critical thinking and accuracy, will help them harness these tools to support their creativity and learning, not diminish them.