Garcia channeled her grief into both activism and litigation. She has become a leading voice in warning about chatbot dangers to minors, arguing that AI companies target teens with alluring advertising and addictive design features. A July study by Common Sense Media found that almost three-quarters of American teens have used AI for companionship—and their parents, Garcia says, don’t understand the implications. “A lot of parents don’t realize just how sophisticated chatbot technology actually is, and that [it’s] virtually indistinguishable from a real person—especially to a child,” she says. “There’s room for manipulation, love-bombing, gaslighting, deception.”
In October, Garcia, who is a lawyer, sued Character.AI, its two co-founders, and Google, which had a licensing agreement with the startup, accusing them of recklessly offering children access to chatbot companions without proper safeguards. After a judge rejected the companies' motion to dismiss in May, the suit is set to proceed this fall. It could set legal precedents for AI developers' liability for their chatbots' actions.
A spokesperson for Character.AI told TIME the company had since rolled out stricter guardrails, like content filters and parental controls. A spokesperson for Google’s Gemini did not respond to a request for comment.
“Children are pouring their hearts out to these chatbots, telling them their deepest, darkest secrets, talking about things they don’t feel comfortable telling their parents and friends,” Garcia says. “The more parents know that, the more they’re going to be stepping out, demanding better from these companies.”
If you are in crisis, please call, text, or chat with the Suicide and Crisis Lifeline at 988, or contact the Crisis Text Line by texting TALK to 741741.

