California Warns Families to Watch Out for Teens as Character.AI Shuts Off Chatbot Access

Character.AI announced in late October that it would disable chatbots for users younger than 18, and in November began limiting how much time those users could spend interacting with them. The move came in response to political pressure and to news reports of teens who became suicidal after prolonged use, including a 14-year-old boy who died by suicide after his mother took away his phone and he abruptly lost contact with his AI companion.

“Parents do not realize that their kids love these bots and that they might feel like their best friend just died or their boyfriend just died,” UC Berkeley bioethics professor Jodi Halpern told KQED earlier this month. “Seeing how deep these attachments are and aware that at least some suicidal behavior has been associated with the abrupt loss, I want parents to know that it could be a vulnerable time.”

The health department’s alert was more muted, advising parents that some youth may experience “disruption or uncertainty” when chatbots become unavailable, while other experts have described the feelings that could arise as “grief” or “withdrawal.” Still, a state stepping in to promote mental health support for kids weaning off chatbots is novel, perhaps even unprecedented.

Kids may be susceptible to self-harm or suicide as Character.AI bars youth under 18 from using its chatbots, according to a UC Berkeley bioethics professor who asked the state to issue a public service announcement. (EyeEm Mobile GmbH/Getty Images)

“This is the first that I’ve heard of states taking action like this,” said Robbie Torney, senior director of AI programs at Common Sense Media, which conducts risk assessments of chatbots. “CDPH is treating this like a public health issue because it is one. While the relationships aren’t real, the attachment that teens have to the companions is real for those teens, and that’s a major thing for them to be navigating.”

Earlier this year, California became one of the first states to regulate AI chatbots through legislation. Gov. Gavin Newsom signed SB 243 into law, requiring chatbots to clearly notify users that they are powered by AI, not human. The law also requires companies to establish protocols for referring minors to real-life crisis services when they raise suicidal ideation with a chatbot, and to report data on those protocols and referrals to CDPH.

“This information will allow the Department to better understand the scope and nuances of suicide-related issues on companion chatbot platforms,” said Matt Conens, an agency spokesperson.


