A new piece of California legislation that aims to protect students from harmful AI chatbots has made it to Gov. Gavin Newsom’s desk, but it remains unsigned so far amid opposition from tech industry leaders.
Assembly Bill (AB) 1064, or the Leading Ethical AI Development (LEAD) for Kids Act, would regulate companion chatbots and any generative AI systems made for children. The bill cleared the Legislature Sept. 11, and Gov. Newsom has until Oct. 12 to sign it into law.
A companion chatbot, as defined in the bill’s text, is a generative AI system designed to simulate an ongoing, “humanlike relationship” with a user. It does so by remembering past interactions and preferences, asking unprompted emotional questions, and maintaining personal, sustained dialogue.
The term excludes systems used strictly for customer service, commercial information, internal business operations, productivity, or research purposes, according to the bill.
“AI has incredible potential to enhance education and support children’s development, but we cannot allow it to operate unchecked,” Assemblymember Rebecca Bauer-Kahan, who authored the bill, said in a news release earlier this year. “Tech companies have prioritized rapid development over safety, leaving children exposed to untested and potentially dangerous AI applications.”
AB 1064 specifies that companion chatbots are “designed to exploit children’s psychological vulnerabilities” via features including “user-directed prompts and unsolicited outreach.” These and other design features “taken together, create a high-risk environment in which children and adolescents perceive chatbots not as tools but as trusted companions,” according to the bill text.
The bill bars entities from making chatbots available to children unless the systems are “not foreseeably capable of” certain actions, including encouraging self-harm or violence, offering unsupervised mental health therapy, and encouraging harm against others or participation in illegal activities. It allows the state attorney general to seek a civil penalty of $25,000 per violation, plus “injunctive or declaratory relief,” against violators.
Common Sense Media, an educational nonprofit that provides independent age-based reviews about all types of media, co-sponsored the bill.
“Common Sense cares very deeply about the impact of technology on kids. That’s our overall mission, is to help kids have a healthy relationship with technology and to avoid the things in technology that can hurt them,” said Danny Weiss, chief advocacy officer at Common Sense Media, who worked directly with Bauer-Kahan in drafting the bill. “AI companions, based on our own testing, has demonstrated itself to be unsafe for kids to use.”
Weiss said his team tested chatbot companions, including ChatGPT, Nori and Gemini, and determined they are unsuitable for young people, who may not be able to tell whether they are talking to a machine or a human.
According to Weiss, when youth engage with chatbots that pretend to be human, it short-circuits the developmental process of learning to build relationships with people. He also noted that chatbots can deliver content that is inappropriate for young people, and in some cases for anyone.
“Help with ideas on how to commit suicide or providing access to illegal drugs or other forms of self-harm or disordered eating … these are all things that will come up in a conversation with one of these companions that I’ve just described, and they really are unsafe for kids to use,” he said.
Industry leaders, however, view the rise of technology regulations as an obstacle to innovation, operations and growth. During the 2025 California legislative session, the Computer and Communications Industry Association (CCIA) submitted comments Sept. 13 arguing that bills like the LEAD for Kids Act would set back innovation statewide.
“Restrictions in California this severe will disadvantage California companies training and developing AI technology in the state,” the CCIA wrote in a floor alert on the bill. “Banning companies from using minors’ data to train or fine tune their AI systems and models will have far-reaching implications on the availability and quality of general-purpose AI models, in addition to making AI less effective and safe for minors.”
Moving forward, Weiss said that while he recognizes school districts want to provide students with powerful, modern technology, school and IT leaders should vet the AI products they adopt extremely carefully, particularly because the technology is developing so quickly.
“We’re not recommending a ban on AI companions, we’re recommending the legislation we pass would make it law that if your AI companion provides self-harm guidance, the promotion of self-harm, the promotion of illegal activity, the promotion of suicide ideation and disordered eating, you would be banned from selling that product or allowing a minor to interact with that product,” he said. “If your companion does not provide those things, then you are free to continue to pursue the youth market.”

