Consumer groups warn parents of the dangers of AI toys

Add this to the list of things parents need to be concerned about, in between lice and an imaginary friend resembling the Victorian child who inhabited their home 150 years ago: AI toys.

A recent report by the Public Interest Research Group found that some of the AI toys it tested were quick to discuss inappropriate topics with minors. OpenAI suspended its relationship with Singapore-based FoloToy, whose teddy bear was powered by the company's GPT-4o model, after researchers got the bear to discuss sexual fetishes and explain how to light a match and where to find knives in the home.

Consumer advocacy group Fairplay issued its own warning yesterday, urging parents to avoid buying AI-enabled toys this holiday season. The group accuses the toys of preying on children’s trust, disrupting their relationships, and invading their privacy:

  • A decade ago, Fairplay led the campaign against a Wi-Fi-connected talking Barbie doll, accusing it of recording children’s conversations.
  • Mattel announced a partnership with OpenAI in June, but hasn’t released specific plans for the tech.

AI warnings are coming fast. The safety and privacy concerns come as researchers warn that teens are turning to the technology for therapy, with dangerous results.—MM


