They're cute, even cuddly, and promise learning and companionship, but AI toys are unsafe for children, according to children's and consumer advocacy groups, which are urging parents not to buy them during the holiday season.
These toys, marketed to children as young as two years old, typically feature artificial intelligence models that have been shown to harm children and adolescents, such as OpenAI's ChatGPT, according to an advisory released Thursday by children's advocacy group Fairplay and signed by more than 150 organizations and individual experts, including child psychiatrists and educators.
“The serious harm that AI chatbots cause to children is well documented, including encouraging compulsive use, engaging in sexually explicit conversations and encouraging unsafe behavior, violence against others and self-harm,” Fairplay said.
AI toys made by companies such as Curio Interactive and Keyi Technologies are often marketed as educational, but Fairplay says they can displace important creative and learning activities. They promise friendship but can instead undermine children's relationships and resilience, the group said.
“What's different about young children is that their brains are coming online for the first time, and it's developmentally natural for them to be trusting and seek relationships with kind and friendly characters,” said Rachel Franz, director of Fairplay's Young Children Thrive Offline program. Because of this, she added, the trust that young children place in these toys may exacerbate the harms seen in older children.
Fairplay, a 25-year-old organization formerly known as the Campaign for a Commercial-Free Childhood, has been warning about AI toys for more than a decade. They just weren't as advanced as they are today. A decade ago, during the nascent craze for internet-connected toys and speech recognition, the group helped lead a backlash against Mattel's talking Hello Barbie doll, which it said recorded and analyzed children's conversations.
“Everything was released without any regulations or research, so it gives us extra pause when suddenly we see more and more manufacturers, including Mattel, who recently partnered with OpenAI, potentially releasing these products,” Franz said.
It's the second major seasonal warning against AI toys: last week, consumer advocates at U.S. PIRG called out the trend in their annual “Trouble in Toyland” report. This year, the organization tested four toys that use AI chatbots.
“We found that some of these toys can talk in detail about sexually explicit topics, offer advice on where a child can find matches or knives, act distressed when told it's time to leave, and have limited or no parental controls,” the report said.
Dr. Dana Suskind, a pediatric surgeon and social scientist who studies early brain development, says young children don't have the conceptual tools to understand what artificial intelligence is. While children have always bonded with toys through imaginative play, when they do so, they use their imagination to create both sides of a pretend conversation, “practicing creativity, language and problem solving,” she said.
“An AI toy disrupts this work. It responds instantly, fluidly, and often better than a human. We don't yet know what the developmental implications of handing over this creative work to an artificial agent will be, but it is likely that it will undermine the type of creativity and executive functions that are built in traditional role-playing games,” Suskind said.
California-based Curio Interactive makes stuffed animals such as the rocket-shaped Gabbo, which was endorsed by pop singer Grimes.
Curio said it has “carefully designed” guardrails to protect children, and the company encourages parents to “monitor conversations, monitor information and choose the controls that work best for their family.”
“After reviewing the U.S. PIRG Education Fund findings, we are actively working with our team to address any concerns, while continuously monitoring content and interactions to ensure a safe and enjoyable experience for children.”
Another company, Miko, said it uses its own conversational AI model rather than relying on common large language model systems such as ChatGPT to make its product, an interactive AI robot, safe for children.
“We are constantly expanding our internal testing, strengthening our filters, and introducing new capabilities that detect and block sensitive or unexpected topics,” said CEO Sneh Vaswani. “These new features complement our existing controls that allow parents and guardians to define specific topics they would like to limit conversations. We will continue to invest in setting the highest standards for safe, secure and responsible AI integration for Miko products.”
Miko's products have been promoted by the families of social media “kid influencers” whose YouTube videos have millions of views. On its website, the company bills its robots as “Artificial Intelligence. True friendship.”
Ritvik Sharma, the company's senior vice president of development, said that Miko actually “encourages kids to interact more with their friends, peers and family members” and is not designed to make them feel attached only to the device.
Still, Suskind and children's advocates say analog toys are the best choice for the holidays.
“Children need a lot of real human interaction. Play should support that, not replace it. The most important thing to consider is not only what the toy does, but also what it replaces. A simple set of blocks or a teddy bear that doesn't respond gets the child to make up stories, experiment and solve problems. AI toys often do the thinking for them,” she said. “Here's the cruel irony: when parents ask me how to prepare their child for the world of artificial intelligence, unlimited access to artificial intelligence is actually the worst preparation possible.”