Welcome back to In the Loop, TIME's new twice-weekly newsletter on artificial intelligence. If you're reading this in a browser, why not subscribe to have the next edition delivered straight to your inbox?
Who to Know: Karandeep Anand, CEO of Character.AI
Character.AI is under fire. The chatbot platform, which lets users converse with artificial intelligences impersonating fictional characters, has been the target of several lawsuits, including one from Megan Garcia, a mother whose 14-year-old son died by suicide after becoming obsessed with one of the bots, which allegedly encouraged him to take his own life.
Following this and other lawsuits, Character.AI made a big announcement last month: users under 18 will be prohibited from having open-ended conversations with chatbots on its platform. It was a huge turnaround for the company, which says Gen Z and Gen Alpha make up the core of its more than 6 million daily active users, who spend an average of 70 to 80 minutes a day on the platform.
Last week, I sat down with Character.AI's new CEO, Karandeep Anand, to discuss the ban and what led to it.
According to Anand, the timing of the ban had nothing to do with the legal cases Character.AI is facing. He emphasized that Garcia's wrongful-death lawsuit predates his tenure as CEO, and he defended the platform's track record of building guardrails for users under 18.
Instead, Anand says, the ban on minors' chatbot use was partly the result of new research showing the risks of chatbots, especially for children. "One contributing factor is new knowledge that the longitudinal effects of chatbot interactions may be harmful to health, or not fully understood," he told me, pointing to research from OpenAI and Anthropic on the dangers of so-called AI sycophancy. In light of these findings, he decided that letting children chat freely on the platform was too risky.
The ban is not total, however. Users under 18 will still have access to other Character.AI features, such as a short-form feed of AI-generated videos, similar to TikTok's For You page, which invites users to remix popular videos by adding their own characters or changing prompts.
Given the context of our conversation, I was surprised when Anand said that his six-year-old daughter is an avid Character.AI user. "What she used to dream about is now happening through storytelling with a character she creates and talks to," Anand says. "Even in conversations [where] she answers me hesitantly, she talks to the chatbot much more openly." (Users under 13 are not allowed on the platform at all, Anand acknowledges, so he lets his daughter use Character.AI only through his own account and under his supervision.) His daughter's enthusiasm for Character.AI's audiovisual features gave Anand the confidence to focus on building those kinds of gamified experiences for children, he says, rather than allowing open-ended text chats.
The CEO accepts that the decision will cost him some users. "I'm willing to bet we'll create a more engaging experience, but if that means some users will leave, then some users will leave," he says. But he does not rule out eventually lifting the chatbot ban for under-18s. "I'm pretty sure that at some point, when the technology has matured enough that we have a lot more capabilities, we'll bring back that experience."
Still, the turnaround has put Character.AI, long a poster child for irresponsible AI development, in the strange position of becoming an advocate for safer online experiences for children. Anand says he welcomes recently proposed legislation from Senator Josh Hawley that would bar people under 18 from using AI-powered companion apps nationwide. "What would be very sad for the industry is if we made such decisions [to ban users under 18] and then users end up gravitating to other platforms that don't take on that responsibility," Anand tells me. "The bar for users under 18 in terms of safety needs to be raised… It needs to be regulated."
If you have a minute, please take our quick survey to help us better understand who you are and which AI topics interest you most.
What you need to know: EU mulls privacy trade-off to attract AI money
European Union regulators are considering rolling back some of their sweeping privacy protections as they seek to make the continent a more attractive destination for AI investment amid weak economic growth.
Documents obtained by Politico show that officials plan to amend the EU's flagship privacy law, the General Data Protection Regulation (GDPR), to allow artificial-intelligence companies to train and run their systems using previously protected categories of personal data.
What we read
We Need a Global Movement to Ban Superintelligent AI, by Andrea Miotti in TIME
Miotti calls for a global movement to stop superintelligent AI, just as the world came together to stop the hole in the ozone layer from growing. "The risk of extinction posed by superintelligence has the potential to transcend all divides," he writes. "It can unite people of different political parties, religions, nations, and ideologies. No one wants their lives, their families, their world to be destroyed."






