Eileen Guo writes:
Even if you don't have an AI friend, you probably know someone who does. A recent study found that one of the main applications of generative AI is communication: on platforms such as Character.AI, Replika or Meta AI, people can create personalized chatbots that will pose as ideal friends, romantic partners, parents, therapists or any other persona they can think of.
It's striking how quickly these relationships can develop. Research has found that the more talkative and humanlike an AI chatbot is, the more likely we are to trust it and be influenced by it. That can be dangerous, and chatbots have been accused of encouraging some people to engage in harmful behavior, including, in a few extreme cases, suicide.
Some state governments are taking notice and have begun to regulate companion AI. New York requires AI companion companies to build safeguards and to report signs of suicidal ideation, and last month California passed a more detailed bill requiring AI companies to protect children and other vulnerable groups.
But tellingly, one area that the laws don't address is user privacy.
This is despite the fact that companion AI, even more than other types of generative AI, depends on people sharing deeply personal information—about their daily routines, inner thoughts, and questions they might not feel comfortable asking real people.
After all, the more users tell their AI companions, the better the bots become at keeping them engaged. This is what MIT researchers Robert Mahari and Pat Pataranutaporn called “addictive intelligence” in a piece we published last year, warning that AI companion developers make “conscious design choices…to maximize user engagement.”