Last week, Sam Altman, CEO of OpenAI, tweaked ChatGPT to make it more like a “friend” again. The company had briefly changed the chatbot's default behavior to make it less overtly friendly after the bot allegedly helped a teenager named Adam Raine, who had become deeply attached to it, take his own life. But users protested when OpenAI made the change, complaining that ChatGPT now sounded like a robot, so Altman reversed it. “If you want your ChatGPT to respond in a very human way, use a ton of emoji, or act like a friend, ChatGPT should do it,” Altman wrote on X.
Lonely people around the world are increasingly turning to AI chatbots like ChatGPT and Claude for friendship and psychological support. After all, we're in the midst of a loneliness epidemic, and unlike humans, chatbots have endless time to listen. But one of the pillars of friendship is empathy, the ability to share and understand another person's feelings. Can a machine living in the cloud generate real empathy?
The answer is complicated, says empathy researcher Anat Perry of the Hebrew University of Jerusalem. She spoke on a panel about human-AI relationships at a conference on Mind, Artificial Intelligence and Ethics, organized last week by the Dalai Lama Library in Dharamsala, India. “When it says it feels your pain or shares your experience, it's just pretending,” Perry explained. Chatbots can express cognitive empathy, taking another person's point of view, and motivational empathy, signaling that they want to ease the listener's pain, she said. But they cannot offer affective empathy, the actual sharing of another person's joy or pain that comes from lived experience.
Perry suspected that most people already understood this and valued empathetic human support more than chatbot support. To test her hunch, she conducted an experiment in which she deceived her subjects. Perry and her colleagues asked 1,000 people recruited online to share a recent emotional experience. Half the group was told they would receive a response from ChatGPT, and the other half that they would hear from a person. In fact, all of the responses were generated by artificial intelligence and written to be highly empathetic. When rating the responses, people reported feeling more positive emotions and fewer negative emotions when they believed the responder was human.
A second experiment found that 40 percent of people were willing to wait, even as long as two years, for a human response to their emotional experience rather than get an immediate reply from a chatbot. Those who chose a person, Perry said, “wanted someone who could truly understand them, share some of their emotions, care for them, and maybe even ease their loneliness.”
But that leaves the other 60 percent, who preferred to hear from the chatbot immediately, a potentially alarming finding. While Claude, ChatGPT, and other chatbots may offer a stopgap for humanity's loneliness crisis, the more we turn to machines, the less time we will have for one another. In the end, we may find that we have no shoulder to lean on and no hand to wipe away our tears. We will find ourselves in a hall of mechanical mirrors.
Main image: VectorMine / Shutterstock