It was after one friend was shot and another was stabbed, both fatally, that Shan turned to ChatGPT for help. She had tried conventional mental health services, but talking to her artificial "friend" felt safer, less intimidating and, crucially, more accessible when it came to dealing with the trauma of her young friends' deaths.
When she began consulting the AI model, the Tottenham teenager joined an estimated 40% of 13- to 17-year-olds in England and Wales affected by youth violence who are turning to AI chatbots for mental health support, according to a study of more than 11,000 young people.
It found that both victims and perpetrators of violence were significantly more likely to use AI for such support than other adolescents. The Youth Endowment Fund's findings have prompted warnings from youth leaders that at-risk children "need a human, not a bot".
The results suggest chatbots are meeting a demand left unmet by conventional mental health services, which have long waiting lists and which some young users find lacking in empathy. The perceived privacy of a chatbot is another key factor driving its use by victims and perpetrators of crime.
After her friends were killed, 18-year-old Shan (not her real name) started using Snapchat's AI and then switched to ChatGPT, which she can reach at any time of the day or night with two taps on her smartphone.
"I feel like it's definitely a friend," she said, adding that it was less intimidating, more personal and less judgmental than her experience of regular NHS and charity mental health support.
"The more you talk to it like a friend, the more it'll talk to you like a friend. If I say in the chat, 'Hey girlfriend, I need some advice,' the chat will respond as if it's my best friend. It'll say, 'Hey girlfriend, I've got you, girl.'"
The study found that one in four children aged 13 to 17 had used an AI chatbot for mental health support in the past year, with black children doing so at twice the rate of white children. Teens were more likely to seek support online, including through AI, if they were on a waiting list for treatment or diagnosis or had been turned away, than if they were already receiving in-person support.
Crucially, according to Shan, the AI was "available 24 hours a day, 7 days a week" and did not tell teachers or parents what she had disclosed. She felt this was a significant advantage over confiding in the school therapist, after an experience in which she felt her disclosures had been shared with teachers and her mother.
Boys involved in gang activity felt safer asking chatbots for advice on other, safer ways to earn money than asking a teacher or parent, who might pass information to the police or to other gang members, putting them in danger, she said.
Another young person who used AI for mental health support, but asked not to be named, told the Guardian: "The current system is so ineffective at offering help to young people. Chatbots give immediate answers. If you have to sit on a waiting list for one to two years to get anything, when you can get a response within minutes… that's where the desire to use AI comes from."
Jon Yates, chief executive of the Youth Endowment Fund, which commissioned the research, said: "Too many young people are struggling with their mental health and are unable to get the support they need. It's no surprise that some are turning to technology for help. We must do better for our children, especially those most at risk. They need a human, not a bot."
There are growing concerns about the dangers of chatbots when children interact with them for long periods. OpenAI, the US company behind ChatGPT, is facing several lawsuits, including from the families of young people who took their own lives after prolonged interactions with the chatbot.
In the case of Adam Raine, a 16-year-old Californian who took his own life in April, OpenAI has denied that his death was caused by the chatbot. The company said it is improving its technology "to recognize and respond to signs of mental or emotional distress, de-escalate conversations and direct people to real support". It said in September that it may begin contacting the authorities in cases where users talk seriously about suicide.
Hannah Jones, a youth violence and mental health researcher from London, said: “To have this tool that can technically tell you anything – it's almost like a fairy tale. You have a magic book that can solve all your problems. It sounds incredible.”
But she's concerned about the lack of regulation.
"People are using ChatGPT for mental health support, even though it's not designed for that," she said. "What we need now is stronger regulation that is evidence-based but also driven by young people. This problem will not be solved by adults making decisions for young people. Young people should be at the helm of decisions about ChatGPT and AI mental health support, because their world is so different from ours. We didn't grow up with this. We can't even imagine what it's like to be a young person today."