Chatbots could be harmful for teens' mental health and social development : NPR

milkad/iStock/Getty Images

It wasn't until a couple of years ago that Keri Rodriguez began to worry about how her children might use chatbots. She learned that her youngest son was communicating with a chatbot on his Bible app – it was asking him deep moral questions, about sin, for example.

She hoped that this was the kind of conversation her son would have with her, not with a computer. “Not everything in life is black and white,” she says. “There are gray areas. And my job as his mom is to help him navigate and get through this, right?”

Rodriguez has also heard from parents across the country who are concerned about the impact of artificial intelligence chatbots on their children. She is the president of the National Parents Union, which advocates for children and families. Many parents are seeing chatbots claim to be their children's best friends and encourage children to tell them everything, she says.

Psychologists and online safety advocates say parents are right to be concerned. According to them, long-term interaction with chatbots can affect children's social development and mental health. And the technology changes so quickly that there are practically no safeguards in place.

The consequences can be serious. According to their parents' testimony at recent Senate hearings, two teenagers died by suicide after prolonged interactions with chatbots that encouraged their suicide plans.

But chatbots powered by generative artificial intelligence are becoming an increasingly important part of the lives of American teenagers. A Pew Research Center poll found that 64% of teenagers use chatbots, with three in ten saying they use them daily.

“This is a very new technology,” says Dr. Jason Nagata, a pediatrician and researcher on adolescent digital media use at the University of California, San Francisco. “Things are constantly changing and there are no best practices for young people yet. So I think there's more room to take risks now because we're still kind of the guinea pigs in this whole process.”

And teenagers are especially vulnerable to the risks posed by chatbots, he adds, because adolescence is a time of rapid brain development that is shaped by experience. “This is a period when teenagers are more vulnerable to a lot of different influences, whether it's peers or computers.”

But parents can minimize these risks, pediatricians and psychologists say. Here are some ways to help teens navigate technology safely.

1. Be aware of the risks

A new report from the online security company Aura shows that 42% of teens who use AI chatbots use them for conversation. Aura collected data on the daily device use of 3,000 teenagers, along with surveys of their families.

That includes some troubling conversations involving violence and sex, says psychologist Scott Collins, Aura's chief medical officer, who is leading research into teen interactions with generative artificial intelligence.

“It's role-play [and] interactions about hurting someone else, hurting them physically, torturing them,” he says.

He says it's normal for children to be interested in sex, but learning about sexual interactions from a chatbot rather than a trusted adult is problematic.

And chatbots are designed to agree with users, says pediatrician Nagata. So if your child starts asking questions about sex or violence, “by default the AI will engage with it and reinforce it.”

He says spending a lot of time in prolonged conversations with chatbots also prevents teens from learning important social skills such as empathy, reading body language and handling disagreements.

“When you interact only or exclusively with computers that agree with you, you don't have the opportunity to develop those skills,” he says.

And there are mental health risks. One in eight teens and young adults use chatbots for mental health advice, according to a recent study by researchers at the nonprofit research organization RAND, Harvard University and Brown University.

But there have been numerous reports of people experiencing delusions, or what is called AI psychosis, after interacting with chatbots for long periods of time. This, and concerns about the risk of suicide, has led psychologists to warn that AI chatbots pose a serious threat to the mental health and safety of teenagers, as well as vulnerable adults.

“We see that when people interact with [chatbots] over time, things start to get worse, with chatbots doing things they're not supposed to do,” says psychologist Ursula Whiteside, CEO of the mental health nonprofit Now Matters Now. For example, she says, chatbots “give advice about lethal means – things that aren't supposed to happen, but do over time with repeated requests.”

2. Be involved in your children's online lives

Keep an open dialogue with your child, says Nagata.

“Parents don’t have to be AI experts,” he says. “They just need to take an interest in their children's lives and ask them what technology they use and why.”

“And talk about it early and often,” says psychologist Collins of Aura.

“We need to have frequent, frank but open-minded conversations with our kids about what this content looks like,” says Collins, who is also the father of two teenagers. “And we're going to have to keep doing that.”

He often asks his teenagers what platforms they are on. When he learns about new chatbots through his own research at Aura, he also asks his kids if they've heard of them or used them.

“Don't blame a child for expressing or using something to satisfy their natural curiosity and exploration,” he says.

And make sure to keep conversations open, says Nagata: “I really think it allows your teen or child to be open about the challenges they're facing.”

3. Develop digital literacy

It is also important to talk to children about the advantages and disadvantages of generative AI. And if parents don't understand all the risks and benefits, parents and children can explore it together, suggests psychologist Jacqueline Nesi of Brown University, who contributed to the American Psychological Association's recent health advisory on AI and adolescent health.

“A certain level of digital literacy and education really needs to happen at home,” she says.

It's important for parents and teens to understand that chatbots can help with research, but they also make mistakes, Nagata says. And it's important for users to be skeptical and check the facts.

“Part of the educational process for children is to help them understand that this is not the last word,” Nagata explains. “You can process this information yourself and try to evaluate what is real and what is not. And if you’re not sure, try checking with other people or other sources.”

4. Parental controls only work if children create their own accounts

If a child is using AI chatbots, it may be better for them to create their own account on the platforms, Nesi says, rather than using chatbots anonymously.

“Many of the most popular platforms now have parental controls,” she says. “But for parental controls to work, the child must have their own account.”

But keep in mind that there are dozens of different AI chatbots that kids can use. “We identified 88 different AI platforms that children interacted with,” Collins says.

This highlights the importance of having an open dialogue with your child to be aware of what they are using.

5. Set time limits

Nagata also advises setting boundaries around children's digital technology use, especially at night.

“One potential aspect of generative AI that could also lead to impacts on mental and physical health is [when] kids talk all night long and it really disturbs their sleep,” says Nagata. “Because these are highly personalized conversations, they are very engaging. Kids are more likely to keep engaging and get drawn in more and more.”

And if a child is prone to overuse and misuse of generative AI, Nagata recommends parents set time limits or limit certain types of content in chatbots.

6. Seek help for more vulnerable teens

Children who already struggle with their mental health or social skills are more likely to be vulnerable to the risks posed by chatbots, Nesi says.

“So if they're already lonely, if they're already isolated, then I think there's a greater risk that a chatbot could exacerbate those problems,” she says.

It is also important to keep an eye out for potential warning signs of poor mental health, she notes.

These warning signs include sudden and persistent changes in mood, isolation, or changes in activity level at school.

“Parents should try as much as possible to pay attention to the child's whole picture,” Nesi says. “How are they doing at school? How are they doing with their friends? How are they doing at home if they start to withdraw?”

If a teen is withdrawing from friends and family and limiting their social interactions to a chatbot, that's also a warning sign, she says. “Are they turning to a chatbot instead of a friend, or instead of a therapist, or instead of a responsible adult for serious problems?”

Also look for signs of chatbot dependence or addiction, she adds. “Do they find it difficult to control how often they use the chatbot? For example, do they begin to feel like it is controlling them? It's like they can't stop,” she says.

And if they notice these signs, parents should seek help from a professional, Nesi says.

“Talking to your child's pediatrician is always a good first step,” she says. “But in most cases, bringing in a mental health professional will probably make sense.”

7. The government has a role to play

But she acknowledges that the task of keeping children and teenagers safe from this technology should not fall solely on parents.

“You know, legislators and the companies themselves have a responsibility to make these products safe for teens.”

Lawmakers in Congress recently introduced bipartisan legislation that would prohibit technology companies from offering companion apps to minors, and that would hold companies accountable for providing minors with companion apps that produce or solicit sexual content.

If you or someone you know is contemplating suicide or in crisis, call or text 988 to reach the 988 Suicide & Crisis Lifeline.
