Teens on Instagram continue to face safety issues on the platform, despite Meta introducing enhanced protections for younger users more than a year ago, according to a new study.
A new report shared exclusively with TIME says that the safety measures Instagram's parent company Meta implemented last year have failed to address teen safety concerns. In a study by the children's rights groups Parents Together Action, the HEAT Initiative, and Design It for Us, which Meta disputed as biased, nearly 60% of 13- to 15-year-olds reported encountering unsafe content and unwanted messages on Instagram in the past six months. Nearly 60% of the children who received unwanted messages said they came from users they believed to be adults. And nearly 40% of children who received unwanted messages said the sender wanted to start a sexual or romantic relationship with them.
“The most shocking thing continues to be how many kids interact with adults they have no connection with,” says Shelby Knox, director of online safety campaigns at Parents Together, a national parenting organization. “Parents were promised a safe experience. We were promised that adults would not be able to contact our children on Instagram.”
Meta disputed the researchers' findings.
“This deeply subjective report is based on a fundamental misunderstanding of how our teen safety tools work. Worse, it ignores the reality that hundreds of millions of teenagers on teen accounts are seeing less sensitive content, experiencing less unwanted contact, and spending less time on Instagram at night,” Meta spokesperson Lisa Crenshaw said in a statement to TIME. “We strive to continually improve our tools and to have important conversations about teen safety, but this report advances neither goal.”
In September 2024, Meta announced significant changes to the platform to improve the safety of young users. Users under 18 would be automatically placed in “Teen Accounts” designed to protect them from harmful content and limit messages from users they don't follow or aren't connected to. When the company introduced Teen Accounts, it promised “built-in protection for teens, peace of mind for parents.”
The new report suggests otherwise. Researchers found that harmful content and unwanted messages are still so widespread on Instagram that 56% of teen users said they didn't even report them because they were “used to it now.”
“If you don't want your kids to have access to R-rated content 24/7, you don't want to give them access to Instagram Teen Accounts,” says Sarah Gardner, CEO of the HEAT Initiative, an advocacy organization that pressures tech companies to change their policies and make their platforms safer for kids. “It absolutely does not provide the guarantees that it claims.”
Read more: “Everything I learned about suicide I learned from Instagram.”
The new report is the second in recent weeks to question the effectiveness of Meta's child safety tools. In late September, a report from other online safety groups, whose findings were corroborated by Northeastern University researchers, found that most of the 47 child safety features Instagram promised were flawed.
In that study, first reported by Reuters, researchers found that of those 47 features, only eight worked as advertised; nine others reduced harm but had limitations, and 30 tools (64%) were either ineffective or no longer available, including sensitive-content controls, time-management tools, and tools designed to protect children from unwanted contact.
Researchers in that study, which Meta also disputes, found that adults can still message teens who don't follow them, and that Instagram encourages teens to follow adults they don't know. The researchers found that Instagram still recommended sexually explicit content, violent content, and self-harm and body image content to teens, even though these types of posts were supposed to be blocked by Meta's sensitive content filters. They also found evidence that not only were primary school-aged children using the platform (despite Meta's ban on users under 13), but that “Instagram's recommendation-based algorithm actively encouraged children under 13 to engage in risky sexualized behavior” due to the “inappropriate amplification” of sexualized content.
Arturo Bejar, a former senior engineer and head of product at Meta who helped design the study, told TIME that the company's algorithm rewards provocative content, even from kids who don't know what they're doing. “That's not what the minors started out doing, but product design taught them that,” Bejar says. “At this point, Instagram itself becomes a groomer.”
Read more: Social media led to her eating disorder. Now she is suing.
The day after that report was published, Meta announced that it had already moved hundreds of millions of underage users into Instagram Teen Accounts and was expanding its teen-account program globally to Facebook and Messenger. The company also announced new partnerships with schools and teachers and a new online safety curriculum for high school students.
“We want parents to feel good about their teens' use of social media. We know teens use apps like Instagram to connect with friends and explore their interests, and they should be able to do this without worrying about unsafe or inappropriate experiences,” Adam Mosseri, the head of Instagram, wrote in the blog post announcing the expansion. “Teen Accounts are designed to give parents peace of mind.”