Schools are facing a growing problem: students using artificial intelligence to turn innocent images of classmates into sexually explicit deepfakes.
The consequences of distributing doctored photos and videos can be a nightmare for victims.
The problem facing schools was highlighted this fall when AI-generated nude images swept through a Louisiana high school. Two boys were eventually charged, but not before one of the victims was expelled from school for fighting with a boy she accused of creating images of her and her friends.
“While the ability to modify images has been available for decades, the development of artificial intelligence has made it easier for anyone to modify or create such images with little to no training or experience,” Lafourche Parish Sheriff Craig Webre said in a news release. “This incident highlights a serious issue that all parents should discuss with their children.”
Here are key takeaways from AP's reporting on the rise of AI-generated nude images and how schools are responding.
A prosecution involving deepfakes at a Louisiana high school is believed to be the first under a new state law, said Republican Sen. Patrick Connick, the law's sponsor.
The law is one of many across the country aimed at cracking down on deepfakes. As of 2025, at least half the states had adopted legislation addressing deepfakes, seemingly realistic but fabricated images and sounds created with generative artificial intelligence, according to the National Conference of State Legislatures. Some of the laws specifically address simulated child sexual abuse material.
Students have also been prosecuted in states including Florida, Pennsylvania and California. One fifth-grade teacher in Texas was also charged with using artificial intelligence to create child sexual abuse images of his students.
Deepfakes began as a way to humiliate politicians and celebrities. Until the past few years, making them look realistic required some technical skill, said Sergio Alexander, a researcher at Texas Christian University who has written about the problem.
“Now you can do it in an app, upload it to social media, and you don’t have to have any technical knowledge,” he said.
He called the scale of the problem staggering. The National Center for Missing and Exploited Children said reports of AI-generated child sexual abuse imagery to its CyberTipline grew from 4,700 in 2023 to 440,000 in just the first six months of 2025.
Sameer Hinduja, co-director of the Cyberbullying Research Center, recommends that schools update their policies to cover AI deepfakes and do a better job of communicating them. That way, he said, “students won't feel like staff and faculty are completely oblivious, which can make them feel like they can act with impunity.”
He said many parents believe schools are solving the problem when in fact they are not.
“A lot of them are just uninformed and ignorant,” said Hinduja, who is also a professor in the School of Criminology and Criminal Justice at Florida Atlantic University. “You hear about the ostrich syndrome, where we just bury our heads in the sand and hope it doesn't happen among our young people.”
AI-powered deepfakes differ from traditional bullying in that instead of a nasty text or rumor, there is a video or image that often goes viral and then continues to resurface, creating a cycle of trauma, Alexander said.
Many victims become depressed and anxious, he said.
“They're literally shutting down because it seems like there's no way they can even prove that it's not real—because it actually looks 100 percent real,” he said.
Parents can start the conversation by casually asking their kids if they've seen any funny fake videos online, Alexander said.
“Take a moment to laugh at some of them, like Bigfoot chasing tourists,” he said. From there, parents can ask their children: “Have you ever thought about what would happen if you were in one of these videos, even a funny one?” Parents can then ask whether the kids know of any fake videos made of a classmate, even a harmless one.
“Based on the numbers, I guarantee they will say they know someone,” he said.
If kids are exposed to things like deepfakes, they need to know they can talk to their parents without getting in trouble, said Laura Tierney, founder and CEO of The Social Institute, which educates students about the responsible use of social media and helps schools develop policies. She said many children fear their parents will overreact or take away their phones.
She uses the acronym SHIELD as an action plan. The “S” stands for “stop” and not forwarding the image. The “H” stands for “huddle” with a trusted adult. The “I” stands for “inform” any social network where the image is posted. The “E” is a reminder to gather “evidence,” such as who is distributing the image, but without downloading anything. The “L” stands for “limit” access to social media. The “D” is a reminder to “direct” victims to help.
“The fact that this acronym has six steps, I think, shows that this issue is really complex,” she said.
___
The Associated Press' education coverage receives financial support from multiple private foundations. AP is solely responsible for all content. Find AP's standards for working with philanthropies, a list of supporters and funded coverage areas at AP.org.