Schools are facing a growing problem of students using artificial intelligence to turn innocent images of classmates into sexually explicit deepfakes.
The fallout from the spread of manipulated photos and videos can create a nightmare for victims.
The challenge for schools was highlighted this fall when AI-generated nude photos emerged at a Louisiana middle school. Two boys were eventually charged, but not before one of the victims was expelled for starting a fight with a boy whom he accused of creating an image of him and his friends.
“Although the ability to alter images has been available for decades, the rise of AI has made it easier for anyone to alter or create such images without any training or experience,” Lafourche Parish Sheriff Craig Webre said in a news release. “This incident highlights a serious concern that all parents should address with their children.”
Here are the highlights from the AP story on the rise of AI-generated nude images and how schools are responding.
More states pass laws to address deepfakes
The prosecution involving the Louisiana middle school deepfakes is believed to be the first under the state's new law, according to Republican state Sen. Patrick Connick, who wrote the legislation.
The law is one of several targeting deepfakes across the country. According to the National Conference of State Legislatures, as of 2025 at least half the states had enacted legislation addressing the use of generative AI to create realistic but fabricated images and sounds. Some of those laws specifically address AI-generated child sexual abuse material.
Students have also been prosecuted in Florida and Pennsylvania, and expelled in states such as California. A fifth-grade teacher in Texas was also accused of using AI to create child sexual abuse images of his students.
Creating deepfakes becomes easier as technology evolves
Deepfakes started as a way to humiliate politicians and celebrities. Until the past few years, making them look realistic required some technical skill, said Sergio Alexander, a research associate at Texas Christian University who has written about the issue.
“Now, you can do it on an app, you can download it on social media, and you don’t need any kind of technical expertise,” he said.
He described the scope of the problem as shocking. The National Center for Missing and Exploited Children said the number of AI-generated child sexual abuse images reported to its CyberTipline surged from 4,700 in 2023 to 440,000 in the first six months of 2025.
Experts fear that schools are not doing enough
Sameer Hinduja, co-director of the Cyberbullying Research Center, suggests that schools update their policies to cover AI-generated deepfakes and do a better job of communicating those policies. That way, he said, “students don’t think that staff and teachers are completely oblivious, which can make them feel like they can act with impunity.”
He said many parents believe schools are addressing the issue when that is not the case.
“A lot of them are so clueless and ignorant,” said Hinduja, who is also a professor in the School of Criminology and Criminal Justice at Florida Atlantic University. “We hear about ostrich syndrome, it’s like burying your head in the sand, hoping it’s not happening among their youth.”
Trauma from AI deepfakes can be especially harmful
AI deepfakes are different from traditional bullying because instead of a nasty text or rumor, there is a video or image that often goes viral and then keeps resurfacing, creating a cycle of trauma, Alexander said.
Many victims become depressed and anxious, he said.
“They literally shut down because it makes it feel like, you know, there’s no way they can even prove that it’s not real — because it looks 100% real,” he said.
Parents are encouraged to talk to students
Parents can start the conversation by asking their kids if they’ve seen any funny fake videos online, Alexander said.
They might even laugh together at some of them, like Bigfoot chasing hikers. From there, parents can ask their kids, “Have you thought about what it would be like if you were in this video, even if it’s a funny video?” Parents can then ask whether their kids know of a classmate who has made a fake video, even a harmless one.
“Based on the numbers, I guarantee they will say they know someone,” he said.
If kids encounter things like deepfakes, they need to know they can talk to their parents without fear of repercussions, said Laura Tierney, founder and CEO of The Social Institute, which educates people about responsible use of social media and helps schools develop policies. She said many children fear that their parents will overreact or take away their phones.
She uses the acronym SHIELD as a roadmap for responding. The “S” stands for “stop” and not proceed. “H” stands for “huddle” with a trusted adult. “I” stands for “inform” any social media platform where the image is posted. The “E” is a prompt to gather “evidence,” such as who is spreading the image, but not to download anything. The “L” is for “limit” the image’s reach on social media. The “D” is a reminder to “direct” help to the victim.
“The fact that that acronym is six steps, I think, really simplifies the issue,” she said.
The Associated Press’s education coverage receives financial support from several private foundations. AP is solely responsible for all content. Find AP’s standards for working with philanthropy, a list of supporters, and funded coverage areas on AP.org.