Radnor and Council Rock students made AI deepfakes of classmates. Parents say the schools failed to protect their daughters.

As students create and share so-called deepfakes of classmates, schools say they have no role in criminal investigations, and are limited in their ability to police students off campus.

One night in early December, the phones of Radnor High School students started buzzing. Some freshman girls were getting disturbing messages: A male classmate, they were told, had made pornographic videos of them.

When one of the girls walked into school the next morning, “she said everyone was staring at her,” said her mother, who requested anonymity to protect her daughter’s identity. “All the kids knew. It spread like wildfire.”

So-called AI deepfakes — pictures of a real person manipulated with artificial intelligence, sometimes with “nudify” features that can convert clothed photos into naked ones — have become the talk of school hallways and Snapchat conversations in some area schools.

As Pennsylvania lawmakers have moved to crack down on deepfakes — defining explicit images as child sexual abuse material, and advancing another measure that would require schools to immediately alert law enforcement about AI incidents — schools say they have no role in criminal investigations, and are limited in their ability to police students off campus.

But some parents say schools should be taking a more proactive stance to prepare for AI abuse — and are failing to protect victims when it happens, further harming students who have been violated by their peers.

In the Council Rock School District, where AI-generated deepfakes were reported last March, parents of targeted girls said administrators waited five days to contact the police about the allegations and never notified the community, even after two boys were charged with crimes.

“They denied everything and kind of shoved it under the rug and failed to acknowledge it,” said a mother in Council Rock, who also requested anonymity to protect her daughter’s identity. “Everybody thought it was a rumor,” rather than real damage done to girls, the mother said.

Council Rock spokesperson Andrea Mangold said that the district “recognizes and understands the deep frustration and concern expressed by parents,” and that a police investigation “began promptly upon the district’s notification.”

Mangold said current laws were “insufficient to fully prevent or deter these incidents,” and that the district was “limited in what we know and what we can legally share publicly” due to student privacy laws.

In Radnor, parents also said the district minimized the December incident. A district message last month said a student had created images of classmates that “move and dance,” and reported that police hadn’t found evidence of “anything inappropriate” — even though police later said they had charged a student with harassment after an investigation into alleged sexualized images of multiple girls.

A Radnor spokesperson said the alleged images were never discovered and that the district’s message was co-written by Radnor police, who declined to comment.

The district “approaches all student-related matters with care and sensitivity for those involved,” said the spokesperson, Theji Brennan. She said the district was limited in what it could share about minors.

In both Radnor and Council Rock, parents said their daughters were offered little support — and were told that if they were uncomfortable, they could go to quiet rooms or leave classes early to avoid crossing paths with boys involved in the incidents.

“She just felt like no one believed her,” the Radnor mother said of her daughter.

How an investigation unfolded in Radnor

In Radnor, five freshman girls first heard they were victims of deepfakes on Dec. 2, according to parents of two of the victims, who requested anonymity to protect their daughters’ identities. They said boys told their daughters that a male classmate had made videos depicting them sexually.

In a Snapchat conversation that night, one boy said, “‘Nobody tell their parents,’” a mother of one of the victims recalled. Reading her daughter’s texts, she said, “it quickly went from high school drama to ‘Wow, this is serious.’”

The girls and their parents never saw the videos. In an email to school officials the next morning, parents asked for an investigation, discipline for the students involved, and efforts to stop any sharing of videos. They also asked for support for their daughters.

School administrators began interviewing students. The mother of one of the victims said her daughter was interviewed alone by the male assistant principal — an uncomfortable dynamic, given the subject matter, she said.

One mother said the principal told her daughter it was the boys’ word against hers, and that he was “so glad nothing was shared” on social media — even though no one knew at that point where videos had been shared, the mother said.

The principal said the school had no authority over kids’ phones, so the girl and her family would need to call the police if they wanted phones searched, the mother said.

Brennan, the Radnor spokesperson, said that administrators contacted Radnor police and child welfare authorities the same day they spoke with families. “The district’s and the police department’s investigations have found no evidence that the images remain or were shared, posted, or otherwise circulated,” she said.

The male classmate acknowledged making videos of the girls dancing in thong bikinis, the parents said police told them. But the app he used was deleted from his phone, and the videos weren’t on it, the police told them.

The parents didn’t believe the admission.

“I don’t think a 14-year-old boy would report a TikTok video of girls in bikinis,” said one of the mothers, who said her daughter was told she was naked and touching herself in videos.

The police told parents they didn’t subpoena the app or any social media companies, making it impossible to know what was created.

Radnor Police Chief Chris Flanagan declined to comment, as did the Delaware County District Attorney’s Office.

In a message sent to the district community Jan. 16 announcing the end of the police investigation, officials said a student, outside of school hours, had taken “publicly available” photos of other students and “used an app that animates images, making them appear to move and dance.”

“No evidence shared with law enforcement depicted anything inappropriate or any other related crime,” the message said.

A week later, the police released a statement saying a juvenile was charged with harassment after an investigation into “the possible use of AI to generate non-consensual sexualized imagery of numerous juveniles.”

Asked why the district’s statement omitted the criminal charge and any mention of sexualized imagery, Brennan said the statement was also signed by Flanagan, who declined to comment on the discrepancy.

Brennan said the district had provided ongoing support to students, including access to a counselor and social worker.

Parents said the district had erred in failing to initiate a Title IX sexual harassment investigation, instead telling parents they needed to file their own complaints.

“They kept saying, ‘This is off campus,’” the mother said. But “my daughter could not walk around without crying and feeling ashamed.”

Parents say girls were ‘not supported’ in Council Rock

In Council Rock, a girl came home from Newtown Middle School on March 17 and told her mother a classmate had created naked images of her.

“I’m like, ‘Excuse me? Nobody contacted me,’” said the mother, who requested anonymity to protect her daughter’s identity. She called the school’s principal, who she said told her: “‘Oh my God, I meant to reach out to you. I have a list of parents, I just have not gotten to it’ — you know, really downplaying it.”

The mother and other victims’ parents later learned that administrators were alerted to the images on March 14, when boys reported them to the principal. But instead of calling the police, the principal met with the accused boy and his father, according to parents. Police told parents they were contacted by the school five days later. The Newtown police didn’t respond to a request for comment.

Mangold, the Council Rock spokesperson, declined to comment on the specific timing of the school’s contact with police.

Police ultimately obtained images after issuing a subpoena to Snapchat; in total, there were 11 victims, the parents said.

Through the Snapchat data, police learned that a second boy was involved, the parents said, which made them question what was created and how far it spread.

Parents said they believe there are more pictures and videos than police saw, based on what their daughters were told — and because the delayed reporting to police could have given the boys an opportunity to delete evidence.

“That’s kind of what the fear of our daughters is — like, what was actually out there?” said one mother, who also requested anonymity to protect her child’s identity.

Manuel Gamiz, a spokesperson for the Bucks County district attorney, said Newtown Township Police had charged two juveniles with unlawful dissemination of sexually explicit material by a minor. Gamiz said the office couldn’t provide further information because the case involved juveniles.

Juvenile cases are not public, but victims’ parents said both boys were adjudicated delinquent. While the boys had been attending Council Rock North High School with their daughters, the district agreed to transfer both after their cases were resolved, according to a lawyer representing four of the parents, Matthew Faranda-Diedrich.

“How can you let this person be roaming the halls?” said Faranda-Diedrich, who said it took formal demand letters for the district to transfer the boys.

He accused the district of mishandling the incident and “protecting the institution” rather than the victimized girls.

“They’re putting themselves above these students,” Faranda-Diedrich said.

Parents said school leaders warned their daughters against spreading rumors, and never sent a districtwide message about the incident.

“These girls were victims,” one of the mothers said, “and they were not supported.”

She and the other mothers who spoke to The Inquirer said the incident has deeply affected their daughters, from anxiety around what images may have been created — and how many people saw them — to a loss of trust in school leaders.

Some of the girls are considering switching schools, one mother said.

State law changes and a debate over educating students about deepfakes

In Pennsylvania, AI-generated sexual images of minors are now classified as child sexual abuse material, and people can also be charged with digital forgery for creating them.

Those changes came in 2024 and 2025, after a scandal over deepfakes of nearly 50 girls at a Lancaster private school.

Another bill that passed the state Senate unanimously in November would require school staff and other mandated reporters to report AI-generated explicit images of minors as child abuse — closing what prosecutors had cited as a loophole when they declined to bring charges against Lancaster Country Day School for failing to report AI images to the police. That legislation is now pending in the House.

Schools can also do more, said Faranda-Diedrich, who also represented parents of victims in the Lancaster Country Day School incident. He has pressed schools to conduct mandated reporter training for staff. “By and large they refuse,” he said.

In Radnor, parents urged the school board at last week’s committee meeting to make changes.

Luciana Librandi, a parent of a freshman who said she had been “directly impacted by the misuse of generative AI,” called for timelines for contacting police following an AI incident, safeguards during student questioning, and annual education for students and parents on AI.

Others called for the district to communicate the criminal charge to families, to enforce existing policies against harassment, and to independently review its response to the recent incident.

Radnor officials said they are planning educational programming on the dangers of making AI images without a person’s consent.

There’s some debate over whether to teach children about “nudify” apps and their dangers, said Riana Pfefferkorn, a policy fellow at the Stanford Institute for Human-Centered AI who has researched the prevalence of AI-generated child sexual abuse material. Alerting kids to the apps’ existence could cause them “to make a beeline for it,” Pfefferkorn said.

But the widely publicized controversy over Elon Musk’s Grok AI chatbot producing sexualized images of women and children may have tipped the scale in favor of more proactive education, she said.

While “this isn’t something that is epidemic levels in schools just yet,” Pfefferkorn said, “is this a secret we can keep from children?”

One of the victims’ parents in Radnor said education on the topic is overdue.

“It’s clearly in school,” the mother said. “The fact there’s no video being shown on the big screen in your cafeteria — we don’t live in that world anymore.”