Teenage girls in the United States who are increasingly targeted or threatened with fake nude photos created with artificial intelligence or other tools have limited ways to seek accountability or recourse, while schools and state legislatures struggle to catch up with new technologies, according to lawmakers, legal experts and a victim who is now advocating for a federal bill.
Since the start of the 2023 school year, cases involving teenage girls who were victims of fake nude photos, also known as deepfakes, have increased around the world, including in high schools in New Jersey and the state of Washington.
Local police departments are investigating the incidents, lawmakers are rushing to pass new measures that would impose sanctions against the creators of the photos, and affected families are pushing for answers and solutions.
Crude deepfakes can be made with simple photo-editing tools that have been around for years. But two school districts told NBC News they believe the fake photos of teens that affected their students were generated by AI.
AI technology is increasingly accessible, including Stable Diffusion, an open-source tool capable of producing images from text prompts, and “face swap” tools that can put a victim’s face in place of a porn performer’s in a video or photo.
Apps purporting to “undress” clothed photos were also identified as possible tools used in some cases and were found available for free on app stores. These modern deepfakes may be more realistic and harder to immediately identify as fake.
“I didn’t know how complex and scary AI technology was,” said Francesca Mani, 15, a sophomore at New Jersey’s Westfield High School, where more than 30 girls learned on October 20 that they may have been depicted in explicit AI-manipulated images.
“I was shocked because the other girls and I were betrayed by our classmates,” she said, “which means it can happen to anyone and everyone.”
Politicians and legal experts say there are few, if any, avenues for recourse for victims of AI-generated and deepfake pornography, which often attaches a victim’s face to a naked body.
Photos and videos can be surprisingly realistic, and according to Mary Anne Franks, a legal expert on nonconsensual sexually explicit media, the technology to make them has become more sophisticated and accessible.
A month after the incident at Westfield High School, Francesca and her mother, Dorota Mani, said they still do not know the identity or number of people who created the images, how many were made or if they still exist. It’s also unclear what punishment, if any, the school district handed out.
The Town of Westfield directed comment to Westfield Public Schools, which declined to comment. Citing privacy, the school district previously told NBC New York it would “not release any information about the students accused of creating fake nude photos, nor the discipline they face.”
Superintendent Raymond Gonzalez told the outlet that the district “will continue to strengthen our efforts by educating our students and establishing clear guidelines to ensure these new technologies are used responsibly in our schools and beyond.”
In an email obtained by NBC News, Mary Asfendis, the high school’s principal, told parents on October 20 that she was investigating students’ allegations that some of their peers had used AI to create pornographic images from original photos.
At the time, school officials believed any images created had been deleted and were not being released, according to the memo.
“This is a very serious incident,” Asfendis wrote, urging parents to discuss their use of technology with their children. “New technologies have made image falsification possible and students need to know the impact and harm these actions can cause to others.”
Although Francesca did not see the image of herself or anyone else, her mother said the Westfield principal told her that four people had identified Francesca as a victim. Francesca filed a police report, but neither the Westfield Police Department nor the district attorney’s office responded to requests for comment.
New Jersey State Senator Jon Bramnick said law enforcement had expressed concerns to him that the incident would amount only to an allegation of “cyber harassment,” even though it should actually rise to the level of a more serious crime.
“If you attach a naked body to a child’s face, that to me constitutes child pornography,” he said.
The Republican lawmaker said state laws currently fail to punish content creators, even though the harm inflicted by real or manipulated images may be the same.
“It victimizes them in the same way as people who traffic in child pornography. This is not only offensive to the young person, it defames them. And you never know what’s going to happen to that photo,” he said. “We don’t know where it is once transmitted, when it will come back to haunt the girl.”
A bill pending in New Jersey, Bramnick said, would ban deepfake pornography and impose criminal and civil penalties for non-consensual disclosure. Under the bill, a person convicted of the crime would face three to five years in prison and/or a $15,000 fine, he said.
If the bill passes, New Jersey would join at least 10 other states that have enacted legislation targeting deepfakes, according to Franks, a law professor and president of the Cyber Civil Rights Initiative, a nonprofit group that fights non-consensual pornography.
State laws targeting deepfakes vary widely in scope. Some of them, like those in Texas and Wyoming, make non-consensual pornographic deepfakes a criminal offense. Other states, such as New York, have laws that only allow victims to file a civil lawsuit.
Franks said the laws are “all over the place,” not comprehensive, and the constitutionality of the laws has been called into question.
“So you have a whole host of criminal charges, which are going to be difficult in these cases because the perpetrators are going to be juveniles, which raises its own questions,” she said.
“Probably just the tip of the iceberg”
It is unclear how many young people have been victims of AI-generated nudes.
The FBI said it is difficult to calculate the number of minors who are sexually exploited, but the agency said it has seen an increase in open cases involving crimes against children: more than 4,800 in 2022, up from more than 4,100 the previous year, the FBI told NBC News.
“The FBI takes crimes against children seriously and works to investigate the facts of each allegation in a collective effort with our state, local and tribal law enforcement partners,” the agency said, adding that victims can face significant challenges in trying to stop the distribution of an image or have it removed from the internet.
Franks said there will likely be many more incidents and they will only increase.
“Whatever we hear about that is bubbling to the surface is probably just the tip of the iceberg,” she said. “It’s probably happening quite often right now, and the girls just haven’t found out or found out about it yet, or the school is covering it up.”
At Issaquah High School in Washington state, a school district representative said a mid-October incident “involving fake AI-generated images of students” continued to affect the student body.
In the Spanish town of Almendralejo, mothers say dozens of their school-age daughters have been victims of AI-generated nude photos, created with an app that can “undress” clothed photos. Local police in New Jersey, Washington and Spain are investigating the high school cases.
In a public service announcement issued in June, the FBI warned that technology used to create fake, nonconsensual pornographic photos and videos was improving and being used for harassment and sextortion.
Meanwhile, the National Association of Attorneys General called on Congress in September to study the effects of AI on children and propose legislation that would protect them from such abuse.
In a letter signed by 54 state and territory attorneys general, the group expressed concern that “AI is creating a new frontier for abuse that makes prosecution more difficult.”
“We are in a race against time to protect our nation’s children from the dangers of AI,” the letter said.
Francesca and her mother said they plan to travel to Washington, D.C., in December to personally urge members of Congress to take action, as they continue to advocate for updated policies within the school system and demand accountability for what happened.
“We all know this is not an isolated incident,” Dorota Mani said. “This will never be an isolated incident. This will continue to happen all the time. We need to stop pretending it’s not important.”
The increase in incidents targeting high school girls follows the proliferation of deepfake AI applications and deepfake pornography websites where this type of material is created, shared and sold.
A 2019 report from Sensity, an Amsterdam-based company that tracks AI-generated media, found that 96% of deepfakes created to date were sexually explicit and featured women who had not consented to their creation. Many victims are unaware of the existence of deepfakes.
Franks said there is nothing parents and children can do to stop fakes using their images from being created. Instead, Franks said schools and local law enforcement need to set an example for perpetrators in cases that affect the general public, to discourage others from creating deepfakes.
“If you could imagine a big, dramatic response from the New Jersey school or the New Jersey authorities to make an example of this case, very strict penalties, people going to jail, you might be discouraged,” Franks said.
“In the absence of that, it will just become another tool that men and boys use against women and girls to exploit and humiliate them, and about which the law has virtually nothing to say.”