States Race To Restrict Deepfake Pornography As It Becomes Easier To Create

By Madyson Fitzgerald

After a 2014 leak of hundreds of celebrities’ intimate photos, Uldouz Wallace learned that she was among the public figures whose images had been stolen and disseminated online.

Wallace, an actress, writer, and social media influencer, found out the images were ones her ex had taken without her consent and had threatened to leak.

Over the next few years, Wallace spent large sums of money paying private companies to take down the images, she said. It wasn’t until later that she found out that those same photos had been used to make fake pornographic images of her.

“It’s just ridiculous the amount of time that people have and how much they’re profiting from these kinds of things,” Wallace told Stateline. “For them to sit there and create so much fake content of someone that clearly doesn’t want anything of that sort? Without consent? It’s just crazy to me.”

Mortified, Wallace was reluctant to share her story — at first. But in 2022, she went public with it and now she heads a nonprofit organization, Foundation Ra, that supports people who have become victims of manipulated or artificial intelligence-generated sexual images.

“I thought, ‘At what point is somebody going to do something about this?’ ” she said. “And that’s when I decided to share my story and try to change the law.”

As more people, including minors, become victims of deepfake pornography and the industry that’s growing out of it, state lawmakers are pursuing legislation to deter the unauthorized creation and dissemination of digitally altered images.

Deepfakes — digitally altered photos and videos that can make someone appear to be, or be doing, just about anything — have proliferated on the Internet. Examples range from simple face swaps done with readily available software to a TikTok creator grafting Tom Cruise’s face and voice onto his own body.

In 2023, the total number of deepfake videos online was 95,820, up 550 percent from 2019, according to a report by Home Security Heroes, a group that researches online security. Pornography made up 98 percent of them.

The issue made international headlines in January, when fabricated sexually explicit images of pop star Taylor Swift that had been created by a free artificial intelligence generator went viral, prompting lawmakers in several states to introduce legislation to combat deepfake porn, including Missouri’s Taylor Swift Act.

Several years ago, special equipment was needed to make a deepfake video. That’s no longer true, said Marc Berkman, chief executive officer of the Organization for Social Media Safety, a national nonprofit organization dedicated to social media safety.

“This is a clear public policy issue,” Berkman said. “This is a behavior that we recognize causes harm, does not conform to societal values, relies on new technology, and so there should be a public policy response.”


Adding To Existing Laws

Indiana, Texas, and Virginia in the past few years have enacted broad laws with penalties of up to a year in jail plus fines for anyone found guilty of sharing deepfake pornography. In Hawaii, the punishment is up to five years in prison.

Many states are combating deepfake pornography by adding to existing laws. Several, including Indiana, New York, and Virginia, have enacted laws that add deepfakes to existing prohibitions on so-called revenge porn, or the posting of sexual images of a former partner without the person’s consent. Georgia and Hawaii have targeted deepfake porn by updating their privacy laws.

Other states, such as Florida, South Dakota, and Washington, have enacted laws that update the definition of child pornography to include deepfakes. Washington state’s law, which was signed in March by Governor Jay Inslee, a Democrat, makes it illegal to be in possession of a “fabricated depiction of an identifiable minor” engaging in a sexually explicit act — a crime punishable by up to a year in jail.

Washington state Representative Tina Orwall, a Democrat, said that she and her colleagues wanted to act right away because it can be hard to keep up with this kind of technology.

“It [technology] just moves so fast,” she said. “Deepfakes and AI have been around, but now it seems like it’s accelerated. We’re just concerned about how we can protect people from the parts that are harmful.”

Deepfake pornography bills also are advancing in other states, including Illinois, Missouri, New Jersey, and Ohio.

“States need to have their own laws that empower local law enforcement to be able to step in and act in these circumstances,” said Illinois state Senator Dan McConchie, a Republican, who is sponsoring a bill that would prohibit the creation of deepfakes that feature minors engaged in sexual activity. “We can’t wait for an overtaxed federal judiciary to hopefully get around to it at some point.”

There are no federal laws banning deepfake porn, but several bills have been introduced in Congress, including the AI Labeling Act of 2023 and the DEFIANCE Act of 2024. Neither has moved out of committee.


High School Victims

In 2023, sophomores at Westfield High School in New Jersey created and spread deepfake pornographic images of Francesca Mani and other classmates without their consent, according to school officials. In response, school principal Mary Asfendis sent a letter notifying the school community of the incident and inviting students to seek support from the school’s counselors. The school also launched an investigation, Mary Ann McGann, coordinator of school and community relations, wrote in an email message to Stateline.

Francesca and her mother, Dorota, have been advocating for legislation that would protect girls in the future, Dorota Mani said in an interview.

Since the Westfield High incident, there have been news reports of middle- and high-school students in California, Florida, and Washington state becoming victims of deepfake pornography. The students — primarily girls — were targeted by their classmates, according to the reports.

The American Legislative Exchange Council, a conservative public policy organization, is promoting model language for state lawmakers to use that would target individual actors rather than technology developers. The Stop Deepfake CSAM Act is intended to supplement laws against child pornography, while the Stop Non-Consensual Distribution of Intimate Deepfake Media Act aims to bolster revenge porn laws. (CSAM stands for Child Sexual Abuse Material.)

“Artificial intelligence is a tool that can be used for good or used for ill,” said Jake Morabito, who heads a technology task force at the organization. “What we should be focusing on is harmful conduct using AI. So, we should go after the bad actors and the harmful conduct, but don’t go after the people who are making the software.”

In Virginia, legislators realized that a revenge pornography law enacted in 2014 was not enough to protect people who had been harmed by deepfake porn. As a result, state Delegate Marcus Simon, a Democrat, helped pass an amendment in 2019 to include images that were artificially created.

“What duties do we owe to each other as good digital citizens?” Simon asked. “And what are the remedies for violating people? All of that will need to be worked out.”

