What is AI-Facilitated Sexual Violence (AIFSV)?
​
AI-facilitated sexual violence refers to creating, distributing, or threatening to distribute sexually explicit AI-generated content of individuals without their consent.
​​
Deepfake Pornography
- Deepfakes are AI-generated media in which a person's face or body is digitally inserted into a different image or video, typically a pornographic one. These videos and images are becoming increasingly difficult to detect and can be produced in minutes. In 2023, CBS News reported over 21,000 deepfake porn videos online, a 460% increase from the year before.
​
- 99% of deepfake pornography depicts women.
​
- According to Sensity AI, by mid-2024, millions of non-consensual deepfake nudes had been shared in public Telegram groups and on image forums, many involving teens or celebrities (source).
​
- Teenage students have been targeted by classmates who create fake nudes to bully, shame, or extort them.
- Public female figures such as journalists, streamers, and influencers are often harassed with fake porn circulated on Reddit, Discord, and 4chan.
- Trans people and LGBTQ+ creators have been increasingly targeted in anti-queer harassment campaigns.
​
How Is It Spreading?
- Public AI tools (some requiring no technical skill) allow anyone to create deepfakes from a few images.
- Online forums, Telegram channels, and private Discord servers trade in deepfake porn, sometimes with tens of thousands of members.
- Some sites charge users to upload a person's photo and receive back a pornographic fake within minutes.
​
What Is AI-Generated Sextortion?
Sextortion is when someone threatens to share sexual images of a person unless they comply with demands, usually for more images, sexual acts, or money. With AI, predators no longer need real nude photos; they can use generative AI tools to create realistic, fake sexual images of minors from innocent social media photos.
​
- In 2023, the U.S. FBI warned that predators are using AI to create fake explicit images of minors and then threatening to release them unless the victim complies.
- Any photo, even fully clothed, can now be used to generate a fake nude or sexual image.
- In a survey of U.S. minors (ages 9–17), 10% said they knew someone using AI to make sexual images of peers.
​
What Is AI-Driven Grooming?
Grooming is when a predator builds trust with a child to manipulate them into sexual contact or content. AI allows abusers to scale this process through chatbots, fake profiles, and voice cloning.
​
- Predators can now use AI chatbots programmed to imitate teenagers and message dozens of children at once, building rapport automatically.
- Voice cloning tools can mimic a parent's or friend's voice to trick a child into trusting the predator.
- Deepfake images and fake social media profiles are being used to catfish kids, often with avatars that appear to be real teens.
​​
AIFSV News and Developments:
Subscribe to the newsletter to receive monthly news and policy updates.
​
1. Cyberstalking Using AI Chatbots
James Florence Jr., a 36-year-old from Plymouth, Massachusetts, engaged in a decade-long cyberstalking campaign against multiple victims. Between 2014 and 2024, Florence programmed AI-driven chatbots to impersonate his victims, providing these chatbots with personal information such as employment history, education, and family details. These chatbots then interacted with unsuspecting individuals, divulging sensitive information and luring strangers to the victims' residences under false pretenses. Florence was arrested in September 2024 and later agreed to plead guilty to seven counts of cyberstalking and one count of possession of child pornography.
2. Deepfake Pornography Abuse
In 2024, Hannah Grundy, a 35-year-old high school teacher from Sydney, discovered that explicit deepfake pornographic images of her were circulating online. These images featured her face superimposed onto explicit content and were accompanied by personal details and fabricated rape fantasies. The perpetrator was identified as Andrew Hayler, a longtime friend and former colleague. Hayler had spent years digitally altering photos of Grundy and other women he knew, posting them to pornographic websites. Grundy's discovery led to significant psychological trauma and financial costs as she pursued legal action. Hayler was charged with offenses related to 26 women and pleaded guilty to all charges.
3. New Jersey Criminalizes Deceptive AI-Generated Media
In April 2025, New Jersey Governor Phil Murphy signed legislation making the creation and distribution of deceptive AI-generated media, commonly known as deepfakes, a criminal offense. The law imposes penalties of up to five years in prison and fines up to $30,000 for individuals found guilty of producing or sharing such content with malicious intent. The legislation was partly inspired by the experience of Francesca Mani, a high school student who became a deepfake victim and advocated for legal protections after discovering no existing laws addressed her situation. This move positions New Jersey among at least 20 states enacting measures to combat the misuse of generative AI technologies.
4. Exposure of AI Image Generator's Database Reveals Harmful Content
In March 2025, security researcher Jeremiah Fowler uncovered an unsecured database belonging to South Korea-based AI image-generation company GenNomis. The database contained over 95,000 records, including explicit AI-generated images and child sexual abuse material. Some images even depicted celebrities de-aged to look like children. Despite GenNomis' guidelines against explicit and illegal activities, the exposure revealed inadequate moderation and protection measures. Following the discovery, GenNomis secured the database but did not publicly address the findings.
5. High School Student Creates Deepfake Pornography of Classmates
In January 2025, a teenage student at a high school in southwest Sydney was investigated for allegedly creating and distributing deepfake pornographic images of female classmates. The student reportedly used AI tools to generate explicit images by superimposing the faces of classmates onto pornographic content. These images were then circulated using fake social media accounts, causing significant distress among the victims.
6. San Francisco's Legal Action Against AI "Nudify" Websites
In August 2024, San Francisco City Attorney David Chiu filed a lawsuit against 16 websites that use AI to create non-consensual, fake nude images of women and girls. These sites allow users to "nudify" or "undress" photos of individuals, violating state and federal laws prohibiting deepfake pornography and revenge pornography. The lawsuit represents a significant step in addressing the proliferation of AI-generated explicit content and seeks to hold the operators of these websites accountable for facilitating the creation and spread of deepfake pornography.