
What is AI-Facilitated Sexual Violence (AIFSV)?


AI-facilitated sexual violence refers to creating, distributing, or threatening to distribute sexually explicit AI-generated content of individuals without their consent. 


Deepfake Pornography


  • According to Sensity AI, by mid-2024 millions of non-consensual deepfake nudes had been shared in public Telegram groups and on image forums, many involving teens or celebrities.


  • Teenage students have been targeted by classmates who create fake nudes to bully, shame, or extort them.
     

  • Public female figures such as journalists, streamers, and influencers are often harassed with fake porn circulated on Reddit, Discord, and 4chan.
     

  • Trans people and LGBTQ+ creators have been increasingly targeted in anti-queer harassment campaigns.


How Is It Spreading?

  • Public AI tools (some requiring no technical skill) allow anyone to create deepfakes from a few images.
     

  • Online forums, Telegram channels, and private Discord servers trade in deepfake porn, sometimes with tens of thousands of members.
     

  • Some sites charge users to upload a person’s photo and receive back a pornographic fake within minutes.


What Is AI-Generated Sextortion?

Sextortion is when someone threatens to share sexual images of a person unless they comply with demands, usually for more images, sexual acts, or money. With AI, predators no longer need real nude photos; they can use generative AI tools to create realistic, fake sexual images of minors from innocent social media photos.


What Is AI-Driven Grooming?

Grooming is when a predator builds trust with a child to manipulate them into sexual contact or content. AI allows abusers to scale this process through chatbots, fake profiles, and voice cloning.


AIFSV News and Developments:


Subscribe to the newsletter to receive monthly news and policy updates.


1. Cyberstalking Using AI Chatbots

2. Deepfake Pornography Abuse

  • In 2024, Hannah Grundy, a 35-year-old high school teacher from Sydney, discovered that explicit deepfake pornographic images of her were circulating online. These images featured her face superimposed onto explicit content and were accompanied by personal details and fabricated rape fantasies. The perpetrator was identified as Andrew Hayler, a longtime friend and former colleague. Hayler had spent years digitally altering photos of Grundy and other women he knew, posting them to pornographic websites. Grundy's discovery led to significant psychological trauma and financial costs as she pursued legal action. Hayler was charged with offenses related to 26 women and pleaded guilty to all charges. 
     

3. New Jersey Criminalizes Deceptive AI-Generated Media

4. Exposure of AI Image Generator's Database Reveals Harmful Content

  • In March 2025, security researcher Jeremiah Fowler uncovered an unsecured database belonging to South Korea-based AI image-generation company GenNomis. The database contained over 95,000 records, including explicit AI-generated images and child sexual abuse material. Some images even depicted celebrities de-aged to look like children. Despite GenNomis' guidelines against explicit and illegal activities, the exposure revealed inadequate moderation and protection measures. Following the discovery, GenNomis secured the database but did not publicly address the findings. 
     

5. High School Student Creates Deepfake Pornography of Classmates


6. San Francisco's Legal Action Against AI "Nudify" Websites

Get Involved

If you are interested in helping Educated Consent grow, contributing to projects, or partnering with us, get in touch!


educatedconsent@gmail.com

