A survey of more than 7,000 people in Australia, the UK, and the US found that 3.2% of respondents reported engaging in the creation, sharing, and/or threatened sharing of sexual deepfakes. Men, young adults, non-white respondents, and people with disabilities were more likely to engage in these behaviors. 18% of respondents reported intentionally viewing such images, mostly out of curiosity. The study was published in Computers in Human Behavior.
Sexual deepfakes are synthetic sexual images, videos, or audio recordings created or altered using AI or other digital tools. These are usually created to make it appear as if a real person is naked, engaging in sexual acts, or saying sexual things, even though that is not actually happening. Sexual deepfakes can use a real person’s face, body, voice, and likeness and combine them with fabricated sexual content.
Many sexual deepfakes are non-consensual, meaning the person depicted did not consent to the creation or sharing of the material. Non-consensual sexual deepfakes can be used for harassment, humiliation, intimidation, revenge, and sexual exploitation. Even if viewers know the content is fake, such material can damage an individual’s reputation, privacy, safety, relationships, and mental health.
Although the sexual content displayed in such deepfakes is not real, the harm they cause can still be very real if the deepfakes depict a real, identifiable person. For this reason, sexual deepfakes are increasingly being treated as a serious legal and ethical issue in many jurisdictions. In research and policy development, these are typically described as synthetic sexualized media depicting an identifiable person without their consent.
Study author Rebecca Umbach and her colleagues wanted to find out how often people engage in what they call AI-generated image-based sexual abuse (AI-IBSA). This behavior includes the non-consensual creation of AI-generated intimate images (i.e., sexual deepfakes), the non-consensual sharing of such images, and threats to share them. The study authors also looked at how many people view such images and how often. More specifically, they were interested in content generated using a variety of platforms, from those that use AI to digitally remove clothing and generate explicit synthetic content to more sophisticated deepfake generators and custom-built models.
They conducted an online survey of 7,231 respondents from Australia, the UK, and the US, recruited through Sago, a leading market research company with its own online panel. Approximately 2,400 people responded from each of the three countries, which the study authors said they selected based on evidence of high “deepfake porn” traffic. In each country, roughly 50-51% of participants were women, 12-13% identified as LGBTQ+, and 18-20% reported having a disability.
The survey directly asked participants whether they had engaged in the nonconsensual creation, sharing, or threatened sharing of digitally altered sexual images. For example, participants were asked, “How many times, since you turned 18, have you posted, sent, or shown a fake or digitally altered nude/sexual image (photo or video) of someone (over 18) without their permission?”
The survey also collected participants’ demographic data, their relationship to the person in the sexual content (e.g., “former sexual partner,” “family member,” “acquaintance”), and their motivations. Participants were also asked whether they had ever intentionally viewed AI-generated nude or sexual photos or videos of celebrities, public figures, influencers, or ordinary people. Those who said they had were asked why they viewed the images, why they believed the images were AI-generated, and how they felt when viewing them.
Results showed that 3.2% of participants had engaged in at least one of the three behaviors the study authors considered AI-generated image-based sexual abuse. In other words, 3.2% of people reported creating, sharing, or threatening to share a sexual deepfake. This rate varied by country: 6.1% in the UK, 3.5% in Australia, and 2.6% in the US.
In addition, 1.4% of respondents reported that they had created, shared, or threatened to share digitally altered sexual images without using AI, and 0.5% were unsure whether AI was involved in the images they manipulated. A further 0.3% of participants reported threatening to share digitally altered images that did not actually exist.
Further analysis showed that men, younger adults, non-white participants, and participants with disabilities were more likely to engage in these behaviors. (Initially, it appeared that people with lower education levels were more likely to perpetrate AI-IBSA, but this relationship disappeared when researchers statistically controlled for other demographic factors. Similarly, statistical adjustment eliminated the disparity between white and non-white respondents among UK participants.) In most cases, participants reported creating sexual deepfakes because they wanted to experiment with the technology or to show off. Sharing was most often described as being done “for fun/as a joke.”
26% of those who shared such images and 22% of those who created them said they wanted to destroy the target’s reputation, while 12% of creators and 20% of sharers reported acting for financial gain. In most cases, perpetrators targeted current or former sexual partners. Interestingly, participants more often reported sharing deepfake sexual images of men (56%) than of women (41%).
18% of participants reported intentionally viewing sexual deepfake images. Men were 3.6 times more likely than women to intentionally view sexual deepfake images (29% vs. 8%). Similarly, younger adults, LGBTQ+ individuals, non-white participants, and participants with disabilities were more likely to view sexual deepfakes intentionally. The main motive for viewing such images was curiosity, followed by sexual gratification and entertainment.
The study also revealed significant gender differences in emotional responses to this content. Men were significantly more likely to report feeling amused and excited, while women were much more likely to report empathy for the person depicted, sadness about the world, and disgust toward the creator.
“These findings suggest that in addition to preventing the creation of non-consensual AI-generated sexual images, sociotechnical interventions are needed to address the seemingly normalized consumption of these images,” the study authors concluded.
This study contributes to scientific understanding of the behaviors associated with sexual deepfake images. However, all data used in the study were self-reported, leaving room for reporting bias to influence the findings.
The paper, “AI-generated image-based sexual abuse: Perpetration and consumption across three geographies,” was authored by Rebecca Umbach, Nicola Henry, Renee Shelby, Gemma Stevens, and Kwynn Gonzalez-Pons.