There is much concern and discourse around AI-facilitated image-based abuse. AI offers endless possibilities for manipulating real images or creating them from scratch in ways that harm the persons depicted, intentionally or otherwise, without their knowledge or consent. While this abuse is another tool of the techno-patriarchy, its most alarming aspect is that the technologies involved are already cheap or free, sophisticated, and freely accessible, so the abuse continues to increase. A 2023 report by the research firm Graphika shows that AI-generated non-consensual imagery has firmly moved from the margins of niche online forums to a business monetised at scale. A May 2025 study by the Oxford Internet Institute identified approximately 35,000 publicly downloadable deepfake generators, 96% of which targeted identifiable women, many of them intended to generate non-consensual nude or sexual imagery. The generators were “downloaded almost 15 million times cumulatively since November 2022.”
Previously, generating such manipulated content required expert-level command of editing software. With AI tools, however, it is now possible to create high-quality manipulated images and videos of someone without their knowledge or consent using only a single photo of them and a few seconds of their recorded voice.
Sexually explicit deepfake images and videos
The majority of victims of sexually explicit deepfake images and videos are women and gender-diverse persons. Often, women who are seen as successful, outspoken, expressive or assertive are targeted. While women and gender-diverse journalists, activists, politicians and human rights defenders are usually targeted for political motives, those in the entertainment industry are subjected to deepfakes for sexual gratification, voyeurism, objectification, online engagement (views, clicks, followers and subscriptions) and, by extension, monetary gain. A 24-year-old digital marketer, arrested last year for creating and publishing a deepfake video of the Indian actor Rashmika Mandanna, told the police he posted the video in order to gain more followers on Instagram. A Channel 4 investigation in 2024 found sexually explicit deepfake videos of 4,000 famous individuals, while another by India Today found such content circulating widely on X and Instagram.
GenAI apps and reach
“Nudify”, “body scanner” or “naked scanner” apps and chatbots are easily available on app stores and on platforms like Hugging Face, using generative AI to convert anyone’s photo into a nude image of that person, at scale. Many are free of cost, advertising themselves as able to “undress anyone for free”. While it is technically possible for someone to use commonly available image-editing tools to depict someone as nude or in a sexually explicit context, these apps make it easier, cheaper, quicker and more accessible for their users to manufacture such images, which are then used for abuse, harassment or financial gain through subscriptions or advertisements displayed alongside them.
Telegram is home to a large ecosystem of bots, groups and channels dedicated solely to the creation and/or distribution of image-based abuse, including AI-generated content. In 2020, an investigation by Wired found “at least 50 bots that claim to create explicit photos or videos of people with only a couple of clicks.” The bots listed “more than 4 million ‘monthly users’ combined,” according to Wired. A recent investigation by 404 Media found a Telegram bot that generates non-consensual deepfake videos of men ejaculating on women’s faces; the bot gathered more than 100,000 active users within a few weeks. Despite Telegram’s public claims of a “zero-tolerance policy” for illegal pornography and its efforts to remove millions of groups and channels, enforcement is inconsistent, and these bots continue to proliferate.
Siddharth Pillai of the RATI Foundation in India noted in a RightsCon 2025 session: “When nudify apps were used by minor boys to bully other boys, we saw them [images created by the app] progressively improve. Earlier female bodies were attached to the [nudified] images of men by the app. Then, the bodies became extremely amorphous. Then, there were images of male bodies with six-pack [abs]. So, you see the bias in technology played out in the way that men’s bodies were churned into the nudify app.” This indicates how the app was trained to pivot towards misogynistic content and, by extension, gender stereotypes. A Wired investigation in 2024 found nudify websites that let users log in through single sign-on (SSO) systems from major tech companies like Google, Apple, Discord, Twitter, Patreon and Line. These websites even gave users the option to share the nudified images to other platforms, including Telegram and Instagram. The terms of service of all of these companies state that developers may not use their services for harassment, privacy violations or other kinds of abuse.
The developers of nudify apps and bots are generally individuals or small-scale entities, often operating in a legal grey area and staying anonymous or pseudonymous to avoid legal or social repercussions. These apps are an example of the sort of AI tool that should not be allowed to exist. However, such capabilities are not confined to the bots of pseudonymous or fly-by-night developers. Grok, the chatbot on X (formerly Twitter), was recently reported to have responded to numerous users’ prompts of “remove her clothes” with images of real women in underwear or swimwear. At the time of writing, the bot seems to have stopped generating these images, citing ethical and privacy concerns.
Face-swapping
Face-swapping involves replacing or superimposing one person’s face onto that of another in static images or videos. A technology long used in the advertising, film and entertainment industries is now freely and publicly available to everyone because of AI. Open source code repositories like Deep-Live-Cam and services like Facecam.ai allow users to upload a photo of anyone’s face and have it replace their own face in a video in real time, while maintaining realistic expressions and movements. This lends itself to scams in which the scammer’s face is replaced by that of the target, as well as to sextortion, non-consensual pornography, impersonation, fraud and other kinds of deception. The free version of Facecam.ai offers 2.5 minutes of watermarked video; the paid tier generates 10 hours of non-watermarked video for a subscription of USD 20 per month.
Curiously, Facecam.ai was taken down by its developer in September last year after much criticism on social media of its potential for misuse, but it has since been resurrected. These services also tend to be inexpensive for their developers to operate, possibly explaining their wide proliferation on the public internet. While face-swap filters also exist on platforms like Snapchat, the resulting images look animated rather than photorealistic, making the viewer aware that the imagery is manipulated.
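To see why a single photo suffices and why such services are cheap to operate, consider the per-frame loop these tools are built around. The sketch below is conceptual, not a recipe: `load_swap_model` and `swap_face` are hypothetical stand-ins for the pretrained detector and swapper models that real tools bundle, and they are deliberately left unimplemented here.

```python
# Conceptual sketch of a real-time face-swap loop.
# `load_swap_model` and `swap_face` are hypothetical placeholders
# for the pretrained models such tools ship with.

import cv2  # OpenCV, for webcam capture and display


def load_swap_model():
    """Hypothetical: load a pretrained face detector + swapper."""
    raise NotImplementedError


def swap_face(model, frame, source_face_image):
    """Hypothetical: paste the source face onto faces found in `frame`,
    preserving the frame's expressions, pose and lighting."""
    raise NotImplementedError


def run(source_photo_path: str):
    model = load_swap_model()
    source = cv2.imread(source_photo_path)  # one photo is all that is needed
    cam = cv2.VideoCapture(0)               # the user's webcam
    while True:
        ok, frame = cam.read()
        if not ok:
            break
        # One inference pass per frame on a consumer GPU is enough for
        # real-time output, which is why these services are cheap to run.
        output = swap_face(model, frame, source)
        cv2.imshow("swapped", output)
        if cv2.waitKey(1) & 0xFF == ord("q"):
            break
    cam.release()
    cv2.destroyAllWindows()
```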
There is little by way of possible technical safeguards against the misuse of these tools. Suppose that, to prevent non-consensual pornography, the developer introduces a step that automatically checks for human nudity at different points in a video and halts the video when nudity is detected. Present-day nudity filters have poor accuracy, especially across the vast diversity of human bodies. The chances are high of a false negative (the filter fails to detect nudity) or a false positive (for example, obese persons being erroneously marked as nude because of physical attributes such as folds or a large visible area of skin).
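A minimal sketch of such a check makes the trade-off concrete. Here `nudity_score` is a hypothetical classifier returning a probability between 0 and 1, and the sampling interval and threshold are assumed values; tuning the threshold simply trades one failure mode for the other.

```python
# Minimal sketch of the automated nudity check described above.
# `nudity_score` is a hypothetical placeholder; real classifiers of
# this kind exist but vary widely in quality.

import cv2  # OpenCV, for reading video frames

SAMPLE_EVERY_N_FRAMES = 30  # check roughly once per second at 30 fps
THRESHOLD = 0.8             # assumed value; the crux of the problem


def nudity_score(frame) -> float:
    """Hypothetical nudity classifier returning a probability in [0, 1]."""
    raise NotImplementedError


def should_halt(video_path: str) -> bool:
    cap = cv2.VideoCapture(video_path)
    index = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if index % SAMPLE_EVERY_N_FRAMES == 0:
            # A high THRESHOLD yields false negatives (abusive content
            # passes); a low one yields false positives (e.g. larger
            # bodies misread as nude because of skin area or folds).
            if nudity_score(frame) > THRESHOLD:
                cap.release()
                return True
        index += 1
    cap.release()
    return False
```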
Another complication is that nudity in these videos can serve a legitimate purpose. For example, a porn performer or sex worker may mask their face in videos they create in a professional capacity, for the exclusive use of their subscribers and clients, precisely to prevent the misuse of those videos.
In the absence of regulation, policies, and forethought about safety, these technologies have already become commonplace enough to be available with a simple web search.
Disinformation and reputational harm
For manipulated images to be harmful and have real-life negative consequences, they need not be sexual or nude in nature, and thus would not qualify for safeguards, legal remedies or content moderation practices aimed at “intimate image abuse”. Several tools and services (Flux and Vidu, to name a few) enable their users to create photorealistic images and videos of two real persons hugging, kissing, making a romantic proposal, and so on. Given social and cultural expectations or diktats in some parts of the world, a non-consensual image or video of such acts, or of an individual consuming alcohol, would be enough to harm the depicted persons’ reputations and relationships, and to cause them social, physical, psychological or financial injury. Certain images can be construed as sexually suggestive even when no nudity or sexually explicit act is involved, for example, a fully clothed woman without her headscarf. Some AI services offer the capability to remix real photos into a new image or video while maintaining the original style and quality, all of which is difficult for the victim to refute because it builds on real photos of real people and real events. These images are used to harass, defame or extort the persons depicted in them. As for the AI tools that allow the creation of realistic images from text prompts, some have guardrails, such as disallowing the creation of nude images; some do not.
Solutions
In a recent paper, researchers McGlynn and Toparlak provide a comprehensive analysis of deepfake non-consensual “porn”, emphasising the creation and solicitation of deepfakes and laying out the existing and proposed legislation criminalising them in different parts of the world. In the US (the Take It Down Act, laws in California, and proposed laws in Florida and Minnesota), the UK, Australia, South Korea and Taiwan, for example, there is legislation to curb non-consensual deepfake sexualised imagery. However, most of the global majority world does not have actionable safeguards against it.
A potential legal remedy is personality rights – an individual's legal rights to their image, likeness, name, voice, or other distinctive personal attributes, especially regarding commercial exploitation. This safeguards individuals from having their personal image or identity publicly used or disclosed without consent, protecting them from unwanted exposure or intrusion.
Entities that provide internet infrastructure and services like single sign-on should ensure, via policy and practice, that their offerings are not used for abusive purposes. In addition, app stores and platform libraries run by Google, Telegram and others must have safety policies, terms of service and reporting mechanisms that prevent their platforms from being used to host and distribute nudify apps and bots, and that enable quick takedowns when violations are reported.
An article by researcher Riley Wong elucidates some technical privacy protections against image-based abuse. Tarunima Prabhakar, co-founder of Tattle Civic Technologies, emphasised during a RightsCon 2025 session the need for safety-by-design in AI systems.
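One such protection is hash-based matching, which lets a platform block re-uploads of a known abusive image without ever storing the image itself. The toy sketch below uses the open-source imagehash library; the filenames and distance threshold are illustrative assumptions, and production systems such as StopNCII use purpose-built perceptual hashes like PDQ, but the idea is the same.

```python
# Toy illustration of hash-based matching against known abusive images.
# Uses the open-source `imagehash` library; filenames and the threshold
# below are illustrative assumptions, not values from any real system.

import imagehash
from PIL import Image

MAX_DISTANCE = 8  # Hamming-distance threshold (assumed value)

# The victim (or a hotline acting for them) hashes the image locally;
# only the hash, never the image itself, leaves their device.
blocked_hashes = [imagehash.phash(Image.open("reported_image.jpg"))]


def is_blocked(upload_path: str) -> bool:
    """Return True if an uploaded image matches a reported hash."""
    candidate = imagehash.phash(Image.open(upload_path))
    # Perceptual hashes survive resizing and recompression, so a small
    # Hamming distance still counts as a match.
    return any(candidate - h <= MAX_DISTANCE for h in blocked_hashes)
```

Because perceptual hashes tolerate resizing and recompression, near-duplicates of a reported image still match, a property that cryptographic hashes lack.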
These are only some of the approaches currently being considered, developed and implemented in the fight against image-based abuse. The only way this abuse can be curtailed is by considering safety as a priority and not an afterthought.