Photo by Cash Macanaya on Unsplash
Over the last couple of months, advertisements for deepfake kissing apps have become increasingly frequent on social media platforms. These apps boast about being easy to use and claim to let you “kiss anyone you want - no consent required,” in a matter of minutes.
This is part of a larger trend of Generative AI apps that enable creating sexualised and intimate content of people without their consent. Apps that “nudify” anyone have also been on the rise, contributing to an increase in sextortion cases. Aside from being creepy and unethical, such videos and deepfakes are outright dangerous in many cultural contexts. In countries like Pakistan, where users like myself have seen these apps advertised regularly, and where gender interactions are still quite conservative, women who fall victim to these deepfakes can face severe consequences in society and within their homes.
It’s critical to question not only the existence of these apps, but also the fact that Facebook allows them to advertise on its platforms while Google lets them remain on the Play Store to be downloaded. Experts have warned about the ethical concerns around Generative AI and the commercialisation of a technology that can have serious consequences for people.
Clara Lin Hawking, an AI governance expert from Spain, points out that research shows gendered targeting in the way these apps are marketed. “What we are seeing in terms of trends is by far people who are most exposed to promotions of these applications are young males, and by far 90% of the victims used in these apps are girls and young women - so we clearly see this is engagement promoted towards young males,” she says, adding, “The consequences are very severe because the truth of the matter is once something is created or put online, even if the victim manages to get it removed once, it’s online for the rest of their life and impacts a person’s ability to be safe.”
Emma Pickering, Head of the Tech and Economic Abuse Strategy programme at Refuge, a UK-based domestic abuse charity, shares that she has seen how abusers use these apps to easily create deepfakes to manipulate and extort their victims, and to pressure them into coming back. She points out that these apps are also becoming common amongst young children in schools.
“It’s dehumanising [to] women, we’re hearing young boys in schools and primary schools saying ‘women are just objects’, ‘I want women to be eradicated’,” she shares, adding that the impact on girls is very different.
Emma says that young girls are starting to hesitate to engage in public spaces because they have seen how their visibility in public can lead to these kinds of harm. She asks, “So what are we creating as a landscape of the employment market for young people in the next 20 years?”
Research shows that online gender-based violence has a direct impact on women and girls’ public and social lives as much as on their personal wellbeing. There are accounts of women cutting off contact with the outside world out of fear of being subjected to violence, or out of concern that the violence will perpetuate further. So the question Emma asks is a critical one: these apps are not only limiting girls’ current access to the world, they are also jeopardising their future.
Dr. Dominic Lees, an academic and researcher who focuses on GenAI and its impact on the screen industries, echoes Emma’s observation and says the majority of this technology is used by men against women.
He says, “Generative AI technology online is unfortunately an unregulated space. How we make ethics work in that space is much more difficult, especially due to the reluctance of platforms to bring in any formal guidelines or restrictions when it comes to governing these tools on how to make this content.” Emma agrees, adding, “The problem is [that this tech is] evolving faster than governments and regulatory bodies can keep up with it. It’s also been evolving at such a pace that by the time we have a new piece of legislation, the technologies have changed so it’s outdated.”
With little to no policy existing across the globe in this regard, the people most at risk of falling victim to deepfakes are often women, gender minorities, and vulnerable communities, who may not have the resources or knowledge needed to pursue such matters with the police or in the courts.
Nicola Cain, lawyer and CEO of Handley Gill Limited, which offers legal advice and consultancy on data protection, including AI, says that GenAI apps do not account for the nuances of the consent required to use someone’s photos. She adds, “Part of the problem is you’ve got these generative AI apps and they’ll ask for permission, but for example even if you have permission to have a photo of someone, you don’t necessarily have permission to modify them.”
Laura Haaber Ihle is an AI ethicist and the vice president of the Abu-Dhabi Responsible AI Foundation – a collaboration between Microsoft, G42, and Mohamed bin Zayed University of Artificial Intelligence to promote ethical AI standards and governance. She points out that one of the reasons regulation in this space is still so slow is that “people who are designing [these laws] are not the people who are the most vulnerable,” adding that the laws don’t take into account the risks posed to those vulnerable groups either. So not only are the regulations inadequate, they are also disconnected from the critical needs of those who are most affected.
In the context of more conservative countries like Pakistan, or even India or Saudi Arabia, where there are still many taboos and much control around women’s behaviour and clothing, a deepfake of a woman in a compromising situation could lead to consequences as extreme as death.
This is why it’s important to question not only the existence of these apps, but also why they are allowed to market their services on global platforms when their impact is life-threatening.
Laura agrees, adding that the specific needs of local contexts mean international policies are not the most effective way forward. What she does believe will help “is to have global standards, and technical standards, so you can measure up.”
She also thinks that there is potential for a better future, given the progress in awareness and interest around ethics she’s seen in this space.
She says that initially people were not interested in hearing about AI ethics, calling it “very soft and mushy” and dismissing it as irrelevant in the tech industry. Laura has seen this narrative shift in the past few years. “Now there’s an expectation that you have responsible AI and you have guidelines and ethics.”
While there’s still a lot of work to be done, and these issues rarely have simple answers, what is important and urgent is to put greater pressure on the companies making and enabling this technology, and on the governments with the power to regulate these spaces, and to demand better.