
“I felt numb – not sure what to do. How did deepfake images of me end up on a porn site?” This was Helen Mort's first reaction when the British novelist discovered that AI-generated images of her had been posted on a pornographic website. She told The Guardian that the perpetrator had collected pictures taken during family occasions, weddings, pregnancy, and professional settings, and used them to create fake images of her engaged in violent sexual acts. This is referred to as a deepfake, meaning: “the manipulation of existing media (image, video and/or audio) or generation of new (synthetic) media using Deep Learning-based approaches.” Mort wrote in The Guardian, “I hoped the law would somehow safeguard me,” but at the time of the aggression in 2020, regulation of new technologies lagged behind, and she was told by the police that there was nothing they could do.

Rebecca Scheffler* had a similar experience in 2017, when she became a victim of doxing, as reported in Vox. Doxing (or doxxing) consists of publishing a person’s personal information such as their name, phone number, address, workplace, or financial details without the person’s consent. Scheffler's personal details were repeatedly published as sex ads on Craigslist. The consequence: a wave of threatening and degrading messages, assault threats, and unsolicited pictures of genitalia. The victim received messages like, “I saw your ad on Craigslist looking for oral. I'm available tonight,” and “I want you to rape me.” After months of harassment, she feared for her safety and contacted the local police department. She received sympathy, but like Mort, the local authorities informed her that they could not take any action as the perpetrator remained unknown, and she had not been physically threatened.

What links these stories together? Mort and Scheffler were victims of different crimes, but both suffered digital harms so new, and with dangers so poorly understood, that they were not captured by existing legal frameworks. They are far from alone in their experiences: one in ten women in Europe has been a victim of cyber violence by the age of 15. The cases of these women raise the question of how to address new forms of technology-facilitated gender-based violence (tfGBV). How can we be sure that legal frameworks keep pace with these new forms of aggression?

Regulating new digital dangers is challenging, especially when those dangers were unforeseeable at the time a policy was drafted. There are two possible solutions to this dilemma. One view advocates applying existing policies that are formulated broadly enough to be extended when required. The other calls for introducing policies that specifically target new forms of technology-facilitated violence. So far, however, this discussion has not been approached from a victim-centred perspective, which places the “rights, needs and concerns” of victims at the heart of policy formulation. I analyse the effects of these two approaches on victims of new forms of technology-facilitated violence from a victim-centred perspective, focusing on women's experiences. To do so, I examine the texts that shape these policies: the Istanbul Convention, which exemplifies the broad approach, and the General Recommendation No. 1 on the digital dimension of violence against women (GREVIO-GT-DD) together with the debated Directive on Combating Violence against Women and Domestic Violence, which adopt a specific approach. These texts represent milestones in Europe's fight against violence against women, and serve as models in other regions as well.

Broad Policy Formulation and its Impact on Victims of Emerging Forms of TFGBV

The first approach suggests that broadly formulated policy can be applied to new scenarios to solve the issues that come with evolving realities. The original draft of the Istanbul Convention in 2011 did not include any references to technology-facilitated violence against women. However, the broad formulation was argued to apply to new forms of gender-based violence, such as technological manifestations, without the need for updates. Helen Mort and Rebecca Scheffler's cases demonstrate that this is not entirely accurate.

The national implementation of the Istanbul Convention reveals that it only addressed certain forms of technology-facilitated violence against women. In effect, the Convention applied to digital violence related to offline forms of violence explicitly mentioned in the text. Cyberstalking, online hate speech and child pornography have offline equivalents. Therefore, they are tangible despite the new use of technology as a medium. The reason for this is that the same behaviours that are deemed unacceptable in person are also considered unacceptable online. The terms for these forms of technology-facilitated violence against women are even based on their offline counterparts: cyber + stalking. Despite its broadness, the original Convention's applicability was limited to forms of technology-facilitated violence that could be understood through existing offline manifestations of violence. 

Mort, who was a victim of deepfakes, and Scheffler, who was doxxed, were subjected to new forms of aggression carried out primarily online. These aggressions lack a clear offline equivalent, making them intangible and difficult to grasp within the original broadness of the Istanbul Convention. Mort’s personal account reflects this: “Most people hadn’t really heard of deepfakes before and I was too exhausted to keep explaining.” Although the Convention's phrasing was broad, none of the ratifying states mention deepfakes in their reports. Mort's case would consequently fail to be considered.

Similarly, among the 49 state reports there is only one mention of the aggression Scheffler suffered: doxxing. Even that report did not use the term “doxxing” to describe the aggression, highlighting the lack of awareness of this relatively new form of violence. Broad policy formulation is thus no guarantee of broad applicability. Under the broadly worded Convention, Mort and Scheffler would experience secondary victimisation: first as victims of the crime itself, and then as victims of the failure of policies to address intangible and new forms of technology-facilitated violence.

In addition, because the original Convention applies only to tangible forms of technology-facilitated violence, it avoids addressing the issue of responsibility. Both Mort and Scheffler were attacked by a perpetrator. Unlike gender-based violence in the physical world, however, these aggressions involved a third party: the platforms. In Mort's case, it was the porn site where her fake photos were shared; for Scheffler, it was Craigslist, where her personal information was published. The responsibility of both platforms is not adequately addressed by the Convention due to the vagueness of its broad wording.

Using broad language does not necessarily lead to effective wider application. The absence of explicit references to technology-facilitated violence leads to the neglect of new forms of abuse that cannot be understood through physical equivalents. The legal framework of the original Istanbul Convention does not cover deepfakes and doxxing, nor does it address the issue of platforms’ responsibility. Under this approach, victims of new digital aggressions, like Mort and Scheffler, are too often abandoned to their fate. However, would a specific policy approach benefit victims of new forms of violence?

Degrees of Policy Specificity and its Accountability for (Future) Victims

The counter approach to broad policy formulation is the introduction of specific policies to tackle technology-facilitated violence. The Istanbul Convention was extended in 2021 to include digital forms of violence against women through the “General Recommendation No. 1 on the Digital Dimension of Violence against Women (GREVIO-GT-DD)”. This extension exemplifies the specific policy approach to tackling emerging forms of technology-facilitated violence. The Directive on Combating Violence against Women and Domestic Violence also addresses cyber manifestations in a specific way.

Contrary to the original Istanbul Convention, GREVIO-GT-DD provides a specific definition of technology-facilitated violence. Despite the specific approach, however, the definition is composed of broad components: all harmful activities carried out online or through information technologies against women and girls. This construction ensures that future forms of technology-facilitated violence are included without the need for updates. However novel the crimes experienced by Mort and Scheffler, GREVIO's definition is comprehensive enough to account for deepfakes and doxxing. Additionally, the GREVIO-GT-DD Glossary names both forms of crime, showing the benefits of specificity in tackling new forms of technology-facilitated violence.

GREVIO-GT-DD also addresses the inherent problems of regulating technology-facilitated violence against women by recognising the responsibility of platforms. The text outlines various measures for governments to enforce on social media platforms. These include content moderation, providing accessible legal information on the illegality of non-consensual image sharing, and avoiding gender biases in technological design. Under this policy design, Mort and Scheffler could have requested the platforms’ assistance in removing harmful content, including footage, ads, messages, and comments. The specific policy approach thus acknowledges the challenges unique to technology-facilitated violence.

The benefits of the specific approach highlight the need to shift the focus of the debate. We should discuss how specific policies should be, rather than whether to adopt a broad or a specific approach. Defining the required level of specificity is crucial for addressing current technological risks while also accounting for as-yet-unknown ones, such as the aggressions experienced by Mort and Scheffler. In other words, how should we define the activities, tools and consequences covered by the term technology-facilitated violence to ensure justice for Helen Mort, Rebecca Scheffler and other victims?

The negotiations preceding the adoption of the EU Directive on Combating Violence against Women and Domestic Violence show how specificity can harm victims of new forms of tfGBV. During the negotiations, the Council proposed criminalising technology-facilitated violence only if it is likely to cause “serious harm”. But what is serious harm? How harmful does an aggression have to be for it to count as ‘serious harm’? The formulation ultimately calls the harmfulness of technology-facilitated violence into question. This is part of the broader problem of such violence being perceived as “not real”, which forces victims to “prove” the harmful consequences of the aggressions they suffered.

As Mort reports: “It was also difficult to explain how photographs that weren’t “real” could have had such an impact on me. But what is a “real” image?” The writer explains how she relied on a combination of therapy and medication to overcome the anxiety caused by the deepfakes. Rebecca Scheffler likewise describes the distress she felt as a result of doxxing: “Craigslist ads weren’t just words on a screen,” she states. One of the sex ads described her as a “weird-looking virgin and mediocre writer with a rape fantasy.” The messages that followed it forced her to relive the trauma of a prior sexual assault. Tragically, Mort and Scheffler are not the exception but the rule. The growing body of psychological research on the impact of technology-facilitated violence demonstrates a link between cyber aggression and self-harm, depression, anxiety and even suicide among victims. The two women’s stories, along with countless other recorded instances, show that the harmfulness of technology-facilitated violence is not up for debate. These aggressions may occur online, but their consequences are more than real.

Other instances of specific wording have been problematic, especially from a victim's perspective. Feminist groups, such as the European Women’s Lobby, have argued that the Directive should refer to “other end-users” rather than “several end-users” or “the public”, since the former is more inclusive. To illustrate: if Mort's deepfake pornographic content had been shared “only” with one person in a private chat, the offence might not have been covered by the narrower wording. This is not the only discussion in which Mort's case demonstrates the importance of language. The focus on “sexually explicit material” and “sexually explicit activity” would help make the case that the non-consensual pornographic footage of Mort was a criminal offence. However, if the deepfakes had “only” shown Mort naked, not engaged in sexual activity, the aggression might not be captured. Feminist groups have therefore been advocating the more comprehensive wording: “intimate material”.

The final debate concerns which forms of technology-facilitated violence should be included in the Directive. The Parliament has proposed including cyberflashing as an offence: sending an unsolicited image of one's naked body, especially the genitals, to another person. In response to the non-consensual sex advertisements, Scheffler recounts the distressing experience of receiving an unsolicited picture of the penis of a man her father’s age. Excluding forms of technology-facilitated violence such as cyberflashing at the time of writing the Directive would render it outdated from the outset.

The GREVIO-GT-DD and the Directive on Combating Violence against Women and Domestic Violence illustrate the trade-offs between different levels of specificity for the protection of current and future victims. It is important to strike a balance in the level of specificity. The cases of Helen Mort and Rebecca Scheffler highlight the need for a policy approach that outlines the means, activities, and consequences of technology-facilitated aggression while leaving room in the formulation for unknown risks, in order to ensure justice for victims of new crimes. The GREVIO-GT-DD is an example of a specific approach that retains the degree of openness needed to take new forms of violence into account.

Conclusion

Mort and Scheffler were victims of technology-facilitated aggression with terrible consequences for their wellbeing. The legal frameworks in place at the time were not equipped to handle these new forms of violence, leaving the women without justice. Regulating the risks of new technologies in a timely manner is challenging. Advocates of both broad and specific policy approaches claim to have the solution to this problem. However, they do not always consider the experience of victims.

These cases also shed new light on policy developments at the EU level. Both stories demonstrate how broad policy formulation fails to address new forms of technology-facilitated violence that are intangible under our current understanding of violence. They therefore highlight the importance of determining the degree of specificity that will ensure comprehensive and sustainable policies providing justice for victims of new and future forms of technology-facilitated violence. The importance of compositional definitions that leave room for future technology-facilitated aggressions is highlighted by the GREVIO-GT-DD and the debated Directive on Combating Violence against Women and Domestic Violence. It is therefore crucial to consider the limitations of the language used to define the medium, activities, and consequences when criminalising technology-facilitated violence. Such an approach accounts for the potential risks that new technologies may pose to women and gender-diverse people, and aims to prevent their secondary victimisation.
