Facebook, Google, TikTok and Twitter released their commitments to tackle online abuse and improve women’s safety on their platforms at the UN Generation Equality Forum in Paris.
Amid a global context of rampant gender-based violence on social media platforms, the commitments represent a concrete and necessary step for every tech company to meet its responsibilities to respect human rights in the context of gender-based violence and abuse against women on its platform.
1- Women’s experiences of gender-based violence and abuse online should not be seen as a series of isolated incidents, but rather treated as part of the wider context of systemic inequalities that are embedded, reflected and reinforced in code. Building better ways for women to use safety tools will not alter the fact that women still face structural challenges to their full participation in virtual spaces where gender-based violence is normalised, amplified and weaponised.
2- Safety tools should focus on affected women and address their specific realities. There cannot be a one-size-fits-all approach to how “help yourself and take control of your tech” plays out in different scenarios. Safety tools need to be contextualised and intersectional, and as inclusive as possible of users' needs that are varied and shaped by their race, culture, language, gender, disability and other identities.
3- Entrenched gender bias in algorithms puts non-binary people at greater risk of abuse and marginalisation. Therefore, experiences of people of diverse genders and sexualities cannot be treated as a monolith with those of cis women.
4- The multifaceted nature of gender-based violence and abuse online makes it hard for women to identify threats, which can include physical or sexual violence targeting one or more aspects of a woman’s identity.
5- No particular attention is given to the cases of women human rights defenders (WHRDs), whose work is heavily reliant on technological tools and infrastructures. Notably, WHRDs recently issued a list of demands calling on social media companies to develop and implement policies to end stigma against WHRDs and to ensure that rights to privacy are respected through measures that are necessary and proportionate.
6- Testing is good, but according to Amnesty’s Twitter Report Card, these controls have not been successful in the past. Testing should be done widely, in multiple languages, on multiple devices, and in different realities and contexts with different women.
7- Transparency reporting mechanisms need to be further disaggregated by category and include additional information, such as the average time it takes moderators to respond to reports of abuse on the platform and the number of moderators employed per region and language.
Although Facebook, Google, TikTok and Twitter have brought welcome improvements to their platforms, we do not anticipate any structural change. Privacy and security remain major concerns for women on social media platforms.
Written by: Marwa Azelmat (APC), for further information reach out to email@example.com
Debarati Das, Point of View (POV)
Erika Smith (APC)
Erin Williams, Global Fund for Women (GFW)