The rapid integration of artificial intelligence (AI) across economic, social, political and defence systems has caused significant harm that the structures governing technology have failed to contain. For those of us committed to an intersectional feminist vision of the internet, it is clear that AI, as it exists today, does not simply reflect the world’s inequalities but exacerbates them. AI, far from being neutral, is built upon, shaped by, and reinforces the same structures of racism, sexism, colonialism, ableism, and economic injustice that intersectional feminist advocacy seeks to dismantle.
Digital technologies have always mirrored societal power imbalances. AI intensifies this reality by replicating biases at scale and embedding them seamlessly into decision-making processes that profoundly impact marginalised communities. As Dr. Joy Buolamwini and Dr. Timnit Gebru’s research, Gender Shades, demonstrates, facial recognition systems have far higher error rates for women with darker skin tones than for lighter-skinned men, a fact that is not incidental but reflective of who is included and excluded in training datasets and design processes. Safiya Umoja Noble, in her book Algorithms of Oppression, further discusses how search engines themselves operationalise racism and sexism through the ways they organise information. She highlights that data discrimination against people of colour, notably Black women, is an inherent issue with search engines, a bias that is then carried into the datasets used to train AI tools and systems.
Additionally, AI-driven systems amplify forms of surveillance that disproportionately target marginalised groups. A 2021 report by ARTICLE 19, Emotional Entanglement, shows how China’s AI-powered social credit system and other surveillance mechanisms disproportionately impact ethnic minorities such as Uyghurs, making the authoritarian potential of AI all the more visible. Similarly, in the context of conflicts, predictive policing and AI-driven data analytics have been used by military forces to identify who to attack, as has been the case in the ongoing Israeli genocide in Palestine. Under autocratic governments, these mechanisms have been used against protesters advocating for social justice and human rights. These AI tools do not exist in a vacuum; they are strategically funded, skillfully developed with a purpose, and wilfully deployed by governments and corporations without meaningful accountability to the people they affect. The 2025 Paris AI Summit reflected this partnership, as tech CEOs called for soft laws and self-regulation of the AI industry, asking governments and the public to trust them – a proposal state leaders at the conference were quick to align with.
As feminist digital rights advocates, it is crucial to recognise that AI development and deployment today occur within a global capitalist framework that privileges profit over rights, opacity over transparency, and efficiency over equity. Ethiopian cognitive scientist Abeba Birhane argues that AI systems are not just biased because of flawed data; they are products of social ecosystems that valorise efficiency, predictability, power and control over care, relationality, and justice. She reminds us that a pattern in algorithmic injustice and AI fairness research suggests that “the more you are at the bottom of the intersectional level and further away from [the categorisation of] stereotypical white cisgendered male, the bigger the impacts are on you – whether it’s classification or categorisation or scored by any [algorithmic systems].” Therefore, demands for "ethical AI" that simply propose technical fixes miss the deeper structural problems. Without fundamentally transforming the conditions under which AI is designed and governed, efforts to "fix" bias merely sustain the status quo.
Further, the language of “ethics” in AI in particular, and technology in general, often becomes a tool for corporate self-regulation rather than true accountability. Corporate-led ethics and oversight boards often lack real power and fail to include voices from the margins. These ethics boards are performative, typically advisory in nature with no enforcement power, and remain structurally weak by design. As AI tools are rapidly developed within the same profit-driven companies, ethics are routinely sidelined, leaving ethics board members unable to meaningfully intervene or keep up with the pace of development. Feminist researchers Joy Buolamwini and Inioluwa Deborah Raji have exposed how tech companies suppress critical research when it threatens to unveil systemic issues of racism and exploitation within AI systems. Feminist interventions must move beyond ethics to demand justice, transparency, community accountability, and redistribution of power.
One core feminist critique is that AI, in its current form, imagines intelligence and knowledge in narrow, Western, capitalist terms, devaluing or appropriating the plethora of work done by and in the global Majority world based on community knowledge, indigenous epistemologies, and relational forms of intelligence. The pursuit of "smarter" AI often rests on violent extractive processes: the mining of data from marginalised communities without consent, the ecological destruction from AI's resource-heavy models, and the extraction of cheap labour, often from women and marginalised, underpaid and overworked workers, to train and moderate AI systems. For example, technology supply chains rely heavily on precarious workers from the global South, revealing the colonial continuities embedded within AI production.
A feminist digital rights approach to AI must start by accepting the reality that AI as currently developed is not neutral, and is not automatically beneficial. This acceptance must also be coupled with questioning whether its development can be altered to eliminate harm and biases. The pursuit of feminist AI, and whether it is a possibility, must be grounded in a recognition of the systemic harms AI causes, and a refusal to merely seek inclusion into unjust systems. It must aim for transformation rather than assimilation.
A feminist AI vision would center gender justice, digital rights, environmental justice, labour justice, knowledge recognition, and the lived realities of those most affected by technological harms. It would involve:
- Examining what AI actually entails and how it impacts diverse communities in reality, challenging its assumed neutrality and its framing as inevitable, and exposing the biases integrated into the technology itself;
- Rejecting AI applications that violate fundamental rights, including those that facilitate dehumanisation, surveillance, profiling, and predictive policing, as well as enable violence, manipulation and control;
- Demanding meaningful consent and agency over personal and community data, opposing exploitative data extraction practices;
- Building models of AI governance that prioritise collective and participatory development, ownership, transparency, and accountability;
- Enabling resistance to the unchecked automation of processes and tasks by technologies imposed without assessing their human impact, while fostering spaces for collective dialogue, dissent, transparency, and accountable decision-making;
- Promoting AI research and development led by communities in the global South, indigenous communities, LGBTQIA+ groups, people with disabilities, and other marginalised groups, ensuring that technological futures are not monopolised by a privileged few. Researcher Sasha Costanza-Chock, in their work on design justice, emphasises the need to involve disabled people in designing the systems developed for them: “[A] lesson from disability activism is that involving members of the community that is most directly affected by a design process is crucial, both because justice demands it and also because the tacit and experiential knowledge of community members is sure to produce ideas, approaches, and innovations that a nonmember of the community would be extremely unlikely to come up with.”
- Ensuring that the material impacts of AI production, including environmental degradation and labour exploitation, are addressed as feminist and human rights issues;
- Creating spaces for alternative imaginaries of AI, ones that center care, relationality, and collective well-being over competition, domination, and profit.
Such an approach requires radically rethinking power structures within the technology ecosystem. It demands challenging the dominance of a few corporate actors, largely based in the global North, who currently define AI’s trajectories. It also demands regulatory frameworks that move beyond technical audits to enforce human rights-centered standards, shaped by the needs and voices of those historically excluded from technological governance. More critically, any such approach must confront the very infrastructures on which these technologies are built, and work to dismantle the embedded biases and structural harms that remain central to how AI functions today.
Above all, intersectionality and representation must remain central. A feminist AI cannot simply address "gender bias" in isolation from race, class, caste, disability, nationality, geography, and other axes of oppression. As Ruha Benjamin, the Founding Director of the Ida B. Wells JUST Data Lab, notes in her book, Race After Technology, technologies that seem "neutral" often deepen racial and economic disparities precisely because they overlook systemic inequality as a core feature to dismantle.
Given the inherent structures of power embedded in AI, feminist engagement with AI must be strategic. It must acknowledge that not all AI is salvageable or reformable. In some cases, abolition, i.e. the refusal to develop or deploy certain forms of AI, may be the most feminist response. In others, feminist actors may choose to engage critically, demanding greater accountability, transparency, and redistribution of power as necessary steps toward more just technological futures.
As feminists and gender justice advocates, it is important to recognise that AI is not an autonomous force but a deeply political construction. As such, we must commit to challenging the power asymmetries and exploitation that AI reinforces, advocating for AI systems that center gender justice and human rights, and amplifying the voices and expertise of feminist movements globally without being extractive. This includes supporting community-led alternatives, pushing for stronger international and national regulations, and engaging critically with the reality that technological innovation, without justice, only deepens inequality.
A feminist approach to AI is not simply about "fixing bias" or "adding diversity." It must be about reimagining the very foundations of how technologies are designed, governed, and deployed. It must be about centering care over control, justice over efficiency, and collective power over corporate monopoly. For a truly feminist internet, we must remain firmly committed to the belief that technology should serve as a tool of liberation, not perpetuate the systems of oppression we are fighting to dismantle.
--
Relevant Reading:
- Feminist Principles of the Internet – https://feministinternet.org/
- Interview with DJ Outvertigo: Big Tech almost demands our silence in exchange for the use of their services – Take Back the Tech
- Research: Gender Shades – Joy Buolamwini & Timnit Gebru
- Algorithms of Oppression: How Search Engines Reinforce Racism – Safiya Umoja Noble
- Research: Emotional Entanglement: China’s emotion recognition market and its implications for human rights – Article 19
- Algorithmic Injustices and Relational Ethics with Abeba Birhane
- Design Justice: Community-Led Practices to Build the Worlds We Need – Sasha Costanza-Chock
- Race After Technology: Abolitionist Tools for the New Jim Code – Ruha Benjamin