Photo by Dibakar Roy on Unsplash
[This call has now been closed.]
Identity-based disinformation in India is a rapidly growing concern, with harmful impacts on the safety, dignity, and participation of marginalised communities in public life. While gendered disinformation disproportionately targets women and gender-diverse individuals on the basis of their gender and sexuality, seeking to delegitimise their voices, credibility, and participation, it is part of a broader pattern of disinformation efforts targeting people on the basis of various identity markers including religion, caste, ethnicity, and migration or refugee status.
In India, heightened political polarisation, religious majoritarianism, entrenched caste hierarchies, and patriarchal norms intersect to create an ecosystem where gender and other identity-based disinformation flourishes. Social media platforms and encrypted private messaging services have become powerful amplifiers of such content: false, misleading, or decontextualised narratives are rapidly circulated to reinforce existing power structures and social divisions, while platforms continue to ignore the harms of promoting hateful content.
Across all of these identity markers, prominent individuals such as activists, journalists, community leaders, political actors and academics often bear the brunt of coordinated disinformation attacks. But increasingly, ordinary people who speak out against inequality or injustice also find themselves vilified through edited videos, impersonation accounts, and targeted smear campaigns.
These campaigns are designed to exploit cultural stereotypes, inflame public sentiment, and manufacture moral outrage. Algorithms and engagement-driven content models on digital platforms often reward the most polarising or sensational content, making it easier for such disinformation to go viral, which ultimately generates more profit for the platforms themselves. Private messaging services, with little to no content moderation, serve as closed ecosystems where dangerous narratives can be seeded and amplified without scrutiny.
In light of this, we would like to develop a multi-format series that explores the experiences of women, LGBTQIA+ persons and others who are targeted by disinformation on the basis of their identity. The series hopes to capture how gender and other forms of identity-based disinformation are understood and experienced; identify the underlying causes, motivations and actors responsible; examine the role of technology and tech companies in its spread; and explore counter-measures that would be meaningful within the Indian context, in order to raise awareness, share measures and tools to protect those targeted, and pin accountability on those responsible.
These stories can be narrative pieces that explore the experiences of communities, offering on-the-ground realities alongside the contributor's commentary; or they can be researched pieces that combine interviews and surveys with desk research to form the story and present findings.
Related terms:
Gendered disinformation; political instability; platform accountability; deepfakes and GenAI; online violence; freedom of speech; privacy; digital rights
Format:
This is a multi-format series to be published in English, and we are not bound by format as long as the contributor is able to communicate their story comfortably. Some of the formats that GenderIT publishes, and is inviting contributors to think around, are:
- Short form articles: 1000-1200 words
- Long form articles: 1500-2000 words
- Poems
- Short videos: 10 minutes max
- Audio conversations: 30 minutes max
- Illustrations, drawings and comics: 6 frames
Compensation:
We pay 0.30 USD per word for written stories and 150 USD per frame for visual pieces, whereas a flat rate of 500 USD is paid for video and audio submissions.
How to Pitch:
Here are the key elements to include in your pitch, which should not exceed 200 words:
- Your introduction – What do you do professionally? Where are you from? How is this issue relevant to you?
- Central idea of the story – one paragraph explaining what your story is about, how you plan to frame it, who it is about, who you will interview (if anyone), and how your angle differs from the existing reporting and coverage on the topic;
- Why are you best suited to write about this topic?
We highly recommend checking genderit.org to determine whether the topic you are pitching has already been covered on the website. To avoid duplicating the knowledge on the platform, we reserve the right to reject a pitch if there are already multiple stories on the topic with similar angles.
Send your pitch via email with the subject line “PITCH: Gendered Disinformation in India”, to genderit@apcwomen.org, no later than Thursday, April 24, 2025.
Preference will be given to people working with women, LGBTQIA+ community members, and those who live on the margins of Indian society – including people marginalised on the basis of religion, caste, ethnicity, refugee status and other oppressed identities.
Use of AI:
GenderIT discourages the use of AI in brainstorming, drafting, and editing stories for its platform. We have formed this approach after carefully considering the fact that mainstream AI tools are trained by appropriating and stealing years' worth of work by feminists, academics, researchers, journalists, writers, activists, artists and others – without consent, recognition, reparations or remuneration – while these tools and their parent corporations consistently profit from that labour. In addition, AI tools are notorious for making factual errors and reinforcing harmful biases, which can undermine the credibility of the work that contributors produce and GenderIT publishes.
However, we are also mindful that many people with disabilities may require the assistance of AI tools when drafting their contributions for GenderIT. In such cases, we encourage contributors to inform us of the use and its intent so that we can ensure known errors and biases are not reflected in the final work.