Archismita Choudhury

Gendered disinformation is a form of online gender-based violence (OGBV): the use of false or misleading narratives, spread with malign intent, that target women and queer persons and make their gender the focus of attack. Shaming, intimidation and threats of violence are three common strategies within it, and they often overlap with each other.[1]

Put simply, gendered disinformation is a strategy to silence women and queer folks that “uses false or misleading gender and sex-based narratives against women, often with some degree of coordination”,[2] and aims to stop women and queer human rights defenders from taking part in digital publics.

#ShePersisted, an organisation that works to address gendered disinformation against women in politics, published a report series in 2023 called "Monetizing Misogyny". The research shows that gendered disinformation is being employed by malign actors under the pretext of defending what is often framed as “traditional values”.

These tactics are often linked to anti-gender narratives in many countries, and the research series underlines that the prevalence of gendered disinformation must be treated as an early warning sign of backsliding on progress made around women’s rights and democratic principles.

According to Kristina Wilfore, a global democracy activist and co-founder of #ShePersisted, “gendered disinformation campaigns build on, and are rooted in, deeply set misogynistic frameworks and gender biases that portray masculine characteristics as those fit for leadership while painting women leaders as inherently untrustworthy (insinuating a woman is dishonest or not trustable is a tried and true attack), unqualified (one of the biggest barriers women face when seeking office), unintelligent (tropes about women as dumb and unfit for the job are a prominent feature of gendered disinformation, made worse with objectifying sexualised content), and unlikable (which for women can be the death knell of their campaign).”[3]

In a personal interview, Indian parliamentarian Priyanka Chaturvedi told #ShePersisted: “as the world and the operations turned to digital space, the incidents and the severity of hate attacks against women, particularly women in politics, journalists and feminist activists, increased. As more women took to social media platforms to raise concerns, the level of abuse and insults shifted drastically.”

But the impact of gendered disinformation goes far beyond individual women’s mental health: it drastically undermines the free and fair functioning of democracy and weakens the rights of women and gender-marginalised persons.

"Monetizing Misogyny" also referred to the ways in which state-led gendered disinformation campaigns have been used to silence and deter women in politics from speaking out all over the world, “in some cases with the aid of a “cyber militia,” in addition to ordinary internet users.

In this environment, as Big Tech and social media platforms took over the internet on the promise of connection, it has been argued that “the way the major digital platforms are designed is largely responsible for the current hellscape experienced by women online.”[4]

It can’t be denied that, as things stand today, digital platforms have failed both to protect their users and to be the democratising force they once promised to be. Recommender systems built to maximise attention boost and amplify harmful narratives, while sharing features help gendered disinformation spread widely and rapidly.

By using the tools digital platforms provide to artificially simulate topic momentum and engage in coordinated sharing, gendered disinformation can easily be scaled up, serving commercial interests at the expense of democracy and human rights. Platforms regularly fail to act, continuing to let algorithms make false and malicious content go viral for profit.

“The approach adopted by the major social media platforms to address this problem - like doubling down on “notice and take down models” of content moderation and automation - have proven to be grossly inadequate, with dire consequences for many democracies around the world,” explains "Monetizing Misogyny".

Marwa Fatafta told #ShePersisted in a personal interview, “Evidence surfaced last year that Facebook/Meta internal guidelines have allowed some authoritarian world leaders to use social media to “deceive the public or harass opponents” despite being alerted to evidence of the wrongdoing. That same year, an internal document was leaked, revealing that the company dedicated 87% of its resources towards addressing disinformation to the U.S., despite a majority of its users being outside of the country, and that safeguarding elections had been deprioritised everywhere. In our countries of study, we saw how platforms have repeatedly failed to act in tackling abusive and disinformation content against women political leaders. In India, for example, there have been claims that both Facebook and Twitter have provided preferential treatment for government officials, providing them private content or allowing them to violate their terms of service.”

We can see clearly how digital platforms neglect issues of misogyny and racism on their platforms and in their responses, as well as users in non-English-speaking countries in the Global South.

So what might be the solution?
Although there is no “silver bullet” solution for gendered disinformation, there are a few practices we can consider adopting:

  • Initiating and advancing targeted, independent research that is not funded by Big Tech.
  • A comprehensive approach to reframing legal frameworks with a focus on transparency — of content moderation, algorithms, and takedown requests.
  • Prioritising a ‘duty of care’ for social media companies with respect to the harm that is caused by their services.