Sexism and intersectionality

Cybersexism

Digital violence does not begin and end on the internet, and it is important to recognise its gender-specific dimension. Digital gender-based violence is a continuation and a mirror of pre-existing sexist violence: prejudice and marginalisation spill over into the digital world as well. Just as sexist prejudice in our off-screen lives produces rape myths and victim blaming, the same happens with sexualised violence on the internet. Cybersexism is therefore just that: sexism. As a result, combating digital violence also means combating sexism, and treating it as an internet-only phenomenon is a misconception that does not help.

Multiple discrimination and digital violence

Various forms of discrimination tend to overlap. Women and trans people who are discriminated against offline due to their (perceived) identity characteristics also experience violence online based on those same characteristics. At the same time, a person's sexual orientation, religion, disability or origin also influences the extent to which they experience violence and discrimination. Haters, i.e. attackers who engage in online hate speech, deliberately target aspects of people's identities in order to hurt them.

Research shows that people who are discriminated against on multiple grounds are also attacked online more often, and in different ways.

In 2024, the Competence Network Against Hate on the Internet (Kompetenznetzwerk gegen Hass im Netz) published data on violence in the digital sphere in Germany, based on a representative survey entitled "Loud Hate, Silent Withdrawal". Online violence can affect anyone, but it does not affect everyone in the same way. According to the survey, people of visible immigrant origin (30%), young women (30%) and people with a homosexual (28%) or bisexual (36%) orientation are particularly frequently affected.


Das NETTZ, the Professional Association for Media Education, Media Literacy and Communication Culture, HateAid and Neue deutsche Medienmacher*innen, as part of the Competence Network Against Hate on the Internet (eds.) (2024): Loud Hate – Silent Withdrawal. How online hate threatens democratic discourse. Results of a representative survey. Berlin.

In its study "Toxic Twitter - A Toxic Place for Women", Amnesty International also analysed and documented digital violence, using Twitter as an example. The quotes from the experts interviewed highlight how inseparable and interwoven the various levels of digital violence are.

I’ve never had abuse only because I’m a woman – it almost always had to do with my race.

Charlie Brinkhurst-Cuff, British journalist

Women who have experienced racism, trans women, non-binary people and women with disabilities are frequently attacked on Twitter on several levels at once. They experience digital violence and discrimination not only because of their origin or disability; they also experience it in many different forms and in a particular quantity and quality. This makes it more difficult to deal with these experiences of violence.

I am from a Scottish Asian community. I am a Muslim. And I’m a woman. So it’s everything. It has an exponential effect, so people will pile on the abuse for a variety of different reasons. Some of them because you are all of these things, and some because you are one of these things, or two of these things, which makes it so much more difficult to deal with, because you just wonder where do I start with this?

Tasmina Ahmed-Sheikh, former British politician


You can read Amnesty's report here: Toxic Twitter - A Toxic Place for Women

The quotes can be found in the second chapter of the report.

Content warning: the report contains descriptions of violence.

Discriminatory algorithms and "AI"

Even software can be discriminatory. Software often makes decisions that influence our lives, and so-called biases (prejudices or distortions) in the underlying data sets lead algorithms and so-called artificial intelligence to perpetuate and reinforce prejudice and marginalisation. One example is facial recognition software, which is less able to distinguish the faces of Black people or even fails to recognise them as human faces at all. Likewise, job adverts on digital platforms are displayed – or withheld – depending on gender. As a result, women never even get the chance to apply for certain jobs, because they never find out about them.
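To make this mechanism concrete, here is a minimal, deliberately simplified Python sketch. The data, names and threshold are invented for illustration and are not drawn from any of the studies cited above; it shows how a naive hiring "model" that merely learns from past decisions reproduces the bias encoded in those decisions.

```python
# Illustrative sketch only: a toy "hiring model" trained on historically
# biased decisions, which therefore reproduces that bias.
# All records and thresholds are invented for demonstration purposes.

from collections import defaultdict

# Hypothetical historical records: (gender, was_invited_to_interview).
# The past decisions themselves were skewed against women.
historical_records = [
    ("male", True), ("male", True), ("male", True), ("male", False),
    ("female", True), ("female", False), ("female", False), ("female", False),
]

# "Training": estimate the historical invitation rate per group.
invited = defaultdict(int)
total = defaultdict(int)
for gender, was_invited in historical_records:
    total[gender] += 1
    invited[gender] += int(was_invited)

invitation_rate = {g: invited[g] / total[g] for g in total}

def model_recommends_interview(gender: str) -> bool:
    """A naive model: recommend whoever the past data favoured."""
    return invitation_rate[gender] >= 0.5

# The model now systematically disadvantages women,
# simply because the historical data did.
for gender in ("male", "female"):
    print(
        f"{gender}: invite? {model_recommends_interview(gender)} "
        f"(historical rate: {invitation_rate[gender]:.0%})"
    )
```

Real systems are far more complex, but the underlying problem is the same: if the training data encodes discrimination, the software repeats and scales it.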

AI applications likewise reproduce prejudices. Many image generators, for example, depict women in a sexualised way or as young and thin, with unlined faces. All of this is particularly problematic when AI or other software is used in important decisions: in HR departments, in the police force or in asylum procedures, a prejudiced machine can end up deciding over a person's future.