A new study finds that online communities dedicated to hating men exhibit behavioral and language patterns strikingly similar to those of communities dedicated to hating women. The findings suggest that gender-based hate speech is a general feature of harmful digital groups, regardless of the gender of the target. The study was published in the scientific journal Scientific Reports.
Social media networks enable people around the world to share ideas and perspectives on an unprecedented scale. While these platforms can foster community building, they also create environments where discrimination and extremist ideologies thrive. One unintended effect is the formation of echo chambers: closed environments in which users encounter only information and opinions that reflect or reinforce their own.
Anonymity on the Internet often accelerates the formation of these isolated spaces. Within these chambers, hate speech acts as a communication mechanism that uses offensive stereotypes to express ideology. This speech targets individuals based on characteristics such as ethnicity, religion, and gender. Gender-based hate speech includes, among other things, behavior that harasses or degrades people based on whether they are male or female.
Historically, researchers and content moderators have focused on misogyny, hatred of and prejudice against women. Searches of academic databases reveal an extensive body of work examining online misogyny over the past two decades. In contrast, academic attention to misandry, defined as hatred of or prejudice against men, remains sparse. Studies examining misandry only began to appear around 2014, leaving significant gaps in the scientific understanding of digital harassment.
Erica Coppolillo, a researcher at the University of Calabria and the Italian National Research Council, launched a project to address this gap in the literature. Coppolillo sought to determine whether there were systematic differences between communities targeting men and communities targeting women. The aim was to establish whether the nature of the hostility changes depending on the gender of the target. If the behavior remains the same, it suggests that the central issue is the toxicity of extremist online environments rather than specific gender dynamics.
To investigate these questions, this study focused on Reddit. The platform is organized into thousands of individual communities, known as subreddits, dedicated to specific topics. Users interact by sharing posts and commenting on threads, building dense conversational networks. The researchers selected four subreddits known for their extreme views on gender as the basis for their text analysis.
Two of these groups were chosen as examples of misandric communities: a mainstream feminist subreddit discussing women’s issues and a radical feminist subreddit. The latter was banned by the platform in 2020 for violating its hate speech policy. On the misogynistic side, the researchers selected the Men’s Rights subreddit and a group for involuntary celibates. Involuntary celibate communities were also eventually banned for promoting hatred and violence.
The primary data consisted of text posts and comments generated between 2016 and 2022. A rigorous filtering process was applied to ensure that the analysis focused strictly on gender targeting. In the misandric groups, only texts mentioning terms such as man, men, or husband were retained; in the misogynistic groups, texts had to include terms such as woman, women, or wife.
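To make this filtering step concrete, here is a minimal Python sketch of keyword-based selection, assuming simple whole-word matching. The term lists and sample posts are illustrative; the study’s exact lexicons are not reproduced here.

```python
import re

# Hypothetical term lists; the paper's full lexicons are not shown here.
MISANDRY_TERMS = {"man", "men", "husband"}
MISOGYNY_TERMS = {"woman", "women", "wife"}

def mentions_target(text: str, terms: set) -> bool:
    """Return True if the text contains any target term as a whole word."""
    tokens = re.findall(r"[a-z']+", text.lower())
    return any(tok in terms for tok in tokens)

# Toy posts standing in for the 2016-2022 Reddit data.
posts = ["My husband never listens to me.", "The weather is nice today."]

# Keep only texts that explicitly mention the targeted gender.
filtered = [p for p in posts if mentions_target(p, MISANDRY_TERMS)]
print(filtered)  # ['My husband never listens to me.']
```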
The analysis began with a linguistic comparison to identify the vocabulary shaping these conversations. A natural language processing tool cleaned the text by removing punctuation and numbers. The researchers then compared the 20 most frequently used words in each community. The results showed that the most common terms appeared with similar frequencies in all four groups.
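A short sketch of this kind of frequency analysis, assuming a simplified cleaning pipeline and a tiny stopword list rather than the study’s actual tooling:

```python
import re
from collections import Counter

# A tiny stopword list for illustration; a real pipeline would use a fuller one.
STOPWORDS = {"the", "a", "an", "and", "to", "of", "is", "in", "that", "it", "i"}

def top_words(texts, k=20):
    """Lowercase, strip punctuation and digits, drop stopwords, count terms."""
    counts = Counter()
    for text in texts:
        cleaned = re.sub(r"[^a-z\s]", " ", text.lower())
        counts.update(w for w in cleaned.split() if w not in STOPWORDS)
    return counts.most_common(k)

community_posts = ["Men never admit it!", "It is always the men..."]
print(top_words(community_posts, k=5))
```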
There was no clear linguistic boundary separating the groups targeting men from the groups targeting women. The study then measured the toxicity of the content to gauge how offensive these conversations were. Toxicity refers to how rude, disrespectful, or hateful a comment appears to readers. The researchers evaluated the texts using a transformer-based artificial intelligence model.
Transformers are deep learning models that infer the meaning of words from the surrounding sentence context. The model used here was trained on tens of thousands of manually annotated internet posts to learn the nuances of hate speech. It assigned each post and comment a toxicity score on a continuous scale from completely harmless to extremely harmful.
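A minimal sketch of transformer-based toxicity scoring with the Hugging Face transformers library. The article does not name the exact model, so unitary/toxic-bert is an assumption here, chosen only as a publicly available stand-in:

```python
from transformers import pipeline

# Assumed stand-in model; the study's actual classifier may differ.
toxicity = pipeline("text-classification",
                    model="unitary/toxic-bert",
                    top_k=None)  # return a score for every label

def toxicity_score(text: str) -> float:
    """Score a post on a continuous 0-1 scale; higher means more toxic."""
    scores = {r["label"]: r["score"] for r in toxicity(text)[0]}
    return scores["toxic"]

print(toxicity_score("Have a lovely day!"))  # close to 0 (harmless)
```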
The toxicity analysis showed that the majority of content in all four communities was rated as non-toxic. Nearly every community displayed a bimodal pattern, with a large peak of benign text and a smaller peak of highly harmful text. The two misogynistic communities showed slightly higher peaks at the extreme end of the toxicity scale than the misandric groups. Still, the overall toxicity distributions were strikingly similar.
The third stage of the study assessed the specific emotions expressed within the text. The researchers used two different machine learning algorithms that can detect emotions such as sadness, joy, fear, and anger. The analysis focused only on negative emotions: the algorithms evaluated each text to determine whether sadness, anger, fear, or disgust was the dominant emotion.
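Per-post emotion classification might look like the sketch below. The model named here is a publicly available Ekman-style emotion classifier and is an assumption, not necessarily either of the two algorithms used in the study:

```python
from transformers import pipeline

# Assumed stand-in model covering anger, disgust, fear, joy, sadness, etc.
emotion = pipeline("text-classification",
                   model="j-hartmann/emotion-english-distilroberta-base",
                   top_k=None)

NEGATIVE = {"sadness", "anger", "fear", "disgust"}

def dominant_negative_emotion(text: str) -> str:
    """Pick the strongest of the four negative emotions for one post."""
    scores = {r["label"]: r["score"] for r in emotion(text)[0]}
    return max(NEGATIVE, key=lambda label: scores[label])

print(dominant_negative_emotion("I can't believe they did this again."))
```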
When emotions were examined at the level of individual posts, all four communities most frequently expressed disgust, with anger the second most common emotion overall. Men’s rights groups and mainstream feminist groups showed remarkably similar emotional profiles. The involuntary celibate group tilted slightly toward sadness, while the radical feminist group tilted slightly toward fear.
Again, this stage of the analysis revealed no major differences between the two types of communities. The researchers then evaluated the same emotions at the individual user level. Instead of treating posts as independent, the algorithm computed the dominant emotion each user expressed across their entire posting history. Viewed this way, the pattern changed dramatically.
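The move from post-level to user-level aggregation is simple to express in code. The usernames and labels below are invented for illustration:

```python
from collections import Counter, defaultdict

# Hypothetical (user, dominant_emotion) pairs, one per post.
post_labels = [
    ("alice", "anger"), ("alice", "anger"), ("alice", "sadness"),
    ("bob", "fear"), ("bob", "fear"),
]

# Collect each user's post-level labels, then take their most frequent one.
per_user = defaultdict(Counter)
for user, label in post_labels:
    per_user[user][label] += 1

user_dominant = {u: c.most_common(1)[0][0] for u, c in per_user.items()}
print(user_dominant)  # {'alice': 'anger', 'bob': 'fear'}

# The community-level picture is then a distribution over users, not posts.
print(Counter(user_dominant.values()))  # Counter({'anger': 1, 'fear': 1})
```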
Mainstream feminist communities showed the highest share of users whose dominant emotion was disgust, followed by the radical feminist and men’s rights groups. This shift in perspective suggests that misandric communities may harbor more concentrated negative emotion among their active posters than misogynistic communities do. Finally, the study mapped the conversation network within each subreddit. The researchers built a graph in which every user is a node and an interaction between two users is a connecting edge.
This allowed the researchers to measure the structural properties of each community network. One property measured was modularity, which captures how strongly a network divides into smaller, more isolated subcommunities. Another was the network diameter, the longest shortest path between any two users in the conversation graph.
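Both measures are standard in network science and can be computed with the networkx library. The toy interaction graph below is invented for illustration:

```python
import networkx as nx

# Toy graph: an edge means one user replied to another.
G = nx.Graph()
G.add_edges_from([
    ("u1", "u2"), ("u2", "u3"), ("u1", "u3"),  # one tight cluster
    ("u4", "u5"), ("u5", "u6"), ("u4", "u6"),  # a second cluster
    ("u3", "u4"),                              # a single bridge between them
])

# Modularity: how cleanly the network splits into sub-communities.
communities = nx.community.greedy_modularity_communities(G)
print(nx.community.modularity(G, communities))

# Diameter: the longest shortest path between any two users,
# computed on the largest connected component.
largest = G.subgraph(max(nx.connected_components(G), key=len))
print(nx.diameter(largest))
```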
The network structure did not track the subreddit’s gender focus. Mainstream feminist groups shared more structural characteristics, such as high modularity and wide diameters, with men’s rights groups. In contrast, the conversation network of the involuntary celibate community more closely resembled the radical feminist network. The structural analysis confirmed that the target of a community’s hate speech does not determine how that community is organized.
These findings suggest that content moderation strategies should address gender-based hate speech neutrally, regardless of its target. Recognizing misandric hostility as a serious problem could lead to safer digital spaces for everyone. Treating misogyny and misandry with equal seriousness would push platforms toward universal interventions against harmful behavior.
However, the study relies on data collected from an open internet platform, so some noise and formatting errors are inevitable. Real-world social data is rarely completely clean, and this can affect automated ratings. The study also relies heavily on artificial intelligence algorithms to assess toxicity and emotion. Although these models are highly accurate, they are not perfect.
They can sometimes misclassify internet slang and sarcasm, introducing some uncertainty into the results. The findings are also specific to the Reddit communities analyzed; content dynamics on other platforms, such as Facebook or video-sharing sites, could produce very different results.
Future research could investigate whether automated bot accounts contribute to the spread of negative sentiment on these forums. Researchers could also search for more highly radicalized sub-groups hidden within broader online communities.
The study, “Women Who Hate Men: A Comparative Analysis of the Entire Extremist Reddit Community,” was authored by Erica Coppolillo.

