Researchers at Justus Liebig University Giessen recently investigated the relationship between moralized language in a tweet and hate speech in the replies. Their findings suggest that the more moralized words a tweet contains, the more likely its replies are to contain hate speech. This article could provide insight into what triggers hate speech in social media settings.
Hate speech was once mostly confined to people one knew personally, or to discriminatory statements and slurs in movies and television programs. Today, the internet can make hate speech a part of everyday life: anyone with a social media account is likely to be exposed to it.
Hate speech infecting our online social interactions might erode belief in American social unity to the point that democracy is permanently harmed. Kirill Solovev and Nicolas Pröllochs set out to develop a better understanding of what might trigger individuals to respond with hate speech.
Solovev and Pröllochs analyzed 691,234 original tweets and 35.5 million replies from three groups of Twitter users.
The replies were assessed for hate speech, which this study defined as "abusive or threatening language (or writing) that attacks a person or group, usually on the basis of attributes such as ethnicity, religion, sexual orientation."
Trained research assistants examined each tweet and reply, identifying moral language or hate speech; where needed, a second assistant repeated the coding.
According to the research team, each additional moral word was associated with an increase of between 9.35% and 20.63% in the odds of receiving hate speech.
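To see what a per-word increase in odds means in practice, here is a minimal sketch (not the authors' code) of how such an effect compounds under a logistic-regression-style interpretation. The 9.35%–20.63% range comes from the article; the baseline probability of 10% and the five-word tweet are hypothetical numbers chosen purely for illustration.

```python
def compounded_odds(base_prob, per_word_increase, n_words):
    """Return the probability after multiplying the odds by
    (1 + per_word_increase) for each of n_words moral words."""
    odds = base_prob / (1 - base_prob)          # convert probability to odds
    odds *= (1 + per_word_increase) ** n_words  # each word multiplies the odds
    return odds / (1 + odds)                    # convert back to probability

# Hypothetical baseline: a 10% chance that a tweet draws hateful replies.
for rate in (0.0935, 0.2063):
    p = compounded_odds(0.10, rate, n_words=5)
    print(f"per-word odds increase {rate:.2%}: P(hate reply) = {p:.1%}")
# With these assumed numbers, five moral words raise a 10% baseline
# to roughly 14.8% at the low end and 22.1% at the high end.
```

The key point the sketch makes is that odds-ratio effects multiply rather than add, so even a modest per-word increase becomes substantial across a heavily moralized tweet.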
Solovev and Pröllochs acknowledge that social media posts will never be free of moral language; even so, they believe their findings "may assist in educational applications, counterspeech strategies, and automated methods for detection of hate speech."
Kirill Solovev and Nicolas Pröllochs co-authored the paper "Moralized language predicts hate speech on social media."