Covid and online hate: when the ‘unsuspected’ become keyboard warriors

Online hate does not seem to be the preserve of users addicted to insults. Offensive or even violent language also comes from ‘unsuspected’ commentators who lose their composure in certain contexts. They are not serial haters or habitual ‘keyboard warriors’, but they react badly when they find themselves in a hostile situation, unleashing their hatred on other users. This is what emerges from a study published in Scientific Reports by researchers at Ca’ Foscari University of Venice, in collaboration with Agcom and the Jozef Stefan Institute in Ljubljana, who analyzed one million comments on Covid-19 videos published on YouTube.

To monitor the presence of hate speech in this material, the team coordinated by Fabiana Zollo, researcher at Ca’ Foscari, developed a machine learning model capable of labeling each comment as appropriate, inappropriate, offensive or violent, depending on the type of language used. “Hate speech is one of the most problematic phenomena on the web, since it represents an incitement to violence against specific social categories, and both social platforms and governments are looking for solutions to the problem,” explains Matteo Cinelli, first author of the study and postdoctoral researcher at Ca’ Foscari.

The research showed that only 32% of the comments classified as violent had been removed, by the platform or by their authors, one year after publication. It also provides useful data for developing strategies to understand and stem the phenomenon. Among the roughly 345,000 authors of the analyzed comments, the study did not identify genuine ‘keyboard warriors’ dedicated solely to sowing hatred. Insults are therefore not a drift confined to a specific category of people: many users become authors of ‘toxic’ comments in certain contexts. “It would seem that the use of offensive and violent language is occasionally triggered by external factors,” comments Fabiana Zollo. “Studying these factors is certainly decisive for identifying the most effective strategies to stem the phenomenon.”

The research also quantified the amount of hateful comments, finding an incidence of about 1% among the million comments analyzed. This percentage was similar for channels deemed reliable and for those spreading disinformation. Users who tend to comment under reliable channels use, on average, more toxic language, with insults and violent expressions, than those who tend to comment under unreliable channels. The analysis also showed how language degenerates when users find themselves commenting in a ‘bubble’ different from the one they are most familiar with, that is, in an environment ‘adverse’ to their opinions. “As the length of a conversation increases, its toxicity also increases,” explains Cinelli, “a result that is conceptually in line with a well-known empirical regularity of the web known as Godwin’s Law.”

The research was carried out within the European project IMSyPP, ‘Innovative Monitoring Systems and Prevention Policies of Online Hate Speech’, a two-year project launched in March 2020. Its main objective is to analyze the mechanisms that govern the formation and dissemination of online hate speech and to formulate data-driven proposals to counter its spread.
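For readers curious about what a four-way labeling of comments like the one described above might look like in practice, the sketch below trains a toy text classifier. It is a minimal illustration only, assuming a generic TF-IDF plus logistic regression setup with invented placeholder comments; it is not the authors’ actual model, whose architecture and training data are not detailed in this article.

```python
# Minimal sketch of a four-class comment classifier (appropriate / inappropriate /
# offensive / violent), assuming a generic TF-IDF + logistic regression setup.
# This is NOT the study's actual model; the example comments are invented placeholders.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical labeled comments; a real system would be trained on a large annotated corpus.
train_comments = [
    "Thanks for the clear explanation of the vaccine data.",
    "This video is useless, what a waste of time.",
    "You are an idiot and everyone like you should shut up.",
    "People like you deserve to be hurt.",
]
train_labels = ["appropriate", "inappropriate", "offensive", "violent"]

# Fit a simple text-classification pipeline.
model = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2)),
    LogisticRegression(max_iter=1000),
)
model.fit(train_comments, train_labels)

# Assign one of the four categories to a new comment.
print(model.predict(["I disagree, but thanks for sharing your sources."]))
```

In a setting like the one in the study, a model of this kind would be run over every comment in the collection, and the resulting labels could then be aggregated per video, per channel, or over the course of a conversation.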
