A new online tool to track hate speech has found that it is pervasive on Facebook and Twitter and ranges from "crude biological racism and white supremacist views" to "coded racism".
Researchers from Dublin City University analysed a sample of 5,725 comments related to media articles and 113 Twitter accounts, as well as reviewing 1,000 tweets around a fatal stabbing incident in Dundalk last January.
The HateTrack project ran for 12 months, collecting material over three months, and its findings prompted Emily Logan, the Chief Commissioner of the Irish Human Rights and Equality Commission, to call for more modern laws to address the problem.
“The potential for intolerance online to shape the public debate – and resulting political debate – offline is becoming one of the hallmarks of the digital age," Ms Logan said.
The HateTrack project looked at thousands of comments related to articles published by thejournal.ie, the Irish Times and the Irish Independent, and at 10,728 tweets or comments found by 92 HateTrack searches and a parallel search of tweets linked to the Dundalk attack. The main platforms were Facebook and Twitter.
It found both crude and coded forms of racism and highlighted examples, such as "Too many uninvited and unwanted bogus, smelly immigrants and fake asylum seekers" and "We shouldn’t be housing Africa’s surplus population, let’s house our own people first".
It also found that "racially-loaded toxic discourses often coalesce around notions of ‘Irishness’ and what it means to be Irish, with specific individuals and groups being targeted directly... and indirectly" and that "Calling out racism in online environments typically leads to accusations of being ‘over-sensitive’ or ‘playing the race card’, or ‘being racist’ against white people."
It also identified "clear patterns of shared language between international hard right and alt-right groups and parts of the Irish digital public sphere" and found that "racially-loaded toxic discourses feed on fake news, bogus statistics, research published by institutes with dubious credentials and ‘recited truths’ coalescing around the alleged failures of multiculturalism, no-go Muslim areas, and African youth gangs terrorising locals".
The researchers also found that Facebook pages of news outlets and their comment threads "seem to play an important role in channelling racially-loaded toxic contents", often linked to what they described as "trigger events", referencing Ibrahim Halawa, refugees, and terrorist attacks.
The report also noted that online racism was often punctuated with attacks on women and the LGBT community and that "Social media affordances and tropes lend themselves well to racially-loaded toxic contents, which can include memes, multimedia materials, hashtags, tagging and other forms that allow the materials to travel further".
As for those making the comments, the research distinguishes between "reactive" racism and the people or groups behind explicitly Islamophobic or anti-immigrant pages and accounts, who "invest real labour, time, and resources in promoting the everyday circulating of racist discourses". The latter do so by "carefully curating the content on their online platforms", spreading ill-founded stories and misinformation, and attacking or harassing other online users.
In relation to attitudes towards reporting hate speech, researchers found some users who believed freedom of speech was paramount, others who dismissed online racism as the preserve of idiots and bigots, others who saw the problem as too pervasive to tackle, and finally a 'bystander effect', where some felt someone else would deal with it.