Social media giants urged to switch off 'toxic algorithms' that push extreme content

Extremists have taken to social media platforms to voice racist and homophobic views.
Social media giants have been asked to switch off “toxic algorithms” that push extreme content online.
In an open letter to Facebook’s owner Meta, Twitter, Snapchat, YouTube, Telegram, and TikTok in Ireland, the Far Right Observatory (FRO) has urged the companies to "turn off the hate megaphone".
The call follows increased social media activity among far-right extremists who oppose the Government's policy on housing refugees and asylum seekers.
A number of these individuals have taken to social media platforms to repeatedly voice racist and homophobic views.
They have also used the platforms to orchestrate demonstrations outside venues where refugees and asylum seekers are housed.
“A small number of extremist groups and individuals are spreading hate, lies and fear within our communities," said Mark Malone of the FRO.
"They want nothing more than to exploit the struggles that communities face, attempting to turn people against each other.
“Despite the positive role social media and the internet can play in our lives, it is unfortunately clear these platforms [amplify] hate and division, undermining democracy, safety, and cohesion.”
The letter, sent on behalf of more than 120 community groups, including groups in Fermoy and East Wall, where anti-refugee protests have been held, also calls on the social media giants to enforce their own community standards rules.

“We want them to switch off the toxic algorithms that push extreme content to people,” Mr Malone said.
“We want them to stop amplifying hate speech and we call on them to keep our communities safe.”
Of the social media platforms asked for a comment, a Snapchat spokesperson said: "Using Snapchat to spread disinformation or hate speech is strictly against our rules. We have no open newsfeed of unvetted content and the way the app is designed limits the possibility of harmful content or false information being featured, recommended, or going viral.
“If we become aware of this content, we will delete it immediately and the account may be removed."
A spokesperson for Meta said: “The claim that we deliberately push hate and misinformation for profit is not true.
“Billions of people use Facebook and Instagram because they have good experiences; they don’t want to see hate speech on our platforms; our advertisers don’t want to see it, and we don’t want to see it either.
“There is no incentive for us to do anything but remove it.
“We invest heavily in teams and technology to find and remove violating content quickly, and work with 90 independent fact-checking organisations around the world.”