If there was any doubt about the Government’s view on the proliferation of hateful and harmful content on the internet, you’d need only to sit quietly for a moment and wait for the echo that has been making its way around the corridors of power to reach you.
It began last November when Taoiseach Leo Varadkar told the Dáil, newly bothered by revelations about the failure of Facebook to remove online nasties, that the social media giants were about to be taken in hand.
He couldn’t tell Brendan Howlin what was happening with a Labour Party bill that aimed to control the worst of the net, but, he announced: “When it comes to the internet and the worldwide web, what I can say is that the era of self-regulation is over.”
His words hit the back wall of the chamber and took off, catching Communications Minister Richard Bruton last March as he launched a public consultation on curbing the spread of harmful content. “The era of self-regulation in this area is over,” he declared.
In April, after the mass murder at a New Zealand mosque was livestreamed online, it was Varadkar’s turn to echo his minister echoing himself: “We can no longer rely on self-regulation alone.”
Last week, it was the turn of the Tánaiste, Simon Coveney, to field frustration from Sinn Féin about the lack of progress on that party’s bill covering much the same territory. “The Government’s view is that the days of self-regulation online are over,” he assured them. On Thursday this week, publishing the submissions to the public consultation, Bruton echoed himself echoing the Taoiseach.
“We will not accept a situation whereby online companies are permitted to regulate themselves. That day is gone,” he proclaimed.
So that’s clear then — the Government is opposed to self-regulation by social media companies. It’s not surprising. What was once the beauty of the net has become its burden. No one controlled the net except the people who used it and it allowed freedom of expression and dissemination of news and views like never before.
But because no one controlled the net, it also became a hyperactive 24-hour distribution centre for fake stories masquerading as news, computer-generated bots spouting views, angry people spreading hatred, ignorant people perpetuating lies, scaremongering, election-interfering, bullying, racism, sexism and child exploitation. And all with the added dimension that everybody — young, old, casual user, and addict — could be regarded as fair game for plundering personal data that might be turned into a quick buck.
The giants of the internet, Facebook and Google, dispute little of that. In its submission to Richard Bruton’s consultation, Facebook said that during just the three months of June, July and August last, it took action on 15.4 million pieces of violent and graphic content, removed 750m fake accounts and took down 8.7m pieces of content featuring child nudity or child sexual exploitation. It knows the online space isn’t all about cute cats and holiday snaps.
Google sampled the three-month period of October to December last and reported that YouTube removed more than 8.7m videos and blocked more than 261m comments in that time. Of the videos removed, almost 19,000 featured hate speech and 49,500 violent extremism.
Both companies were at pains to stress that much of the removed content was detected by their own monitoring systems — both human and automated. Facebook said it had doubled the number of staff working on safety and security last year to 30,000, although only about half that number work as content moderators directly involved in assessing the legality and acceptability of posts, pictures and videos.
Yet all that monitoring infrastructure and effort couldn’t stop a man livestreaming his own murderous rampage through a New Zealand mosque last March or the sharing and viewing of that footage by countless other social media users.
Closer to home, it couldn’t prevent the spread of photographs of the boys convicted of the murder of Ana Kriegel — and indeed of an innocent boy incorrectly identified as Boy 2 — in direct contravention of the law that says the anonymity of minors must be maintained.
Individual high-profile incidents such as these arguably do more damage to the credibility of social media companies than the millions of pieces of nasties they remove before anyone, or hardly anyone, gets to see them. But all of it is contributing to a feeling of distrust among the public and, among governments, a fear of being seen not to act.
Earlier this month, the latest results of an annual global survey of more than 25,000 internet users in two dozen countries on five continents made for grim reading for Facebook, Twitter and other big names in social media.
Respondents were asked about their level of trust and distrust in the net and they replied that the likes of Facebook and Twitter were a leading cause of their distrust. Only cybercriminals were considered worse.
The survey, carried out each year for the influential think-tank the Centre for International Governance Innovation, also found growing concern about online privacy, with eight out of ten people reporting they felt their personal information was insecure and more than half of those feeling more concerned than they were a year earlier.
The vast majority, 86%, admitted falling for a fake news story at least once, and Facebook and Twitter were again in the firing line as being the most commonly distrusted sources of the nonsense. The same majority wanted both governments and social media companies to take action to combat fake news by deleting posts and videos and removing accounts among other measures.
That seems to tally with the mood in Ireland. What isn’t clear in an Irish context, however, is how the self-regulation that the Government says has had its day should be replaced, and who should take charge of setting the triggers for action, overseeing that action and imposing sanctions for inaction.
Various parties have had a go at legislating for controls over various aspects of the net. The Labour Party has the Harassment, Harmful Communications and Related Offences Bill, Sinn Féin’s is the Digital Safety Commissioner Bill, and Fianna Fáil have a Social Media Transparency Bill.
An Online Safety Bill, covering ground similar to those in the various pieces of proposed legislation and more, is promised by the Government before the end of the year but no firm date has been given. Even if the bill is published, its journey to enactment is likely to be a long one with many hurdles to clear.
The public consultation drew 84 separate responses from the social media and tech industries, other media companies, civil rights organisations, children’s charities, public bodies, the gardaí, suicide charities, support groups and individuals all with their own ideas of how regulation should work. We are told their submissions will feed into the thinking behind whatever approach the Government adopts.
For some, preserving the principle of freedom of expression is paramount and any form of regulation is considered an unacceptable intrusion on that ideal.
But the majority seek a balance between preserving that principle while protecting people from harm.
The Irish Council for Civil Liberties, for example, cites the UN’s special rapporteur on freedom of expression, David Kaye: “States should only seek to restrict content pursuant to an order by an independent and impartial judicial authority, and in accordance with due process and standards of legality, necessity and legitimacy.”
It says: “The ICCL supports this position while noting that the precise mechanisms for achieving this have not yet been identified.”
The Broadcasting Authority of Ireland came up with a mechanism — namely an enhanced version of itself. It said its remit and powers should be expanded from arbiter of fairness and taste on radio and TV to that of online watchdog too.
Google liked that idea and said so in its submission, but both it and Facebook were very clear that any regulator should keep its distance from the tech giants unless and until all internal review and complaint mechanisms were exhausted.
Even then, they said they wanted an appeals process, including access to the courts, in cases where they might disagree with a regulator’s findings and directions.
They also said that any proposed sanctions should apply only for persistent failure to abide by rulings of the regulator rather than for individual breaches, particularly where the breach is due to a disagreement over the harm that attaches to the content complained of.
Facebook said a code would have to be drawn up governing the extent to which a complainant would have to prove serious and/or ongoing harm. That word ‘harm’ is going to prove contentious. Before the Government sets out the procedures for any new regulator, it will have to find a definition of harmful content.
Facebook in its submission provides an example of how a poor definition could result in removing relatively innocuous material. Material designed to encourage prolonged nutritional deprivation — so-called pro-anorexia or thinspiration posts — is among the harmful content on the Government’s radar, but Facebook argues that with too loose a definition, images of a model in a clothing campaign could fall foul of the regulations purely because of the model’s physique.
It warns that discussions of suicide and admissions of suicide attempts that take place online as part of a healthy discourse on the issue could end up in the same category as harmful material that promotes self-harm.
Technology Ireland, the business grouping representing more than 200 companies in the ICT, digital and software technology sector, also expressed reservations about external regulation and said a regulator should have no role in assessing individual complaints but should be limited to monitoring how companies respond to take-down requests.
Setting a low threshold at which a regulator would acquire a right to intervene (mere ‘dissatisfaction’ on the part of the user) would be very problematic, it said.
“In effect, the regulator would acquire legal right to enforce a company’s terms of service which would be inappropriate for a statutory body, and would represent a highly unusual state interference in contractual relations,” it said.
It said if it was intended to give a regulator powers to hear appeals against declined or delayed take-down requests, a separate public consultation should be undertaken on how those procedures would work.
And as if that wasn’t enough, the grouping says a third public consultation would need to be carried out on how the whole machinery and personnel of a regulatory body would be funded.
Funding is an issue deftly avoided by Facebook in its submission which states: “The Regulator will be best placed to make determinations as to funding across the regulatory structure(s) here.”
Google says “direct funding from the State is the best model for promoting the independence and impartiality of any regulatory body”.
Don’t come shaking your can at us, in other words. The cost factor aside, there are other practical problems in devising any new regulatory system. The tech companies say that if its brief is too broad, its structure will be unwieldy and it will become overwhelmed by complaints. Even the Department of Justice weighs in on this, noting that it has responsibility for tackling illegal content as opposed to harmful content but that there is a danger of an unhelpful overlap.
“While it is extremely important to effectively tackle harmful content as well as illegal material online, it may not be desirable or sustainable in a country the size and scale of Ireland to have two parallel processes in place for interfacing with industry in relation to online content or for receiving reports in relation to harmful/illegal material,” it says.
“Any future architecture for interface with the relevant industry representatives or with the public would need a clearly defined role that would avoid duplication of effort and different government departments engaging in parallel exercises.”
And there is the question of time. Checking whether companies have dealt with a take-down request adequately, investigating the complaint and allowing for an appeal all take time.
One of the biggest concerns people have with harmful online content is the length of time it is allowed to remain live before it is taken down. Often, a post, picture or video will have been shared, spread and downloaded multiple times before it is removed.
People seem generally less aggrieved about the fact that the item got online in the first place — that’s the cost of the freedom offered by the net — than they are by the failure of social media companies to react to it in a timely manner. Any regulator strangled by an unclear remit, weak powers or poor resources will do little to address this.
“It’s not the crime, but the cover-up” is a sentiment expressed so often by citizens in despair about the way state services, public bodies and regulatory authorities handle their grievances that it too feels like a never-ending echo.