'New age' of online regulation to tackle harmful content
Cyberbullying is 'very much alive' with the ISPCC receiving about four contacts daily on the issue.
People are not reporting harmful or illegal content online because of “reporting fatigue” caused by a lack of action on the part of social media platforms.
That is according to the Online Safety Commissioner Niamh Hodnett, who was speaking at an event to mark the 21st Safer Internet Day.
The event heard that people online are unaware of how to report harmful content, or do not report it because they believe the social media platforms will do nothing to rectify it.
“I think a lot of people have reporting fatigue because they have reported content and nothing has happened, or they’ve reported content and weeks have gone by and nothing has been done about it, or they get a very disappointing response from the platform,” she said.
However, Ms Hodnett told those in attendance that Ireland is moving into a “new age of effective regulation” which will bring about significant change.
“This is going to change, we’re going to start seeing the needle move in the right direction towards a safer internet experience,” she said.
Ms Hodnett said Coimisiún na Meán plans to hold the tech sector in Ireland “to account”, and is currently hiring for a complaints team with a contact centre opening on February 19, while hiring for a compliance and enforcement team is also underway.
“That means over the coming months, where platforms are not dealing with these obligations in a timely way, then we will be investigating these matters and dealing with it by way of compliance,” she said.
Coimisiún na Meán also has plans to establish an individual complaints framework in 2025, with an initial focus on younger victims of cyberbullying.
ISPCC head of policy Fiona Jennings said online safety issues present in many different ways, but cyberbullying in particular is “very much alive”, with the charity receiving about four contacts daily on the issue.
Another concern is that most parents might not grasp just how quickly online grooming can happen, she said.
Separately, those in attendance heard that while artificial intelligence (AI) is being used to effectively flag illegal content on some social media platforms, it is also being used to generate “new types” of child abuse material, often for monetary gain. Ms Jennings said this is a developing issue that we must “be alive to”.
Dr Emma Murphy from the School of Computer Science at Technological University Dublin said there are tools and skills within the tech sector that can solve issues surrounding internet safety.
She said a primary concern is that young children have too much access online, adding that a more collaborative approach is needed between those who design technology or software and stakeholders concerned about internet safety.
“At the moment, I think maybe there's a more tech-driven approach where you're just implementing what you can but we need a more human-centred participatory approach to this,” she said.
She said children under the age of 13 are using platforms they are technically not supposed to be using, and that refocused and redesigned technology could help address safety concerns.
Dr Murphy said age verification techniques used by financial apps such as Revolut could be implemented on social media platforms to prevent underage access.