Ireland urged to seize 'seatbelt moment' and regulate against online harm for rest of world
Ireland is facing the equivalent of the "seatbelt moment" for cars, where protecting people from online harms has to go beyond education and awareness-raising, according to experts.
Speaking before an Oireachtas Committee on Media, Mary Aiken, Professor of Forensic Cyberpsychology at University of East London, said that with so many internet giants headquartered in Ireland, the action Ireland takes to regulate the online space will impact the rest of the world.
"There was a famous book back in the 60s about the automobile industry, titled Unsafe at Any Speed. It led to the introduction of safety belts. I think the parallel here is that the internet is unsafe at any speed. I think this is Ireland's seatbelt moment," she said.
Looking at progress on an equivalent online safety bill in the UK, Prof. Aiken recommended that Ireland also provide for the establishment of an advisory committee on disinformation and misinformation.
"The UK government aims to tackle the problem of disinformation through… a requirement for Ofcom, the UK communications regulator, to establish an advisory committee. The report also notes that the viral spread of misinformation and disinformation poses a serious threat to societies around the world and that media literacy is not a standalone solution," she said.
Professor Brian O'Neill of Media Literacy Ireland echoed the concern that education can only go so far.
"It is important to note that while media literacy increases resilience to many of the issues that are associated with digital communications, it should not be seen as, nor ever be, a solution on its own. People form beliefs for complex reasons, and skills and knowledge alone may not be enough to guarantee informed decision-making," he said.
Prof. Aiken also warned that the Online Safety and Media Regulation Bill will not be "practicable, feasible, workable, or successful" without first compiling a full taxonomy of the different online harms which can be experienced by internet users.
"You can't look at bullying without thinking about harassment. You can't look at harassment without thinking about misinformation and disinformation. You can't consider online harms without factoring in aspects like cyber fraud. We need a framework and classification system, and then, one by one, we can begin to make sense of these harms and look at legislation that may tackle some or, hopefully in time, all of them," she said.
Prof. Aiken said this taxonomy could then be used to build on the safety tech sector, which could automate safety measures to protect users from harm, such as using AI to help flag and take down misinformation or offensive content.
However, Dr Eileen Culloty of the Institute for Future Media, Democracy and Society at Dublin City University warned that the nuances of harmful content cannot always be spotted by artificial intelligence (AI).
"We have to be very cautious about assuming that these technologies can be the solution to this. It's very difficult to say that even a fact checker or a journalist or someone could come along and say something is categorically true or false. And so, extending that out, it's extremely difficult to say you could rely on a piece of AI to start categorising disinformation," she said.
Dr Culloty said it is fundamentally important to require platforms to be open to independent audits or sharing of information, to assess the effectiveness of current protection measures, such as AI content regulation.