Colman Noctor: New, strict online age verification will keep Ireland’s children safer
On July 21, there was a landmark shift in Ireland: Part B of the Online Safety Code, framed by Coimisiún na Meán, was introduced.
The code requires video-sharing platforms — such as YouTube, Facebook, TikTok, Instagram, and X — to enforce age-verification procedures before users can view adult or harmful content.
While Part A, which covered general harmful content, took effect in November 2024, Part B adds an age-assurance requirement to safeguard younger people.
This means Ireland has moved beyond self-declaration, which often involved simply clicking ‘Yes, I’m over 18’ to access adult content.
These designated platforms must now adopt genuine age-verification methods to safeguard young and vulnerable users.
Practically, when your child tries to access adult content, they might be asked to take a short, live selfie video or connect their account to an age-verifying app.
If they fail the age-verification test, some content will be locked behind a barrier, and there will be no easy workaround.
That’s the intent.
As a result of Part B, several video platforms are now required to block access to pornography, extreme violence, self-harm content, eating-disorder promotion, cyberbullying, and hate speech.
Unless effective age verification confirms the user is over 18, they cannot access this content; self-declaration is no longer sufficient.
These platforms must also have visible and easy-to-use parental controls, such as time limits or restrictions on who can post or view videos on a child’s account, along with clear mechanisms to report harmful content and established procedures for resolving complaints.
Crucially, these platforms are now subject to enforcement and can be fined up to €20m, or 10% of annual turnover — whichever is greater — for infringement of these directives.
For parents, this signifies a shift from self-regulation and parental vigilance towards holding tech firms legally accountable — assigning some responsibility to platforms for keeping children safe.
CyberSafeKids describes this as a crucial milestone in online platforms accepting legal responsibility.
However, this code is not the panacea we might assume it to be.
A major limitation of this legislation is that it applies only to the 10 platforms with EU headquarters in Ireland: Facebook, Instagram, X, YouTube, Udemy, TikTok, LinkedIn, Pinterest, Tumblr, and Reddit.
Unfortunately, it does not cover non-video-sharing platforms; significant problem areas, such as online gaming, pornography sites, and notably Snapchat, remain unaffected.
Another shortcoming is that the code does not specify particular technologies for verifying age. As a result, platforms propose solutions that vary in robustness, ranging from 'live selfies' and facial age-estimation AI to document uploads and digital ID tokens.
As with any safeguarding or identification technology, these measures have drawn criticism from certain groups, including European Digital Rights (EDRi), which warns that many age-verification systems pose serious privacy and surveillance risks, especially when biometrics or identity documents are involved.
They call on regulators to require zero-knowledge proofs, data minimisation, and alternatives for those without a formal ID.
Given these changes, parents should talk to their children about the new verification steps, explain how they might encounter them on TikTok, Instagram, or YouTube, and make sure they never share sensitive documents or photos without understanding the process.
Parents should familiarise themselves with platform settings, including parental controls, reporting tools, privacy defaults, and content ratings.
Coimisiún na Meán has mandated that these platforms highlight these features, so parents should take advantage of them.
Parents and schools must continue to promote digital literacy, so children know why some content is age-restricted and how recommendation algorithms can still promote harmful content, even behind age-restricted screens.
Parents are also encouraged to report violations of the new rules. If your child sees harmful content or has bypassed the verification tools, first report the issue to the platform and, if it is unresolved, contact Coimisiún na Meán.
The UK’s Children’s Code requires online services to use strong privacy settings for children and minimise data collection, but it does not mandate strict age verification.
The Australian federal government, through the Online Safety Amendment (approved in late 2024), plans to ban under-16s from social media, enforcing this via age verification and imposing heavy fines on platforms.
It is one of the stricter regimes globally, alongside Singapore, whose regulators require app stores to screen users’ ages, blocking those under 18 from downloading adult apps and under-12s from downloading popular social-media apps.
Although not as strict as Australia or Singapore, Ireland is among the first EU countries to implement legally binding age-verification requirements, specifically for video-sharing platforms, with plans for robust enforcement and penalties.
It aligns with the EU Digital Services Act by requiring age verification in service design rather than relying on self-regulation.
This represents a step in the right direction, but the fact that the code only applies to platforms with EU headquarters in Ireland is deeply problematic.
Consequently, Snapchat and other services based elsewhere fall outside the scope of these rules, although they remain subject to the EU's Digital Services Act, which has less rigorous age-verification measures.
Even with content restrictions, algorithms can still promote harmful suggestions, especially for self-harm or eating-disorder content, which is not always blocked by age gates.
Part B of Ireland’s Online Safety Code does not guarantee that teenagers will not encounter troubling content.
Still, in collaboration with families, schools, and civic groups, it might provide a stronger framework than ever before.
For parents aiming to keep children safe in today’s digital world, the law is catching up, but your involvement remains important.
While this new online code makes parenting children in a technological age a shared responsibility rather than an individual one, it’s also realistic to recognise that there are still limitations to ensuring children’s safety online.
Unfortunately, recommendation algorithms (like those promoting trending or extreme content) aren’t directly regulated under this code, though they are covered by the EU’s Digital Services Act.
This code only applies to 10 video-sharing platforms based in Ireland, and many other apps and games remain outside its scope.
Underage users can still create fake ages or use older siblings’ IDs, though the new tools will make this more difficult.
Not all platforms are required to verify users in the same way, so some may operate more smoothly, while others might be more frustrating or invasive.
This new online code marks a significant shift from passive gatekeeping to active protection. Let's hope it has the teeth to be effective and follows through on the sanctions it promises.
It is our first step towards keeping children in Ireland safe online.
- Dr Colman Noctor is a child psychotherapist


