Government urged to hit social media companies 'where it hurts' for harms caused to children by AI

An Oireachtas report has urged the Government to "hit companies responsible where it hurts", namely their profits, for the harms caused to young people by artificial intelligence (AI).

About a quarter of Irish six-year-olds now have smartphones, with Ireland also having the highest rate of daily internet use by young people in the EU.

The report on the safeguarding of children in the age of AI was published by the Joint Committee on Children, Equality, Disability, Integration and Youth after extensive work and meetings involving stakeholders.

These statistics, among others, led the committee to reject the suggestion that the current moves by many administrations to address issues such as phone use, AI and social media among young people are "alarmist or reactionary".

The report noted that 84% of eight- to 12-year-old children have online accounts, with the top five most popular platforms being YouTube, TikTok, Snapchat, Instagram and WhatsApp.

Writing in the report, the committee said it was "essential that Ireland, being home to many of the main platforms’ headquarters, acts on these issues".

In total, it made 36 recommendations to protect children from the harms of AI.

Among them were: 

  • Companies should have a requirement to have "rigorous and effective age verification techniques in place", though the report noted these should not impinge on privacy and security for users;
  • Self-declaration, whereby the user inputs their own age when setting up an account, should not be accepted as an appropriate age verification method;
  • Social media companies should be made to abide by their agreed community standards;
  • Companies should amend their algorithms so that they no longer enable or profit from fake news and harmful content;
  • Communities need to be given the tools and education to think critically and decipher disinformation. This should be done through schools, education and training boards, youth clubs and any appropriate body.

The report noted it was not just down to the companies to act to protect children, saying "a whole of society approach is now needed to keep people safe and ensure that AI and social media are forces for good".

It added: "This will need to involve adults self-reflecting, making behavioural changes and setting boundaries."

The committee argued that the gains in terms of investment "no longer justify fence-sitting in the face of the harms being discussed", with one member observing that "money has no morals".

Members also agreed on the need to hit the companies responsible for the proliferation of harms where it hurts, namely their profits.

The report added: "Blunt instruments like parental controls on apps or devices and limiting or banning phone use or time online will not work in isolation."

However, it also said AI had played a positive role in terms of accessibility and inclusivity.

"The report recognises the good work platforms are doing with AI to identify and filter out underage users, harmful content and bad actors, at scale," it said.
