TikTok exposing children to harmful content even when passively scrolling, study finds

Researchers at Trinity College Dublin and the HSE’s Department of Public Health said there was an 'urgent need' for stronger regulatory enforcement and digital health education to protect children from exposure to harmful content.

Passive scrolling on the social media platform TikTok is exposing children to harmful content, including hateful ideologies, depictions of suicide, and disordered eating, a new study has found.

Researchers at Trinity College Dublin and the HSE’s Department of Public Health found almost one in 20 clips suggested for teenagers on the platform’s “For You” feed violated TikTok’s own safety policies.

They said there was an “urgent need” for stronger regulatory enforcement and digital health education to protect children from exposure to harmful content, finding voluntary compliance was proving insufficient.

The findings were based on an experiment in which the researchers created four “dummy” TikTok accounts representing two male and two female users aged between 13 and 15.

Each account scrolled the “For You” feed for three hours, pausing to view content relating to one of four popular themes: conflict, mental health, drugs and alcohol, and diet and body image.

The scrolling was passive: none of the users conducted any searches, followed any accounts, commented on any videos, or ‘liked’ any of the content recommended to them.

A total of 12 hours of screen recordings were reviewed, comprising 3,023 videos from the “For You” feed. The study found 128, or 4.2%, of these violated TikTok’s published safety policies.

The most frequent types of harmful content suggested to the four users included depictions or glamorisation of suicide and self-harm, disordered eating or extreme dieting, and promotion of alcohol consumption.

They also featured content relating to weapons and gun violence, and extremist or hateful ideologies, according to the study published in the latest edition of the Irish Journal of Medical Science.

“These reels appeared without any search conducted by the users — the platform’s algorithm promoted potentially harmful content based solely on passive viewing time within themed content areas,” the authors explained.

“This study provides evidence that TikTok’s algorithm exposes underage users to harmful and policy-violating content, even without active engagement,” they said.

The researchers noted this phenomenon was not unique to TikTok, saying similar algorithmic risks had been reported across other major social media platforms.

“While these platforms offer entertainment and peer connection, growing concerns exist about children’s exposure to harmful content,” they added.

The authors said problematic use of social media had been linked to negative mental health outcomes, including anxiety, depression and self-harm.

“Ireland, as the regulatory base for several major technology companies, is uniquely positioned to lead on protecting children’s digital wellbeing,” they wrote, claiming the study had shown voluntary compliance to be insufficient.
