Instagram's 'woefully ineffective' safety tools still put children at risk, says Meta whistleblower
 Arturo Béjar said although Meta 'consistently makes promises' about how its teen accounts protect children from 'sensitive or harmful content, inappropriate contact, harmful interactions' and give control over use, these safety tools are mostly 'ineffective, unmaintained, quietly changed, or removed'. File picture: PA
Children and teenagers are still at risk from online harm on Instagram despite the roll-out of “woefully ineffective” safety tools, according to research led by a Meta whistleblower.
Two-thirds (64%) of new safety tools on Instagram were found to be ineffective, according to a comprehensive review led by Arturo Béjar, a former senior Meta engineer who testified against the company before the US Congress, working alongside academics from New York University and Northeastern University and other groups.
Meta – which owns and operates several prominent social media platforms and communication services that also include Facebook, WhatsApp, Messenger, and Threads – introduced mandatory teen accounts on Instagram in September 2024, amid growing regulatory and media pressure to tackle online harm.
However, Mr Béjar said although Meta “consistently makes promises” about how its teen accounts protect children from “sensitive or harmful content, inappropriate contact, harmful interactions” and give control over use, these safety tools are mostly “ineffective, unmaintained, quietly changed, or removed”.
He added: “Because of Meta’s lack of transparency, who knows how long this has been the case, and how many teens have experienced harm in the hands of Instagram as a result of Meta’s negligence and misleading promises of safety, which create a false and dangerous sense of security.
"Meta’s conscious product design and implementation choices are selecting, promoting, and bringing inappropriate content, contact and compulsive use to children every day.”Â
The researchers used “test accounts” imitating the behaviour of a teenager, a parent and a malicious adult to analyse 47 safety tools in March and June 2025.
Using a green, yellow and red rating system, it found that 30 tools were in the red category, meaning they could be easily circumvented or evaded with less than three minutes of effort, or had been discontinued. Only eight received the green rating.
Findings from the test accounts included that adults were easily able to message teenagers who did not follow them, despite this being supposedly blocked in teen accounts – although the report notes that Meta fixed this after the testing period.
It remains the case that minors can initiate conversations with adults on Reels, and that it is difficult to report sexualised or offensive messages, the report found.
The researchers also found the “hidden words” feature failed to block offensive language as claimed: they were able to send “you are a whore and you should kill yourself” without any prompt to reconsider, and without any filtering or warning provided to the recipient.
Algorithms showed inappropriate sexual or violent content, with the “not interested” feature failing to work effectively, and autocomplete suggestions actively recommending search terms and accounts related to suicide, self-harm, eating disorders and illegal substances, the researchers established.
A Meta spokesperson said: “This report repeatedly misrepresents our efforts to empower parents and protect teens, misstating how our safety tools work and how millions of parents and teens are using them today.
"Teen accounts lead the industry because they provide automatic safety protections and straightforward parental controls. The reality is teens who were placed into these protections saw less sensitive content, experienced less unwanted contact, and spent less time on Instagram at night.Â
"Parents also have robust tools at their fingertips, from limiting usage to monitoring interactions. We’ll continue improving our tools, and we welcome constructive feedback – but this report is not that.”