Video clips of a man taking his own life which spread on TikTok were amplified by a “co-ordinated attack”, the social media site has told UK MPs.
TikTok said groups “operating on the dark web” had collaborated to repeatedly upload the footage onto social media platforms in order to further spread it across the internet.
Concerns were raised earlier this month when TikTok users reported seeing different clips of the footage, which had originally been live-streamed on Facebook, appearing across the video-sharing app.
Giving evidence to the Digital, Culture, Media and Sport (DCMS) Select Committee in the UK, TikTok’s director of government relations and public policy in Europe, Theo Bertram, said the company had evidence that the wider spreading of the clip had been malicious.
“On August 31, a man live-streamed his own death by suicide on Facebook Live. A small number of clips were uploaded to our platform in the immediate days after, and then on the evening of September 6 we saw a huge spike in the volume of clips being uploaded,” he said.
“There was evidence of a co-ordinated attack. Through our investigations, we learned that groups operating on the dark web made plans to raid social media platforms, including TikTok, in order to spread the video across the internet.
“What we saw was a group of users who were repeatedly attempting to upload the video to our platform – splicing it, editing it, cutting it in different ways – and join the platform to try and drive that.”
Mr Bertram said TikTok’s “emergency machine learning services” were then used to quickly detect and remove different versions of the video as they were uploaded, but said the platform noticed an “unusual pattern” around how the video was being viewed.
“What we saw in this instance was people searching for content in a very specific way; frequently clicking on the profile of people as if anticipating that those people had uploaded the video, so it was quite an unusual pattern of how it was viewed as well,” he said.
He added that “one view of this type of content on our platform is one too many” and that TikTok was already taking internal steps to improve its detection systems to prevent similar incidents in the future.
Mr Bertram also said that in further response to the incident, TikTok had proposed to other social media firms that a new partnership be formed to lead an industry-wide response to the issue of harmful content.
“Last night, we also wrote to the CEOs of Facebook, Instagram, Google, YouTube, Twitter, Twitch, Snapchat, Pinterest and Reddit and what we are proposing is that, in the same way that these companies already work together around child sexual abuse imagery and the way we already work together on terrorist-related content, we should now establish a partnership around dealing with this type of content,” he said.
“We know we have to do better and our hearts go out to the victim in this case, but we do believe that we can do even better in the future.”
Online safety organisations have welcomed TikTok’s plan.
Andy Burrows, head of online child safety policy at the NSPCC, said: “TikTok has rightfully recognised that tech firms have to do better to act on graphic posts which can be quickly spread online and can have an incredibly damaging impact on young people who see them.
“Online harms are rarely siloed on a single site, which is why the Government must require a cross-industry response to get a head start in the cat-and-mouse game to remove dangerous content and prevent online grooming.
“The Online Harms Bill proposals must also take legal but harmful content, including damaging self-harm and suicide posts, as seriously as they do illegal content.”