Facebook has closed almost 1.3bn fake accounts in the past six months.
The social media giant has published its content enforcement numbers for the first time. The report spans October to March and covers six areas: hate speech, graphic violence, adult nudity and sexual activity, terrorist propaganda, spam, and fake accounts.
The figures show it disabled some 583m fake accounts in the first three months of the year, down from 694m shut down in the last three months of 2017.
Facebook said the decrease reflects how widely its fake-account metrics can vary, driven by new cyber-attacks and by the variable ability of its detection technology to find and flag them.
Facebook also took action on 3.4m pieces of content which contained graphic violence in the first quarter of the year. This was an increase from the 1.2m pieces of content acted on in the last three months of 2017.
Facebook said the increase is mostly due to improvements in its detection technology, including using photo-matching to apply warning screens to photos that matched ones previously marked as disturbing. These actions were responsible for around 70% of the increase.
"Last month, we published the internal guidelines our teams use to review posts or pictures that might violate our Community Standards. Today, for the first time, we're releasing numbers about the effectiveness of that enforcement." https://t.co/5r61SqdTVW — Facebook (@Facebook) May 15, 2018
Facebook took down 21m pieces of adult nudity or pornography in the first quarter of the year — 96% of which was found and flagged by Facebook’s technology before it was reported.
Overall, Facebook estimates that out of every 10,000 pieces of content viewed on the site, seven to nine views were of content that violated its adult nudity and pornography standards.
Any content that praises, endorses, or represents terrorist organisations or terrorists is also taken down by Facebook.
In the first three months of 2018, it took action on 1.9m pieces of content relating to Isis, al Qaeda, and their affiliate groups. This is up from the 1.1m items tackled in the final quarter of 2017.
Facebook says this increase is due to improvements in its ability to find violating content using photo-detection technology, which detects both old and newly posted content.
Guy Rosen, vice-president of product management at Facebook, said the company was investing heavily in more people and better technology to make Facebook safer for everyone.
“It’s also why we are publishing this information,” said Mr Rosen. “We believe that increased transparency tends to lead to increased accountability and responsibility over time and publishing this information will push us to improve more quickly too.”
© Irish Examiner Ltd. All rights reserved