Facebook changes live streaming rules following New Zealand attack

Facebook will restrict people who have broken certain rules from using its Live streaming feature, in response to the mosque terror attack in Christchurch, New Zealand.

The social network is toughening its stance on live broadcasts with a “one strike” policy, applied to any account that violates Facebook’s most serious policies from its first offence.

This means that, for example, if someone were to share a statement from a terrorist group with no context, they would be immediately blocked from using Live for a set period of time, such as 30 days.

Facebook said it intends to extend restrictions into other areas over the coming weeks, beginning with preventing those same people from creating ads on Facebook.

“We recognise the tension between people who would prefer unfettered access to our services and the restrictions needed to keep people safe on Facebook,” Guy Rosen, Facebook vice president of integrity, said.

“Our goal is to minimise risk of abuse on Live while enabling people to use Live in a positive way every day.”

At the time, Facebook said that the video of the attack was viewed fewer than 200 times during its live broadcast and about 4,000 times in total before being removed.

In addition, the company has pledged $7.5m towards new research partnerships in a bid to improve its ability to automatically detect offending content after some manipulated edits of the Christchurch attack managed to bypass existing detection systems.

It will work with the University of Maryland, Cornell University and the University of California, Berkeley, to develop new techniques for detecting manipulated media, whether imagery, video or audio, as well as ways to distinguish between people who unwittingly share manipulated content and those who create it intentionally.

“This work will be critical for our broader efforts against manipulated media, including DeepFakes,” Mr Rosen continued.

“We hope it will also help us to more effectively fight organised bad actors who try to outwit our systems as we saw happen after the Christchurch attack.”

Google also struggled to remove new uploads of the attack on its video sharing website YouTube.

During the opening of a safety engineering centre (GSEC) in Munich on Tuesday, Google’s senior vice president for global affairs, Kent Walker, admitted that the tech giant still needed to improve its systems for finding and removing dangerous content.

“In the situation of Christchurch, we were able to avoid having live-streaming on our platforms, but then subsequently we were subjected to a really somewhat unprecedented attack on our services by different groups on the internet which had been seeded by the shooter,” Mr Walker said.

Google, Facebook, Microsoft and Twitter are taking part in a summit in Paris on Wednesday involving French president Emmanuel Macron and New Zealand’s prime minister Jacinda Ardern, to address terrorist and violent content online.

- Press Association
