Facebook trained its AI to block violent live streams after Christchurch attacks

Leaked papers detail emergency exercise that followed 2019 mass murder in New Zealand

Last modified on Fri 29 Oct 2021 12.21 EDT

Facebook trained its artificial intelligence systems with “police/military body cams footage” and other violent material to detect and block any future attempt to livestream a shooting spree, in the aftermath of the Christchurch terror attack.

The emergency exercise – detailed in corporate papers leaked by whistleblower Frances Haugen – followed the March 2019 mass murder in the New Zealand city, described internally as “a watershed moment” for the Facebook Live video service.

The white supremacist attacker was able to broadcast a 17-minute live stream of the attack on two mosques that was not detected by the company’s systems, allowing it to be swiftly replicated online. Some 1.5m uploads had to be removed in the 24 hours that followed.

“It was clear that Live was a vulnerable surface which can be repurposed by bad actors to cause societal harm,” the leaked review stated. “Since this event, we’ve faced international media pressure and have seen regulatory and legal risks on Facebook increase considerably.”

At the time Facebook admitted its AI systems had failed to prevent the broadcast, and the video was only removed after the company was alerted by New Zealand police. No Facebook user complained for 29 minutes, and executives were forced to admit the company’s detection systems were “not perfect”.

The leaked documents, initially published by Gizmodo, underscore the failure, showing that at the time of Christchurch, the social media giant was “only able to detect violations five mins into a broadcast” – and that the attack video only scored 0.11 on an internal graphic violence scale when the threshold for intervention was 0.65.

The documents also detail how Facebook grappled with the problem, trying to improve its cutting-edge technology. A key element was to retrain the company’s AI video detection systems by feeding them a dataset of harmful content, so they could work out what to highlight and block.

“The training dataset includes videos like police/military body cams footage, recreational shooting and simulations,” the internal material says, plus “videos from the military” obtained by the company’s law enforcement outreach team. It also included first-person shooter video game footage, as examples of content not to block.
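As a rough illustration of the retraining approach described in the documents, the sketch below assembles a labelled set of the kinds of footage mentioned: violent real-world clips marked as harmful, and first-person shooter game footage marked as benign. Every file path and the label scheme are hypothetical placeholders, not Facebook’s actual pipeline; only the categories come from the reporting.

```python
# Illustrative sketch of a labelled training set like the one the leaked
# documents describe. All file paths are hypothetical; only the categories
# (body cam footage, recreational shooting, simulations, military video,
# FPS game footage) come from the reporting.
from collections import Counter

HARMFUL = 1  # footage the retrained model should learn to flag
BENIGN = 0   # footage it should learn not to flag

training_examples = [
    # Positive examples: real-world violent footage
    ("footage/police_body_cam/clip_001.mp4", HARMFUL),
    ("footage/military_body_cam/clip_014.mp4", HARMFUL),
    ("footage/recreational_shooting/range_day.mp4", HARMFUL),
    ("footage/simulations/training_drill.mp4", HARMFUL),
    # Negative examples: first-person shooter game footage, included so the
    # model does not block gaming streams
    ("footage/fps_games/session_23.mp4", BENIGN),
]

label_counts = Counter(label for _, label in training_examples)
print(f"harmful clips: {label_counts[HARMFUL]}, benign clips: {label_counts[BENIGN]}")
```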

As a result of this and other efforts, the documents show that Facebook believed it had slashed the detection time from five minutes to 12 seconds. The Christchurch video now scored 0.96 on the internal graphic violence scale, well above the intervention threshold.
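The documents describe the detection decision as a score measured against a fixed intervention threshold. The sketch below shows that kind of check in a minimal, assumed form: the function and scoring inputs are hypothetical, while the 0.65 threshold and the 0.11 and 0.96 scores are the figures reported from the leaked papers.

```python
# Minimal sketch of a threshold check of the kind the documents describe.
# The function and its inputs are illustrative assumptions, not Facebook's
# real system; the numbers below are those reported from the leaked papers.

INTERVENTION_THRESHOLD = 0.65  # reported score above which a stream is actioned

def should_intervene(segment_scores, threshold=INTERVENTION_THRESHOLD):
    """Flag a broadcast once any sampled segment's graphic-violence score
    meets or exceeds the intervention threshold."""
    return any(score >= threshold for score in segment_scores)

# At the time of Christchurch the attack video reportedly scored 0.11,
# so a check like this would not have fired:
print(should_intervene([0.11]))  # False
# After retraining, the same video reportedly scored 0.96:
print(should_intervene([0.96]))  # True
```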

Elsewhere, this set of leaked documents shows how keen Facebook was to repair its damaged image. The company admitted it had “minimal restrictions in place to prevent risky actors going to live” and in May 2019 announced a “one strike” policy that blocked accounts with a single terror violation from using Live for 30 days.

The change was announced in tandem with the Christchurch Call summit held in Paris, aimed at eliminating terrorist content online. In what Facebook described as “a major PR win”, the New Zealand prime minister, Jacinda Ardern, used “FB live to update her followers after the announcement”.
