Footage of the Christchurch mass shooting has been freely circulating on Twitter since Elon Musk took over, suggesting the platform's previous measures to stop the spread of known terrorist content are no longer operating.
On Monday afternoon, a clipped version of the shooter’s livestream was removed by Twitter after accruing more than 150,000 views over two days. The short video, which was posted with the caption “how to end the paris riots in under 24 hours”, had been retweeted hundreds of times. The account that posted the video has been suspended.
The video had been repeatedly reported to Twitter using its in-platform tools and to the Australian eSafety commissioner before it was removed, the eSafety commissioner’s office confirmed.
This marks at least the third reported time since Musk took over Twitter that the platform has failed to detect and remove the shooting video, following earlier instances in November and last month.
In 2019, an Australian white supremacist livestreamed himself carrying out a mass shooting at two Christchurch mosques, killing 51 people and injuring 40 more. A recording of the killer’s Facebook livestream and his meme-filled manifesto spread widely online after the attack.
The fallout from the attack prompted criticism of the tech companies that had amplified the shooter's content. Facebook removed 1.5 million uploads of the video in the 24 hours after the attack. Two months after the shooting, then-New Zealand prime minister Jacinda Ardern convened representatives from tech companies (including Twitter) and governments at the Christchurch Call summit to commit to policies aimed at stamping out the spread of terrorist and violent extremist material online.
This included a commitment to prevent terrorist material from being uploaded in the first place by developing automated systems that recognise and block known content, such as the Christchurch shooter's video.
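In practice, systems of this kind typically fingerprint known footage with a perceptual hash and compare every new upload against a shared industry database, such as the one maintained by the Global Internet Forum to Counter Terrorism (GIFCT). The following is a minimal illustrative sketch of that idea using the open-source imagehash library; the hash values, threshold and function names are assumptions for demonstration, not any platform's actual implementation.

```python
# Illustrative sketch of hash-based known-content matching.
# KNOWN_HASHES and MATCH_THRESHOLD are hypothetical stand-ins for a
# shared database such as GIFCT's; this is not any platform's real code.
from PIL import Image
import imagehash

# Perceptual hashes of frames from known violent extremist videos
# (hypothetical placeholder value).
KNOWN_HASHES = {imagehash.hex_to_hash("e1d2c3b4a5968778")}

# Frames within this Hamming distance of a known hash count as matches,
# so re-encoded, cropped or watermarked copies can still be caught.
MATCH_THRESHOLD = 8

def is_known_content(frame_path: str) -> bool:
    """Return True if an uploaded frame matches the known-content database."""
    upload_hash = imagehash.phash(Image.open(frame_path))
    # Subtracting two ImageHash objects yields their Hamming distance.
    return any(upload_hash - known <= MATCH_THRESHOLD for known in KNOWN_HASHES)

if __name__ == "__main__":
    if is_known_content("uploaded_frame.jpg"):
        print("Upload blocked: matches known terrorist content.")
```

Fuzzy matching via Hamming distance, rather than exact hash equality, is what lets such systems catch the countless re-uploads and edits of a video like the Christchurch livestream rather than only bit-identical copies.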
White Rose Society, the anonymous anti-fascist research group that reported the video to the company and to the eSafety commissioner, said it was concerned that, if those automated systems no longer operated, nothing would prevent Twitter from being used to spread footage of a future neo-Nazi mass shooting.
“We saw with the Allen, Texas, mall shooting that graphic footage of dead people, including children, spread widely on Twitter without users searching for it,” the group said in a message to Crikey. “A genocide could be incited on Twitter (e.g. Myanmar) and there’s no one home at Trust & Safety.”
An inquiry to Twitter’s press inbox returned its customary auto-reply poop emoji.