Scott Morrison walked away from Osaka’s G20 summit with, optically at least, a few wins. Apart from his sit-down with the US president, Morrison secured the support of world leaders to put increased pressure on Facebook and other social media giants to act faster in taking down “violent terror content”.
The non-binding statement, endorsed by all G20 members, calls on companies to act immediately when contacted by authorities to remove content such as the live video of an attack or terrorist recruitment material.
But with the already mammoth task that moderating social media content entails, what exactly would a stricter framework look like? And is change even possible?
The sheer scale
Before the horror of the Christchurch attack gave the argument a new urgency, Facebook was already wrestling with content moderation, facing criticism for the flowering of fake news, white nationalism and radicalisation on its platform. Last year, Vice reported on a series of dinners between Mark Zuckerberg and leading social media academics to discuss the issue. Noting the debate “has largely shifted the role of free speech arbitration from governments to a private platform”, the piece sums up Facebook’s challenge:
How to successfully moderate user-generated content is one of the most labor-intensive and mind-bogglingly complex logistical problems Facebook has ever tried to solve. Its two billion users make billions of posts per day in more than a hundred languages, and Facebook’s human content moderators are asked to review more than 10 million potentially rule-breaking posts per week.
The human cost
Content moderation is undertaken largely by around 7,500 workers at sites scattered across the world. In recent years, the effects of this kind of work have become clearer. In 2017, two Microsoft moderators sued the company over the PTSD they suffered from having to regularly view “inhumane and disgusting content”. In March 2018, a moderator in Florida named Keith Utley died of a heart attack because of, according to a damning piece on The Verge, the terrible conditions endured by workers at his site:
The 800 or so workers there face relentless pressure from their bosses to better enforce the social network’s community standards, which receive near-daily updates that leave its contractor workforce in a perpetual state of uncertainty …
‘The stress they put on him — it’s unworldly,’ one of Utley’s managers told me.
The conditions were such that three of Utley’s coworkers broke the 14-page non-disclosure agreements required of Facebook contractors. These NDAs add another layer of trauma: not only must a moderator absorb the most grotesque content on the internet, they are not allowed to tell anyone what they have seen. Filmmakers Hans Block and Moritz Riesewieck, who made The Cleaners, a documentary account of a content moderation site in Manila, told the ABC this had lasting effects:
You’re not allowed to verbalise the horrible experience you had. While we were filming the documentary … we both had time to talk about what we are filming, time to have a break and to stop watching and to take our time to recover from what we saw. The workers in Manila don’t have the time. They don’t have the ability to talk to someone.
An ‘open’ internet?
A further question is one of who decides what is acceptable, and what the lasting effects on the discourse might be.
In a bracing piece for The New York Times’ “Op-eds from the future” series, Cory Doctorow posits one possible outcome. He speculates that regulation of social media will see “the legal immunity of the platforms … eroded”, spurred by “an unholy and unlikely coalition of media companies crying copyright; national security experts wringing their hands about terrorism; and people who were dismayed that our digital public squares had become infested by fascists, harassers and cybercriminals”.
He envisages that news giants, “thanks to their armies of lawyers, editors and insurance underwriters”, will be able to navigate the tightening walls of acceptable speech.
If this seems at all melodramatic, it’s worth noting that a civil rights audit of Facebook, released today, shows that banning certain terms fails to root out the problem and necessitates “scope creep”, where more and more content is banned:
While Facebook has made changes in some of these areas — Facebook banned white supremacy in March — auditors say Facebook’s policy is still “too narrow.” That’s because it solely prohibits explicit praise, support or representation of the terms “white nationalism” or “white separatism,” but does not technically prohibit references to those terms and ideologies. The audit team recommends Facebook expand its policy to prohibit content that “expressly praises, supports, or represents white nationalist ideology” even if the content does not explicitly use the terms “white nationalism” or “white separatism.”