We’ve spent about a year now angsting about “fake news”. The result? The situation is getting both worse — and better.
Here’s some frightening news: the truthy veneer of fake news is getting a boost from the emergence of advanced video and audio manipulation software that literally lets a fake news bot put words in a person’s mouth. The results are sophisticated enough to fool even equally advanced voice biometric identification systems. In fact, there’s now software that applies the technique to speakers in real time.
Researchers at Stanford University have tested Face2Face, which enables real-time facial capture and manipulation. A Montreal-based team has developed a tool, called Lyrebird, that synthesises words a person has actually spoken into a new order.
So if you’ve had the view that you’ll only believe it when you hear it, well, you’re out of luck.
In a sign of how fast life imitates art, it’s only four years since the publication of the short story Nirvana by Silicon Valley-based, Pulitzer Prize-winning novelist Adam Johnson. This told the tale of a programmer developing voice-synthesising bots built on the speeches of dead public figures to help him through his marriage break-up.
Of course, this might be a useful leg up for any stray Australian politician wanting to “prove” they long ago renounced dual citizenship.
“Fake news” was coined to describe deliberately fabricated stories falsely presented as actual news journalism with an intent to mislead. Instead, it has come to mean “any story in the media I disagree with or that embarrasses me”.
But we’re not ready to give the term up yet. So what does it mean? Google Search’s VP of engineering Ben Gomes says “‘fake news’ [is] where content on the web has contributed to the spread of blatantly misleading, low quality, offensive, or downright false information”.
Canadian commentator Phil Smith describes it as a larger problem of “digital disinformation”: “a tsunami of polluted information that is threatening the fabric of trust between users and what they experience online.”
Awareness of fake news was boosted by it being weaponised, apparently by foreign agents, in the US presidential election. But anyone who’s friends on Facebook with a grumpy old uncle or two would know that when it comes to subjects like climate change, fake news is old news.
What makes dealing with this “tsunami of polluted information” so difficult is that the forces driving it are not simple and, in many ways, are embedded in the practical operations of the internet.
First, the social internet is “open access” by default, so its basic rules of engagement encourage the spread of bots and algorithms that favour the sensational lie over the prosaic truth. An AI bot has reportedly generated thousands of fake news videos on YouTube without human intervention. And it’s easy to amplify the message through purchased accounts, followers and even private personal data on the not-even-particularly-grey parts of the web.
Second, funding structures support the lie, whether through programmatic advertising (like Google’s AdSense), government-funded cyber-war or right-wing-supported campaigns. On the cost side, the global nature of the internet means fake news can be produced in the cheapest of low-wage countries, such as, say, Macedonia. Sometimes it seems “fake news” has a stronger business model than traditional news.
Third, fake news stories are highly crafted. They are designed to sound true, to sound like they come from an authentic voice, at least to someone with a disposition to believe. Ignore the “Re-tweets do not mean endorsement” on people’s Twitter profiles; we’re all “useful idiots” in the fake news war. We share through Facebook or Twitter based on our basic beliefs, not any assessment of truth.
You hate Hillary Clinton? You’ll perhaps be one of the half million people who shared the totally bogus 2016 story headlined “FBI Agent Suspected In Hillary Email Leaks Found Dead In Apparent Murder-Suicide”, first published in the equally bogus “Denver Guardian”. NPR’s hunt for the origin of this story will tell you a lot about how “fake news” works.
Most of the angst has been directed at the dominant platforms, Facebook and Google, because they are the public vector for the disease. And both are attempting to at least be seen to be addressing the problem.
Google, for example, announced in April that it had adjusted its algorithms to prioritise true news in search rankings. Facebook has tweaked its processes to build in a trust factor for professional journalistic news organisations (while downgrading news overall). Both have intervened to block advertising revenues from their own programmatic advertising to fake news sites.
But the solution for fake news has to start with a recognition that its popularity is based on a loss of trust in journalism itself. When people don’t trust real news, they’re inclined to welcome fake news that gives comfort and support to what they already believe.