This year is a landmark one for democracy, with more than 4 billion people in more than 60 countries eligible to vote in elections. There’s a grim poetry in this historic milestone coming at a point when our information ecology is at its most fractious.
At the end of last week, American artificial intelligence company OpenAI announced Sora, its text-to-video model. When it becomes publicly available, Sora will allow users to generate lifelike video from text prompts — with example outputs on the website ranging from sweeping drone shots of a patch of rocky Californian coastline to a woman walking down a densely crowded street in Tokyo. These videos may not stand up to close scrutiny, but amid the endless firehose of content on today’s internet, it’s hard to imagine them receiving that level of examination.
Sora is the latest in a legion of new generative-AI tools to emerge from recent advances in neural network development, many spearheaded by researchers at OpenAI. These tools, now also offered by major tech firms like Meta, Google and Microsoft, leverage vast reservoirs of computing power to let millions of users frictionlessly conjure text, video and audio from the digital ether. So far, their outputs have spread far faster than our ability to develop and implement systems to verify their artificial nature.
It’s a problem of scale: when the marginal cost of producing content falls to near zero, the volume of that content increases exponentially.
Most generative-AI deceptions exist on a harmless continuum from amusing to annoying. In March last year, a photo went briefly viral that depicted Pope Francis decked out in an alarmingly stylish Balenciaga puffer jacket in lieu of his usual papal vestments. It was an AI fake, generated with the latest release from generative-AI startup Midjourney.
But there are good reasons to be concerned about political impacts on states other than the Vatican. In 2023’s elections in Slovakia and Argentina, for example, deepfaked audio spread on social media depicting political candidates and government figures saying things they never said. The lasting impact of these generative-AI interventions is hard to quantify, but they demonstrate an obvious point: if you make it vastly easier to fake images, audio and video, bad actors will avail themselves of the opportunity.
Generative AI also poisons the well when it comes to things that did occur. Politicians now have a readymade excuse when confronted with video or audio evidence of misdeeds: it’s a deepfake. Last year, a Taiwanese lawmaker suggested a grainy video that purportedly depicted him engaged in an extramarital affair was AI-generated. In July, a politician from India’s ruling Bharatiya Janata Party mounted a similar defence when audio of him accusing his own political faction of corruption leaked online. Despite subsequent reporting, the truth of the matter in both cases remains unresolved.
There’s an argument to be made that generative AI is a symptom of a broader collapse in our traditionally truth-bearing institutions, rather than some new and unique problem for democracy. The past decade has seen numerous destabilising political events, with misinformation and disinformation blamed as the culprits. In 2016, Brexit and the election of Donald Trump led to an international discourse about fake news, social media “filter bubbles” and state disinformation campaigns. Populist anger at COVID-19 lockdowns and vaccinations was similarly blamed on online misinformation, with institutions like the World Health Organization mounting public information campaigns against the pithily named “infodemic”.
It may well be the case that this is a slow death for the existing media and political establishment — or “regime”, as the new torchbearers of free speech would say — under a technological onslaught that began in earnest when Google started indexing and ranking the web for public consumption. We might ask not why people are inclined to believe fake images and videos that cross their internet feeds, but instead why they distrust anyone telling them otherwise.