An AI-generated image (Image: DALL-E)

The cliché of a “rise of the machines” is as old as science fiction itself. Beginning with Mary Shelley’s Frankenstein in 1818, and appearing in all manner of guises — from Karel Čapek’s Universal Robots, to Clarke and Kubrick’s HAL 9000, to the android matriarch in I Am Mother — it shadows the genre with all the tenacity of Arnie’s Uzi-toting Terminator. Machine intelligence, the idea goes, is destined to eclipse our own, leading to a struggle for power between smart machines and their human creators — a mechanical mutiny.

Recent developments have given credence to this rather hoary notion. Last year, for example, the visual artist Supercomposite (real name Steph Maj Swanson) became briefly famous in tech circles when her experiments with “negative prompt weights” (instructions to an image-generating AI to create an image as different from the prompt as possible) turned up the character of “Loab” — a devastated-looking woman who proved nearly impossible to eliminate from subsequent images, and whose haunting likeness became increasingly violent and gory as the experiment progressed.
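For readers curious about the mechanics, something like Swanson’s experiment can be sketched with the open-source Hugging Face diffusers library. Her actual tool and settings are not public, and the library’s negative_prompt argument steers generation away from a text description rather than applying a literal negative weight, so what follows is an illustrative approximation rather than her method:

    # Illustrative sketch only: Swanson's actual tool and settings are not public.
    # diffusers' `negative_prompt` pushes generation *away* from a text, which
    # approximates (but is not identical to) a true negative prompt weight.
    import torch
    from diffusers import StableDiffusionPipeline

    # Any Stable Diffusion checkpoint will do; this one is assumed for illustration.
    pipe = StableDiffusionPipeline.from_pretrained(
        "runwayml/stable-diffusion-v1-5",
        torch_dtype=torch.float16,
    ).to("cuda")

    # An empty positive prompt combined with a negative prompt asks the model,
    # in effect, for whatever is least like the negative text.
    result = pipe(
        prompt="",
        negative_prompt="Brando",  # hypothetical stand-in text
        num_inference_steps=50,
        guidance_scale=7.5,
    )
    result.images[0].save("least_like_the_prompt.png")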

More recently, the emergence of ChatGPT has led Elon Musk and other would-be Oppenheimers to refresh their warnings about “strong AI”: that which is not merely intelligent but capable of human-like reflection. Even if no one is claiming (yet) that we are facing a Terminator-style Judgment Day, Alan Turing’s measure of machine intelligence — whether or not a machine can pass as human in conversation — has arguably been decisively breached. In short, things are getting distinctly weird.

Nevertheless, the rise-of-the-machines scenario isn’t as relevant as it may seem at first blush. In fact, there’s no reason to think smart machines are actually thinking, if by “thinking” we mean the mental activity engaged in by humans. This is best expressed through another movie cliché, one drawn not from science fiction but from horror: we are waiting for the monster to materialise in the doorway, when in fact it was in the room all along.

Technological somnambulism

Monsters can be defined as liminal beings. They exist between categories, in a way that subverts, or threatens to subvert, the moral, social or natural order. This is what Freud called the “uncanny”: the monster creeps us out because it doesn’t quite fit with our received ideas of how things should look, especially when the thing in question has some features of humanness. Vampires, zombies and evil clowns all provoke this psychic disturbance, and so too, in a weak way, does the rogue AI — the non-human machine that, in its intelligence, appears to be gaining on its human counterparts.

But it is not the uncanniness of AIs that should worry us. Rather, it is the uncanniness that may descend (and may already be descending) on humans as they grow ever more reliant on smart machines and transformative technologies. With so much of their social and creative lives now shaped by powerful tools, humans could well become permanently alienated from one another and from their own humanity. This is where the real monsters lurk.

In my new book Here Be Monsters: Is Technology Reducing Our Humanity? (published May 1 by Monash University Publishing), I consider how recent developments in artificial intelligence, biotechnology, bio-hacking and pharmaceuticals may be affecting us. How might new communication technologies be shaping human social life, and what consequences might that have for social solidarity and individual selfhood? How might developments in biotechnology alter the notions of luck and equality on which social solidarity (arguably) rests? And how might the increasing tendency towards algorithmic complexity — the creation of a “black box society” — affect human creativity and agency?

What makes these questions critical is the character of emerging technologies. Of course, many technologies have changed very little over the past half-century. The car is still substantially the same machine it was in 1973, while the airliner has barely changed at all in that time, except in terms of power and efficiency.

But technologies emerging now are of a different order than those based on the internal combustion engine. Adept for many centuries at bending capricious nature to our will, we can now intervene directly in it, and do so at the level of the atom, the cell and the molecule. We have moved from harnessing nature’s power to actively reconstituting nature itself.

Unfortunately, these developments in technology have not been accompanied by a public conversation about whether or not they are good for us. Technology is developed within an institutional framework that protects it from humanistic oversight. Science, which used to enjoy a degree of independence from technological considerations, is now largely subordinated to those considerations, which are subordinated, in turn, to the profit motive.

The modern university sits at the heart of this framework, where commercial research arms, industrial parks and an emphasis on the marketisation of knowledge through patents and licensing keep research and development on a neoliberal trajectory. Technological development just seems to… happen. Such discussion as there is about emerging technologies is suffused with a sense of inevitability, even a kind of fatalism.  

We need to develop a more “techno-critical” attitude to new and emerging technologies, and indeed to technologies in general. We need to reject “technological somnambulism” (as the political theorist Langdon Winner dubs this sense of inevitability) in the name of a democratic “technics” that puts human freedom and flourishing at its core.

Such democratisation does not merely entail the socialisation of private technologies, though that would undoubtedly form part of the process. It would entail thinking deeply about those technologies from a radically humanistic perspective, about what they add to our lives and what they take away. It would mean asking what we want from technology, which is a way of asking who we are. To put it in the words of Lewis Mumford, a philosopher of technology from a very different era, it would mean rediscovering “the human scale”.

Everyman is a handyman  

Of course, human beings are technological animals. The use of stone tools predates the emergence of Homo sapiens by some 3 million years — there is no period of our history that does not know the use of tools. Just as anyone reading this sentence was using technology before they woke up this morning (technology in the form of blankets, a bed, a house, an alarm clock, etc), so our species “woke up” in technology. We are Homo faber — man the maker. Not knowing one end of a power drill from the other is no disqualification from this condition. Everyman is a handyman.

But humans are not just the users of their tools. In complex ways, they are also their products. Contra the Silicon Valley view that technologies are politically “neutral” phenomena that can be used for good or evil ends, the technologies we use have the power to shape the way we live in community with each other, which will in turn shape the kinds of technologies we decide (or others decide) to create in the future. In this sense, technologies are themselves political. And yet our politics has little to say about them.  

As we stand on the edge of a revolutionary era of technological development, it is necessary to bring technology to the centre of our deliberations, as part of a broader examination of the kind of entity humankind is.

We need to reopen the question of technology, on pain of getting monstered by it.

Richard King’s book Here Be Monsters: Is Technology Reducing Our Humanity? is out May 1 through Monash University Publishing.