
In the popular Terminator movies, a relentless super-robot played by Arnold Schwarzenegger tracks and attempts to kill human targets. It was pure science fiction in the 1980s. Today, killer robots that hunt down targets are not only a reality; they are being sold and deployed on the battlefield.

These robots aren’t cyborgs, like in the movies, but autonomously operating killer drones. The new Turkish-made Kargu-2 quadcopter drone can allegedly track and kill human targets autonomously, on the basis of facial recognition and artificial intelligence, a big technological leap beyond the drone fleets that require remote control by human operators.

A United Nations Security Council report claims the Kargu-2 was used in Libya to mount autonomous attacks on human targets. According to the report, the Kargu-2 hunted down retreating logistics and military convoys, “attack[ing] targets without requiring data connectivity between the operator and the munition”.

The burgeoning availability and rapidly expanding capabilities of drones pose urgent challenges to all of humanity. First, unless we agree to halt their development and distribution, autonomous killer drones like the Kargu-2 will soon be affordable and operable by anyone — from rogue states all the way down to minor criminal gangs and individual psychopaths.

Second, swarms of killer drones may, through sheer numbers, render irrelevant the defences against terrorist threats deployed by technologically advanced nations.

Third, by creating a challenging new asymmetry in warfare, autonomous killer drones threaten to upset the balance of power that otherwise keeps the peace in various regions. The increasing ubiquity of affordable drones is an open invitation for one power or another to turn stable regions into battle zones.

The arrival and rapid proliferation of robot-like killer drones comes as no surprise. For decades, consumer technology has been outpacing military adoption of advanced technologies. Because a drone is essentially a smartphone with rotors, today’s affordable consumer drones are largely a byproduct of the rapid development of smartphone technology. They are making access to the third dimension essentially free and creating new commercial opportunities: drones can already deliver groceries and medical supplies directly to your doorstep.

But endowing drones with human-like cognitive abilities — for instance, by combining rapidly improving facial recognition with artificial intelligence (AI) — will make powerful targeted weapons available to tin-pot despots, terrorists, and rampaging teenagers at a fraction of the cost of the fancy drones flown by the United States military. And unless we take concrete steps now to oppose such developments, instructions to turn cheap off-the-shelf drones into automated killers will be posted on the internet in the very near future.

To date, AI has struggled to identify objects and faces accurately in the field. Its algorithms are easily confused when an image is slightly modified, for instance by adding text. An image-recognition system that correctly identified an apple as a fruit was tricked into labelling the apple an iPod simply by taping to it a little strip of paper with the word “iPod” printed on it.

Protesters in Hong Kong have used sparkly paint on their faces to confound government facial-recognition efforts.

Environmental factors, such as fog, rain, snow and bright light, can dramatically reduce the accuracy of AI-based recognition systems. At their present level of development, such systems might therefore be confounded by relatively simple countermeasures, offering a possible defence against drones. But to actors who already place a low value on collateral damage and innocent victims, accuracy is not much of a concern. Their drones might be programmed to kill anyway.

What’s more, any defence against the drones zeroing in on individual targets does not prevent their deployment as new weapons of mass destruction. A swarm of drones bearing explosives and dive-bombing a sports event or populated urban area could kill numerous people and would be hard to stop.

Various companies are now selling drone countermeasure systems with different strategies to stop rogue flying objects, and advanced militaries have already deployed electronic countermeasures to interrupt the drones’ control systems. But so far, shooting down even one drone remains a challenge.

Although Israel recently demonstrated an impressive flying laser that can vaporise drones, shooting down an entire swarm of them is still well beyond our capabilities. And with the new generation of autonomous drones, simply blocking communication to the drones is not enough. It may be critical to develop ways to safely bring them back to Earth in order to avert random chaos and harm.

To a group intent on causing significant damage, autonomous drones open an entire new field of possibilities. Imagine attacks on 100 different locations in a single day — the effect of the 9/11 terrorist attacks on the US could pale in comparison.

Though all countries are at risk of killer-drone attacks, the most likely victims of the first wave of these weapons are poorer countries with porous borders and weak law enforcement. The gap between rich and poor states that shaped the toll of COVID-19 will likely also shape vulnerability to autonomous drones. The first such battles are more likely to play out in Africa than in America, and with heavier tolls.

The companies producing the new wave of autonomous flying weapons are heavily marketing their wares. Meanwhile, the US and China have thus far refused to back calls for a ban on the development and production of fully autonomous weapons. Washington and Beijing are thereby providing a cover of tacit legitimacy for weapons makers and governments deploying the new killer drones in the field.

Admittedly, the drone threat cuts both ways. Autonomous or semi-autonomous drones have been used to tip the battlefield balance against rogue states; in Syria, rebel groups have used drones as an asymmetrical weapon against Russian-made armour, destroying multimillion-dollar tanks with cheap drones.

But the risk of drones ending up in the hands of malevolent actors and being deployed as indiscriminate weapons of mass destruction far outweighs their possible military benefits in guerrilla warfare. Asymmetrical warfare disproportionately benefits forces of chaos rather than forces of liberty.

It is not too late to place a global moratorium on killer robots of all kinds, including unmanned aerial vehicles. This would require a change of strategy on the part of the great powers. But any such moratorium should apply only to offensive systems; all manner of anti-drone defences should be allowed.

As part of a ban, wealthy governments should consider subsidising the purchase of drone-defence systems by poorer countries and teaching them how to defeat drone swarms. Drone technology is a global problem that humanity should address together.

In the Terminator series, the killer robot is eventually destroyed. In reality, the battle to confine AI to constructive uses will never end. Its application to killer drones is merely the first short chapter in a long book of AI as an agent of chaos.

Whether we choose to act to close that book will determine the kind of future we hand to our children. It will be either a world of Terminator chaos, with the spectre of death around every corner, or a world like Star Trek, where humanity’s cohesive efforts channel technology to prioritise social good and prosperity over conflict and violence.

Vivek Wadhwa is a columnist at Foreign Policy, a distinguished fellow at Harvard Law School’s Labor and Worklife Program, and the co-author of From Incremental to Exponential: How Large Companies Can See the Future and Rethink Innovation. He tweets at @wadhwa.

Alex Salkever is a technology writer, futurist, and the co-author of From Incremental to Exponential: How Large Companies Can See the Future and Rethink Innovation. He tweets at @AlexSalkever.