The wait for self-driving cars has been agonising. In 2012, we were told self-driving cars were just five years away. In 2017, Elon Musk pledged progress within “three months maybe, six months definitely.”

Here we are in 2018, and only a handful of semi-autonomous cars are on the road. And they are failing us in awful ways. Semi-autonomous vehicles from both Tesla and Uber have been involved in fatal crashes this year while operating under computer control. In both cases, a human driver was present but seemingly overconfident in the system’s capabilities.

Robot cars just aren’t that good at some things humans take for granted — like seeing things straight in front of them. Machines can’t see the patterns we can see, and don’t respond in the same ways. They still can’t handle our roads.

This is a good moment to step back and ask why. How did we come to overestimate machines’ capacity for this task?

Pattern repeating

When it comes to the question of robots taking over the world, humans turn out to be slow learners. We keep making the same mistake: assuming a Jetsons-style future is close at hand.

In the 1980s, a young robotics researcher named Hans Moravec had an insight about the robots he was working on. They could do “hard” tasks well, things like calculating, memorising and playing chess, but not “easy” tasks. A robot can’t (or at least couldn’t then) walk up stairs or work a door handle.

This has come to be known as Moravec’s paradox. He speculated that the reason lay in evolution.

“We are all prodigious olympians in perceptual and motor areas, so good that we make the difficult look easy. Abstract thought, though, is a new trick, perhaps less than 100,000 years old. We have not yet mastered it. It is not all that intrinsically difficult; it just seems so when we do it.”

Yes, Moravec is calling you stupid. And me, and himself.

We see ourselves as smart because we’re smarter than all the other animals. Meanwhile, we’re no match for most other animals physically, so we discount those capacities. But in fact, we remain very much like them. We have far more in common with a chimpanzee than with a central processing unit. You can’t shake off billions of years of evolution that easily.

I may be one of the clumsiest beings in the animal kingdom (you should see me attempt sports) but that turns out to be a tremendous lead over where computers are at.

Robots taking your jobs

Truck drivers, it is now widely believed, will be an endangered species within a few years. Robots will take their jobs swiftly, decisively and totally, with horrible consequences for the economy.

Humans tend to rank jobs by the abstract cognitive ability they demand. We see some jobs as hard, like being a surgeon, and others as easy, like driving a truck, which is manual and somewhat routine.

We might assume early robots will take jobs with low cognitive ability requirements, while later, more advanced robots will take the cognitively demanding jobs.

In fact, the opposite is happening: jobs in accounting and finance are being stripped away by computerisation. Companies like MYOB and Xero are stealing business from local accountants with their handy software, while algorithms are taking the jobs of hedge fund traders.

Making the world safe for robots

Robots can, of course, do manual work far faster than a human, in the right circumstances. Driverless trains already run, cheaply and tirelessly, but only on separated tracks. We can make robots work for us when we remove physical complexity and the need for them to make judgements.

To me, this implies that we will have to make the world safe for driverless cars in order for them to make it safer for us. They will need highly regular, highly predictable highways and roadways on which to drive.

So I am highly sceptical that robot drivers will be a serious presence in our city centres any time soon. The narrow streets of those centres, crowded with pedestrians and cyclists, will likely remain the domain of our animal brains for a long time to come.