Science fiction writer Arthur C. Clarke once observed that any sufficiently advanced technology is indistinguishable from magic.

Take, for instance, self-driving cars. Pure magic. How do they get a car to navigate safely through traffic, without a driver, when we intelligent human drivers slam into one another, and roughly 1.25 million people worldwide (about the population of Dallas, Texas) die in road crashes every year?

And why is it taking so long for self-driving cars to take over our roads? So many lives could be saved!

Here is the rather sad answer.

Technology is developed by engineers, often building on research done by scientists. Technology, engineers learn, is about things – bits, bytes, code, microprocessors, computer chips, and so on. Engineers make magical things.

But there is a problem. Those magical things are used by humans, and that is where the trouble begins. Things are predictable. Humans often are not. Put technology together with the humans using it, and you may get mayhem, or worse.

A New York Times report by Neal E. Boudette (July 19, 2019) ran under the headline “Optimism fades for self-driving automobiles”. Boudette recounts that in a recent speech, Jim Hackett, CEO of Ford, admitted that “we overestimated the arrival of autonomous vehicles”. And Bryan Salesky, CEO of Argo AI, a Pittsburgh startup that develops autonomous-vehicle technology, said the industry’s promise of driverless cars that could go anywhere was “way in the future”.

Why “way in the future”? What happened to the recent hype that claimed driverless cars would soon be everywhere on our roads, saving many thousands of lives? After all, self-driving cars do not drink and drive – ever.

It’s “human behavior”, Salesky said. People. Technology alone is magic. Technology plus people equals… well, who knows?

According to The New York Times, Salesky said, “You see all kinds of crazy things on the road, and it turns out that they’re not all that infrequent, but you have to be able to handle all of them [with self-driving cars]. With radar and high-resolution cameras and all the computing power we have, we can detect and identify the objects on a street. The hard part is anticipating what they’re going to do next.”

Autonomous-vehicle technology is roughly 80% ready. But the remaining 20%, the part that must anticipate what drivers, cyclists, scooter riders, and pedestrians will do next? That is still a long way off.

Here’s the problem. Suppose, in a fictitious world, every single truck and car were self-driving. Since each vehicle would run essentially the same software, it could tell with absolute certainty what the other vehicles would do. How? That Audi ahead, facing me, waiting to turn left: will it wait for me to pass, or turn first? Simple. What would I do? I would wait. So it, too, will wait, because its brain is the same as mine. Good. Simple.

If the technology only interacts with things, the magic is real simple.
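
To see why that all-self-driving world is so easy, here is a minimal toy sketch in Python (hypothetical names, not any real autonomous-vehicle software): when every vehicle runs the same deterministic policy, predicting the other car amounts to running your own policy from its point of view.

    # Toy sketch only: one shared, deterministic driving policy.
    # In a world where every vehicle runs this exact policy, each car can
    # predict the others simply by running the same policy on their situation.

    def shared_policy(oncoming_traffic: bool) -> str:
        """The single policy every vehicle runs. Same input, same decision."""
        return "wait" if oncoming_traffic else "turn"

    # The Audi, waiting to turn left across my path, decides:
    audi_action = shared_policy(oncoming_traffic=True)

    # I predict the Audi by asking what I would do in its place:
    my_prediction = shared_policy(oncoming_traffic=True)

    assert my_prediction == audi_action  # always true: its "brain" is the same as mine
    print(audi_action)  # -> wait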

But wait. What if, in the real world, some cars are self-driving and some are human-driven? And of course, that will be the situation everywhere. That Audi turning left: is it driven by an impetuous, macho 18-year-old who drives as if the road were a mixed-martial-arts arena? By an 80-year-old great-grandmother who gets stopped by police for driving 30 mph in a 65 mph zone? Or by another self-driving vehicle?

Maybe that Audi could identify itself and its driver with facial recognition? Sure, and the self-driving car approaching it would then have to run a complex personality background check in a split second? It won’t work.

How about super-extra-hyper-caution? Assume the worst: he’s going to cut in front of me. That strategy will take you 45 minutes to drive half a mile. The human safety driver will take over in frustration, and the rest is predictable.

So, where do we stand? Slow and steady. Operate driverless cars in controlled environments, at slow speeds, like the shuttles in the Brooklyn Navy Yard that go 25 mph, or the six-passenger golf carts that travel short, defined routes, also at 25 mph or less, in Detroit, Providence, RI, and Columbus, OH.

In statistics, there is a rule of thumb. If you want to apply statistical inference, to test a hypothesis, you generally need a sample of at least about 30 observations – N=30 or so. But in the world of advanced technology, N=1 rules. A woman walking a bicycle across a street in Tempe, Arizona, was killed when she was struck by a self-driving Uber test car. The car had a safety driver – but she was watching a TV show on her phone at the time. That incident – N=1 – was enough to “reset expectations”, one expert said. Later, three Tesla drivers died in crashes while using the Autopilot driver-assistance system, when neither they nor the system detected and responded to the danger in time.
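
To make the N=30 versus N=1 contrast concrete, here is a tiny Python sketch with synthetic numbers (purely illustrative, not crash data): a sample of about 30 lets you estimate an average and put rough error bars on it, while a sample of one tells you almost nothing statistically – and yet that one incident can reset an entire industry’s expectations.

    # Illustration only: synthetic data, not real crash statistics.
    import random
    from statistics import mean, stdev

    random.seed(0)
    sample = [random.gauss(100, 15) for _ in range(30)]  # n = 30 synthetic measurements

    m, s = mean(sample), stdev(sample)
    stderr = s / len(sample) ** 0.5
    print(f"n=30: mean ~ {m:.1f}, rough 95% interval ({m - 2*stderr:.1f}, {m + 2*stderr:.1f})")

    # With n = 1 you cannot even estimate the spread (stdev needs at least two points),
    # so classical inference is off the table; yet one incident still dominates headlines.
    print(f"n=1: single observation = {sample[0]:.1f}, spread undefined")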

With today’s social media, N=1 incidents are amplified instantly, worldwide. Roughly 1.25 million people are killed in car crashes every year, most of them preventable. Maybe that number could be cut in half, or better, if half the world’s vehicles were autonomous. But N=1 mishaps, together with Technology + People, imply that it won’t happen soon. Somehow, one Uber crash caused by one distracted safety driver causes massive alarm, while the US records over 40,000 traffic deaths for the third straight year – and nobody seems to care much.

It’s that human factor, I suppose.