
I came across this interesting post on the legal and insurance implications of Tesla's upcoming autonomous driving feature. The new feature would only work on interstate highways in the USA, but it would be a big step towards automated driving being widely used in everyday situations. From a technological perspective it is exciting.

The article notes that fully autonomous cars are not going to be common in the near future, which they follow with this stunning sentence:

If a circumstance arises where an accident is unavoidable — say, for instance, a child runs out into the street — the computers that control the car do not yet have the ethical reasoning to deduce whether they should sacrifice the driver by suddenly swerving away, or run down the child.

Now, this is an obvious oversimplification of a tricky and possibly unsolvable problem. They linked to some Wired articles which lay out the horrible ethical dilemma in question and offer some interesting discussion about what the ethics of a robotic car should be and who should decide them. Patrick Lin argues that drivers should not have control over the ethical settings a car would use to make choices in these situations; the idea of such settings is mostly to shield companies from liability and somehow make the driver responsible for a choice they made on a software screen months or years earlier. Meanwhile, Jason Millar argues the driver should have some input, and that the situation should be treated the way we treat informed consent about impossible choices in healthcare. They both make good points, but either way, asking for these preferences up front, rather than in the context of the moment, makes it a different moral question. You can't really know the answer until you're there.

Should your car always save you first, or pedestrians first, if it computes that there's no way to save both of you? How do you answer such a question? Perhaps more importantly, what if the car is wrong? What if the onboard AI has the wrong assessment of the situation and there was a way to save everyone? Then the ethics settings wouldn't really matter, would they?

Maybe the question designers need to ask themselves is:

“Do you believe in a no win scenario?”

We live in an age where the machines we interact with daily are becoming intelligent enough to do complex tasks for us, tasks that require training and licensing, like driving. We need to be careful not to fool ourselves into thinking these machines are just like us, that they can make human, ethical choices. We've accepted that the fact that machines can play chess better than us doesn't mean they're smarter or that we aren't logical. Driving a car is no different: being able to do it doesn't mean the car can make the hard choices for us.

A machine can only optimize its decisions based on the values it has been given and the information it has. In systems like robotic cars, much of that information is uncertain. The car has only an estimate of what is going on in the world as it drives and detects a person jumping into the road. It can estimate the odds of getting around the person and the probability of the driver surviving. It can do this faster than we can, but it does so using its own model and information, not ours.
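To make that concrete, here is a minimal, purely hypothetical sketch of what such an optimization looks like: the car scores each maneuver using survival probabilities its perception system estimates, weighted by values someone configured ahead of time. The maneuver names, probabilities, and weights here are all invented for illustration; a real system would be far messier and far more uncertain.

```python
# Hypothetical sketch: a robotic car "choosing" between maneuvers.
# All numbers and names are invented for illustration.

maneuvers = {
    # maneuver: (estimated P(driver survives), estimated P(pedestrian survives))
    "brake_straight": (0.95, 0.40),
    "swerve_left":    (0.60, 0.90),
}

# The "ethics settings" someone picked on a screen months earlier.
value_of_driver = 1.0
value_of_pedestrian = 1.0

def expected_value(p_driver, p_pedestrian):
    """Score a maneuver by value-weighted survival probabilities."""
    return value_of_driver * p_driver + value_of_pedestrian * p_pedestrian

best = max(maneuvers, key=lambda m: expected_value(*maneuvers[m]))
print(best)  # the machine optimizes a score; it does not make an ethical choice
```

The point of the sketch is that everything ethical lives in the weights and the estimates, both of which come from somewhere other than the moment itself.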

At this time, no machine can make ethical choices; it can only give a human the information needed to make one. Meanwhile, our onboard intelligent computer, our brain, has vastly more experience and wiring to help us make ethical choices. We are also able to discuss what morality means and to justify values based on our lives, the people we love, or our beliefs. A machine can do none of this until it has everything we have, until it is truly alive. We humans still get it wrong, quite a lot actually, but right now only humans can make ethical choices. Machines can only help us do it.

So what does this mean for robotic cars? We need to consider another option these two articles didn't: that cars cannot be fully automated until this ethical question is solved. It seems to me it will only be solved in one of two ways. We can all agree to accept robotic cars making the wrong choice with a certain frequency, as we accept the infrequent failure of a bridge or plane. Or perhaps someday we will have truly living, intelligent machines which understand morality and values just as well as we do. Although in that case it might be surprising that such entities would be happy to simply drive our cars for us. Perhaps a general-purpose, self-aware butler AI would do it as one of its many duties.

Until one of those happens, the driver of an autonomous robotic car could not take a nap or read a book while the car is driving. They could relax, listen to music or audiobooks, or talk on the phone; maybe they could read email on a heads-up dash display. But they would need to be alert and aware enough of the situation around them to make a quick ethical choice if the car asks.

This isn't a complete solution either, of course, since the car will still need a default to follow if the human fails to respond in time. And what if the human makes a decision that is clearly the wrong one? Will we really blame only them, and not the manufacturer of the car for allowing it? I don't know.
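For what it's worth, the "ask the human, but keep a default" idea is a familiar pattern in software. Here is a small, hypothetical sketch of it, with the prompt, options, and timeout invented for illustration; nothing here reflects how any real car actually works.

```python
# Hypothetical sketch: ask the driver, but fall back to a default
# if they do not respond within the time the situation allows.

import queue
import threading

def ask_driver(prompt, options, timeout_s, default):
    """Prompt the driver and return their choice, or the default on timeout."""
    answer = queue.Queue()

    def wait_for_input():
        # Stand-in for a real dashboard interface.
        answer.put(input(f"{prompt} {options}: "))

    threading.Thread(target=wait_for_input, daemon=True).start()
    try:
        choice = answer.get(timeout=timeout_s)
        return choice if choice in options else default
    except queue.Empty:
        return default  # the human didn't answer in time

decision = ask_driver("Unavoidable collision, choose:",
                      ["brake_straight", "swerve_left"],
                      timeout_s=1.5,
                      default="brake_straight")
print(decision)
```

Even in this toy version, someone still had to pick the default, which is exactly the problem we started with.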

The important thing in this discussion is to keep clearly in mind the distinction between machines, which make optimizing decisions, and humans, who can make ethical decisions. These are completely separate things, and no amount of ethical preference settings will make a car, or any machine, ethical until it knows what morality means. So let's not fool ourselves into thinking it can.
