The Ethics Of Self-Driving Cars Making Deadly Decisions

Self-driving cars are popping up everywhere as companies slowly test and improve them for the commercial market. Heck, Google’s self-driving car actually has its very own driver’s license in Nevada! Accidents have been minimal so far, and in most of them the autonomous car reportedly wasn’t at fault. But once autonomous cars are widespread, accidents are inevitable. And what happens when your car has to decide whether to save you or a crowd of people? Ever think about that before?

It’s a valid concern, and it raises a huge ethical issue. In the rare circumstance that the car has to choose the “best” outcome, what determines that? Minimizing the loss of life? Even if that means crashing into a wall and mortally injuring you, the driver? Maybe car manufacturers will finally have to make ejection seats a standard feature!
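
To make that “minimize the loss of life” criterion concrete, here’s a minimal sketch, in Python, of what a purely utilitarian last-resort planner could look like. Everything in it is invented for illustration (the maneuver names, the casualty estimates, the equal weighting of occupants and pedestrians); no manufacturer has said their cars actually work this way.

```python
# Purely illustrative: a naive "utilitarian" planner that picks whichever
# unavoidable-crash maneuver minimizes expected fatalities, occupants included.
# All names and numbers here are made up for the example.
from dataclasses import dataclass


@dataclass
class Maneuver:
    name: str
    expected_pedestrian_deaths: float  # hypothetical estimate from the car's perception stack
    expected_occupant_deaths: float


def choose_maneuver(options: list[Maneuver]) -> Maneuver:
    """Pick the option with the fewest expected deaths, weighting everyone equally."""
    return min(
        options,
        key=lambda m: m.expected_pedestrian_deaths + m.expected_occupant_deaths,
    )


if __name__ == "__main__":
    options = [
        Maneuver("continue straight", expected_pedestrian_deaths=5.0, expected_occupant_deaths=0.0),
        Maneuver("swerve into wall", expected_pedestrian_deaths=0.0, expected_occupant_deaths=1.0),
    ]
    print(choose_maneuver(options).name)  # prints "swerve into wall"
```

The uncomfortable part isn’t the code, which is trivial; it’s deciding that the occupant term and the pedestrian term deserve equal weight in the first place.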

If that kind of utilitarian programming is standard on all commercial autonomous vehicles, are consumers really going to buy a car that, with some 0.00001% probability, is programmed to kill you in order to save pedestrians? Would you even be able to make that choice if you were the one driving?

Well, researchers decided to poll the public, and therein lies the paradox: people do think autonomous cars should sacrifice the needs of the few for the needs of the many… as long as they don’t have to drive one themselves. There’s actually a name for this kind of predicament. It’s called the Trolley Problem.

In scenario 1, imagine you are driving a trolley down a set of railway tracks towards a group of five workers. Your inaction will result in their death. If you steer onto the side track, you will kill one worker who happens to be in the way.

In scenario 2, you’re a surgeon. Five people require immediate organ transplants. Your only option is to fatally harvest the required organs from a perfectly healthy sixth patient, who has not consented. If you do nothing, the five patients will die.

The two scenarios are identical in outcome. Kill one to save five. But is there a moral difference between the two? In one scenario, you’re a hero — you did the right thing. In the other, you’re a psychopath. Isn’t psychology fun?

As the authors of the research paper put it, their surveys:

suggested that respondents might be prepared for autonomous vehicles programmed to make utilitarian moral decisions in situations of unavoidable harm… When it comes to split-second moral judgments, people may very well expect more from machines than they do from each other.

Fans of Isaac Asimov may find this tough reading: when asked if these decisions should be enshrined in law, most of those surveyed felt that they should not. Although the idea of creating laws to formalize these moral decisions had more support for autonomous vehicles than for human-driven ones, it was still strongly opposed.

This does create an interesting grey area, though: if these rules are not enforced by law, who do we trust to create the systems that make these decisions? Are you okay with letting Google make these life-and-death calls?

What this really means is that before autonomous cars go commercial, the public is going to have to decide what’s really “OK” for autonomous cars to do (or not do). It’s an interesting problem, and if you’d like to read more, check out this other article about How to Help Self-Driving Cars Make Ethical Decisions.

Might have you reconsider building your own self-driving vehicle then, huh?

[Research Paper via MIT Technology Review]


Filed under: car hacks, robots hacks, transportation hacks
