jellAIfish / jellyfish

This repository is inspired by Quinn Liu's repository Walnut.

what_moral_decisions_should_driverless_cars_make - ted talk #22

Closed · markroxor closed this issue 6 years ago

markroxor commented 6 years ago

https://www.ted.com/talks/iyad_rahwan_what_moral_decisions_should_driverless_cars_make

markroxor commented 6 years ago

So this is what we did. With my collaborators, Jean-François Bonnefon and Azim Shariff, we ran a survey in which we presented people with these types of scenarios. We gave them two options inspired by two philosophers: Jeremy Bentham and Immanuel Kant. Bentham says the car should follow utilitarian ethics: it should take the action that will minimize total harm -- even if that action will kill a bystander and even if that action will kill the passenger. Immanuel Kant says the car should follow duty-bound principles, like "Thou shalt not kill." So you should not take an action that explicitly harms a human being, and you should let the car take its course even if that's going to harm more people.
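The two philosophies read naturally as two different decision rules over the same set of candidate actions. Here is a minimal Python sketch, not from the talk: the `Action` fields, the `bentham_choice`/`kant_choice` names, and the dilemma numbers are all illustrative assumptions.

```python
from dataclasses import dataclass

@dataclass
class Action:
    name: str
    total_harm: int        # people harmed if this action is taken
    is_intervention: bool  # True if the car actively swerves/acts

def bentham_choice(actions):
    """Utilitarian policy: pick the action that minimizes total harm,
    no matter who is harmed or whether the car has to intervene."""
    return min(actions, key=lambda a: a.total_harm)

def kant_choice(actions):
    """Duty-bound policy: never take an explicitly harmful intervention;
    let the car stay its course even if that harms more people."""
    for a in actions:
        if not a.is_intervention:
            return a
    raise ValueError("no non-intervention action available")

# Hypothetical dilemma: staying the course harms 3 pedestrians,
# swerving harms 1 bystander.
dilemma = [
    Action("stay_course", total_harm=3, is_intervention=False),
    Action("swerve", total_harm=1, is_intervention=True),
]

print(bentham_choice(dilemma).name)  # -> swerve
print(kant_choice(dilemma).name)     # -> stay_course
```

The same scenario produces opposite answers under the two policies, which is exactly the tension the survey probes.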

markroxor commented 6 years ago

What do you think? Bentham or Kant? Here's what we found. Most people sided with Bentham. So it seems that people want cars to be utilitarian, minimize total harm, and that's what we should all do. Problem solved. But there is a little catch. When we asked people whether they would purchase such cars, they said, "Absolutely not."

markroxor commented 6 years ago

They would like to buy cars that protect them at all costs, but they want everybody else to buy cars that minimize harm.

markroxor commented 6 years ago

In our survey, we did ask people whether they would support regulation, and here's what we found. First of all, people said no to regulation; and second, they said, "Well, if you regulate cars to do this and to minimize total harm, I will not buy those cars." So ironically, by regulating cars to minimize harm, we may actually end up with more harm because people may not opt into the safer technology even if it's much safer than human drivers.

markroxor commented 6 years ago

In the 1940s, Isaac Asimov wrote his famous laws of robotics -- the three laws of robotics. A robot may not harm a human being, a robot may not disobey a human being, and a robot may not allow itself to come to harm -- in this order of importance. But after 40 years or so, and after so many stories pushing these laws to the limit, Asimov introduced the zeroth law, which takes precedence above all, and it's that a robot may not harm humanity as a whole.
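The ordering the talk describes is just a precedence hierarchy: check the zeroth law before the first, the first before the second, and so on. A small sketch, assuming a hypothetical set of effect flags attached to an action (the flag names are invented for illustration):

```python
# Laws in precedence order; earlier entries override later ones.
LAWS = [
    ("zeroth", "a robot may not harm humanity as a whole", "harms_humanity"),
    ("first",  "a robot may not harm a human being",       "harms_human"),
    ("second", "a robot may not disobey a human being",    "disobeys_human"),
    ("third",  "a robot may not allow itself to come to harm", "harms_self"),
]

def first_violated_law(action_effects):
    """Return the highest-precedence law the action would violate,
    or None if the action is permissible under all four laws."""
    for name, text, flag in LAWS:
        if flag in action_effects:
            return name, text
    return None

# An action that harms a human but protects humanity still violates
# the first law; only the zeroth outranks it.
print(first_violated_law({"harms_human"}))   # -> ('first', ...)
print(first_violated_law(set()))             # -> None
```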

markroxor commented 6 years ago

The zeroth law of Isaac Asimov.