Amid all the buzz about vehicles that drive themselves, there are serious ethical questions facing regulators, manufacturers and the people who will ride in them. If faced with an unavoidable fatal crash, would the car be programmed to save its occupants at all costs or would it sacrifice its passengers for the greater good of saving a group of pedestrians?
“There’s this trade-off between the interests of the driver, or rather the passenger who buys the car, and the level of public acceptance versus public outrage,” says Azim Shariff of the Culture and Morality Lab at the University of Oregon. Along with researchers from France and the Massachusetts Institute of Technology, Shariff set out to test public attitudes on the cold, hard decisions computer programs will have to make when lives are on the line.
Shariff says it’s important that the question is addressed before fully autonomous vehicles begin filling streets and highways in the coming years.
“Car companies are not going to be able to figure out what to do unless they know people’s psychology on this.”
Researchers created several scenarios in which a driverless car would have to choose between saving its passengers and sacrificing them to save a larger number of pedestrians. About 900 adults were surveyed on what they would want the car to do.
The situations involved a car hurtling toward a group of up to 10 pedestrians, with the only options being to hit them or to swerve into a barrier, killing the passenger inside. The number of pedestrians varied among scenarios, as did whether the person facing that situation was in the car, one of the pedestrians or a casual observer.