
Should A Self-Driving Car Kill Its Passengers In A “Greater Good” Scenario?


Picture the scene: You’re in a self-driving car and, after turning a corner, find that you are on course for an unavoidable collision with a group of 10 people in the road with walls on either side. Should the car swerve to the side into the wall, likely seriously injuring or killing you, its sole occupant, and saving the group? Or should it make every attempt to stop, knowing full well it will hit the group of people while keeping you safe?


This is a moral and ethical dilemma that a team of researchers has explored in a new paper posted to the arXiv preprint server, led by Jean-Francois Bonnefon of the Toulouse School of Economics. They note that some accidents like this are inevitable as self-driving cars become more common – and what the cars are programmed to do in these situations could play a huge role in public adoption of the technology.

“It is a formidable challenge to define the algorithms that will guide AVs [Autonomous Vehicles] confronted with such moral dilemmas,” the researchers wrote. “We argue to achieve these objectives, manufacturers and regulators will need psychologists to apply the methods of experimental ethics to situations involving AVs and unavoidable harm.”


In their paper, the researchers surveyed several hundred people on Amazon’s Mechanical Turk, an online crowdsourcing tool. They presented the participants with a number of scenarios, including the one mentioned earlier, and also altered the number of people in the car, the number of people in the group, the age of the people in the car (to include children), and so on.

The results are perhaps not too surprising; on the whole, people were willing to sacrifice the driver in order to save others, but most were only willing to do so if they did not consider themselves to be the driver. While 75% of respondents thought it would be moral to swerve, only 65% thought the cars would actually be programmed to swerve.

[Image: How much control are we willing to part with? Credit: RioPatuca]

“On a scale from -50 (protect the driver at all costs) to +50 (maximize the number of lives saved), the average response was +24,” the researchers wrote. “Results suggest that participants were generally comfortable with utilitarian AVs, programmed to minimize an accident’s death toll.”
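To make the idea of a "utilitarian AV" concrete, here is a minimal, purely illustrative sketch of a decision rule that picks whichever maneuver minimizes the expected death toll. The function name, maneuver labels, and numbers are all hypothetical examples for the scenario described above – real autonomous-vehicle planners are far more complex and do not reduce to a lookup like this.

```python
# Toy sketch of a "utilitarian" decision rule: choose the maneuver
# with the lowest expected number of deaths. All names and figures
# here are illustrative assumptions, not a real AV algorithm.

def choose_maneuver(options):
    """Return the maneuver whose expected death toll is lowest.

    `options` maps a maneuver name to its expected number of deaths.
    """
    return min(options, key=options.get)

# The article's scenario: swerving into the wall kills the sole
# passenger (1 death); braking into the group risks 10 deaths.
options = {"swerve_into_wall": 1, "brake_into_group": 10}
print(choose_maneuver(options))  # -> swerve_into_wall
```

A strictly utilitarian rule like this always sacrifices the passenger whenever doing so saves more lives – which is precisely the outcome most survey respondents endorsed in the abstract but were reluctant to accept for a car they themselves were riding in.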

As MIT Technology Review notes, however, self-driving cars themselves are still inherently safer than human drivers – and perhaps that in itself creates a new dilemma. “If fewer people buy self-driving cars because they are programmed to sacrifice their owners, then more people are likely to die because ordinary cars are involved in so many more accidents,” the MIT article says. “The result is a Catch-22 situation.”

