
How self-driving cars weigh the safety of pedestrians versus passengers has been an ongoing ethical debate. Fred Lum/The Globe and Mail

It's a fairly simple choice on the surface: Does a driver swerve to avoid a dog crossing the road?

If that's an easy choice, make it a little harder: Does a driver swerve to miss a pedestrian on the road if the driver knows his or her own life will be put in danger?

Millions of people around the world make these choices and others like them every day. Some die as a result.

When autonomous vehicles get on the road in large numbers in the next decade, machines will be making these decisions, giving rise to a growing debate about what ethical and moral choices should be programmed into self-driving cars.

"It's a huge issue," Bill Ford, chairman of Ford Motor Co., told a small group of reporters over dinner at the North American International Auto Show in Detroit. The discussion in the industry is all about hardware and software for autonomous vehicles and how soon they will be widely available, but "nobody's talking about ethics," Mr. Ford said.

"If this technology is really going to serve society, then these kinds of issues have to be resolved and resolved relatively soon," he said.

Auto makers and suppliers are spending billions of dollars developing technology to make cars autonomous – in the interests of making roads safer and reducing or eliminating the estimated 137 deaths every hour, every day of the year, from traffic accidents around the world.

There are now systems in place that will pull vehicles back into their lanes when they drift out of them; brake automatically if necessary; and warn drivers of cars in their blind spots.

But cars that drive themselves will have to make choices that are now made by humans.

The early data about the choices humans want those vehicles to make are not encouraging.

"People think cars should minimize total harm, but they don't want to buy cars that are going to diminish their own safety," said Iyad Rahwan, an associate professor at MIT who specializes in collective intelligence and the social aspects of artificial intelligence.

That's the indication from a series of surveys he and colleagues from France and Oregon conducted.

"Figuring out how to build ethical autonomous machines is one of the thorniest challenges in artificial intelligence today," they wrote in a paper published in Science magazine last June.

Mr. Ford is concerned about who will be responsible for setting standards and what those standards will be.

"Ultimately government's going to have to play a role but then you say, well just the U.S. government, how about the Chinese government," he said.

"I think we're going to have to have a global standard because we can't have different sets of ethics."

The U.S. government has set some high-level standards and this week set up a special committee of 25 people to advise the Department of Transportation on automation across a number of transportation systems. Advisers include General Motors Co. chairman Mary Barra, Los Angeles Mayor Eric Garcetti and Chesley 'Sully' Sullenberger, the former U.S. Airways pilot who landed a plane in the Hudson River.

Prof. Rahwan and MIT have set up an interactive website called Moral Machine that lays out 13 scenarios for potential crashes involving self-driving vehicles, passengers, pedestrians and animals and allows users to choose one of two outcomes in each of the scenarios.

The website has gone viral several times, Prof. Rahwan said, and researchers have collected 22 million decisions from 160 countries that they hope will help regulators decide how to program cars.

Information is still being collected, he said, but there are quantifiable differences in attitudes between North Americans and people from other regions.

The simulations include choosing whether an autonomous vehicle that has lost its brake functions should kill five pedestrians or five people in the vehicle.
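The Moral Machine site itself is the authoritative source for how those scenarios are framed; the sketch below only illustrates, with invented data, how a forced two-option dilemma of that kind could be recorded and how responses might be tallied by region, in the spirit of the 22 million decisions the researchers have collected. The field names, regions and vote counts are assumptions.

```python
# Illustrative only: tallying forced-choice responses to a two-option
# dilemma, in the spirit of the Moral Machine surveys. All data invented.

from collections import Counter

# Each response records the respondent's region and which outcome they chose:
# "spare_pedestrians" (the vehicle sacrifices its occupants) or
# "spare_passengers" (the vehicle stays its course).
responses = [
    ("North America", "spare_pedestrians"),
    ("North America", "spare_passengers"),
    ("Europe", "spare_pedestrians"),
    ("Europe", "spare_pedestrians"),
    ("Asia", "spare_pedestrians"),
    ("Asia", "spare_passengers"),
]

def share_sparing_pedestrians(responses, region):
    """Fraction of a region's respondents who would sacrifice the occupants."""
    votes = Counter(choice for r, choice in responses if r == region)
    total = sum(votes.values())
    return votes["spare_pedestrians"] / total if total else None

for region in ("North America", "Europe", "Asia"):
    print(region, share_sparing_pedestrians(responses, region))
```

Aggregating millions of such choices is what lets the researchers quantify the regional differences in attitude described above.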

It's morbid and uncomfortable, Prof. Rahwan acknowledged, but "we want people to feel the discomfort of those who are trying to regulate cars and trying to make these kinds of judgment calls on design choices that have societal implications."

He is worried about a backlash if the benefits wrought by autonomous vehicles are perceived to be unfair.

It will likely be too difficult for regulators to specify how these vehicles should react in every situation or even in many situations, he said.

What may be reasonable, he said, is for whoever sets the standards to insist that public safety come first, and for vehicle manufacturers to be scrutinized to make sure their cars don't cause more deaths or injuries than would otherwise be expected.
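One way a regulator could operationalize that kind of scrutiny is sketched below. The article does not describe any specific test, so this is purely an assumption-laden illustration: it compares a fleet's observed fatality count against what a human-driver baseline rate would predict for the same distance driven, and the baseline rate, mileage and death count are all invented figures.

```python
# Illustrative only: checking whether an autonomous fleet's fatality count
# exceeds what a human-driver baseline would predict. Figures are invented.

import math

def poisson_tail(observed, expected):
    """P(X >= observed) for X ~ Poisson(expected): the chance of seeing at
    least this many deaths if the fleet were no worse than the baseline."""
    return 1.0 - sum(math.exp(-expected) * expected**k / math.factorial(k)
                     for k in range(observed))

baseline_rate = 1.2e-8        # assumed human-driver deaths per vehicle-km
fleet_km = 2.0e9              # assumed distance driven by the fleet
observed_deaths = 35          # assumed fleet fatality count

expected_deaths = baseline_rate * fleet_km   # 24 deaths expected at baseline
p = poisson_tail(observed_deaths, expected_deaths)

print(f"expected at baseline: {expected_deaths:.1f}, observed: {observed_deaths}")
print(f"probability of at least this many deaths by chance: {p:.3f}")
```

A low probability would flag the fleet for closer review, which matches the spirit of the standard Prof. Rahwan suggests: not dictating every reaction, but checking that the cars don't do worse than what is already expected on the roads.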
