Who Should Decide How Algorithms Decide?
Contrary to sci-fi dystopias in which the machines become conscious and take over, artificial-intelligence applications will do only what humans tell them to do. So it is in everyone's interest to consider how technologies such as self-driving cars will navigate life-or-death ethical dilemmas in the real world.
CAMBRIDGE – Over the past few years, the MIT-hosted “Moral Machine” study has surveyed public preferences regarding how artificial-intelligence applications should behave in various settings. One conclusion from the data is that when an autonomous vehicle (AV) encounters a life-or-death scenario, how one thinks it should respond depends largely on where one is from, and what one knows about the pedestrians or passengers involved.
For example, in an AV version of the classic “trolley problem,” some might prefer that the car strike a convicted murderer before harming others, or that it hit a senior citizen before a child. Still others might argue that the AV should simply roll the dice so as to avoid data-driven discrimination.
Generally, such quandaries are reserved for courtrooms or police investigations after the fact. But in the case of AVs, choices will be made in a matter of milliseconds, which is not nearly enough time for a human to reach an informed decision. What matters is not what we know, but what the car knows. The question, then, is what information AVs should have about the people around them, and whether firms should be allowed to offer different ethical systems in pursuit of a competitive advantage.