The Promise of Ethical Machines
The prospect of artificial intelligence has long been a source of knotty ethical questions, typically about how humans can and should use advanced robots. What is missing from the discussion is the need to develop a set of ethics for the machines themselves, to enable them to operate autonomously.
STORRS, CONNECTICUT – The prospect of artificial intelligence (AI) has long been a source of knotty ethical questions. But the focus has often been on how we, the creators, can and should use advanced robots. What is missing from the discussion is the need to develop a set of ethics for the machines themselves, together with a means for machines to resolve ethical dilemmas as they arise. Only then can intelligent machines function autonomously, making ethical choices as they fulfill their tasks, without human intervention.
There are many activities that we would like to turn over entirely to autonomously functioning machines. Robots can do jobs that are highly dangerous or exceedingly unpleasant. They can fill gaps in the labor market. And they can perform extremely repetitive or detail-oriented tasks, which suit them far better than they suit humans.
But no one would be comfortable with machines acting independently, with no ethical framework to guide them. (Hollywood has done a pretty good job of highlighting those risks over the years.) That is why we need to train robots to identify and weigh a given situation’s ethically relevant features (for example, those that indicate potential benefits or harm to a person). And we need to instill in them the duty to act appropriately (to maximize benefits and minimize harm).