A Precautionary Approach to Artificial Intelligence
Artificial intelligence is a perfect example of a “post-normal” scientific puzzle: one defined by empirical uncertainty, conflicting values, high stakes, and urgency. For such challenges, policymaking cannot afford to wait for science to catch up.
FLORENCE – For policymakers anywhere, the best way to make decisions is to base them on evidence, however imperfect the available data may be. But what should leaders do when facts are scarce or non-existent? That is the quandary facing those who must grapple with the fallout of “advanced predictive algorithms” – the building blocks of machine learning and artificial intelligence (AI).
In academic circles, AI-minded scholars are either “singularitarians” or “presentists.” Singularitarians generally argue that while AI technologies pose an existential threat to humanity, their benefits outweigh the costs. But although this group includes many tech luminaries and attracts significant funding, its academic output has so far failed to make that calculus convincing.
On the other side, presentists tend to focus on the fairness, accountability, and transparency of new technologies. They are concerned, for example, with how automation will affect the labor market. But here, too, the research has been unpersuasive. For example, MIT Technology Review recently compared the findings of 19 major studies of predicted job losses, and found that forecasts for the number of jobs “destroyed” globally range from 1.8 million to two billion.