The False Promise of “Ethical AI”
Responding to growing demands for more accountability in the development and deployment of artificial intelligence, public policymakers have signed on to the fashionable push for "ethical AI." Yet by adopting what amounts to a euphemism for inaction, they are playing directly into the industry's hands.
WARSAW – The use of algorithms “in the wild” to measure, quantify, and optimize every aspect of our lives has led to growing public concerns and increased attention from regulators. But among the list of responses are some impractical ideas, not least those being promoted under the banner of “ethical AI.”
It is understandable that public authorities would want to mitigate the downsides of certain applications of artificial intelligence, particularly those associated with increased surveillance, discrimination against minorities, and wrongful administrative decisions. But cash-strapped governments are also eager to embrace any technology that can deliver efficiency gains in the provision of public services, law enforcement, and other tasks. The tension between these two priorities has shifted the debate away from law and policymaking, and toward the promotion of voluntary best practices and ethical standards within the industry.
So far, this push, which has been championed by public bodies as diverse as the European Commission and the US Department of Defense, revolves around the concept of “algorithmic fairness.” The idea is that imperfect human judgment can be countered, and social disputes resolved, through automated decision-making systems in which the inputs (data sets) and processes (algorithms) are optimized to reflect certain vaguely defined values, such as “fairness” or “trustworthiness.” In other words, the emphasis is placed not on politics, but on fine-tuning the machinery, whether by debiasing existing data sets or creating new ones.