
Which AI Risks Matter?

Although overly confident predictions about artificial intelligence are as old as the field itself, there is good reason to take seriously the risks associated with the technology. Like a future asteroid strike, the emergence of an intelligence that could marginalize humanity may seem remote, but it is as plausible as human evolution itself.

CAMBRIDGE – The so-called gorilla problem haunts the field of artificial intelligence. Around ten million years ago, the ancestors of modern gorillas gave rise, by pure chance, to the genetic lineage for humans. While gorillas and humans still share almost 98% of their genes, the two species have taken radically different evolutionary paths.

Humans developed much bigger brains, which led to effective world domination, while gorillas remained at the same biological and technological level as our shared ancestors. Those ancestors inadvertently spawned a physically inferior but intellectually superior species whose evolution implied their own marginalization.

The connection to AI should be obvious. In developing this technology, humans risk creating a machine that will outsmart them – not by accident, but by design. While there is a race among AI developers to achieve new breakthroughs and claim market share, there is also a race for control between humans and machines.