Reimagining Security and Rethinking Economics
Economic and financial issues nowadays tend to be discussed in intellectual silos, by specialists who pay little mind to security concerns or the interplay between national and international objectives. But sooner or later, economists will realize that global security demands a new approach, just as it did in the interwar period.
PRINCETON – Now that the world is facing a trade war and the growing possibility that the West could find itself in a real war, we would do well to reconsider the lessons of the interwar period.
Many of today’s economic and security disorders are frequently attributed to the 2008 global financial crisis. In addition to exposing the flaws in conventional economic policies, the crisis and its aftermath accelerated the global rebalancing from the Atlantic to the Asia-Pacific region, while fueling political discontent and the rise of anti-establishment movements in the West.
Likewise, the Great Depression of the 1930s is usually thought to have produced a seismic shift in economic thinking. According to the conventional narrative, policymakers at the time, having vowed never to repeat the errors that led to the crisis, devised new measures to overcome their economies’ prolonged malaise.
The conceptual and institutional reordering of economics that followed is usually credited to one towering figure: the British economist John Maynard Keynes, who published The General Theory of Employment, Interest, and Money in 1936. Keynes also orchestrated the 1944 Bretton Woods conference, which led to the creation of the World Bank, the International Monetary Fund, and the post-war global monetary order.
According to Keynes’s collaborator and biographer Roy Harrod, Keynes enjoyed a god-like presence at the Bretton Woods talks. But some of Keynes’s other contemporaries, notably the British economist Joan Robinson, always doubted that he deserved so much credit for ushering in the new order.
After all, the real reason that Keynesian thinking took hold was that its method of calculating aggregate consumption, investment, and savings proved invaluable for American and British military planning during World War II. With consistent national accounting, governments could make better use of resources, divert production from civilian to military purposes, and curtail inflationary pressures, thereby maintaining consumption and staving off civil unrest.
The same tools turned out to be just as useful in reorienting the post-war economy toward higher household consumption. But the point is that the revolution in economics, followed by the economic miracles of the post-war era, was a product of wartime calculation, not peacetime reflection. Pressing security concerns and the need to ensure domestic and international stability made policymakers more willing to challenge longstanding economic orthodoxy.
This era holds important lessons for the present. Nowadays, many economists complain that the financial crisis did not prompt a serious rethinking of conventional economics. There are no modern-day equivalents to Keynes. Instead, economic and financial issues tend to be discussed in intellectual silos, by specialists who pay little mind to security concerns or the interplay between national and international objectives.
Still, as in the interwar period, there are security threats today that will make rethinking economic assumptions necessary, if not inevitable. Though the financial crisis did not lead to a holistic intellectual reckoning, three broader challenges to the liberal international order since 2016 almost certainly will.
The first challenge is the existential threat of climate change, which will have far-reaching geopolitical consequences, particularly for areas already facing water shortages, and for tropical countries and coastal cities already experiencing the effects of rising sea levels. At the same time, some countries will enjoy temporary gains, owing to longer growing seasons and increased access to minerals, hydrocarbons, and other resources in polar regions.
Ultimately, reducing the amount of greenhouse gases in the atmosphere will serve the common good. But, without an international mechanism to compensate those most at risk of a warming planet, individual countries will weigh the trade-offs of reducing greenhouse-gas emissions differently.
The second global challenge is artificial intelligence and its foreseeable disruption of labor markets. AI threatens not just employment but also security, because it will render obsolete many technologies that states use to defend their populations and deter aggression. It is little wonder that larger powers like the United States and China are already racing to dominate AI and other big-data technologies. As they continue to do so, they will be playing an increasingly dangerous and unstable game, in which each technological turn could fundamentally transform politics by rendering old defenses useless.
The third challenge is the monetary revolution being driven by distributed-ledger technologies such as blockchain, which holds out the promise of creating non-state money. Since Bretton Woods, monetary dominance has been a form of power, particularly for the US. But alternative modes of money will offer both governments and non-state actors new ways to assert power or bypass existing power structures. Cryptocurrencies such as Bitcoin are already disrupting markets, and could someday alter the financial relations on which modern industrial societies are based.
In the new political geography, China, Russia, India, and others see each of these challenges as opportunities to shape the future of globalization on their own terms. What they envision would look very different from the model of the late twentieth century. China, for example, regards AI as a tool for recasting political organization through mass surveillance and state-directed thinking. By replacing individualism with collectivism, it could push global politics in a profoundly illiberal direction.
Fortunately, there are alternative paths forward. In rethinking economics and security, we will need to develop an approach that advances innovation within a framework of coordinated deliberation about future social and political arrangements. We need to apply human imagination and inventiveness not only to the creation of new technologies, but also to the systems that will govern those technologies.
The best future will be one in which governments and multinational corporations do not control all of the information. The challenge, then, is to devise generally acceptable solutions based on cooperation, rather than on the destruction of competing visions.
Embracing the New Age of Automation
With rapid advances in automation and artificial intelligence in recent years, many are worried about a jobless future and sky-high levels of inequality. But the large-scale technologically driven shift currently underway should be welcomed, and its adverse effects should be managed with proactive policies to reinvest in workers.
LONDON – Ever since early-nineteenth-century textile workers destroyed the mechanical looms that threatened their livelihoods, debates over automation have conjured gloom-and-doom scenarios about the future of work. With another era of automation upon us, how nervous about the future of our own livelihoods should we be?
A recent report by the McKinsey Global Institute estimates that, depending on a country’s level of development, advances in automation will require 3-14% of workers worldwide to change occupations or upgrade their skills by 2030. About 10% of all jobs in Europe have already disappeared since 1990, during the first wave of routine-based technological change. And with advances in artificial intelligence (AI), which affect a broader range of tasks, that share could double in the coming years.
Historically, job displacement has occurred in waves, first with the structural shift from agriculture to manufacturing, and then with the move from manufacturing to services. But throughout that process, productivity gains have been reinvested to create new innovations, jobs, and industries, driving economic growth as older, less productive jobs are replaced with more advanced occupations.
The internal combustion engine, for example, wiped out horse-drawn carriages, but gave rise to many new industries, from car dealerships to motels. In the 1980s, computers killed typewriters, but created a host of new occupations, from call-center service representatives to software developers.
Because the far-reaching economic and social benefits of new technologies tend to receive less attention than job losses, it is worth noting that automation technologies are already demonstrating a capacity to improve lives. This past November, Stanford University researchers showed that an AI system could outperform expert radiologists in detecting pneumonia from lung X-rays.
In an era of stalled productivity growth and declining working-age populations in China, Germany, and elsewhere, automation could provide a badly needed economic boost. Higher productivity implies faster economic growth, more consumer spending, increased labor demand, and thus greater job creation.
Nonetheless, any discussion about AI-based automation must also take public anxieties into account. Even though new occupations will likely replace those lost to automation, wages may take time to catch up to the reality of higher labor productivity.
In the early nineteenth century, wages stagnated for almost 50 years before picking up again. That may have been an extreme situation. But for lower-skilled workers, the transition underway today could prove just as wrenching. With fears of increased inequality already growing, governments will need to rethink policies for providing income and job-transition support to displaced workers.
Looking ahead, policymakers and businesses should keep five imperatives in mind. The first is to embrace AI and automation without hesitation. Even if it were possible to slow the pace of change, succumbing to that temptation would be a mistake. Owing to the effects of global competition, hampering technological diffusion in one domain would simply dampen overall prosperity. In fact, we recently estimated that northern European economies could lose 0.5 percentage points of annual GDP growth if they do not keep pace with their neighbors in adopting AI.
The second imperative is to equip workers with the right skills. Future-of-work debates often overlook the question of how the labor market will evolve and either improve or exacerbate the skills mismatch that is already acute in developed countries. According to recent OECD research, as much as one-third of workers in advanced economies are either underutilized or unable to handle their current duties.
The jobs of the future will require not just more cognitive skills, but also more creativity and social skills, such as coaching. We estimate that, unless workers’ skill sets are upgraded, today’s mismatch could double in severity within ten years, resulting in major productivity losses and higher levels of inequality.
Upgrading skills on a large scale will require coordination among parents, educators, governments, employers, and employees, with a focus on lower-skilled individuals. Unfortunately, in the past two decades, public spending on labor markets, relative to GDP, has declined by 0.5 percentage points in the United States, and by more than three percentage points in Canada, Germany, and Scandinavia.
The third imperative is to focus on augmented-labor opportunities. Unlike older industrial robots, newer technologies can interact safely and efficiently with humans, who sometimes need to train them and will increasingly have to work seamlessly with algorithms and machines. For example, a doctor’s practice will be greatly enhanced by diagnostic algorithms. Policymakers and businesses should seek to maximize this kind of complementarity across all sectors.
Fourth, businesses will need to innovate and capitalize on new market opportunities at the same pace that human tasks are being replaced. For example, in the first wave of robotics, countries such as Germany and Sweden displaced auto-sector jobs by adopting CAD (computer-aided design) robots; but they simultaneously brought other jobs back from Asia, and even created new downstream jobs in electronics. Similarly, AI offers countless opportunities for innovation and tapping into global value chains. By seizing these opportunities quickly, we can ensure a smoother transition from old to new jobs.
Finally, it is imperative that we reinvest AI-driven productivity gains in as many economic sectors as possible. Such reinvestment is the primary reason why technological change has benefited employment in the past. But without a strong local AI ecosystem, today’s productivity gains may not be reinvested in a way that fuels spending and boosts demand for labor. Policymakers urgently need to ensure that strong incentives for reinvestment are in place.
Automation has been given a bad rap as a job killer. But to ensure that its benefits outweigh its potential disruptions, private- and public-sector actors must exercise strong joint leadership – and keep the five imperatives for the new age of automation at the top of the agenda.
Racing the Machine
Economists have always believed that previous waves of job destruction led to an equilibrium between supply and demand in the labor market at a higher level of both employment and earnings. But if robots can actually replace, not just displace, humans, it is hard to see an equilibrium point until the human race itself becomes redundant.
LONDON – Dispelling anxiety about robots has become a major preoccupation of business apologetics. The commonsense – and far from foolish – view is that the more jobs are automated, the fewer there will be for humans to perform. The headline example is the driverless car. If cars can drive themselves, what will happen to chauffeurs, taxi drivers, and so on?
Economic theory tells us that our worries are groundless. Attaching machines to workers increases their output for each hour they work. They then have an enviable choice: work less for the same wage as before, or work the same number of hours for more pay. And as the cost of existing goods falls, consumers will have more money to spend on more of the same goods or different ones. Either way, there is no reason to expect a net loss of human jobs – or anything but continual improvements in living standards.
History suggests as much. For the last 200 years or so, productivity has been steadily rising, especially in the West. The people who live in the West have chosen both more leisure and higher income. Hours of work in rich countries have halved since 1870, while real per capita income has risen by a factor of five.
How many existing human jobs are actually “at risk” from robots? According to an invaluable report by the McKinsey Global Institute, about 50% of time spent on human work activities in the global economy could theoretically be automated today, though current trends suggest a maximum of 30% by 2030, depending mainly on the speed of adoption of new technology. The report’s midpoint predictions are: Germany, 24%; Japan, 26%; the United States, 23%; China, 16%; India, 9%; and Mexico, 13%. By 2030, MGI estimates, 400-800 million individuals will need to find new occupations, some of which don’t yet exist.
This rate of job displacement is not far out of line with previous periods. One reason why automation is so frightening today is that the future was more unknowable in the past: we lacked the data for alarmist forecasts. The more profound reason is that current automation prospects herald a future in which machines can plausibly replace humans in many spheres of work where it was thought that only we could do the job.
Economists have always believed that previous waves of job destruction led to an equilibrium between supply and demand in the labor market at a higher level of both employment and earnings. But if robots can actually replace, not just displace, humans, it is hard to see an equilibrium point until the human race itself becomes redundant.
The MGI report rejects such a gloomy conclusion. In the long run, the economy can adjust to provide satisfying work for everyone who wants it. “For society as a whole, machines can take on work that is routine, dangerous, or dirty, and may allow us to use our intrinsically human talents more fully and enjoy more leisure.”
This is about as good as it gets in business economics. Yet there are some serious gaps in the argument.
The first concerns the length and scope of the transition from the human to the automated economy. Here, the past may be a less reliable guide than we think, because the slower pace of technological change meant that job replacement kept up with job displacement. Today, displacement – and thus disruption – will be much faster, because technology is being invented and diffused much faster. “In advanced economies, all scenarios,” McKinsey writes, “result in full employment by 2030, but transition may include periods of higher unemployment and [downward] wage adjustments,” depending on the speed of adaptation.
This poses a dilemma for policymakers. The faster the new technology is introduced, the more jobs it eats up, but the quicker its promised benefits are realized. The MGI report rejects attempts to limit the scope and pace of automation, which would “curtail the contributions that these technologies make to business dynamism and economic growth.”
Given this priority, the main policy response follows automatically: massive investment, on a “Marshall Plan scale,” in education and workforce training to ensure that humans acquire the critical skills needed to cope with the transition.
The report also recognizes the need to ensure that “wages are linked to rising productivity, so that prosperity is shared with all.” But it ignores the fact that recent productivity gains have overwhelmingly benefited a small minority. Consequently, it pays scant attention to how the choice between work and leisure promised by economists can be made effective for all.
Finally, there is the assumption running through the report that automation is not just desirable, but irreversible. Once we have learned to do something more efficiently (at lower cost), there is no possibility of going back to doing it less efficiently. The only question left is how humans can best adapt to the demands of a higher standard of efficiency.
Philosophically, this is confused, because it conflates doing something more efficiently with doing it better. It mixes up a technical argument with a moral one. Of the world promised us by the apostles of technology, it is both possible and necessary to ask: Is it good?
Is a world in which we are condemned to race with machines to produce ever-larger quantities of consumption goods a world worth having? And if we cannot hope to control this world, what is the value of being human? These questions may be outside McKinsey’s remit, but they should not be off limits to public discussion.
The Coming Technology Policy Debate
Technological progress brings far-reaching benefits, but it also poses increasingly serious threats to humankind. With governments and citizens already struggling with the consequences of recent innovations – from job displacement to security risks – technology policy is likely to take center stage in the coming decade.
STANFORD – What do the leaks of unflattering email from the Democratic National Committee’s hacked servers during the 2016 US presidential election campaign and the deafening hour-long emergency-warning siren in Dallas, Texas, have in common? It’s the same thing that links the North Korean nuclear threat and terrorist attacks in Europe and the United States: all represent the downsides of tremendously beneficial technologies – risks that increasingly demand a robust policy response.
The growing contentiousness of technology is exemplified in debates over so-called net neutrality and disputes between Apple and the FBI over unlocking suspected terrorists’ iPhones. This is hardly surprising: as technology has become increasingly consequential – affecting everything from our security (nuclear weapons and cyberwar) to our jobs (labor-market disruptions from advanced software and robotics) – its impact has been good, bad, and potentially ugly.
First, the good. Technology has eliminated diseases like smallpox and has all but eradicated others, like polio; enabled space exploration; sped up transportation; and opened new vistas of opportunity for finance, entertainment, and much else. Cellular telephony alone has freed the vast majority of the world’s population from communication constraints.
Technical advances have also increased economic productivity. The invention of crop rotation and mechanized equipment dramatically increased agricultural productivity and enabled human civilization to shift from farms to cities. As recently as 1900, one-third of Americans lived on farms; today, that figure is just 2%.
Similarly, electrification, automation, software, and, most recently, robotics have all brought major gains in manufacturing productivity. My colleague Larry Lau and I estimate that technical change is responsible for roughly half the economic growth of the G7 economies in recent decades.
Pessimists worry that the productivity-enhancing benefits of technology are waning and unlikely to rebound. They claim that technologies like Internet search and social networking cannot improve productivity to the same extent that electrification and the rise of the automobile did.
Optimists, by contrast, believe that advances like Big Data, nanotechnology, and artificial intelligence herald a new era of technology-driven improvements. While it is impossible to predict the next “killer app” arising from these technologies, that is no reason, they argue, to assume there isn’t one. After all, important technologies sometimes derive their main commercial value from uses quite different from those the inventor had in mind.
For example, James Watt’s steam engine was created to pump water out of coal mines, not to power railroads or ships. Likewise, Guglielmo Marconi’s work on long-distance radio transmission was intended simply to create competition for the telegraph; Marconi never envisioned broadcast radio stations or modern wireless communication.
But technological change has also spurred considerable dislocation, harming many along the way. In the early nineteenth century, fear of such dislocation drove textile workers in Yorkshire and Lancashire – the “Luddites” – to smash new machines like automated looms and knitting frames.
The dislocation of workers continues today, with robotics displacing some manufacturing jobs in the more advanced economies. Many fear that artificial intelligence will bring further dislocation, though the situation may not be as dire as some expect. In the 1960s and early 1970s, many believed that computers and automation would lead to widespread structural unemployment. That never happened, because new kinds of jobs emerged to offset what dislocation occurred.
In any case, job displacement is not the only negative side effect of new technology. The automobile has greatly advanced mobility, but at the cost of unhealthy air pollution. Cable TV, the Internet, and social media have given people unprecedented power over the information they share and receive; but they have also contributed to the balkanization of information and social interaction, with people choosing sources and networks that reinforce their own biases.
Modern information technology, moreover, tends to be dominated by just a few firms: Google, for example, is literally synonymous with Internet search. Historically, such a concentration of economic power has been met with pushback, rooted in fears of monopoly. And, indeed, such firms are beginning to face scrutiny from antitrust officials, especially in Europe. Whether consumers’ generally tolerant attitudes toward these companies will be sufficient to offset historic concerns over size and abuse of market power remains to be seen.
But the downsides of technology have become far darker, with the enemies of a free society able to communicate, plan, and conduct destructive acts more easily. The Islamic State and al-Qaeda recruit online and provide virtual guidance on wreaking havoc; often, such groups do not even have to communicate directly with individuals to “inspire” them to perpetrate a terrorist attack. And, of course, nuclear technology provides not only emissions-free electricity, but also massively destructive weapons.
All of these threats and consequences demand clear policy responses that look not just to the past and present, but also to the future. Too often, governments become entangled in narrow and immediate disputes, like that between the FBI and Apple, and lose sight of future risks and challenges. That can create space for something really ugly to occur, such as, say, a cyber attack that knocks out an electrical grid. Beyond the immediate consequences, such an incident could spur citizens to demand excessively stringent curbs on technology, risking freedom and prosperity in the quest for security.
What is really needed are new and improved institutions, policies, and cooperation between law enforcement and private firms, as well as among governments. Such efforts must not just react to developments, but also anticipate them. Only then can we mitigate future risks, while continuing to tap new technologies’ potential to improve people’s lives.
Can Artificial Intelligence Be Ethical?
A recent experiment in which an artificially intelligent chatbot became virulently racist highlights the challenges we could face if machines ever become superintelligent. As difficult as developing artificial intelligence might be, teaching our creations to be ethical is likely to be even more daunting.
PRINCETON – Last month, AlphaGo, a computer program specially designed to play the game Go, caused shockwaves among aficionados when it defeated Lee Sedol, one of the world’s top-ranked professional players, winning a five-game match by a score of 4-1.
Why, you may ask, is that news? Twenty years have passed since the IBM computer Deep Blue defeated world chess champion Garry Kasparov, and we all know computers have improved since then. But Deep Blue won through sheer computing power, using its ability to calculate the outcomes of more moves to a deeper level than even a world champion can. Go is played on a far larger board (a 19-by-19 grid, compared with the 8-by-8 chessboard) and has more possible moves than there are atoms in the universe, so raw computing power was unlikely to beat a human with a strong intuitive sense of the best moves.
Instead, AlphaGo was designed to win by playing a huge number of games against other programs and adopting the strategies that proved successful. You could say that AlphaGo evolved to be the best Go player in the world, achieving in only two years what natural selection took millions of years to accomplish.
Eric Schmidt, executive chairman of Google’s parent company, the owner of AlphaGo, is enthusiastic about what artificial intelligence (AI) means for humanity. Speaking before the match between Lee and AlphaGo, he said that humanity would be the winner, whatever the outcome, because advances in AI will make every human being smarter, more capable, and “just better human beings.”
Will it? Around the same time as AlphaGo’s triumph, Microsoft’s “chatbot” – software named Taylor that was designed to respond to messages from people aged 18-24 – was having a chastening experience. “Tay,” as she called herself, was supposed to be able to learn from the messages she received and gradually improve her ability to conduct engaging conversations. Unfortunately, within 24 hours, people were teaching Tay racist and sexist ideas. When she started saying positive things about Hitler, Microsoft turned her off and deleted her most offensive messages.
I do not know whether the people who turned Tay into a racist were themselves racists, or just thought it would be fun to undermine Microsoft’s new toy. Either way, the juxtaposition of AlphaGo’s victory and Taylor’s defeat serves as a warning. It is one thing to unleash AI in the context of a game with specific rules and a clear goal; it is something very different to release AI into the real world, where the unpredictability of the environment may reveal a software error that has disastrous consequences.
Nick Bostrom, the director of the Future of Humanity Institute at Oxford University, argues in his book Superintelligence that it will not always be as easy to turn off an intelligent machine as it was to turn off Tay. He defines superintelligence as an intellect that is “smarter than the best human brains in practically every field, including scientific creativity, general wisdom, and social skills.” Such a system may be able to outsmart our attempts to turn it off.
Some doubt that superintelligence will ever be achieved. Bostrom, together with Vincent Müller, asked AI experts when they thought there would be a one-in-two chance of machines achieving human-level intelligence, and when there would be a nine-in-ten chance. The median estimates were in the 2040-2050 range for the one-in-two chance, and 2075 for the nine-in-ten chance. Most experts expected that AI would achieve superintelligence within 30 years of reaching human-level intelligence.
We should not take these estimates too seriously. The overall response rate was only 31%, and researchers working in AI have an incentive to boost the importance of their field by trumpeting its potential to produce momentous results.
The prospect of AI achieving superintelligence may seem too distant to worry about, especially given more pressing problems. But there is a case to be made for starting to think about how we can design AI to take into account the interests of humans, and indeed of all sentient beings (including machines, if they are also conscious beings with interests of their own).
With driverless cars already on California roads, it is not too soon to ask whether we can program a machine to act ethically. As such cars improve, they will save lives, because they will make fewer mistakes than human drivers do. Sometimes, however, they will face a choice between lives. Should they be programmed to swerve to avoid hitting a child running across the road, even if that will put their passengers at risk? What about swerving to avoid a dog? What if the only risk is damage to the car itself, not to the passengers?
Perhaps there will be lessons to learn as such discussions about driverless cars get started. But driverless cars are not superintelligent beings. Teaching ethics to a machine that is more intelligent than we are, in a wide range of fields, is a far more daunting task.
Bostrom begins Superintelligence with a fable about sparrows who think it would be great to train an owl to help them build their nests and care for their young. So they set out to find an owl egg. One sparrow objects that they should first think about how to tame the owl; but the others are impatient to get the exciting new project underway. They will take on the challenge of training the owl (for example, not to eat sparrows) when they have successfully raised one.
If we want to make an owl that is wise, and not only intelligent, let’s not be like those impatient sparrows.
Democratizing Artificial Intelligence
Artificial Intelligence has the potential to make or break the world order, either pulling the “bottom billion” out of poverty and transforming dysfunctional institutions or entrenching injustice and increasing inequality. The outcome will depend on how we manage the coming changes.
OXFORD – Artificial Intelligence is the next technological frontier, and it has the potential to make or break the world order. The AI revolution could pull the “bottom billion” out of poverty and transform dysfunctional institutions, or it could entrench injustice and increase inequality. The outcome will depend on how we manage the coming changes.
Unfortunately, when it comes to managing technological revolutions, humanity has a rather poor track record. Consider the Internet, which has had an enormous impact on societies worldwide, changing how we communicate, work, and occupy ourselves. It has disrupted some economic sectors, forced changes to long-established business models, and created a few entirely new industries.
But the Internet has not brought the kind of comprehensive transformation that many anticipated. It certainly didn’t resolve the big problems, such as eradicating poverty or enabling us to reach Mars. As PayPal co-founder Peter Thiel once noted: “We wanted flying cars; instead, we got 140 characters.”
In fact, in some ways, the Internet has exacerbated our problems. While it has created opportunities for ordinary people, it has created even more opportunities for the wealthiest and most powerful. A recent study by researchers at the LSE reveals that the Internet has increased inequality, with educated, high-income people deriving the greatest benefits online and multinational corporations able to grow massively – while evading accountability.
Perhaps, though, the AI revolution can deliver the change we need. Already, AI – which focuses on advancing the cognitive functions of machines so that they can “learn” on their own – is reshaping our lives. It has delivered self-driving (though still not flying) cars, as well as virtual personal assistants and even autonomous weapons.
But this barely scratches the surface of AI’s potential, which is likely to produce societal, economic, and political transformations that we cannot yet fully comprehend. AI will not become a new industry; it will penetrate and permanently alter every industry in existence. AI will not change human life; it will change the boundaries and meaning of being human.
How and when this transformation will happen – and how to manage its far-reaching effects – are questions that keep scholars and policymakers up at night. Expectations for the AI era range from visions of paradise, in which all of humanity’s problems have been solved, to fears of dystopia, in which our creation becomes an existential threat.
Making predictions about scientific breakthroughs is notoriously difficult. On September 11, 1933, the famed nuclear physicist Lord Rutherford told a large audience, “Anyone who looks for a source of power in the transformation of the atoms is talking moonshine.” The next morning, Leo Szilard hypothesized the idea of a neutron-induced nuclear chain reaction; soon thereafter, he patented the nuclear reactor.
The problem, for some, is the assumption that new technological breakthroughs are incomparable to those in the past. Many scholars, pundits, and practitioners would agree with Alphabet Executive Chairman Eric Schmidt that technological phenomena have their own intrinsic properties, which humans “don’t understand” and should not “mess with.”
Others may be making the opposite mistake, placing too much stock in historical analogies. The technology writer and researcher Evgeny Morozov, among others, expects some degree of path dependence, with current discourses shaping our thinking about the future of technology, thereby influencing technology’s development. Future technologies could subsequently impact our narratives, creating a sort of self-reinforcing loop.
To think about a technological breakthrough like AI, we must find a balance between these approaches. We must adopt an interdisciplinary perspective, underpinned by an agreed vocabulary and a common conceptual framework. We also need policies that address the interconnections among technology, governance, and ethics. Recent initiatives, such as the Partnership on AI and the Ethics and Governance of AI Fund, are a step in the right direction, but they lack the necessary government involvement.
These steps are necessary to answer some fundamental questions: What makes humans human? Is it the pursuit of hyper-efficiency – the “Silicon Valley” mindset? Or is it irrationality, imperfection, and doubt – traits beyond the reach of any non-biological entity?
Only by answering such questions can we determine which values we must protect and preserve in the coming AI age, as we rethink the basic concepts and terms of our social contracts, including the national and international institutions that have allowed inequality and insecurity to proliferate. Amid the far-reaching transformation brought about by the rise of AI, we may be able to reshape the status quo to ensure greater security and fairness.
One of the keys to creating a more egalitarian future relates to data. Progress in AI relies on the availability and analysis of large sets of data on human activity, online and offline, to distinguish patterns of behavior that can be used to guide machine behavior and cognition. Empowering all people in the age of AI will require each individual – not major companies – to own the data they create.
With the right approach, we could ensure that AI empowers people on an unprecedented scale. Though abundant historical evidence casts doubt on such an outcome, perhaps doubt is the key. As the late sociologist Zygmunt Bauman put it, “questioning the ostensibly unquestionable premises of our way of life is arguably the most urgent of services we owe our fellow humans and ourselves.”