The AI Revolution and Strategic Competition with China
Artificial intelligence is going to reorganize the world and change the course of human history. With China increasingly using the technology to usher in a new form of authoritarianism, the world’s democracies must come together and stand up for their own values and strategic interests.
The Art of AI
As the world enters a new decade, research and development into artificial intelligence and its many applications are barreling forward, and nowhere more so than in China. Although popular narratives tend to focus on the threats posed by AI, the truth is that many of the technology's dangers have been overhyped, and its promises neglected.
A leading figure in the Chinese tech scene and in artificial-intelligence development globally, Kai-Fu Lee earned a PhD in computer science from Carnegie Mellon University in 1988 before serving in executive roles at Apple, SGI, Microsoft, and Google, where he was president of Google China. Now the chairman and CEO of Sinovation Ventures in Beijing, he is the author of AI Superpowers: China, Silicon Valley, and the New World Order. Here, he discusses the global AI race, the current state of the field, and what may – and should – come next.
Project Syndicate: As someone who long worked for US companies and now oversees a tech venture capital firm, you’re deeply familiar with the world’s two main settings for AI development and research. What are the trade-offs of each R&D environment? What advantages does China offer over the US, and what must policymakers change or improve to achieve China’s goal of catching up to and surpassing the US?
Kai-Fu Lee: There is now a clear US-China AI duopoly. AI in China is rising rapidly, boosted by several structural advantages: huge data sets, a young army of technical talent, aggressive entrepreneurs, and strong and pragmatic government policy. The attitude in China can be summarized as pro-tech, pro-experimentation, and pro-speed, all of which puts the country on track to becoming a major AI power.
The big players in China are competing fiercely with their US counterparts, and rapidly catching up in terms of research and scientific expertise, as well as global platform experience. Because China is a latecomer to global technology leadership, clear and consistent government policy has also helped accelerate the growth in R&D funding, as well as encouraging the adoption and implementation of AI technologies across the Chinese public and private sectors.
In my interactions with national leaders around the world in recent years, I have been glad to see more countries adopting national strategies for technology and investment – particularly in AI – to advance their economies.
PS: You recently reported that only a few of the roughly 40 investments that Sinovation Ventures has made in AI would actually threaten jobs. That will no doubt surprise many readers, given that labor-replacing automation has been a major focus of attention in media coverage of AI. What are the most promising AI applications that the dominant narrative ignores? What’s the best value-enhancing application that no one has ever heard of?
KL: AI has moved from the age of discovery to the age of implementation, and the biggest opportunities are in businesses where AI and automation can deliver significant efficiencies and cost savings. Among our invested portfolios, primarily in China, we see flourishing applications in banking, finance, transportation, logistics, supermarkets, restaurants, warehouses, factories, schools, and drug discovery. But I am most hopeful about the impact of AI on education and health care.
I would highlight education-related applications as an area where China may soon be leading the world. We have companies in our portfolios developing AI solutions to personalize and gamify math learning, to improve English pronunciation, and even to grade exams and homework. This promises to free teachers from routine tasks, allowing them to spend time building inspirational and stimulating connections with the next generation.
In health care, we have companies combining deep learning and generative chemistry to shorten the drug-discovery time by a factor of three or four. We have also invested in a company that uses AI and big data to optimize supply chains, reducing medication shortages for more than 150 million people living in rural China. I feel particularly confident that AI education and health-care applications are evolving in ways that will benefit current and future generations at scale.
PS: Turning that question around, which areas of AI have been overhyped, either by the industry or in the media?
KL: Many dystopian visions of AI predict omnipotent superintelligences, which may or may not spell the end of humankind. To be clear, this sort of superintelligence is not possible based on current technologies. There are no known algorithms for AGI (Artificial General Intelligence), nor is there a clear engineering route to get there.
The singularity is not something that can occur spontaneously, with autonomous vehicles (AVs) running on deep learning suddenly “waking up” and realizing that they can band together to form a superintelligent network. I do feel that AGI is overhyped and creates unnecessary fear among people.
Getting to AGI would require a series of foundational scientific breakthroughs in AI, a string of advances on the scale of, or greater than, deep learning. These breakthroughs would need to remove key constraints on the “narrow AI” programs that we run today, and empower them with a wide array of new abilities: multi-domain learning, domain-independent learning, natural-language understanding, commonsense reasoning, planning, and learning from a small number of examples.
Taking the next step to emotionally intelligent robots may require self-awareness, humor, love, empathy, and appreciation for beauty. These are the key hurdles that separate what narrow AI does today – spotting correlations in data and making predictions – from the kind of general intelligence that humans possess.
I cannot guarantee that scientists will not achieve the breakthroughs that would bring about AGI and superintelligence in the future. In fact, I believe we should expect continual improvements to the existing state of the art. But I believe we are still many decades away from the real thing.
PS: Putting aside the complications posed by the current US-China trade/technology war, should there be a global pact for AI along the lines of the Universal Declaration of Human Rights (which has been updated over time to account for scientific advances in genetics and other fields)?
KL: Whether the Universal Declaration of Human Rights is the right vehicle, I cannot say, but it is true that global cooperation is paramount. In my book, I urge us to move beyond competitive instincts to recognize that AI’s effects know no borders, and that our common challenges call for solutions that recognize how inextricably intertwined our destinies are across all economic classes and national borders.
Having said that, the idea that we can come up with a single set of global standards for AI ethics and consider the job done is naive, I fear. There is no one institution with either the mandate to codify basic rules or the power to enforce them. We must recognize that attitudes and visions for AI will be different across regions and countries. We must find a way to work together to reach serviceable solutions (which is a challenge to which I am contributing some of my personal time). But we’re still a long way off.
PS: You’ve said that AI will never be capable of mimicking key human traits such as creativity and empathy. What about human morality? When it comes to vesting AIs with moral decision-making – for example, when an SAE Level 5 AV (one with full autonomy) confronts the “trolley problem” – must governments step in, or could the relevant standards be set by the industry over time?
KL: It’s a good question. I tend to believe that these sorts of scenarios, and the “standards” that might be created to solve them, will be based on the accumulation of data, and therefore shaped by the industry over time. Engineers are clearly focused on developing systems that are safe and rigorously tested.
PS: You’ve said that if we can get AI right, it could liberate us from toil and free up more time for leisure. That’s a very old promise, going back at least to John Maynard Keynes; and yet, despite the many labor-saving innovations of recent decades, we seem to be working more than ever. Why should we believe that this time will be any different?
KL: Simply put: Because AI is bigger – much bigger – than the introduction of the washing machine or the industrial production line. The AI revolution will be at least of the same magnitude as the Industrial Revolution, but probably larger, and definitely faster. Whereas the steam engine replaced only physical labor, AI can perform both intellectual and physical tasks.
For cognitive tasks, the ability to learn means that computers are no longer limited simply to carrying out a rote set of instructions written by humans. Instead, they can continuously learn from new data and perform better than their human programmers. For physical tasks, robots are no longer limited to repeating one set of actions (automation), but instead can chart new paths based on the data they take in (autonomy).
Together, this enables AI to perform countless tasks across society: driving a car, analyzing a disease, providing customer support, and so on. AI’s superhuman performance of these tasks will lead to massive increases in productivity and the potential for liberation from toil.
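The distinction Lee draws between rote instructions and learning can be sketched in a few lines of Python. This is a toy illustration with invented numbers, not a description of any real system:

```python
# Toy contrast between a fixed, hand-written rule and a parameter
# learned from data. (Illustrative only; real systems learn millions
# of parameters from massive data sets.)

def rote_rule(x):
    # A fixed instruction written by a programmer: always double the input.
    return 2.0 * x

def learn_slope(xs, ys):
    # "Learning": estimate the slope that best fits observed (x, y) pairs,
    # via least squares through the origin.
    return sum(x * y for x, y in zip(xs, ys)) / sum(x * x for x in xs)

# The observed data actually follows y = 3x.
xs = [1.0, 2.0, 3.0, 4.0]
ys = [3.0, 6.0, 9.0, 12.0]

slope = learn_slope(xs, ys)
print(rote_rule(5.0))  # the fixed rule predicts 10.0, regardless of the data
print(slope * 5.0)     # the learned model predicts 15.0, matching the data
```

The "learned" slope comes entirely from the data, which is what allows such systems to outperform the fixed rules their programmers would have written.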
PS: When not AI-ing, what does leisure mean to you?
KL: Spending time with my family is very important to me.
The Power of AI in Emerging Markets
Artificial intelligence is forecast to contribute $15.7 trillion to the global economy by 2030, delivering socioeconomic value to all sections of society over the coming years. And a substantial share of this total will accrue to emerging economies.
KREUZLINGEN – Artificial intelligence is permeating almost every aspect of life in advanced economies. From governments to businesses to individuals, AI’s reach is sweeping, and its implementation is proving transformational.
But the benefits are not just being felt in the developed world. AI is forecast to contribute $15.7 trillion to the global economy by 2030, delivering socioeconomic value to all sections of society over the coming years. And a substantial share of this total will accrue to emerging economies, where AI is already helping to address deep-rooted problems.
The enormous sums being invested in AI illustrate the potential many see in this new technology. According to estimates by the International Data Corporation, global spending on AI will reach roughly $36 billion in 2019, a remarkable 44% increase over 2018. That figure is expected to surpass $79 billion by 2022.
The reason so much money is being invested in AI is obvious: the worldwide business value to be derived from it is expected to soar to $3.9 trillion by 2022, more than three times the $1.2 trillion in value it generated in 2018. And it is not just that businesses are benefiting from adopting AI. They are also key agents of change, enabling millions in the developing world to benefit from increased efficiencies, both incremental and far-reaching.
Owing to AI’s sophistication, many believe it lends itself better to applications in developed economies. But AI is perhaps even more relevant in emerging markets, which are exploiting the opportunities it creates to produce significant social and economic gains. AI is enabling new products and models that are helping the poorest move up the economic ladder through solutions that leapfrog existing technologies.
For example, lack of access to credit has been a massive impediment to socioeconomic development, but now AI is helping to clear this bottleneck in the world's remotest and poorest areas. From villages in Indonesia to agricultural land in Kenya and Madagascar, AI-enabled systems are helping make money accessible to small entrepreneurs and farmers – not just kick-starting a virtuous cycle of serving the underserved, but also potentially boosting growth and productivity. In the absence of traditional credit histories, alternative capital providers are using AI applications to rate potential borrowers and predict default. Prominent examples include M-Shwari in Kenya and Ant Financial across East Asia.
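As a loose illustration of the idea (not the actual method used by M-Shwari, Ant Financial, or any real lender), a provider might fit a simple logistic model to alternative data. The features, weights, and figures below are entirely hypothetical:

```python
import math

# Toy sketch of alternative-data credit scoring: a logistic model trained
# on made-up features (e.g. mobile top-up regularity, merchant transaction
# volume) to predict default risk. Real systems use far richer data and models.

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def train_logistic(rows, labels, steps=2000, lr=0.1):
    # Stochastic gradient descent on the log-loss, one example at a time.
    w = [0.0] * len(rows[0])
    for _ in range(steps):
        for x, y in zip(rows, labels):
            p = sigmoid(sum(wi * xi for wi, xi in zip(w, x)))
            w = [wi + lr * (y - p) * xi for wi, xi in zip(w, x)]
    return w

# Columns: [bias, top-up regularity, transaction volume] (synthetic data).
rows   = [[1, 0.9, 0.8], [1, 0.8, 0.7], [1, 0.2, 0.1], [1, 0.1, 0.3]]
labels = [0, 0, 1, 1]  # 1 = defaulted

w = train_logistic(rows, labels)
risk = sigmoid(sum(wi * xi for wi, xi in zip(w, [1, 0.85, 0.75])))
print(f"estimated default risk: {risk:.2f}")  # low for a regular payer
```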
Across emerging economies, farmers are able to use near-ubiquitous mobile devices to access AI-enabled services that provide real-time information on weather, water usage and requirements, and soil conditions, allowing them to make more informed operating decisions. This is but one example of how low-cost AI solutions are altering the lives of farmers globally. When it comes to industrial production, increased automation is boosting efficiency and reducing costs, helping to increase consumption in the process.
AI applications are also being used to help solve infrastructure problems. This is particularly important in the context of emerging markets, where strong economic growth and rapid urbanization are placing existing assets under growing strain. Smart cities, smart grids, Internet-integrated traffic systems, driverless vehicles, and sensor-based technologies (to name a few) are all a part of this AI juggernaut. Given the speed of urbanization in Asia and Africa, the adoption of AI-based solutions in the provision of infrastructure will be necessary to keep cities running smoothly.
Nevertheless, challenges remain. An important one is the high cost of implementing AI in daily life. The technology may offer tremendous potential, but it must also be commercially viable. Another is data security. Questions related to privacy and the commodification of data will not abate anytime soon, and they must be answered.
In fact, both concerns must be convincingly addressed, because decisions about whether to adopt AI often hinge on them. Ensuring increased adoption and balanced implementation of AI technology will be crucial to emerging markets' long-term economic growth and development. And as the technology matures, it will become both cheaper and better understood.
One other consideration for emerging economies is the changing nature of work, owing to the increased application of AI in production processes. AI-enabled innovations are arguably reducing demand for labor, which poses a major problem for countries with large working-age populations, such as India, Indonesia, and Bangladesh. But AI also creates a window of opportunity for the developing world to reskill its workforce in better, less labor-intensive jobs, and in doing so help the economy ascend the value chain.
Given the benefits that AI is already bringing to emerging economies, it is imperative that it be embraced more widely. Yes, governments will need to make nuanced judgments, given the challenges that certainly exist in adopting it and implementing it successfully. But the only way to overcome these challenges is to meet them head-on.
Let’s Get Real About AI
While further progress in the development of artificial intelligence is inevitable, it will not necessarily be linear. Nonetheless, those hyping these technologies have seized on a number of compelling myths, starting with the notion that AI can solve any problem.
HALIFAX, NOVA SCOTIA – In recent years, artificial intelligence has been attracting more attention, money, and talent than ever in its short history. But much of the sudden hype is the result of myths and misconceptions being peddled by people outside of the field.
For many years, the field was growing incrementally, with existing approaches performing around 1-2% better each year on standard benchmarks. But there was a real breakthrough in 2012, when computer scientist Geoffrey Hinton and his colleagues at the University of Toronto showed that their “deep learning” algorithms could beat state-of-the-art computer vision algorithms by a margin of 10.8 percentage points on the ImageNet Challenge (a benchmark dataset).
At the same time, AI researchers were benefiting from ever-more powerful tools, including cost-effective cloud computing, fast and cheap number-crunching hardware (“GPUs”), seamless data sharing through the internet, and advances in high-quality open-source software. Owing to these factors, machine learning, and particularly deep learning, has taken over AI and created a groundswell of excitement. Investors have been lining up to fund promising AI companies, and governments have been pouring hundreds of millions of dollars into AI research institutes.
While further progress in the field is inevitable, it will not necessarily be linear. Nonetheless, those hyping these technologies have seized on a number of compelling myths, starting with the notion that AI can solve any problem.
Hardly a week goes by without sensational stories about AIs outperforming humans: "Intelligent Machines Are Teaching Themselves Quantum Physics"; "Artificial Intelligence Better Than Humans at Spotting Lung Cancer." Such headlines are often true only in a narrow sense. For a general problem like "spotting lung cancer," AI offers a solution only to a particular, simplified rendering of the problem, by reducing the task to a matter of image recognition or document classification.
What these stories neglect to mention is that the AI doesn’t actually understand images or language the way humans do. Rather, the algorithm finds hidden, complex combinations of features whose presence in a particular set of images or documents is characteristic of a targeted class (say, cancer or violent threats). And such classifications cannot necessarily be trusted with decisions about humans – whether it concerns patient diagnosis or how long someone should be incarcerated.
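The point about feature combinations can be made concrete with a toy bag-of-words classifier. This is a deliberately simplified sketch; the words, labels, and documents are invented for illustration:

```python
from collections import Counter

# A toy classifier that tallies which words co-occur with each label and
# scores new documents accordingly. It "understands" nothing -- it only
# tracks statistical associations between features and a class, which is
# exactly the limitation described above.

def train(labeled_docs):
    counts = {}  # label -> Counter of word frequencies
    for text, label in labeled_docs:
        counts.setdefault(label, Counter()).update(text.lower().split())
    return counts

def classify(counts, text):
    words = text.lower().split()
    def score(label):
        c = counts[label]
        total = sum(c.values())
        s = 1.0
        for w in words:
            # Add-one smoothing so unseen words don't zero out the score.
            s *= (c[w] + 1) / (total + len(c))
        return s
    return max(counts, key=score)

docs = [
    ("nodule opacity lesion", "suspicious"),
    ("mass lesion shadow", "suspicious"),
    ("clear lungs normal", "benign"),
    ("normal scan clear", "benign"),
]
model = train(docs)
print(classify(model, "lesion shadow opacity"))  # -> "suspicious"
```

The model labels documents by word statistics alone; it has no notion of what a lesion is, which is why such classifications cannot simply be trusted with high-stakes decisions about people.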
It’s not hard to see why. Although AI systems outperform humans in tasks that are often associated with a “high level of intelligence” (playing chess, Go, or Jeopardy), they are nowhere close to excelling at tasks that humans can master with little to no training (such as understanding jokes). What we call “common sense” is actually a massive base of tacit knowledge – the cumulative effect of experiencing the world and learning about it since childhood. Coding common-sense knowledge and feeding it into AI systems is an unresolved challenge. Although AI will continue to solve some difficult problems, it is a long way from performing many tasks that children undertake as a matter of course.
This points to a second, related myth: that AI will soon surpass human intelligence. In 2005, the best-selling futurist author Ray Kurzweil predicted that by 2045, machine intelligence would be infinitely more powerful than all human intelligence combined. But whereas Kurzweil was assuming that the exponential growth of AI would continue more or less unabated, it is more likely that barriers will arise.
One such barrier is the sheer complexity of AI systems, which have come to rely on billions of parameters for training machine-learning algorithms from massive data sets. Because we do not fully understand how all of these parameters interact, it is difficult to see how various components can be assembled and connected to perform a given task.
Another barrier is the scarcity of the annotated ("labeled") data upon which machine-learning algorithms rely. Big Tech firms like Google, Amazon, Facebook, and Apple own much of the most promising data, and they have little incentive to make such valuable assets publicly available.
A third myth is that AI will soon render humans superfluous. In his best-selling 2015 book, Homo Deus: A Brief History of Tomorrow, Israeli historian Yuval Noah Harari argues that most humans may become second-class citizens of societies in which all higher-level intellectual decision-making is reserved for AI systems. Indeed, some common jobs, such as truck driving, will most likely be eliminated by AI within the next ten years, as will many white-collar jobs that involve routine, repetitive tasks.
But these trends do not mean that there will be mass unemployment, with millions of households scraping by on a guaranteed basic income. The old jobs will be replaced by new jobs that we have yet to imagine. In 1980, no one could have known that millions of people would soon make a living from adding value to the internet.
To be sure, the jobs of the future will probably require much higher levels of math and science training. But AI itself may offer a partial solution, by enabling new, more engaging methods of training future generations in the necessary competencies. Jobs that AI takes away will be replaced by new jobs for which AI trains people. There is no law of technology or history that destines humanity to a future of intellectual slavery.
There are of course more myths: AIs will overpower and harm humans, will never be capable of human-type creativity, and will never be able to build a causal, logical chain connecting effects with the patterns that cause them. I believe time and research eventually will debunk these myths as well.
This is an exciting time for AI. But that is all the more reason to remain realistic about the field’s future.
The Missing Link in Europe's AI Strategy
Europe can become a global leader in artificial intelligence, but only if it protects its citizens and involves workers in the regulatory and deployment process. In that regard, the European Commission’s recent draft regulation leaves much to be desired.
BRUSSELS – The European Commission’s strategy for artificial intelligence focuses on the need to establish “trust” and “excellence.” Recently proposed AI regulation, the Commission argues, will create trust in this new technology by addressing its risks, while excellence will follow from EU member states investing and innovating. With these two factors accounted for, Europe’s AI uptake supposedly will accelerate.
Unfortunately, protecting EU citizens’ fundamental rights, which should be the AI regulation’s core objective, appears to be a secondary consideration; and protections for workers’ rights don’t seem to have been considered at all.
AI is a flagship component of Europe’s digital agenda, and the Commission’s legislative package is fundamental to the proposed single market for data. The draft regulation establishes rules concerning the introduction, implementation, and use of AI systems. It adopts a risk-based approach, with unacceptable, high-risk, limited, and low-risk uses.
Under the proposal, AI systems deemed “high-risk” – posing significant risks to the health and safety or fundamental rights of persons – are subject to an ex ante conformity assessment to be carried out by the provider, without prior validation by a competent external authority. Requirements include high-quality data sets, sound data governance and management practices, extensive record-keeping, adequate risk management, detailed technical documentation, transparent user instructions, appropriate human oversight, explainable results, and a high level of accuracy, robustness, and cybersecurity.
The Commission says that its definition of AI, as well as the risk-based approach underpinning the draft regulation, are based on public consultation. But the fact that industrial and tech firms constituted an overwhelming majority of the respondents to its 2020 AI White Paper suggests an exercise that is far from democratic. These businesses, while pretending to promote knowledge, science, and technology, steered the regulatory process in a direction that serves their interests. The voice of society, in particular trade unions, was drowned out.
The regulation has several shortcomings. Among them are the Commission’s narrow risk-based approach, the absence of a redress mechanism, the failure to address the issue of liability for damage involving AI systems, and a reliance on regulatory sandboxes for providing “safe” environments in which to test new business models. The draft also fails to deliver from a worker-protection perspective.
One way to address this shortcoming would be an ad hoc directive focused on AI in the context of employment, designed to protect workers (including those in the platform economy) and enable them to exercise their rights and freedoms on an individual or collective basis.
Such a directive should address several key issues. For starters, it should set employers’ responsibilities in preventing AI risks, in the same way that they are obliged to assess occupational health and safety hazards. AI risks extend further, because they include possible abuses of managerial power stemming from the nature of the employment relationship, as well as other risks to workers’ privacy, fundamental rights, data protection, and overall health.
Safeguarding worker privacy and data protection is equally vital, because AI is hungry for data and workers are an important source of them. The EU’s General Data Protection Regulation (GDPR) is a powerful tool that, in theory, applies to workers’ data in an employment context, including when these are used by an AI system. But in practice, it is almost impossible for workers to exercise their GDPR rights vis-à-vis an employer. The EU should introduce additional provisions to ensure they can.
Making the purpose of AI algorithms explainable is important, too. Generic workplace transparency provisions will not protect workers here. Instead, employers, as users of algorithms, need to account for the possible harm their deployment can do in a workplace. The use of biased values or variables can lead to the profiling of workers, target specific individuals, and categorize them according to their estimated "risk level."
Another priority is ensuring that workers can exercise their “right to explanation.” The implication, here, is that employers would be obliged to consult employees before implementing algorithms, rather than informing them after the fact. Moreover, the information provided must enable workers to understand the consequences of an automated decision.
The new ad hoc directive should also guarantee that the “human-in-command” principle is respected in all human-machine interactions at work. This involves giving humans the last word and explaining which data sources are responsible for final decisions when humans and machines act together. Trade unions should be considered as part of the “human” component and play an active role alongside managers, IT support teams, and external consultants.
Furthermore, EU lawmakers must prohibit algorithmic worker surveillance. Currently, worker monitoring is regulated by national laws that often predate GDPR and do not cover advanced, intrusive people analytics. AI-powered tools such as biometrics, machine learning, semantic analysis, sentiment analysis, and emotion-sensing technology can measure people’s biology, behavior, concentration, and emotions. Such algorithmic surveillance does not passively scan workers but rather “scrapes” their personal lives, actively builds an image, and then makes decisions.
Lastly, workers need to be able to exercise agency by becoming AI-literate. Teaching them technical digital skills so that they can operate a particular system is not enough. Understanding AI’s role and its effect on their work environment requires workers to be informed, educated, and critically engaged with the technology.
Regulating AI systems, in particular those deemed high-risk, should not be based on their providers’ self-assessment. Europe can become a global leader in the field, and foster genuine public trust in and acceptance of this emerging technology; but only if it effectively protects and involves its citizens and workers. No “human-centric” AI will ever exist if workers and their representatives are unable to flag up the technology’s specific employment-related risks.
In that regard, the Commission’s draft regulation leaves much to be desired. The European Parliament and EU member states must now act and, in particular, integrate worker protection in the final version of this key regulation.
Realizing the Potential of AI Localism
With national innovation strategies focused primarily on achieving dominance in artificial intelligence, the problem of actually regulating AI applications has received less attention. Fortunately, cities and other local jurisdictions are picking up the baton and conducting policy experiments that will yield lessons for everyone.
NEW YORK – Every new technology rides a wave from hype to dismay. But even by the usual standards, artificial intelligence has had a turbulent run. Is AI a society-renewing hero or a jobs-destroying villain? As always, the truth is not so categorical.
As a general-purpose technology, AI will be what we make of it, with its ultimate impact determined by the governance frameworks we build. As calls for new AI policies grow louder, there is an opportunity to shape the legal and regulatory infrastructure in ways that maximize AI’s benefits and limit its potential harms.
Until recently, AI governance has been discussed primarily at the national level. But most national AI strategies – particularly China’s – are focused on gaining or maintaining a competitive advantage globally. They are essentially business plans designed to attract investment and boost corporate competitiveness, usually with an added emphasis on enhancing national security.
This singular focus on competition has meant that the task of framing rules and regulations for AI has been neglected. But cities are increasingly stepping into the void, with New York, Toronto, Dubai, Yokohama, and others serving as "laboratories" for governance innovation. Cities are experimenting with a range of policies, from bans on facial-recognition technology and certain other AI applications to the creation of data collaboratives. They are also making major investments in responsible AI research, localized high-potential tech ecosystems, and citizen-led initiatives.
This “AI localism” is in keeping with the broader trend in “New Localism,” as described by public-policy scholars Bruce Katz and the late Jeremy Nowak. Municipal and other local jurisdictions are increasingly taking it upon themselves to address a broad range of environmental, economic, and social challenges, and the domain of technology is no exception.
For example, New York, Seattle, and other cities have embraced what Ira Rubinstein of New York University calls “privacy localism,” by filling significant gaps in federal and state legislation, particularly when it comes to surveillance. Similarly, in the absence of a national or global broadband strategy, many cities have pursued “broadband localism,” by taking steps to bridge the service gap left by private-sector operators.
As a general approach to problem solving, localism offers both immediacy and proximity. Because it is managed within tightly defined geographic regions, it affords policymakers a better understanding of the tradeoffs involved. By calibrating algorithms and AI policies for local conditions, policymakers have a better chance of creating positive feedback loops that will result in greater effectiveness and accountability.
Feedback loops can have a massive impact, particularly when it comes to AI. In some cases, local AI policies could have far-reaching effects on how technology is designed and deployed elsewhere. For example, by establishing an Algorithms Management and Policy Officer, New York City has created a model that can be emulated worldwide.
AI localism also lends itself to greater policy coordination and increased citizen engagement. In Toronto, a coalition of academic, civic, and other stakeholders came together to ensure accountability for Sidewalk Labs, an initiative launched by Alphabet (Google’s parent company) to improve services and infrastructure through citywide sensors. In response to this civic action, the company has agreed to follow six guidelines for “responsible artificial intelligence.”
As this example shows, reform efforts are more likely to succeed when local groups, pooling their expertise and influence, take the lead. Similarly, in Brooklyn, New York, the tenant association of the Atlantic Plaza Towers (in collaboration with academic researchers and nongovernmental organizations) succeeded in blocking a plan to use facial recognition technology in lieu of keys. Moreover, this effort offered important cues for how AI should be regulated more broadly, particularly in the context of housing.
But AI localism is not a panacea. The same tight local networks that offer governance advantages can also result in a form of regulatory capture. As such, AI localism must be subject to strict oversight and policies to prevent corruption and conflicts of interest.
AI localism also poses a risk of fragmentation. While national approaches have their shortcomings, technological innovation (and the public good) can suffer if AI localism results in uncoordinated and incompatible policies. Both local and national regulators must account for this possibility by adopting a decentralized approach that relies less on top-down management and more on coordination. This, in turn, requires a technical and regulatory infrastructure for collecting and disseminating best practices and lessons learned across jurisdictions.
Regulators are only just beginning to recognize the necessity and potential of AI localism. But academics, citizens, journalists, and others are already improving our collective understanding of what works and what doesn’t. At The GovLab, for example, we are deepening our knowledge base and building the information-sharing mechanisms needed to make city-based initiatives a success. We plan to create a database of all instances of AI localism, from which to draw insights and a comparative list of campaigns, principles, regulatory tools, and governance structures.
Building up our knowledge is the first step toward strengthening AI localism. Robust governance capacities in this domain are the best way to ensure that the remarkable advances in AI are put to their best possible uses.
NEW YORK – The world is only starting to grapple with how profound the artificial-intelligence revolution will be. AI technologies will create waves of progress in critical infrastructure, commerce, transportation, health, education, financial markets, food production, and environmental sustainability. Successful adoption of AI will drive economies, reshape societies, and determine which countries set the rules for the coming century.
This AI opportunity coincides with a moment of strategic vulnerability. US President Joe Biden has said that America is in a “long-term strategic competition with China.” He is right. But it is not only the United States that is vulnerable; the entire democratic world is, too, because the AI revolution underpins the current contest of values between democracy and authoritarianism. We must prove that democracies can succeed in an era of technological revolution.
China is now a peer technological competitor. It is organized, resourced, and determined to win this technology competition and to reshape the global order to serve its own narrow interests. AI and other emerging technologies are central to China’s efforts to expand its global influence, surpass the economic and military power of the US, and lock down domestic stability. China is executing a centrally directed, systematic plan to extract AI knowledge from abroad through espionage, talent recruitment, technology transfer, and investments.
China’s domestic use of AI is deeply concerning to societies that value individual liberty and human rights. It employs AI as a tool of repression, surveillance, and social control at home, and it is exporting these capabilities abroad. China funds massive digital infrastructure projects around the world, while seeking to set global standards that reflect authoritarian values. Its technology is being used to enable social control and suppress dissent.
To be clear, strategic competition with China does not preclude cooperation where it makes sense. The US and the democratic world must continue to engage with China in areas such as health care and climate change. To stop trading and working with China would not be a viable path forward.
China’s rapid growth and focus on social control have made its techno-authoritarian model attractive for autocratic governments and tempting for fragile democracies and developing countries. Much work needs to be done to ensure that the US and the democratic world can package economically viable technology with diplomacy, foreign aid, and security cooperation to compete with China’s exported digital authoritarianism.
The US and other democratic countries are playing catch-up in preparing for this global tech competition. On July 13, 2021, the National Security Commission on Artificial Intelligence (NSCAI) hosted a Global Emerging Technology Summit that showcased an important comparative advantage that the US and our partners around the world retain: the broad network of alliances among democratic countries, rooted in common values, respect for the rule of law, and the recognition of fundamental human rights.
The global technology competition is ultimately a competition of values. Together with allies and partners, we can strengthen existing frameworks and explore new ones to shape the platforms, standards, and norms of tomorrow and ensure that they reflect our principles. Extending our global leadership in technological research, development, governance, and platforms will put the world’s democracies in the best position to harness new opportunities and defend against vulnerabilities. Only by continuing to lead in AI developments can we set standards for the responsible development and use of this critical technology.
The NSCAI’s final report provides a roadmap for the democratic international community to win this competition.
First, the democratic world must use existing international structures – including NATO, the OECD, the G7, and the European Union – to deepen efforts to address all the challenges associated with AI and emerging technologies. Here, the United Kingdom’s current presidency of the G7, with its robust tech agenda and efforts to further cooperation on a range of digital initiatives, is encouraging. The G7’s decision to involve Australia, India, South Korea, and South Africa reflects an important recognition that we must convene democratic countries from around the world in these efforts.
Likewise, the newly launched US-EU Trade and Technology Council (which in many ways mirrors NSCAI’s call for a US-EU Strategic Dialogue for Emerging Technologies) is a promising mechanism to align the world’s largest trading partners and economies.
Second, we need new structures, such as the Quad – the US, India, Japan, and Australia – to expand dialogue on AI and emerging technologies and their implications, and to enhance cooperation in standards development, telecommunications infrastructure, biotechnology, and supply chains. The Quad can serve as the foundation for broader cooperation in the Indo-Pacific region across government and industry.
And, third, we need to build additional alliances around AI and future technology platforms with our allies and partners. The NSCAI has called for the creation of a coalition of developed democracies to synchronize policies and actions around AI and emerging technologies across seven critical areas.
This momentum can be maintained only by working together. Partnerships – between governments, with the private sector, and with academia – are a key asymmetric advantage that the US and the democratic world have over our competitors. As recent events in Afghanistan have shown, US capabilities remain indispensable in allied operations, but the US must do more to rally allies around a common cause. This era of strategic competition promises to transform our world, and we can either shape the change or be swept along by it.
We now know that the uses of AI in all aspects of life will grow as the pace of innovation continues to accelerate. We also know that our adversaries are determined to turn AI capabilities against us. Now we must act.
The principles we establish, the investments we make, the national-security applications we field, the organizations we redesign, the partnerships we forge, the coalitions we build, and the talent we cultivate will set the strategic course for America and the democratic world. Democracies must invest whatever it takes to maintain leadership in the global technology competition, to use AI responsibly to defend free people and free societies, and to advance the frontiers of science for the benefit of all humanity.
AI will reorganize the world and change the course of human history. The democratic world must lead that process.