The Big Picture brings together a range of PS commentaries to give readers a comprehensive understanding of topics in the news – and the deeper issues driving the news.
Preventing Big AI
As generative artificial intelligence is applied across a rapidly growing number of industries, a slew of recent lawsuits, summits, laws, and regulatory actions has bolstered efforts to establish guardrails for the technology. While some of the challenges AI poses might prove relatively straightforward to solve, others will require creative thinking – and strong political will.
One question that has attracted considerable attention lately is whether training an AI model on copyrighted material amounts to copyright infringement. Fortunately, according to Mike Loukides and Tim O’Reilly of O’Reilly Media, Inc., this issue is not nearly as intractable as it might seem. Thanks to so-called retrieval-augmented generation, “it is entirely possible to ensure that generative AI models respect copyright and compensate authors when appropriate.”
A broader risk, points out the University of Chicago’s Eric Posner, is that AI is “likely to reinforce Big Tech’s dominance of the economy.” In fact, given “collusion and coordination among a handful of players,” a “future of economic concentration and corporate political power that dwarfs anything that came before” is “all but inevitable.”
Already, notes Diane Coyle of the University of Cambridge, Big Tech’s “dominant players” are “deploying [AI] models to reinforce their position.” Meanwhile, most policymakers and other decision-makers lack any AI expertise, so “policy responses to specific issues are likely to remain inadequate, heavily influenced by lobbying, or highly contested.” In this context, ensuring that powerful new AI technologies “serve everyone” will require a policy approach based on principles like interoperability.
Ian Ayres of Yale, Aaron Edlin of the University of California, Berkeley, and Nobel laureate Robert J. Shiller highlight a related problem: the AI revolution will “almost surely lead to an increase in income disparities,” as “those who make and own the inventions” amass “immense wealth,” largely by “economizing on labor costs.” Since regulation “cannot eliminate these risks without precluding…AI’s potential benefits,” including “dramatic increases in productivity,” inequality insurance is essential.
Beyond economics, explains Giulio Boccaletti of the Euro-Mediterranean Center on Climate Change, the power of a few private actors over AI development and applications has important implications for scientific research, including climate science. With “the means of research,” such as computational infrastructure, “firmly in private hands, policymakers will need to be vigilant to ensure that these new tools provide public goods, rather than just private benefits.”
But, according to Carme Artigas, James Manyika, Ian Bremmer, and Marietje Schaake – all members of the Executive Committee of the UN High-level Advisory Body on Artificial Intelligence – national-level efforts will not be enough. “The unique challenges that AI poses demand a coordinated global approach to governance,” and only the United Nations “has the inclusive legitimacy needed to organize such a response.”