Nicholas Agar
Says More…

This week in Say More, PS talks with Nicholas Agar, Professor of Philosophy at the University of Waikato, New Zealand, and a co-author, most recently, of How to Think about Progress: A Skeptic's Guide to Technology.

Project Syndicate: You recently wrote that the “horizon bias” – the belief that what could happen will happen soon – “is most consequential in those with enough expertise to be able to offer scientific and technological solutions to big challenges” – people like Elon Musk. In your book How to Think about Progress: A Skeptic's Guide to Technology, co-authored with Dan Weijers and PS’s Stuart Whatley, you draw a sharp distinction between a moonshot and the “Mars-shot” that Musk has articulated. What makes for an effective technological moonshot, and what are the potential pitfalls – or benefits – of Musk-style Mars-shots?

Nicholas Agar: Many of our civilization’s recent missteps stem from mistaken thinking about the future. The book describes one reason for such thinking: what we call “horizon bias.” First, we’re sold an enticing vision – say, human colonies on Mars or a wholesale cure for cancer. Next, we’re told a story about how that future is imminent. Since none of the steps between the present and that future is impossible, and every step is described in the kind of “scientific detail” that one finds in hard science fiction, we buy into the vision.

It’s fun to speculate about Mars as the Plymouth Rock of a future intergalactic empire. And it can be easy to look at the rising valuations of Musk’s businesses and choose to believe that the combination of nerd bravura, “extremely hardcore” employees, and access to the US president’s ear will get us there – and soon. But there’s a fundamental difference between Musk’s “Mars-shot” and the “moonshot” that actually got humans to the moon: the latter was meticulously planned, to the last boring detail.

Imagine all the currently unknown obstacles that could arise to delay Martian colonies until 2150 or 2250. There are always casualties of progress; big progress can generate big body counts. A 200-year delay, with some SpaceX mishaps along the way, might not mean much to people in the year 3000. They will probably put the human suffering it took to build Musk City into historical perspective, much the way tourists visiting the Colosseum don’t worry much about the experiences of the slaves who built it. But it is something to consider as you advise your child about whether a future in space is right for them.

PS: In How to Think about Progress, you propose a way of contemplating the future and a “model of writing” about it that “lowers the barriers to entry for fresh thinking.” This appears to be right up the alley of liberal-arts departments, which you have vociferously defended from those who would privilege “technical ‘job-ready’ majors.” What role can the humanities play in preparing younger generations to navigate today’s data-driven world, and why should students study history, literature, and philosophy alongside informatics and cognitive science?

NA: The humanities offer a way to balance frothy tech visions of the future with useful ideas about making technology work for everyone. In times of disorienting change, we need serious thinking about what it means to be human. That’s what the humanities are all about.

The retreat from the humanities is especially pronounced in Australasia. Teachers craft essay questions with care, only to have students turn them into artificial-intelligence prompts. We say that we can catch the AI cheats, but we know that we catch only the ones who don’t bother to conceal it. Students and employers are rightly questioning the value of degrees gained under these circumstances.

It doesn’t help that humanists have submitted to a Big Academic Publishing model that makes our research needlessly expensive. At a time of rising costs and declining enrollments, it should come as no surprise that university leaders are reconsidering their commitment to the humanities. I am confident that, as long as humans exist, the humanities will, too. But humanities disciplines will have to evolve alongside the needs and challenges to which they are responding. The current crisis should spur just such an evolution.

For starters, we must ensure that humanities courses foster imagination, and that means placing less emphasis on training humanities students to write the way humanities academics write. If you want a job in a university humanities program, you need to learn how to write an academic article that could be accepted for publication in a journal. That requires a deep understanding of Big Academic Publishing’s formatting and citation practices. But many students who take humanities courses don’t seek jobs in the academy. Instead, they want to think and write clearly and creatively about how millennia of accumulated thinking about what it means to be human bears on today’s chaotic world.

Attention to detail is more important in the sciences than in the humanities. If you don’t capture every detail of an experiment, how will another scientist replicate your findings? In the humanities, we welcome the correction of our proposals in spirited discussion after publication.

Humanities courses should challenge students not only to dream of a better world, but also to imagine how to get there. Steve Jobs famously said that the people who change the world are the ones who are “crazy enough to think they can.” But as we celebrate this brand of “crazy” in tech, we must also celebrate young people who are “crazy” enough to imagine new ways for humans to work, play, and prosper.

Of course, there is plenty of scope for horizon bias in these non-tech visions of the future. But in a good humanities discussion, the gaps in one student’s bold vision of human possibilities can often be filled by another student, who has different experiences of being human. The humanities we need would focus on making the most of the diversity of the world’s eight billion human imaginations.

PS: During the COVID-19 lockdowns, you noted that “social media probably fulfill our social needs about as effectively as” sugary snacks and fizzy drinks “fulfill our nutritional needs.” In a forthcoming article, you observe that if a “malevolent AI is going to replicate the plot of Terminator 2: Judgment Day” by manipulating someone into “surrendering nuclear launch codes,” it must “navigate the messy intricacies of human psychology.” How might focusing our innovation systems on the human element of technology change our approach to progress?

NA: Those messy intricacies may be inconvenient for the machines, but the machines must accommodate them nonetheless. The alternative, after all, is simplifying ourselves for their purposes.

If we focus on tech, we can view Uber drivers as “enhanced” by the Uber app. But my conversations with Uber drivers have given me the impression that they don’t feel that way. Our jaws might drop when a large-language-model AI generates a review of The Terminator that sounds like it was written by a time-traveling Jane Austen. But we tend to overlook the fact that no current AI comes close to matching the achievement of writing Sense and Sensibility.

If a machine threatens to do your job, you can try to compete on price – a path paved by misery, no doubt. Or you can get a new job doing something no AI can. And there is plenty that AI cannot do; in fact, no AI has given us any reason to believe that it can “imagine” at all. Whereas Austen essentially invented a new way to write, AI simply synthesizes information that already exists. I wonder what daring new forms of philosophy people will invent in response to the suggestion that getting an academic job may soon require little more than feeding prompts into an AI and spamming journal submission pages.

I hope that humanists will soon get bored with AI and come to treat it the way we treat Google Search: useful, but part of life’s background, like bookshelves or wallpaper. Who looks at AI art and wonders what the artist was thinking?

BY THE WAY . . .

PS: One “feature of human psychology” that affects both our perceptions and the trajectory of technological progress, you write in How to Think about Progress, is “hedonic normalization”: when we are born with access to a particular technology, we not only take it for granted, but also move the goalposts of what technology must achieve to be considered successful. How does this phenomenon, when applied to technology, differ from how we perceive other types of progress, and how could it inform the quest to improve human well-being?

NA: Hedonic normalization explains why the results of impressive progress can often leave us feeling disappointed. Now is a better time to be diagnosed with cancer than before US President Richard Nixon declared war on the disease in 1971: in the United States, there are more than 18 million cancer survivors today, compared to just three million back then. But our attitude toward the emperor of all maladies has changed little. In my 2015 book The Sceptical Optimist, I argue that the great achievements of the War on Cancer have created a new baseline from which to consider progress against the disease. So long as there is cancer, there will be plenty of misfortunes that prompt us to think, “That could have been me.”

It’s good to want better technology. But an excessive focus on technological progress can undermine our commitment to working together to solve problems. A quick tech fix for the climate crisis would be great, but history suggests that we won’t get one. Instead, we will probably get genuine advances that fall short of the vision we are sold. Looking past our many differences and working together to protect and restore the environment would almost certainly lead to a better future. That approach could prepare humanity for any future challenge.

PS: In your book, you lament the overabundance of faith in a “futurism industry” that “caters to businesses’ abiding fear of uncertainty by offering apparently scientific ‘strategic foresight’ on any subject for which there is a paying subscriber.” What risks does the “professionalization” of futurism raise, and what would a more “useful” version of the discipline look like?

NA: If an influential public figure offers a review or recommendation of a product in which they have a commercial interest, we are rightly cautious, even suspicious. The same should go for anyone who claims to be an expert on the future. In fact, futurists who discuss human-enhancement technologies often advance predictions that are conveniently aligned with the interests of companies selling those technologies. In an increasingly impoverished academy, these businesses have money to carry out “research” about ethics and the future. In short, a lot of futurism is just marketing.

A more “useful” futurism would also consider all the ways a thrilling vision of the future could go wrong and offer advice about what to do if it does. Neuralink devices raise many exciting possibilities, and encryption will obviously be a high priority for the company as it implants them in people’s brains. But as you consider the company’s commitments, you should also wonder what kind of Manchurian Candidate you might become once the quantum computers of five years hence are turned on your device’s encryption.

Useful futurism isn’t about selling, but about advising and warning.

PS: How to Think about Progress includes an extensive discussion of progress in medicine – an area where AI, in particular, is raising hopes of transformative breakthroughs. Are there principles that can be applied to help us distinguish the hype from realistic possibilities and genuine marvels in this critical field?

NA: This could be a very exciting time in medicine – if we can look past all the egotistical, profit-seeking overselling. Where there are humans, there will occasionally be hype. What if we apply AI to all the peer-reviewed literature on disease? What patterns in peer-reviewed articles on diabetes might go unnoticed by human researchers, but be quickly detected by an AI? We won’t know until we try. Access to these wonders will depend partly on academic publishers, who will have a strong incentive to find the highest possible price point for AI access to their articles. That’s one of the many joys of capitalism.

https://prosyn.org/xkCpKaM