Amid the ongoing COVID-19 pandemic, world leaders are assembling in Glasgow for COP26, the UN’s climate change conference. Both the pandemic and global warming are powerful reminders that the choices we make can have consequences that continue to unfurl over decades and centuries. But how much should we care about these hard-to-predict long-term consequences of our actions? According to one camp, the so-called moral “longtermists,” we ought to care a great deal. Others, however, have called longtermism “the world’s most dangerous secular credo.”
COVID, climate change, and the long-term impact of our choices
The coronavirus now appears to be endemic. It is likely to continue to circulate across the globe indefinitely, causing more and more human suffering, economic damage, and disruption to our lives. The total sum of harm an endemic virus can cause is theoretically boundless. And yet, if China had better regulated its meat markets or its bio-labs (depending on your preferred origin theory), it would have likely prevented the outbreak entirely. This failure, in one place at one time, will have significant long-term costs.
The headline ambition of COP26 is for nations to commit to specific plans for achieving net zero (carbon and deforestation) by the middle of the century. Whether or not these talks are successful could have a profound long-term impact. Success could put humanity back onto a sustainable trajectory. We might avoid the worst effects of climate change: biodiversity collapse, flooding, extreme weather, drought, mass famine, mass refugee movements, possible population collapse, etc. Taking effective action on climate change now would provide a huge benefit to our grandchildren.
But the comparison between climate action and inaction does not stop there. Beyond helping our grandchildren and great-grandchildren, the benefits of effective climate action now would likely continue to snowball deep into the next century. Instead of our great-grandchildren needing to devote their resources and efforts to mitigating and reversing the damage of climate change, the twenty-second century might instead be spent in pursuit of other goals: eliminating poverty, making progress on global justice, and deepening our understanding of the universe, for example. Progress on these goals would, presumably, generate positive consequences of its own in turn. The good we can achieve with effective climate action now would continue to accumulate indefinitely.
Commitment to taking the long view
Both COVID and climate change make a strong intuitive case for moral “longtermism.” Longtermists think that how things go in the long-term future is just as valuable, morally speaking, as what happens in the near-term future. If you can either prevent one person from suffering today or two tomorrow, the longtermist says you morally ought to prevent the two from suffering tomorrow. But if you also had the option of preventing three people from suffering in a million years, they say you should do that instead. It doesn’t matter how far events are from us in time; morally, they’re just as significant.
The second part of the longtermist view is that we can influence the long-term future with our choices today. Longtermists argue that which long-term future comes about depends on what humanity does in the next century. And the stakes are high. There are possible futures in which humanity overcomes the challenges we face today: ones in which, over millennia, we populate the galaxy with trillions of wonderful, fulfilled lives. There are also possible futures in which humanity does not even survive this century. There is, in other words, a very valuable possibility — in moral philosopher Toby Ord’s words, a “vast and glorious” version of the future — that’s worth trying to make real.
A catastrophic future for humanity is not a particularly remote possibility. Ord, who studies existential risk, sees the next century as a particularly dangerous one for humanity. The risks that concern him are not just the cosmic ones (meteorites, supernova explosions) or the familiar ones (nuclear war, runaway global warming, a civilization-collapsing pandemic); they also include unintended and unforeseen consequences of quickly evolving fields such as biotech and artificial intelligence. Adding these risks together, he writes, “I put the existential risk this century at around one in six.” Humanity has the same odds of survival as a Russian roulette player.
The cost of failing to prevent an existential catastrophe (and the payoff of success) is incredibly high. If we can reduce the probability of an existential risk occurring (even by a percentage point or two), longtermists claim that any cost-benefit analysis will show it’s worth taking the required action, even if it incurs fairly significant costs; the good future we might save is so incredibly valuable that it easily compensates for those costs.
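The longtermist’s cost-benefit claim is, at bottom, an expected-value calculation. A toy sketch (every number here is an illustrative assumption, not a figure from Ord or anyone else) shows why even a one-percentage-point reduction in risk can dominate the analysis:

```python
# Toy expected-value comparison for existential risk reduction.
# Every number below is an illustrative assumption, not a real estimate.
future_value = 1e15   # stand-in value of a long, flourishing future
risk_before = 0.02    # assumed existential risk with no intervention
risk_after = 0.01     # assumed risk after a costly intervention
cost = 1e9            # assumed cost of the intervention

# Expected value gained by shaving one percentage point off the risk.
expected_gain = (risk_before - risk_after) * future_value

# On these numbers the gain (about 1e13) dwarfs the cost (1e9),
# which is exactly the longtermist's point.
print(expected_gain > cost)  # True
```

Notice that the force of the argument comes almost entirely from how large the value of the future is assumed to be; as we will see, critics dispute precisely that style of reasoning.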
But, for whatever reason, reducing the probability of improbable catastrophes does not rise to the top of many agendas. Ord notes that the Biological Weapons Convention, the body that polices bioweapons around the globe, has an annual budget of just $1.6m, less than the average turnover of a McDonald’s restaurant. As Ord explains this strange quirk in our priorities, “Even when experts estimate a significant probability for an unprecedented event, we have great difficulty believing it until we see it.”
Even short of generating or mitigating existential risks, the choices we make have the potential to put the world on different trajectories of radically different value. Our actions today can begin virtuous or vicious cycles that continue to create ever-greater benefits or costs for decades, centuries, or even millennia. So besides thinking about how we might mitigate existential risks, longtermists also claim we need to give more thought to getting onto more positive trajectories. Examples of this kind of opportunity for “trajectory change” include developing the right principles for governing artificial intelligence or, as COP26 is seeking to achieve, enacting national climate policies that will make human civilization ecologically sustainable deep into the future.
Challenges to longtermism
Last week, Phil Torres described longtermism as “the world’s most dangerous secular credo.” A particular worry about longtermism is that it seems to justify just about any action, no matter how monstrous, in the name of protecting long-term value. Torres quotes the statistician Olle Häggström who gives the following illustration:
Imagine a situation where the head of the CIA explains to the U.S. president that they have credible evidence that somewhere in Germany, there is a lunatic who is working on a doomsday weapon and intends to use it to wipe out humanity, and that this lunatic has a one-in-a-million chance of succeeding. They have no further information on the identity or whereabouts of this lunatic. If the president has taken [the longtermist] Bostrom’s argument to heart, and if he knows how to do the arithmetic, he may conclude that it is worthwhile conducting a full-scale nuclear assault on Germany to kill every single person within its borders.
The worry is that longtermism entails that it’s morally permissible, perhaps even morally obligatory, to kill millions of innocent people to prevent a low-probability catastrophic event. But this can’t be right, say the critics; the view must be false.
But does Häggström’s thought experiment really show that longtermism is false? The president launching such a strike would presumably raise the risk of triggering a humanity-destroying global nuclear war. Other countries might lose faith in the president’s judgment and launch a preventive strike against the U.S. to kill this new madman before he does to them what he did to Germany. If the strike raised the probability of catastrophic global nuclear war by more than one in a million, then longtermism would advise against it. This is to say that if the president were a longtermist, it’s at least highly debatable whether he would order such an attack.
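This reply is itself a piece of expected-value arithmetic: the strike removes a one-in-a-million chance of doomsday, but if it adds a larger chance of global nuclear war, the calculation flips. A minimal sketch, where the first probability is the thought experiment’s stipulation and the second is an assumption for illustration:

```python
# The Häggström case, as the longtermist reply runs it.
p_doomsday_prevented = 1e-6  # stipulated chance the lunatic destroys humanity
p_war_added = 2e-6           # assumed increase in the chance of global
                             # nuclear war caused by the strike itself

# Longtermism favors the strike only if it removes more existential
# risk than it creates.
strike_favored = p_doomsday_prevented > p_war_added
print(strike_favored)  # False
```

On this (assumed) second probability, longtermism comes out against the attack; everything turns on which of the two tiny probabilities is larger.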
Of course, we can modify Häggström’s case to eliminate this complication. Imagine the chance of the madman succeeding in blowing up the world is much higher — one-in-two. In such a case, longtermism would likely speak in favor of the president’s nuclear strike to protect valuable possible futures (and the rest of humanity). But it’s also a lot less clear that such an act would be morally wrong compared with Häggström’s original case. It would be terrible, tragic, but perhaps it would not be wrong.
Maybe the real risk of longtermism is not that it gives us the wrong moral answers. Maybe the real problem is that humans are flawed. Even if it were true that longtermism would rule out Häggström’s nuclear attack on Germany, the view still seems to place us in a much riskier world. Longtermism is an ideology that could theoretically justify terrible, genocidal acts whenever they seem to protect valuable long-term possible futures. And, ultimately, flawed human minds are more likely to perform unconscionable acts if they have an ideology like longtermism with which to attempt to justify them.
This last criticism does not show that moral longtermism is false, exactly. The criticism is simply that it’s dangerous for us humans to place such immense faith in our ability to anticipate possible futures and weigh competing risks. If the criticism succeeds, longtermists would be forced into the ironic position of holding that longtermism is true while working to prevent it from being widely adopted. They would have to push the view underground, hiding it from those in power who might make unwise and immoral decisions based on faulty longtermist justifications. It might turn out, then, that the best way to protect a “vast and glorious” possible future is to make sure we keep thinking short-term.