Polarising Yourself Against Intergenerational Justice
you know who else said future generations matter? hitler
There’s this idea called longtermism, which is basically the view that we ought to care a lot more about how our actions influence the long-term future of humanity. It was popularised by leading (and founding) figures in the Effective Altruism movement like William MacAskill and Toby Ord. Some longtermists think that we should simply value future generations more than we currently do; others think that this is the most important moral issue of our time.
There’s some wiggle room on the scope of longtermism too: some say that some of our decisions should be influenced by a concern for the wellbeing of the next few generations, while others say that “in a wide class of decision situations, the option one ought, ex ante, to choose is contained in a fairly small subset of options whose ex ante effects on the very long-run future are best.” It’s not just that the latter group thinks the next generation, or the next few generations, matter: they think that generations hundreds or thousands of years away matter. A lot. So much so that they suggest “for the purposes of evaluating actions, we can in the first instance often simply ignore all the effects contained in the first 100 (or even 1,000) years, focusing primarily on the further-future effects. Short-run effects act as little more than tie-breakers.”
Even those who endorse weaker forms of longtermism advocate for massively increasing spending on mitigating existential risks posed by AI, biologically engineered pandemics, giant comets and supervolcanoes. Climate change and nuclear war, on the other hand, are considered less significant threats due to the attention they already receive and the exceptionally low probability that they result in human extinction. They’d be extremely bad, sure, but something that might kill 99.9% of people is not as bad as something that might kill everyone.
Some people on the political left don’t like longtermism. They don’t like the idea of prioritising the very long-run future over the short-to-medium term or prioritising other catastrophic risks over climate change and nuclear war. They’re suspicious of the rhetorical and financial support that longtermism receives from (some) tech billionaires, (some) libertarians and (some) racists – perhaps longtermism presents those groups with a convenient excuse to ignore the present. But some people have become so negatively polarised against longtermism that they’ve ended up endorsing some bizarre views of their own.
Human Extinction
One such figure is Nathan J. Robinson. He has written extensively about his opposition to longtermism. He is so opposed, in fact, that he recently declared neutrality on the question of whether the human race should continue, writing that “If, in the very long run, our species goes extinct, I do not think that is a matter of moral concern.” Similar sentiments are expressed by Émile P. Torres, who declares that they don’t see anything wrong with humanity collectively deciding not to have children and going extinct.
Maybe I have been bought off by the Silicon Valley billionaires, but I think that sounds crazy, nihilistic and anti-human. If I were presented with a button that massively improved the lives of everyone on earth but guaranteed that humanity would go extinct in 200 years (perhaps via sterilisation or rendering the known universe uninhabitable), I would not press that button. Torres is at least more careful than Robinson – they claim that human extinction would be bad in most cases because the process of bringing it about would involve great suffering. But I also think it would be bad if humanity went extinct by choice or without suffering.
The end of all human relationships and social bonds, the loss of our cumulative knowledge, the cessation of human consciousness and meaning-making, the demise of humanity’s limitless potential, the termination of culture… I could go on. These are things we should do everything we can to avoid, even if some hypothetical future population collectively decides it’s time for humanity to go extinct.
Utilitarianism and Population Ethics
For reasons that have a lot to do with longtermism’s origins in analytic moral philosophy, longtermism is frequently paired with utilitarianism and the total view of population ethics. The former is a theory that says whether an act is right or wrong depends solely on whether it maximises wellbeing. The latter evaluates populations by the same metric, i.e. a population with greater total wellbeing is preferable to one with less total wellbeing.
To the extent that Robinson offers substantive reasons to reject longtermism, it is on the grounds that utilitarianism and the total view are wrong. He notes that longtermists like MacAskill accept (or at least entertain) the repugnant conclusion: for any given population of people living wonderful lives, there is a better, sufficiently larger population in which everyone’s life is only barely worth living. Holding all other things equal, the total view entails that a world with many happy people living in it could be improved by massively increasing its population and making everyone’s lives worse.
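To make the arithmetic concrete, here’s a toy version of the comparison (the numbers are purely illustrative, not drawn from MacAskill or Robinson). Let world A contain a billion people, each with wellbeing 100, and world Z contain a trillion people, each with wellbeing 1 – lives barely worth living. The total view scores a population by adding up individual wellbeing:

\[
V(P) = \sum_{i=1}^{N} w_i, \qquad
V(A) = 10^{9} \times 100 = 10^{11}, \qquad
V(Z) = 10^{12} \times 1 = 10^{12}.
\]

Since V(Z) > V(A), the total view ranks Z above A – and whatever happy world you start with, some sufficiently enormous Z will beat it. That’s all the repugnant conclusion is.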
Even person-affecting views, the prominent alternative to the total view, put pressure on you to care about future generations. Person-affecting views can be summed up by the slogan: “We are in favor of making people happy, but neutral about making happy people.” These views are insensitive to changes in population size: they reject the idea that we should, or even can, harm or benefit merely possible future people – those whose very existence depends on our choices. But most person-affecting views still entail that we should care about how things turn out for future people who will exist regardless of our actions.
Even if we adopted an extreme view on which we ought to care only about people alive today, we would still end up indirectly caring about how future generations turn out. Young people alive today will eventually grow old and will need to rely on working-age people to maintain a decent standard of living. One would hope, for their sake, that there are enough young and healthy working-age people around to support them. This is true even if the elderly self-fund their retirement, because they still rely on ongoing economic output. A large generation of elderly people can accumulate all the savings they like to finance their consumption, but too many dollars chasing too few goods will just drive inflation. So even in an indirect sense, we have a reason to want future generations to thrive.
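For the macro-minded, the “too many dollars chasing too few goods” point is just the textbook exchange equation (a standard identity, used here purely as an illustration; V here is the velocity of money, not the value function above):

\[
M \times V = P \times Q.
\]

With the money stock M and its velocity V roughly fixed, a fall in real output Q – too few workers making things – forces the price level P up. Savings are claims on future production, and they’re only worth what future people actually produce.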
All Else Being Equal
The objections to utilitarianism and the total view raised by Robinson and others target niche thought experiments where these views seemingly advocate for doing abhorrent things. If we ought to maximise wellbeing, then, all else being equal, it’s morally obligatory to harvest organs from homeless people to save additional lives. All else being equal, we should dedicate all our resources to developing mind-uploading technology so we can achieve eternal bliss. If we ought to prefer populations with greater total wellbeing over populations with less total wellbeing, then perhaps, all else being equal, we ought to ensure that people have as many children as possible. Rule of law? Civil rights? Reproductive freedom? They’ve got to go, all else being equal.
But all else is not equal. The thought experiments that generate these problems are set up in an incredibly unrealistic way, where we simply stipulate that abhorrent acts would maximise wellbeing. It’s fine to argue that we should never violate certain human rights, even in strange thought experiments where doing so would maximise wellbeing – but in the real world we have good reason to believe such violations wouldn’t achieve that. And even if utilitarians found themselves in a position where they crunched the numbers and concluded that these egregious acts would maximise wellbeing, they would almost certainly still have good reason to doubt their calculations and act prudently.
Misapplying Moral Philosophy
Suppose we were discussing whether you should make sacrifices in the present to make things better for your future self. We could spend our time debating the merits of different metaphysical theories of persistence and personal identity to establish whether there is such a thing as the persisting self. Maybe your future self isn’t really you – they’re just a bundle of memories and experiences related to you in some way. Whether you end up believing that or not, you’re probably not going to think that it has any practical bearing on whether you should overdose on hard drugs for a quick high and leave the consequences to your future ‘self’.
We should think about population ethics in the same way. Whether the correct view entails the repugnant conclusion or tells us that creating happy lives is intrinsically neutral shouldn’t inform whether we care about the distant future or whether it’s fine if humanity goes extinct.
Longtermism in the popular imagination is a victim of its philosophical origins. It is all too easy to dismiss because you reject utilitarianism and the total view, when in reality you don’t need to endorse either to care about the distant future.
There’s also no reason to throw the baby out with the bathwater because some tech billionaire said something stupid about shaming women into having more babies or uploading our minds to computers. Tech billionaires say a lot of things. Some of them have abhorrent views, some of them might use longtermism as a distraction from bad things they’re doing in the present – and some of them see the longtermists as a threat, with all that talk of ‘safety’ and ‘risk management’ getting in the way of cashing in on new technology.
Don’t negatively polarise yourself against intergenerational justice.