What Happens When We Risk Civilization Itself?

Social risk aversion is an undertheorized element in Tyler’s framework. Let’s take for granted that society should maximize sustainable growth, properly understood, subject to a near-absolute human rights constraint. There’s still the question of what counts as sustainable. Since the effects of many actions are uncertain, it’s not just a question of which actions maximize absolute or even expected growth over millions of years. Suppose an action resulted in either an instantaneous tripling of wealth or the complete destruction of human civilization, with 50-50 probability. In expected terms, this is a 50% gain, an incredible return on global wealth worth trillions of dollars. But I expect Tyler would reject this gamble as unethical because of the 50% chance of losing everything.
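The arithmetic of that gamble, and why expected value alone doesn’t settle it, can be sketched in a few lines. This is a minimal illustration of my own, not Tyler’s method; the log-utility evaluation at the end is one conventional assumption that captures his likely refusal:

```python
import math

# The 50-50 gamble described above, with current global wealth
# normalized to 1.0.
p_win = 0.5
win = 3.0   # instantaneous tripling of wealth
lose = 0.0  # complete destruction of human civilization

expected_wealth = p_win * win + (1 - p_win) * lose
print(expected_wealth)  # 1.5, i.e. a 50% expected gain

# A risk-neutral planner accepts the gamble (1.5 > 1.0). But under
# log utility, the annihilation branch has utility -inf, so the
# gamble's expected utility is -inf no matter how large the winning
# branch grows, and it is rejected.
def log_utility(w):
    return math.log(w) if w > 0 else float("-inf")

expected_utility = p_win * log_utility(win) + (1 - p_win) * log_utility(lose)
print(expected_utility)  # -inf, versus log(1.0) = 0.0 for the status quo
```

The point of the sketch: any utility function unbounded below at zero, not just log, makes the expected-value calculation irrelevant once complete destruction is on the table.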

There are a number of variables we can play with here: the number of outcomes, the percent return of each, the likelihood of each, and the population affected (all of human civilization vs. a subset). I would expect the acceptability of the gamble to vary positively with expected value and with the lower bound of outcomes, and negatively with the proportion of the population affected. I don’t think Tyler would reject all such gambles by adopting a maximin decision rule—choosing whichever action has the best worst-case outcome—as that would make economic growth nearly impossible. Society, in Tyler’s framework, should therefore be somewhat but not completely risk averse.

One way of instantiating that partial risk aversion would be discontinuous: maximin for gambles that involve the possibility of complete civilizational annihilation, and risk neutrality for everything else. Another would be to write the Wealth Plus function in such a way as to explicitly account for uncertainty, as we do in economics with utility-of-wealth functions. What is the right answer here, Tyler? And if it’s the latter, how do we pick a coefficient of social risk aversion?
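To make the second option concrete, here is a minimal sketch, assuming a CRRA (constant relative risk aversion) form—one standard way economists turn a “coefficient of risk aversion” into a number. Everything here (the function names, the sample gamble, the gamma values) is illustrative, not something from the essay:

```python
import math

def crra_utility(wealth, gamma):
    """CRRA utility of wealth. gamma = 0 is risk neutrality;
    larger gamma means more risk averse; gamma = 1 is log utility."""
    if gamma == 1.0:
        return math.log(wealth)
    return wealth ** (1.0 - gamma) / (1.0 - gamma)

def accepts(gamble, gamma):
    """gamble: list of (probability, wealth multiple) pairs, wealth > 0.
    Accept iff expected utility beats keeping current wealth (1.0)."""
    expected_utility = sum(p * crra_utility(w, gamma) for p, w in gamble)
    return expected_utility > crra_utility(1.0, gamma)

# A milder gamble than annihilation: 50-50 between doubling wealth
# and losing half of it (expected value 1.25, so risk-neutral accepts).
mild = [(0.5, 2.0), (0.5, 0.5)]
print(accepts(mild, gamma=0.0))  # True: risk neutral
print(accepts(mild, gamma=2.0))  # False: gamma = 2 rejects it
```

The open question in the text survives the formalism: the code shows *how* a coefficient changes which gambles society takes, but nothing in it tells us *which* gamma is right.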

Another issue is whether there is information we could receive that ought to make us more or less socially risk averse. For example, Tyler ties the sustainability criterion to the degree of irreplaceability of civilization (p. 86). If civilizations are scarce, then that pushes us in the direction of social risk aversion. But if civilizations are abundant in the universe, then for consistency we should accept some level of commensurability among civilizations just as we do for individual lives (I agree with Tyler that we all do), and that should drive us in the direction of risk neutrality. Does the detection of ‘Oumuamua, an interstellar object of possible artificial origin—almost immediately after humans gained the capability to find such projectiles—change Tyler’s coefficient of social risk aversion? At the margin, civilizations now appear less scarce than they did before we saw ‘Oumuamua.

Or what about the question of whether the universe is finite or infinite, or whether the multiverse contains an infinity of universes? Perhaps we will never know for sure, and that drives us to some risk aversion, but if there are an infinity of civilizations, even if many of them are outside of our light cone, that seems like an argument for social risk neutrality.

I suspect Tyler has thought about the issue of social risk aversion and that readers want to understand his view as much as I do.

Also from this issue

Lead Essay

  • Tyler Cowen looks at the place of economic growth in philosophy and public policy. He finds it’s an underexamined subject. But if we really can make small, sustainable improvements to long-term economic growth, these seemingly trivial changes will prove in the long term to be among the most important choices we make today. Cowen therefore argues for giving greater weight to the longer term.

Response Essays

  • Joshua M. Kim argues for public education and a higher minimum wage, challenging the advocates of economic growth to make the case against them. Although Kim agrees that economic growth matters, he is skeptical that providing social welfare today is liable to slow economic growth, and he calls on Cowen and others to justify this part of their argument.

  • Agnes Callard sees Tyler Cowen as engaged with the classic utilitarian argument for radical wealth redistribution: since spatial differences don’t have moral significance, and the marginal value of our wealth is much higher in the hands of someone crushed by poverty, we should relinquish what we have until that marginal difference disappears. She frames Cowen’s response to this argument in terms of two claims: the similarly arbitrary character of temporal differences, and the utilitarian value of economic growth. When we consider the welfare of future human beings, together with the power of economic growth to raise all boats, then this utilitarian argument becomes an argument for the status quo.

  • Economic growth is fundamental to human well-being, says Eli Dourado; why have ethicists neglected it? He answers that much philosophy was produced when economic growth was either nonexistent or difficult to notice. Even modern ethicists may need to take stock of the world around them, he suggests, and he closes by praising the beauty of economic growth.