Reasoning about doom

I occasionally teach the doomsday argument in my philosophy classes, with the hope of raising some general questions about self-locating priors. Unfortunately, the usual formulations of the argument are problematic in so many ways that it's hard to get to these questions.

Let's look at Nick Bostrom's version of the argument, as presented for example in Bostrom (2008).

We compare two possibilities about the prospects of humanity:

Early Doom: The total number of humans who will have ever lived is 100 billion.

Late Doom: The total number of humans who will have ever lived is 100 trillion.

The argument goes as follows.

Early Doom and Late Doom have roughly equal prior probability. Every Early Doom world is inhabited by 100 billion people; a priori, each of these 100 billion birth positions is equally likely to be ours. Similarly for the 100 trillion positions in Late Doom worlds. If we now take into account the fact that there have only been around 50 billion humans so far (i.e., that our "birth rank" is around 50 billion), it follows by Bayes' theorem that Early Doom is vastly more probable than Late Doom.

More precisely, using 'E' for Early Doom, 'L' for Late Doom, and 'R' for the information that our birth rank is around 50 billion, Bayes' theorem gives us:

\[\begin{align*} P(E / R) &= \frac{P(R / E) P(E)}{P(R / E) P(E) + P(R / L) P(L)}\\ &= \frac{1/10^{11} \cdot 1/2}{1/10^{11} \cdot 1/2 + 1/10^{14} \cdot 1/2} \approx 0.999. \end{align*} \]
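Here is a minimal numerical sketch of this update, in Python. The dictionary keys 'early' and 'late' and the variable names are just labels introduced here for the figures above.

    # Bayes update on our birth rank R, assuming only the two hypotheses above.
    prior = {'early': 0.5, 'late': 0.5}               # roughly equal prior probabilities
    total_humans = {'early': 10**11, 'late': 10**14}  # 100 billion vs 100 trillion

    # On each hypothesis, every birth position is a priori equally likely to be
    # ours, so the probability of any particular rank (e.g. ~50 billion) is 1/N.
    likelihood = {h: 1 / n for h, n in total_humans.items()}

    unnorm = {h: likelihood[h] * prior[h] for h in prior}
    posterior = {h: p / sum(unnorm.values()) for h, p in unnorm.items()}
    print(round(posterior['early'], 4))   # 0.999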

Can we conclude that it is 99.9% likely that we will soon go extinct?!

The most obvious problem with this argument is that E and L are not the only (a priori) possibilities. What do we get if we drop this assumption?

Let's use 'N' for the total number of humans who will have ever lived. Suppose we start with a uniform prior over \(N=1\) to \(N=10^{100}\) (say), generalizing Bostrom's uniform prior over E and L. Within each \(N=k\) world, the prior is evenly divided over all humans. Each position in each \(N=k\) possibility then has prior probability \(1/(k \cdot 10^{100})\). This is also the unnormalized posterior probability of \(N=k\) after conditioning on our position (birth rank) r, for \(k \ge r\). The posterior probability of \(N=k\) is therefore inversely proportional to k:

\[ P(N\!=\!k) = \frac{c}{k}, \]

where c is a normalising constant.
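A small-scale sketch of this generalized update, with a toy cap of 10,000 standing in for \(10^{100}\) and a birth rank of 50 standing in for 50 billion (both numbers are placeholders, chosen only to keep the sums cheap):

    from fractions import Fraction

    N_MAX = 10_000   # toy stand-in for the 10^100 cap on N
    r = 50           # toy stand-in for our birth rank of ~50 billion

    # Uniform prior over N = 1..N_MAX; within an N=k world, each of the k
    # birth ranks is equally likely, so P(rank r / N=k) = 1/k for k >= r
    # (and 0 for k < r).
    unnorm = {k: Fraction(1, N_MAX) * Fraction(1, k) for k in range(r, N_MAX + 1)}
    total = sum(unnorm.values())
    posterior = {k: p / total for k, p in unnorm.items()}

    # The posterior is inversely proportional to k: posterior[k] * k is constant.
    assert posterior[r] * r == posterior[2 * r] * (2 * r) == posterior[N_MAX] * N_MAX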

This does imply that every small-world hypothesis \(N=k\) is much more probable than a corresponding large-world hypothesis \(N=1000k\). On the other hand, there are many more large-world possibilities than small-world possibilities. For example, the probability of \(N=10^{11}\) is about equal to the probability that N is between \(10^{14}\) and \(10^{14}+1000\). So we can be as confident that there will be 100 billion people as that there will be 100 trillion people plus or minus 500. It's not obvious that this should disturb us.
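To spell out that comparison: each of the roughly thousand terms between \(10^{14}\) and \(10^{14}+1000\) is about \(c/10^{14}\), so

\[ \sum_{k=10^{14}}^{10^{14}+1000} \frac{c}{k} \;\approx\; 1000 \cdot \frac{c}{10^{14}} \;=\; \frac{c}{10^{11}} \;=\; P(N\!=\!10^{11}). \]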

In fact, the calculation implies that we are a lot more likely to be among the first half of all humans than among the second half. On the face of it, this may seem unduly optimistic, given that (by definition) half of all humans in any world are among the second half.
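To see where this comes from: with birth rank r, we are in the second half just in case \(r \le N < 2r\), and in the first half just in case \(N \ge 2r\). Approximating the harmonic sums by logarithms (and keeping the \(10^{100}\) cap from above),

\[ P(\text{second half}) \approx \sum_{k=r}^{2r} \frac{c}{k} \approx c \ln 2, \qquad P(\text{first half}) \approx \sum_{k=2r}^{10^{100}} \frac{c}{k} \approx c\,(100 \ln 10 - \ln 2r), \]

which, with \(r \approx 5 \cdot 10^{10}\), is roughly \(0.7c\) against \(205c\): the first-half hypothesis comes out about 300 times more probable.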

One might respond that even if we're among the first half of all humans, we may still be close to extinction, given that the human population is so much larger today than it was in the distant past.

This points at the other obvious flaw in the argument: We have a lot of further information besides our birth rank.

Suppose all you know about a population of bacteria is that it has doubled every hour for the last few days and currently stands at 100 million. What's your probability distribution over how many bacteria will ever have existed in that population?

Hard to say, but the distribution should not be flat. We expect tendencies to project into the future. It's more likely that the population will double again in the next hour than that it will quadruple or halve.

Similarly, the fact that humanity has been growing favours futures with a lot more humans over futures with fewer humans.

But don't we know that the exponential growth of the human population will come to an end soon? Well, yes. We have a lot of further information. It's really hard to assess how it all adds up.

Once we see the obvious flaws in the argument, it's not clear why we might want to change the crucial assumption about priors that Bostrom and others have focussed on: that Early Doom and Late Doom have roughly equal prior probability.

In the future, I might use the following variation of the doomsday argument (inspired by some of the cases in Bostrom (2001)):

Doom II. We have created a device that will either destroy all humans or ensure our interplanetary survival for millions of years. Which of these will happen depends on whether the Nth digit of a certain physical constant is even (doom) or odd (no doom). We have not been able to measure this digit. How confident should we be that it is even?

Here we can, for simplicity, assume that there are really just two possibilities, much like Early Doom and Late Doom. If we start with a uniform prior over whether the digit is even or odd – as seems reasonable – and take into account our early birth rank, as above, we get the seemingly unreasonable conclusion that the digit is almost certainly even.
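Writing \(N_e\) for the total number of humans if the digit is even (doom) and \(N_o\) for the much larger total if it is odd (notation introduced here just to make the parallel explicit), the update runs exactly as before:

\[ P(\text{even} \,/\, R) = \frac{1/N_e \cdot 1/2}{1/N_e \cdot 1/2 + 1/N_o \cdot 1/2} = \frac{1}{1 + N_e/N_o} \approx 1 \quad \text{if } N_o \gg N_e. \]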

Bostrom, Nick. 2001. “The Doomsday Argument, Adam & Eve, UN++, and Quantum Joe.” Synthese 127 (3): 359–87. doi.org/10.1023/A:1010350925053.
Bostrom, Nick. 2008. “The Doomsday Argument.” Think 6 (17-18): 23–28. doi.org/10.1017/S1477175600002943.

Comments

# on 09 February 2024, 09:09

I am definitely suspicious of Doomsday type arguments, but I think your Doom2 argument is too strongly specified. My feeling about Copernican/mediocrity/self-localizing models is that they rely on assumptions of very broad processes, e.g. human extinction could be caused, with a small probability of occurrence for any particular event, by a large number of different processes, whose actions may vary over time. I think this is what allows us to swallow such very sweeping model assumptions. By contrast, in your Doom2, at the time of the development of device D, the probability of extinction went from some small number to 50%, conditional on the use of D being unavoidable.

One might consider an old-fashioned doomsday-type argument: nuclear technology and a pessimistic view of human nature, with a constant small probability of nuclear war for each generation of politicians.

# on 16 April 2024, 16:34

Great post. Would one way of putting your point be that common ways of formulating the Doomsday argument ignore the Principle of Total Evidence? Like, maybe it's true that given *just* our birth rank, we should expect doom soon, but what we are really interested in is what we should think given our total evidence. We don't want to make the mistake of conditionalizing only on a proper part of our evidence. That would be like thinking that Tweety the penguin likely flies, because Tweety is a bird and p(Fly|Bird) is high. But that's beside the point, what matters is p(Fly|Penguin), which is low.

# on 17 April 2024, 08:16

@3vn: Thanks. Yes, that's the main point. And also, it's not even obvious that we get a disturbing conclusion if we just conditionalize on our birth rank, as long as we consider all possible population sizes.
