Frequentism and the end of time

This paper (recently featured on the physics arXiv blog) argues that if the universe never comes to an end, then it will probably come to an end within the next 5 billion years. The reasoning, as far as I can tell, goes roughly like this.

First, define the probability of an event of type A, given an event of type B, as the number of A events divided by the number of B events. If the universe never ends, then the number of A events and the number of B events will often both be infinite. But infinity over infinity isn't well-defined. So to have well-defined probabilities, the relevant counts of A and B events must be restricted, e.g. to a finite initial segment of the universe's history.
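As a toy illustration (my own sketch, not from the paper; the function name and event encoding are made up), the frequentist definition restricted to a finite segment is just a ratio of counts:

```python
def freq_prob(segment, a, b):
    """Frequentist P(A | B) relative to a finite segment of history:
    the number of A events divided by the number of B events.
    Undefined (here: an error) if the segment contains no B events."""
    count_a = sum(1 for event in segment if event == a)
    count_b = sum(1 for event in segment if event == b)
    if count_b == 0:
        raise ValueError("undefined: no B events in the segment")
    return count_a / count_b

# A segment cut off at night: three evenings, only two followed by a morning.
segment = ["evening", "morning", "evening", "morning", "evening"]
freq_prob(segment, "morning", "evening")  # 2/3, not 1
```

The point of the restriction is visible already here: which probability you get depends on where the segment is cut.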

For any choice of a finite segment, some events will lie close to the segment's end. For example, suppose the chosen segment ends at night. Then although in fact every evening is followed by a morning, the ratio of mornings to evenings within the segment is less than 1. So there is a positive probability that an evening is not followed by a morning. And this means that there is a positive probability that time will end tonight. The longer the interval you consider, the higher the probability that time will end within it.
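The arithmetic behind this step can be sketched as follows (again my own hypothetical reconstruction, assuming a segment with n evenings that is cut off at night, so exactly one evening lacks a following morning):

```python
def p_time_ends_tonight(n_evenings):
    """In a segment ending at night, n_evenings evenings occur but only
    n_evenings - 1 are followed by a morning inside the segment.
    So the frequentist P(morning | evening) is (n-1)/n, and the
    leftover probability that time ends tonight is 1/n."""
    mornings_in_segment = n_evenings - 1  # the last evening is unmatched
    return 1 - mornings_in_segment / n_evenings

def p_ends_within(k_nights, n_evenings):
    """Rough chance that the segment's end falls within the next k nights,
    treating each of the n evenings as equally likely to be the last one.
    This is why longer intervals get a higher end-of-time probability."""
    return min(k_nights / n_evenings, 1.0)

p_time_ends_tonight(5)   # 1/5: one unmatched evening out of five
p_ends_within(3, 5)      # 3/5: grows with the interval considered
```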

The conclusion isn't inconsistent. The fact that the universe has a high chance of ending within the next 5 billion years does not entail that it will end. Improbable things can happen. But it is odd that the high chance of time coming to an end is derived from a theory that also tells us that time won't end. This would seem to cast doubt on "best system" accounts of chance, and provide a counterexample to the Principal Principle.

Judged from the philosophy armchair, the obvious weak spot in the argument is the frequentist definition of probability -- see e.g. Alan Hájek's fifteen arguments (and fifteen more) against frequentism. I myself am quite sympathetic to frequentism, not as an analysis of our ordinary notion of "probability" or "chance", but as an analysis of the measures that play the probability role in science. (It would be odd, I think, if scientists were completely wrong about the definition of one of their core theoretical terms.) But simple frequentism won't work, and it's not obvious how to fix it. Good to see that physicists are becoming aware of the problem.

(Incidentally, the authors of the paper motivate frequentism as an extension of Born's rule in quantum physics, which would imply that one can make frequentist sense of Born's rule. I thought the consensus was that one can't, on the grounds that even in no-collapse models there is no sensible way of "counting branches". Am I missing something?)
