## The broken duplication machine

Fred has bought a duplication machine at a discount from a series in which 50 percent of all machines are broken. If Fred's machine works, it will turn Fred into two identical copies of himself, one emerging on the left, the other on the right. If Fred's machine is broken, he will emerge unchanged and unduplicated either on the left or on the right, but he can't predict where. Fred enters his machine, briefly loses consciousness and then finds himself emerge on the left. In fact, his machine is broken and no duplication event has occurred, but Fred's experiences do not reveal this to him.

Question: *When Fred emerges from the machine, to what extent
does his evidence support the hypothesis that his machine
works?*

The answer, I think, is that Fred's evidence makes it more likely than not that his machine works, but the matter is not entirely trivial.

To begin, one can make a fairly strong case that Fred's credence in the hypothesis that his machine works should be 1/2. After all, this would clearly have been the right attitude before he entered the machine, given his knowledge that the machine comes from a series in which 50 percent of the machines are broken. Now let E be the totality of what Fred learns upon exiting the machine. Fred's prior credence (before entering the machine) in "the machine works" conditional on the hypothesis that he would learn E should arguably have been 1/2. By a plausible principle of doxastic conservatism, learning E then should not have affected Fred's credence.

Note also that emerging on the left and emerging on the right are completely symmetrical. If Fred's credence in "the machine works" should increase (or decrease) when he emerges on the left, then it would also have had to increase (decrease) had he emerged on the right. His credence would have to go up (down) no matter what he learns. By repeatedly entering the machine, he would become almost certain that the machine works (or is broken). That seems wrong.
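To make the worry vivid, suppose for illustration that each emergence multiplied Fred's odds in favour of "the machine works" by a fixed factor (the function name and the factor of 2 are my own, chosen for the sketch):

```python
# Illustration of the worry: suppose each emergence multiplied the odds in
# favour of "the machine works" by a fixed factor of 2 (the likelihood
# ratio a naive Bayesian treatment of the evidence would assign).
def credence_after(n_trips, prior=0.5, factor=2):
    odds = (prior / (1 - prior)) * factor ** n_trips
    return odds / (1 + odds)

for n in (1, 2, 5, 10):
    print(n, credence_after(n))
# After 10 trips Fred would be over 99.9% confident that the machine works,
# even though his experiences are the same whether or not it does.
```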

Finally, for what it's worth, Fred's credence will indeed be 1/2 if he follows the update rule I have defended in my recent paper on belief update across fission, or the somewhat less general rule proposed by Hilary Greaves in 2007.

So Fred should be 50 percent confident that his machine works. One might conclude that Fred's evidence supports the hypothesis that his machine works to degree 0.5, by the general evidentialist principle that one should always believe things to the extent that they are supported by one's present evidence. But note that the three arguments just presented are all diachronic. They all rest on Fred's credences before entering the machine, not just on his evidence once he emerges. So let's have a closer look at that evidence.

When he emerges from his machine, Fred still knows that the machine
comes from a series in which 50 percent of all machines are broken. By
itself, this makes it 50 percent probable that Fred's machine is
broken. But he also knows that he has just emerged on the left. That
is, he knows that either the machine is broken and the original Fred
has emerged on the left, or the machine works and one of Fred's
duplicate successors has emerged on the left. Let's abbreviate this
information as "*some* Fred emerged on the left", using "Fred" as
a count noun that applies both to the original Fred and to his
duplicate successors.

Now if the machine works, then it is certain that some Fred would emerge on the left. On the other hand, on the supposition that the machine is broken, there is only a 50 percent probability of that outcome. The information that some Fred emerged on the left therefore supports the hypothesis that the machine works. Specifically, by Bayes' Theorem, the probability that the machine works, given the information about the series from which it comes as well as the information that some Fred has emerged on the left, is 2/3. But arguably that is all the relevant evidence. So Fred's evidence supports the hypothesis that his machine works to degree 2/3.
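The 2/3 figure is a routine application of Bayes' Theorem; here is a minimal check (the variable names are mine, for illustration):

```python
# Priors and likelihoods as stated in the text:
p_works = 0.5            # P(Works), from the 50/50 series
p_left_if_works = 1.0    # P(some Fred on the left / Works)
p_left_if_broken = 0.5   # P(some Fred on the left / Broken)

# Bayes' Theorem:
posterior = (p_left_if_works * p_works) / (
    p_left_if_works * p_works + p_left_if_broken * (1 - p_works)
)
print(posterior)  # 0.666..., i.e. 2/3
```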

To further strengthen this answer, imagine there's another person,
Ted, standing on the left-hand side of the machine. Ted knows
everything that Fred knows. He knows that 50% of the relevant machines
are broken, and that Fred just entered this particular
machine. Initially, Ted's credence in the hypothesis that Fred's
machine works is 1/2. Ted knows that if the machine works, then some
Fred is bound to appear on the left, whereas if the machine is broken
then it is just as likely as not that some Fred will appear. So when
Ted sees Fred appear, his credence in the hypothesis that the machine
works should increase. More precisely, it should be 2/3. (Note that
the arguments above that *Fred's* credence should stay the same
do not apply to Ted. For example, it is not true that Ted's credence
would have increased no matter what.)

Unlike the case of Fred, I think the case of Ted should be completely uncontroversial: Ted's posterior credence in "the machine works" should clearly be 2/3. So if you think that people should always believe things to the extent that they are supported by their evidence, you have only two options: maintain that Fred's credence, too, should increase to 2/3, or maintain that Fred and Ted do not have the same evidence. Either option is problematic. I've already explained why Fred's credence should not increase. (If it should, we should all become virtually certain that Everettian Quantum Mechanics is correct, no matter our initial probabilities.) Moreover, Fred and Ted do seem to have all the same relevant evidence. In fact, we can allow them to communicate and share all their evidence. Neither of them would thereby gain any relevant news. For example, Fred might learn that Ted was waiting on the left-hand side of the machine and then saw some Fred appear, but Fred already knew that some Fred appeared on the left, and the mere fact that Ted was waiting on that side surely sheds no light on whether the machine works.

But couldn't Fred have *essentially self-locating information*
that he can't possibly share with Ted? For example, suppose Fred and
Ted agree to give Fred the new name "Joe" in order to avoid the
unclear usage of the old name "Fred". (If another person has emerged
on the right, then that person is definitely not Joe, whereas it is
debatable whether he is Fred.) So Fred and Ted both know that Joe has
emerged from the machine. But one might argue that in addition, Fred
knows that *he himself* has emerged from the machine, and that
this is not the same proposition as the proposition that Joe has
emerged.

So there is a loophole for the evidentialist: she could say that Ted's evidence supports "the machine works" to degree 2/3 while Fred's evidence supports that same hypothesis to degree 1/2, even though Ted and Fred have exchanged all their information and fully trust one another. The reason would be that Fred has unsharable self-locating information which lowers the probability that his machine works.

But what could that information be?

Above I said that Fred's relevant evidence is (1) that his machine
comes from a series in which 50 percent of the machines are broken, and (2)
that some Fred has emerged on the left. But arguably Fred also knows
something else. He knows that *he himself* has emerged on the
left. On the hypothesis that the machine works, this is plausibly not
the same proposition as the proposition that some Fred has emerged on
the left. To see why, imagine that upon exiting the machine, Fred
keeps his eyes closed for an instant. On the supposition that his
machine works, he should then be uncertain whether he is on the left
or on the right. But he should not be uncertain whether some Fred is
on the left or on the right: if the machine works then it is certain
that some Fred is both on the left and on the right. Accordingly, when
Fred opens his eyes and sees that he is on the left, what he learns is
not merely that some Fred is on the left.

So let's revisit the above argument from Bayes' Theorem. Let
*Works* be the hypothesis that the machine works, let *Fred
Left* be the hypothesis that some Fred has emerged on the left, and
let *B* be the background information that 50 percent of the
machines are broken and that Fred has recently entered the
machine. Above I assumed that

P(Works / B) = 1/2

P(Fred Left / Works & B) = 1

P(Fred Left / Broken & B) = 1/2

By Bayes' Theorem, it follows that P(Works / Fred Left & B) =
2/3. But we've just seen that *Fred Left & B* does not
capture everything that Fred learns. Fred also learns "I have emerged
on the left" -- for short *I Left*. And while P(Fred Left / Works
& B) = 1, P(I Left / Works & B) is clearly not 1: assuming
that the machine has worked, it is an a priori open question for Fred
whether he is now on the left. So arguably

P(I Left / Works & B) = 1/2

P(I Left / Broken & B) = 1/2

and so we get the desired result: Fred's evidence does not support the hypothesis that his machine works. Hah!
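With these likelihoods, the Bayesian calculation does indeed come out neutral. A minimal sketch (variable names are mine):

```python
p_works = 0.5             # P(Works / B)
p_ileft_if_works = 0.5    # P(I Left / Works & B), as assumed in the argument
p_ileft_if_broken = 0.5   # P(I Left / Broken & B)

posterior = (p_ileft_if_works * p_works) / (
    p_ileft_if_works * p_works + p_ileft_if_broken * (1 - p_works)
)
print(posterior)  # 0.5 -- on these assumptions, "I Left" is evidentially neutral
```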

If this argument were sound, it would refute Stalnaker's approach
to self-locating belief. According to Stalnaker, one can always model
the evidence available to agents as uncentred propositions that can in
principle be shared. We could run a parallel argument for Ted and the
evidence that *Joe* has emerged on the left to establish that
Ted's evidence is neutral on whether Fred's machine works, which is
absurd.

But the argument isn't sound. Recall that the probabilities we are interested in are not (directly) credences but evidential probabilities. We want to know to what extent such-and-such evidence supports such-and-such hypothesis. So reconsider

P(I Left / Works & B) = 1/2.

To what extent does the hypothesis that the machine works together with the background information about its origin and that Fred recently entered it support the hypothesis "I have emerged on the left"? Surely not to degree 1/2! At most what's true is that

P(I Left / Works & B & I am some Fred) = 1/2.

But the evidential prior probability of "I am some Fred" is not 1,
nor does *Works & B* entail "I am some Fred". Indeed, if P(I
Left / Works & B) were 1/2 then P(I Left or I Right / Works &
B) would have to be 1. But Ted, for example, also knows *B*, and
surely he should not be certain that *he* has emerged either on
the left or on the right on the supposition that Fred's machine
works.

Now Fred's evidence presumably includes the proposition "I am some Fred". So can't we simply fold that information into the background facts?

Well, we can, but then we have to double-check the above argument. The likelihoods

P(I Left / Works & B & I am some Fred) = 1/2

P(I Left / Broken & B & I am some Fred) = 1/2

still look plausible. But to infer that P(Works / B & I Left & I am some Fred) = 1/2, we also need the assumption that

P(Works / B & I am some Fred) = 1/2,

and that does not look right. After all, if the machine works then there are more Freds in the world than if the machine is broken. For concreteness, consider worlds where there is nobody else except Ted and Fred (or Fred's successors). Among worlds where the machine works, there are then three prior evidential possibilities: "I am Ted", "I am the Fred on the left", "I am the Fred on the right". These should have the same evidential probability: if all your evidence is that you are one of these three people, you should give equal credence to each of them. So the evidential probability of being some Fred given that the machine works is 2/3. By contrast, given that the machine is broken it is only 1/2. The hypothesis "I am some Fred" therefore raises the probability of "the machine works".
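The counting argument can be checked by enumerating the centred possibilities directly. This is a sketch of the toy scenario just described (the dictionary layout and names are my own):

```python
from fractions import Fraction

# Toy scenario from the text: Ted and the Fred(s) are the only people.
# Equal prior credence in each world, spread uniformly over its inhabitants.
worlds = {
    "works":  ["Ted", "Fred-left", "Fred-right"],
    "broken": ["Ted", "Fred"],
}
half = Fraction(1, 2)
centred = {(w, person): half / len(people)
           for w, people in worlds.items() for person in people}

def prob(event):
    return sum(pr for key, pr in centred.items() if event(*key))

def is_fred(world, person):
    return person.startswith("Fred")

p_fred_if_works = prob(lambda w, p: w == "works" and is_fred(w, p)) / half
p_fred_if_broken = prob(lambda w, p: w == "broken" and is_fred(w, p)) / half
print(p_fred_if_works, p_fred_if_broken)   # 2/3 and 1/2

# Conditioning on "I am some Fred" therefore raises P(Works):
p_works_if_fred = prob(lambda w, p: w == "works" and is_fred(w, p)) / prob(is_fred)
print(p_works_if_fred)                     # 4/7, up from 1/2
```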

The point might be clearer if we apply Bayes' Theorem directly to
Fred's total evidence, or at least to the fragment *I am some Fred
& I Left*. By probability theory,

P(I am some Fred & I Left / Works) = P(I Left / I am some Fred & Works) * P(I am some Fred / Works).

Likewise,

P(I am some Fred & I Left / Broken) = P(I Left / I am some Fred & Broken) * P(I am some Fred / Broken).

Since

P(I Left / I am some Fred & Works) = P(I Left / I am some Fred & Broken) = 1/2

and

P(I am some Fred / Works) > P(I am some Fred / Broken),

Fred's evidence makes it more likely than not that his machine works.
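For a concrete check, plug in the numbers from the toy scenario above in which Ted and the Freds are the only people, so that P(I am some Fred / Works) = 2/3 and P(I am some Fred / Broken) = 1/2. The exact posterior, 4/7, is an artifact of that toy population; other populations would give other numbers above 1/2:

```python
from fractions import Fraction

p_works = Fraction(1, 2)                    # prior P(Works / B)

# Likelihoods from the toy model (Ted plus the Fred(s) are the only people):
p_fred_if_works = Fraction(2, 3)            # P(I am some Fred / Works)
p_fred_if_broken = Fraction(1, 2)           # P(I am some Fred / Broken)
p_left_if_fred = Fraction(1, 2)             # P(I Left / I am some Fred & ...), same both ways

lik_works = p_left_if_fred * p_fred_if_works    # P(I am some Fred & I Left / Works) = 1/3
lik_broken = p_left_if_fred * p_fred_if_broken  # P(I am some Fred & I Left / Broken) = 1/4

posterior = lik_works * p_works / (lik_works * p_works + lik_broken * (1 - p_works))
print(posterior)  # 4/7, which is greater than 1/2
```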

So evidentialism is false.

All this looks like a reductio. By the same standards, Ted's being Ted should decrease his credence that the machine works, because he is not some Fred. But should Ted expect to "become" some Fred at some point, because the number of Freds in the world increases? Should he be more and more surprised still to be Ted? That seems absurd.

Again by the same standards: there are over 6 billion people, and each of us should be *very* surprised to be the person we are, even though the fact that there are 6 billion people is not at all surprising. So an unsurprising situation would make everyone very surprised, which is irrational.

Being who we are should not count as evidence of any kind, but as a given before we can talk of having evidence at all.

I think the correct analysis is that evidence is contrastive. Ted learns that Fred emerged on the left when he might not have, and this is informative: Ted's not seeing Fred emerge was a genuine possibility. Fred learns nothing, because for him there is no such contrast: had he not emerged on the left, he wouldn't have been there to know it.

I also think that learning and obtaining evidence presuppose a certain continuity in time (it is, after all, a diachronic process). You cannot learn anything unless you are identified with your past self and retain some memory. It follows that being a Fred (or a Ted) rather than not cannot count as contrastive evidence: you need a reference frame, a fixed background, a personal identity, before you can contrast anything.

When Fred emerges, he doesn't "learn" that he is still some Fred: it is Fred who learns something.