Exploding desks and indistinguishable situations

I've been thinking a bit about belief updating recently. One thing I noticed is that it is often assumed in the literature (usually without argument) that if you know that there are two situations in your world that are evidentially indistinguishable from your current situation, then you should give them roughly the same credence. Although I agree with some of the applications, the principle in general strikes me as very implausible. Here is a somewhat roundabout counterexample that has a few other interesting features as well.

I think my desk probably won't explode in the next few minutes. Now suppose the Experimental Philosophy Front tells me the following, and I have reason to believe them:

*) It's true that your desk won't explode, but your twin brother's desk will. Until it does, we will make sure that your brother's credence that his desk will explode equals your credence that your desk will explode.

Should that make me change my mind on whether my desk will explode?

Well, there's "should" and "should". As far as epistemic rationality is concerned, it shouldn't: I was confident before that my desk won't explode, and what I've learned in the meantime only confirmed that belief. On the other hand, it might be practically beneficial if I could make myself have the irrational and false belief that my desk will explode, as that might save my brother's life. So I have a practical reason to believe that my desk will explode, but no epistemic reason.

Whether I have a practical reason to believe that my desk will explode perhaps depends on how exactly the Philosophy Front will see to it that my brother's credences are aligned with mine. Will they constantly monitor my credences and implant them into my brother's brain? In that case, my credences will causally influence his. Suppose that's not how they do it. They use a simpler method: they have figured out that my brother and I are deterministic reasoners of the same type, so that, starting with the same relevant assumptions and presented with the same evidence, we will reach the same conclusions. Hence all they need to do to align our credences with respect to the desk is to tell my brother (*) as well, and make sure he trusts them as much as I do. If that is what they do, it is no longer clear that I have a practical reason to make myself believe that my desk will explode: whatever I do has no influence on whether my brother will die at his exploding desk. Tricking myself into irrationality would provide evidence that my brother will survive, but it would have no causal influence on his fate. (This is of course Newcomb's problem.)
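To make the contrast vivid, here is a minimal expected-utility sketch; the labels and numbers are mine, purely for illustration. Let $A$ be the act of tricking myself, $S$ the proposition that my brother survives, and suppose $u(S) = 100$, $u(\neg S) = 0$, and that self-deception costs 1. Since we reason alike, $P(S \mid A) \approx 1$ and $P(S \mid \neg A) \approx 0$, so reasoning from the evidential probabilities favors tricking myself:

$$EU_{ev}(A) = P(S \mid A)\,u(S) - 1 \approx 99, \qquad EU_{ev}(\neg A) = P(S \mid \neg A)\,u(S) \approx 0.$$

Reasoning causally instead holds fixed that my brother's fate is causally independent of what I do, so $P(S)$ is the same whichever act I perform, and the one-unit cost of self-deception tips the balance against $A$.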

Does the precise way in which the Front makes our credences align also matter for the epistemic rationality of my belief? It seems not. Whatever the Front tells me about how they make my brother's credences mirror mine is surely irrelevant to whether my desk will explode. In particular, suppose they decided that, just to be safe, they would not only present my brother with the same announcement (*), but also make his entire surroundings indistinguishable from mine: they replaced the flower pots in his office, the papers on his desk, etc., and they replaced his memories with mine. All this just to make sure that he will reach the same conclusions as I do with respect to the desk. Still, I should remain confident that my desk won't explode.

If that doesn't seem obvious, note that I still haven't learned anything that is evidence that my desk will explode. On the contrary, what I have learned confirms that it will not explode. Moreover, why should it matter that my brother and I agree about flower pots? What does this have to do with my desk? Suppose I know that we're in exactly the same evidential situation, except perhaps with respect to the position of the flower pots. Should I then say: if the flower pot in my brother's office is exactly where mine is, then there's a good chance that my desk will explode? That seems crazy. It doesn't matter where our flower pots are, or whether we have different evidence about their position. Finally, the view that I should now be uncertain about my desk is inconsistent with my believing what the Front said, namely that my brother's desk will explode and mine won't. So on this view, I can't rationally believe what they said. But that's odd: why can't I? After all, they didn't say anything incoherent. (There's also the problem that if I don't believe them, I lose my reason for thinking that my desk might explode.)
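To spell out the inconsistency, a small sketch with labels I'm introducing for the purpose: let $H$ be the proposition that my desk will explode, and $T$ the Front's announcement (*). Since $T$ entails $\neg H$, conditionalizing on $T$ gives

$$P(H \mid T) = 0,$$

while the indistinguishability principle, applied to my situation and my brother's, gives

$$P(H \mid T) \approx 1/2.$$

Both can't hold; and accepting the second means giving up full belief in $T$ itself.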

So here we have a case in which my situation is evidentially indistinguishable from another situation in the same world, and yet I should be certain that I am not in that other situation. And I take it that it makes no difference whether the other situation involves my twin brother or my future self. That is, if I learned that some day in the future I will sit at my desk shortly before it explodes, that my credence in the desk exploding will then be aligned with my current credence, and that I will be made to have the same evidence I have now, I should likewise remain certain that my desk right now will not explode.

Comments

# on 23 January 2008, 00:17

Is your or your brother's desk anywhere near where the picture below was taken? (And will you ever explain why you posted it in your blog?)
Couldn't you and your twin brother just go out shopping until one of the desks has exploded?

# on 27 January 2008, 17:18

"...whatever I do has no influence on whether my brother will die at his exploding desk. Tricking myself to be irrational will provide evidence that my brother will survive, but it will have no causal influence on his fate."

I don't know. Will tricking yourself into believing your desk will explode have no causal influence on your brother? It will have something close to it. If you were to trick yourself, then it would be true that something caused your brother to come to the exploding-desk conclusion. You cannot cause him to come to that conclusion, but you can bring it about that something caused him to come to that conclusion.

# on 28 January 2008, 06:58

Hi!

Mike: if I were to trick myself, it would probably be true that something caused my brother to escape; but I wouldn't have caused or otherwise "brought about" that something. I would only have done something that is evidence for its existence. Is that worth doing? I think not, but I'm afraid I don't have any new and non-circular arguments for this (i.e., for two-boxing in Newcomb's Problem).

Ju: the picture is from somewhere between Lake Tekapo and Lake Pukaki in New Zealand, where I was on a bike ride; I didn't see any desks there. It would be good if we could go out shopping -- or even better, just for a walk -- but unfortunately I can't communicate with my twin brother.

# on 28 January 2008, 12:45

Well, if you know that your brother's situation is the same as yours, then you know that they can't be telling the truth to both of you. So it's hard to see how you can have good reason to believe them: why think that you're the special one, when you know they're also telling your brother that he's the special one and providing the same evidence? So I'm not sure that you should remain confident that your desk won't explode.

# on 28 January 2008, 14:49

Wo,

"...but I wouldn't have caused or otherwise "brought about" that something. I would only have done something that is evidence for its existence."

OK, you can do something such that, were you to do it, your brother would be (or would have been) caused to come to the exploding-desk conclusion as well. But you seem to understate the evidence you would then have that your brother would come to the exploding-desk conclusion. The probability that he would do so is 1, isn't it? The two worlds cannot diverge.

# on 28 January 2008, 18:09

Thanks for the info about the photo. (NZ seems to be a lovely place...) I do get a bit worried sometimes with all those militant philosophers around, and now you even have an imaginary twin brother! But feel free to ignore me.

# on 29 January 2008, 08:37

Dave: I see that line of reasoning, but I don't find it convincing. It's true that I can't be _certain_ that I'm the lucky one whose desk won't explode. But it seems to me that I have plenty of good evidence that I am, as good as inconclusive evidence can be. I don't see why I should be totally skeptical about what the Front tells me merely because I find out that at one point they lied to someone. What if they have always told the truth to me, and only lied to others? And I don't see why it matters to whether or not my desk will explode that my brother and I are in indistinguishable situations. I knew all along that there was a slim possibility that I'm in a situation where all my evidence is highly misleading. Why would the information that my brother's flower pots are arranged like mine increase this slim possibility? -- I realize that if you start out with the indifference reasoning, you can reach the opposite conclusions: that I really ought to completely distrust the Front, and that the flower pot information is strong evidence that my desk will explode. We have to choose between accepting the initially plausible indifference reasoning and rejecting its implausible consequences. For me, the implausibility of the consequences trumps the plausibility of indifference.

Mike: you're right, if I were absolutely certain that my brother would reach the same conclusion as me, then I should probably try to trick myself. But such a case is hard to imagine, and hard to square with the idea that I am free to choose what I do: even though there are no causal links between the two of us, I would be certain that I will do exactly what my brother does. I'm not sure what to say about this case. At any rate, the case I described is one where I only have strong evidence that my brother's actions will be the same as mine.
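(A back-of-the-envelope version of that concession, with purely illustrative symbols: if $A$ is tricking myself, $S$ is my brother's survival, and $P(S \mid A) = 1$ while $P(S \mid \neg A) = 0$, then tricking myself is worth it whenever the cost $c$ of self-deception is smaller than the value $v$ of my brother surviving: $EU(A) = v - c > 0 = EU(\neg A)$.)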
