I'm certain that I went by the Mountains

(This is more or less the talk I gave at the "Epistemology at the Beach" workshop last Sunday.)

"A wise man proportions his belief to the evidence", says Hume. But to what evidence? Should you proportion your belief to the evidence you have right now, or does it matter what evidence you had before? Frank Arntzenius ("Some problems for conditionalization and reflection", JoP, 2003) tells a story that illustrates the difference:

...there is an ancient law about entry into Shangri La: you are only allowed to enter, if, once you have entered, you no longer know by what path you entered. Together with the guardians you have devised a plan that satisfies this law. There are two paths to Shangri La, the Path by the Mountains, and the Path by the Sea. A fair coin will be tossed by the guardians to determine which path you will take: if heads you go by the Mountains, if tails you go by the Sea. If you go by the Mountains, nothing strange will happen: while traveling you will see the glorious Mountains, and even after you enter Shangri La you will for ever retain your memories of that Magnificent Journey. If you go by the Sea, you will revel in the Beauty of the Misty Ocean. But just as you enter Shangri La, your memory of this Beauteous Journey will be erased and replaced by a memory of the Journey by the Mountains.

Suppose that in fact you travel by the Mountains. How will your degrees of belief develop?

Since you know that the coin is fair, you will at first assign credence 1/2 to heads. Later, when you've seen the coin land heads and have started your trip by the Mountains, you will be certain that the coin has landed heads, and that you are going by the Mountains. What happens when you arrive at Shangri La? Arntzenius argues that your credence in heads should go back to 1/2 (and he seems to have convinced everybody in the literature). Here is his argument.

...once you have arrived, you will revert to having degree of belief 1/2 in heads. For you will know that you would have had the memories that you have either way, hence you know that the only relevant information that you have is that the coin was fair.

By 'information', I suppose Arntzenius means what I would call 'evidence': the total information that is available to you from experience, introspection and memory. Given your background knowledge about the setup, this information indeed doesn't help you to determine whether the coin landed heads or tails: the probability for ending up with your present evidence is the same either way, hence the evidence lends no support to heads or tails.
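To spell out that likelihood argument (a standard Bayesian reconstruction, not Arntzenius's own notation): write $H$ for heads, $T$ for tails, and $E$ for your total evidence upon arrival. The setup guarantees $P(E \mid H) = P(E \mid T)$, so conditioning a fair prior on $E$ gives

\[ P(H \mid E) = \frac{P(E \mid H)\,P(H)}{P(E \mid H)\,P(H) + P(E \mid T)\,P(T)} = \frac{P(E \mid H)\cdot\tfrac{1}{2}}{P(E \mid H)\cdot\tfrac{1}{2} + P(E \mid H)\cdot\tfrac{1}{2}} = \frac{1}{2}. \]

So if your present evidence were all that mattered, 1/2 would indeed be the mandated credence.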

But it doesn't follow that your credence in heads should revert to 1/2. This follows only by the

Present Evidence Principle: what you should believe at a certain time is a matter of the evidence you have at that time.

I believe this principle is false, or at least not determinately true. What's true is this weaker principle:

Evidence Principle: what you should believe at a certain time is a matter of the evidence you have at that or earlier times.

The two principles come apart when evidence is lost, i.e. when one has evidence for a certain proposition at an earlier time, but no longer has evidence for it later. Such cases are rare in everyday life because we can to some extent figure out our previous attitudes by introspection. For instance, I've forgotten how I came to believe that carbon has atomic number 6; I've lost whatever evidence I originally had for this. But I have new evidence for it now: my inclination to judge that carbon has atomic number 6. This is evidence that I once learned that proposition, and thereby evidence for its truth.

To distinguish between the Present Evidence Principle and the weaker Evidence Principle, we have to look at somewhat unusual cases where this kind of introspective evidence is useless or unavailable. In the Shangri La case, it is useless. Perhaps even more telling are cases where it is unavailable.

Imagine you're shopping for a robot to pick up your tennis balls and put them into green baskets. I have two models on offer: the Cartesian model and the Conservative. Both have sensory devices by which they collect information about their environment, and a little register that stores the relative locations of yellow balls and green baskets in their surroundings. This register determines the robots' movements. Neither model has an internal sensory system by which to introspect its prior inclinations to make judgments or the like. The difference between the two models lies in how the register gets updated over time. The Conservative model leaves information in the register until it encounters evidence against what is stored there. The Cartesian erases its register at every instant and rewrites it with the information it gets from its sensory system at that moment. Thus when the Cartesian spots a basket in the corner, it will register this fact, but as soon as it turns around to pick up a ball, the information gets erased, as it is no longer supported by the newly available evidence.
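To make the two update policies concrete, here is a minimal sketch in Python (my illustration, not part of the original example; the register is modelled as a map from object labels to locations, and 'evidence against' a stored entry is simplified to a conflicting new percept for the same label):

```python
# Toy model of the two robots' registers: a dict mapping object labels
# (e.g. "basket-1") to their last known locations. A percept contains
# only what is currently visible to the sensors.

def cartesian_update(register, percept):
    """Erase the register and rewrite it from the current percept alone."""
    return dict(percept)

def conservative_update(register, percept):
    """Keep stored entries; let the current percept add to or overwrite them."""
    updated = dict(register)
    updated.update(percept)  # a conflicting percept counts as counter-evidence
    return updated

# The robot sees a basket in the corner, then turns around and sees a ball.
for update in (conservative_update, cartesian_update):
    register = {}
    register = update(register, {"basket-1": "corner"})
    register = update(register, {"ball-1": "center"})
    print(update.__name__, register)
# conservative_update {'basket-1': 'corner', 'ball-1': 'center'}
# cartesian_update {'ball-1': 'center'}   <- the basket's location is lost
```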

The Conservative model will obviously do its task much better than the Cartesian. Is this because the setup favours irrational agents, as when a powerful predictor rewards people for making irrational choices? I don't think so. In just about any reasonable situation, the Conservative model will end up with a better representation of its environment, and be more successful. I don't want to say that the Cartesian way is definitely irrational. All I want to say is that the Conservative way is not irrational either. This is enough to show that the Present Evidence Principle is false.

Now return to Shangri La, where, unlike the robots, you have the relevant introspective evidence about your previous evidence but don't trust it. The Present Evidence Principle then advocates resetting your credences to 1/2, as if you had never learned which way you traveled. The Evidence Principle would allow you to remain confident that you went by the Mountains. Is there something to be said for or against these options?

I think one can make a case that it is at least not irrational to remain confident in the Mountain possibility (and again, that's enough to undermine the Present Evidence Principle). To be sure, it would be irrational to trust your episodic Mountain memories once you arrive at Shangri La, knowing that you would have them either way. I don't say that you should infer from the evidence available to you at this point that you probably went by the Mountains. That would be to follow the Present Evidence Principle. The proposal at issue is that you may remain confident that you went by the Mountains despite the fact that your present evidence is neutral on this matter.

Why would this be rational? Because, intuitively, one shouldn't radically change one's mind about something unless one receives evidence that is relevant to it. As we saw, the evidence you receive at Shangri La is irrelevant to whether or not you went by the Mountains: the conditional probability of the evidence is the same no matter which way you traveled. In fact, we can assume that before arriving you knew exactly what experiences you would have at Shangri La. At that point, you were still certain that the coin landed heads and that you were going by the Mountains. Once you arrive, you have the experiences you expected anyway. You learn nothing about the world you didn't already know before. Therefore you shouldn't change your mind about what the world is like.

Of course, if you are convinced of the Present Evidence Principle, these considerations will not move you. The Principle entails that one sometimes ought to radically change one's mind even when one receives no relevant evidence and doesn't learn anything new. As always, my modus tollens will be your modus ponens. But principles of epistemic rationality don't fall from the sky. If the Present Evidence Principle is correct, there must be a good reason for it, and I doubt that there is. The ultimate epistemic goal is truth, and agents who follow the Present Evidence Principle are bad truth-trackers; they constantly lose valuable information.

Comments

# on 23 February 2008, 07:37

hey wo,

it's not clear to me how your robot example is supposed to help show that the present evidence principle is false. why isn't the information currently stored in the conservative's register (but acquired earlier) present evidence?

# on 23 February 2008, 07:52

Hey Weng Hong! My idea was that what evidence you have is limited by your sensory capacities, broadly understood: if you can't see, you don't have visual evidence, and if you can't introspect your previous register settings, you don't have evidence about them. That's what's supposed to happen in the robot case.

# on 23 February 2008, 08:11

Ah, so you're not using 'information' and 'evidence' interchangeably. Thanks for the clarification!

# on 23 February 2008, 11:43

Hi Wo!

I'm puzzled. With respect to the arrival at Shangri La, you say:

"You learn nothing about the world you didn't already know before. Therefore you shouldn't change your mind about what the world is like."

But once I arrive, the world is different than it was before, isn't it? I am now in Shangri La, while before I wasn't. And thus, there are things I now know that I didn't know before. In particular, I now know that I am in a position where I should assign a credence of 1/2 to my memory's being fake. This is new evidence, evidence that wasn't available to me before I arrived. And it seems that this evidence undermines my belief that I went by the Mountains. Of course, it is evidence I knew in advance I would have. But that's not the same as having the evidence in advance. Or so it seems to me. But as I said, I am puzzled.

Best,
Miguel.

# on 23 February 2008, 12:37

Thanks Miguel,

I was obviously unclear on this point, too. By "about the world" I meant about the un-centered world, about which of all possible universes is actual. While traveling past the Mountains, you learned that the universe is one in which at some time you travel past the Mountains. Once you arrive, you learn something about your own location in the world. But whatever you learn at this point doesn't seem to rule out any previously open possibilities for the world as a whole (at least not any relevant ones).

Of course I don't agree that one of the things you learn is that you should assign credence 1/2 to your memory being fake. This is true iff you should assign credence 1/2 to the coin landing tails. I think it is okay if instead you remain confident that the coin landed heads and therefore that your memory is alright.

# on 23 February 2008, 14:19

Ok, thanks. So if I understood you correctly, someone who tries to follow your evidence principle will - upon arrival - believe that he went by the Mountains (and keep believing that he went by the Mountains), no matter which path he actually took. But only if he in fact went by the Mountains will this be what he *should* believe. Is that correct?

# on 24 February 2008, 05:42

Right. I think I'll write a bit more on what you should and would believe if you actually went by the Sea in another entry. There are some tricky issues there, and hopefully that will also clarify what I say here.

# on 07 March 2008, 19:30

Hi Wo!

A late comment: Assume that you have convinced me. I accept your principle (P) that one shouldn't radically change one's mind about something unless one receives evidence that is relevant to it. And I believe that I receive no new evidence at Shangri La. The problem is that (P) doesn't help me to decide what to do, it seems to me. Here is why:
Imagine I arrive at Shangri La and find myself believing that I went by the Mountains. Does (P) tell me to continue to believe this? No, it tells me not to change my mind on the matter. Had I gone by the Mountains, that would mean continuing to believe. Had I gone by the Sea, however, I would already have violated (P), because I would already have changed my mind without relevant evidence. It is not clear what exactly (P) tells me to do in a case of its own violation, but it is obvious that it won't tell me to keep on violating it. So how do I, as an adherent of (P), decide whether to continue believing that I went by the Mountains? It seems to me that (P) does not help me to settle this issue.
(Hence, pace Miguel, it is false "that someone who tries to follow your evidence principle will - upon arrival - believe that he went by the Mountains, no matter which path he actually took".)
Maybe you could comment on this question in your promised next Shangri La entry.

Best,
Tobias

# on 08 March 2008, 07:48

Hey Tobias!

You're right: (P) does not tell you what to believe given your present evidence, including evidence about your previous beliefs. In fact, the point of (P) is that any such principle is misguided, because rational belief is not a matter of present evidence alone.

So how are you supposed to apply (P) once you've arrived at Shangri La? It seems to you that you once believed that you went by the Mountains, but you know that this evidence is completely unreliable; you should discard it. Your true previous credence is not consciously accessible to you. Hence you can't follow (P) by setting your new credence to its previous value.

What does this show? That agents who at any time have to reset their credences in light of their current evidence cannot follow (P). Such agents are condemned to be Cartesians. A conservative agent who follows (P) will not do so by collecting evidence about their previous beliefs and then setting their new beliefs accordingly. They will simply continue to believe what they believed before, whether or not they have evidence about their previous beliefs. That's why I say in the paper (http://www.umsu.de/words/belief.pdf) that conservatism isn't meant to be a guideline or recipe for actively setting one's new credences, but that it should be thought of from an external point of view, assessing two different ways agents could go about updating their beliefs.

Does that make sense?

# on 10 March 2008, 00:10

It does! Thank you.
I think I prefer the formulation in the paper that (P) should be thought of from an external, 'engineering' point of view. Anti-Cartesian epistemic principles in general claim that what you should believe at a certain time is not only a matter of the evidence available to you at that time. The question is why some such principle should be more rational than the Cartesian Present Evidence Principle. You argue that people whose epistemic practice can, from an external point of view, be described as following the Anti-Cartesian principle (P) are better truth-trackers than their Cartesian rivals. But that cannot be the whole story, for there are many Anti-Cartesian principles p such that people whose epistemic practice can, from an external point of view, be described as following p are better truth-trackers than their Cartesian rivals. Imagine someone who, by mere epistemic luck, adjusts his beliefs about x to the sum of evidence any person has about x. Let's call him Lucky. Lucky could, from an external point of view, be described as following the (very) Anti-Cartesian

Overall Evidence Principle: What you should believe at a certain time is a matter of the sum of evidence any person has at that time.

Persons who follow this principle are likely to be better truth-trackers than Cartesians. But Lucky's epistemic practice is very irrational, for his beliefs do not correspond to the evidence which is (or was) available to him.
What is the difference between the implausible Overall Evidence Principle and your much more plausible Evidence Principle? I guess you would answer: "Rationality has to do with our beliefs being guided by our evidence. I only want to give up the idea that this guidance has to be performed by conscious reflection. It is enough that we are engineered in such a way that our former evidence is still effective even if it is no longer accessible to us. Lucky, however, is not engineered in such a way that he follows the Overall Evidence Principle but does so by mere luck." Am I right?

# on 10 March 2008, 05:54

Right, and this holds in general: proportioning one's beliefs to one's evidence is not sufficient for rationality; the beliefs must also be sensitive to the evidence. If by pure chance I happen to believe what is supported by my evidence, that doesn't make me rational.

So what's wrong with Lucky is that he reflects 'overall evidence' only by accident. But what if somebody were engineered to be sensitive to overall evidence? Would their beliefs be irrational? Or are we just condemned to be worse truth-trackers, in the way a hard-wired Cartesian is condemned to be insensitive to their past evidence?

We could even go further and think of agents who directly align their beliefs with the facts, completely bypassing their current evidence. That is, if they have strong but misleading evidence for p while in fact ~p is the case, they will believe ~p. And they do so by design, not by chance. (These agents resemble the notorious clairvoyants, except that they don't believe that they are guessing.)

I'm inclined to say that if such agents are possible, they are epistemically superior to us in much the same way conservatives are superior to Cartesians. Their cognitive architecture must obviously be very different from ours. If there is a process that systematically aligns their beliefs with certain external facts, this process arguably deserves the name 'perception'. On the other hand, we have stipulated that these agents do not gain any perceptual evidence about the relevant facts. Is that possible? I think so. It's a bit like blindsight.

Definitely a very interesting case -- thanks! I'll have to think more about it.

# on 10 March 2008, 14:57

Yes - I was thinking about what to say about such a case, too. I think one should distinguish between agents who are merely designed to be direct truth-trackers and those who know that they are designed to be direct truth-trackers. The latter would not correctly be described as '~p'-believers who "have strong but misleading evidence for p while in fact ~p is the case". If they know about their design and find themselves believing ~p, then this is very strong evidence for them that ~p is true. They would hence be unusual examples of the case where the fact that someone believes p is evidence for her that the content of that very same belief is true.

# trackback on 24 February 2008, 05:02

This is a follow-up to the previous post on Shangri La. As before, the story is that a fair coin decides which path you take to Shangri La: on heads, you ...

