
I'm certain that I went by the Mountains

(This is more or less the talk I gave at the "Epistemology at the Beach" workshop last Sunday.)

"A wise man proportions his belief to the evidence", says Hume. But to what evidence? Should you proportion your belief to the evidence you have right now, or does it matter what evidence you had before? Frank Arntzenius ("Some problems for conditionalization and reflection", JoP, 2003) tells a story that illustrates the difference:

...there is an ancient law about entry into Shangri La: you are only allowed to enter, if, once you have entered, you no longer know by what path you entered. Together with the guardians you have devised a plan that satisfies this law. There are two paths to Shangri La, the Path by the Mountains, and the Path by the Sea. A fair coin will be tossed by the guardians to determine which path you will take: if heads you go by the Mountains, if tails you go by the Sea. If you go by the Mountains, nothing strange will happen: while traveling you will see the glorious Mountains, and even after you enter Shangri La you will for ever retain your memories of that Magnificent Journey. If you go by the Sea, you will revel in the Beauty of the Misty Ocean. But just as you enter Shangri La, your memory of this Beauteous Journey will be erased and replaced by a memory of the Journey by the Mountains.
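The Bayesian arithmetic behind the story can be sketched as follows (my illustration, not part of Arntzenius's text; the likelihoods simply encode the guardians' arrangement): since the apparent memory of the Mountains arises with certainty on either path, conditionalizing on it leaves the fair-coin prior untouched.

```python
# Sketch of the update on arrival in Shangri La (illustrative only).
# Prior: fair coin, P(Mountains) = P(Sea) = 0.5.
# Evidence: an apparent memory of the Journey by the Mountains.
# By the guardians' setup this memory occurs with probability 1 on
# either path (on the Sea path it is implanted), so it cannot
# discriminate between the hypotheses.

prior = {"Mountains": 0.5, "Sea": 0.5}
likelihood_memory = {"Mountains": 1.0, "Sea": 1.0}

unnorm = {h: prior[h] * likelihood_memory[h] for h in prior}
total = sum(unnorm.values())
posterior = {h: p / total for h, p in unnorm.items()}

print(posterior)  # {'Mountains': 0.5, 'Sea': 0.5}
```

So on arrival your credence in Mountains should arguably be 1/2, even though, had you actually traveled by the Mountains, your credence along the way was 1.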

A puzzle for causal decision theory

This is probably old, so pointers to the literature are welcome. Consider this game between Column and Row:

         C1    C2
  R1    0,0   2,2
  R2    2,2   1,1

What should Column and Row do if they know that they are equally rational and can't communicate with one another? The game has no unique Nash equilibrium, nor is there a dominant strategy (thanks Marc!), so perhaps there is no determinate answer.
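The equilibrium claim can be checked by enumerating the pure-strategy profiles (an illustrative sketch, not from the post; the payoff entries are read directly from the matrix above):

```python
# Enumerate pure-strategy Nash equilibria of the 2x2 game above.
# payoffs[(row, col)] = (Row's payoff, Column's payoff)
payoffs = {
    ("R1", "C1"): (0, 0), ("R1", "C2"): (2, 2),
    ("R2", "C1"): (2, 2), ("R2", "C2"): (1, 1),
}
rows, cols = ("R1", "R2"), ("C1", "C2")

def is_nash(r, c):
    """A profile is a Nash equilibrium iff neither player can gain
    by unilaterally switching strategies."""
    row_pay, col_pay = payoffs[(r, c)]
    row_best = all(payoffs[(r2, c)][0] <= row_pay for r2 in rows)
    col_best = all(payoffs[(r, c2)][1] <= col_pay for c2 in cols)
    return row_best and col_best

equilibria = [(r, c) for r in rows for c in cols if is_nash(r, c)]
print(equilibria)  # [('R1', 'C2'), ('R2', 'C1')]
```

Two symmetric pure equilibria, each paying 2,2, and no way for the players to coordinate on one of them: hence no unique equilibrium and no dominant strategy.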

When experts disagree on probabilities

A coin is to be tossed. Expert A tells you that it will land heads with probability 0.9; expert B says the probability is 0.1. What should you make of that?

Answer: if you trust expert A to degree a and expert B to degree b and have no other relevant information, your new credence in heads should be a*0.9 + b*0.1. So if you give equal trust to both of them, your credence in heads should be 0.5. You should be neither confident that the coin will land heads, nor that it will land tails. -- Obviously, you shouldn't take the objective chance of heads to be 0.5, contradicting both experts. Your credence of 0.5 is compatible with being certain that the chance is either 0.1 or 0.9. Credences are not opinions about objective chances.
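The rule in the previous paragraph is linear pooling of the experts' probabilities; a minimal sketch (function name mine, trust weights assumed to sum to 1):

```python
def pooled_credence(p_a, p_b, a, b):
    """Linear pooling: weight each expert's probability by your
    trust in that expert. Assumes the weights a and b sum to 1."""
    assert abs(a + b - 1) < 1e-9
    return a * p_a + b * p_b

# Equal trust in both experts:
print(pooled_credence(0.9, 0.1, 0.5, 0.5))  # 0.5
```

Note that the pooled value 0.5 is a credence, not an estimate of the chance: it is compatible with being certain that the chance is either 0.9 or 0.1.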

Another argument for halfing

What about this much simpler argument for halfing:

As usual, Sleeping Beauty wakes up on Monday, knowing that she will have an indistinguishable waking experience on Tuesday iff a certain fair coin has landed tails. Thirders say her credence in the coin landing heads should be 1/3; halfers say it should be 1/2.

Now suppose before falling asleep each day, Beauty manages to write down her present credence in heads on a small piece of paper. Since that credence was 1/2 on Sunday evening, she now (on Monday) finds a note saying "1/2".
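The two answers correspond to two ways of counting (my illustration, not part of the argument): with a fair coin, heads occurs in half of the runs of the experiment, but in only a third of the awakenings.

```python
# Illustrative tally: a heads-run contains one awakening (Monday),
# a tails-run two (Monday and Tuesday), and both runs are equally likely.
awakenings = {"heads": 1, "tails": 2}

# The halfer's ratio: fraction of runs that are heads-runs.
heads_per_run = 1 / len(awakenings)

# The thirder's ratio: fraction of awakenings that lie in heads-runs.
heads_per_awakening = awakenings["heads"] / sum(awakenings.values())

print(heads_per_run)        # 0.5
print(heads_per_awakening)  # 0.3333333333333333
```

The note from Sunday evening records the per-run perspective, which is why it supplies an argument for halfing.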

Sleeping Beauty meets the Absentminded Driver

I've written a short paper arguing that the Absentminded Driver paradox is based on the thirder solution to Sleeping Beauty, and can be neatly explained away by adopting the halfer solution: "The Absentminded Driver: no paradox for halfers". As always, comments are very welcome.

Exploding desks and indistinguishable situations

I've thought a bit about belief update recently. One thing I noticed is that it is often assumed in the literature (usually without argument) that if you know that there are two situations in your world that are evidentially indistinguishable from your current situation, then you should give them roughly the same credence. Although I agree with some of the applications, the principle in general strikes me as very implausible. Here is a somewhat roundabout counter-example that has a few other interesting features as well.

Back

Away

I'm away from the internet until January 15.

Property Subtraction

Sometimes, a property A entails a property B while B does not entail A, and yet there seems to be no interesting property C that is the remainder of A minus B. For instance, being red entails being coloured, but there is no interesting property C such that being red could be analysed as: being coloured & being C. In particular, there seems to be no such property C that doesn't itself entail being coloured.

This fact has occasionally been used to justify the claim that various other properties A entail a property B without being decomposable into B and something else. I will try to raise doubts about a certain class of such cases.

Analytic constraints

Daniel Nolan and I once suggested that talk about sets should be analyzed as talk about possibilia. For simplicity, assume we somehow simply replace quantification over sets by quantification over possible objects in our analysis. This appears to put a strong constraint on modal space: there must be as many possible objects as there are sets.

But does it really? "There are as many possible objects as there are sets." By our analysis, this reduces to, "there are as many possible objects as there are possible objects". Which is no constraint at all!
