Gibbard and Jackson on the probability of conditionals

Gibbard's 1981 paper "Two recent theories of conditionals" contains a famous passage about a poker game on a riverboat.

Sly Pete and Mr. Stone are playing poker on a Mississippi riverboat. It is now up to Pete to call or fold. My henchman Zack sees Stone's hand, which is quite good, and signals its content to Pete. My henchman Jack sees both hands, and sees that Pete's hand is rather low, so that Stone's is the winning hand. At this point, the room is cleared. A few minutes later, Zack slips me a note which says "If Pete called, he won," and Jack slips me a note which says "If Pete called, he lost." I know that these notes both come from my trusted henchmen, but do not know which of them sent which note. I conclude that Pete folded.

One puzzle raised by this scenario is that it seems perfectly appropriate for Zack and Jack to assert the relevant conditionals, and neither Zack nor Jack has any false information. So it seems that the conditionals should both be true. But then we'd have to deny that 'if p then q' and 'if p then not-q' are contrary.

Frank Jackson (in conversation) pointed out that Gibbard's passage raises another puzzle that is commonly overlooked. That puzzle is about confirmation.

Let C→W be the conditional 'if Pete called, he won'.

Let E1 be Zack's information -- more specifically, the information that Pete knows Mr. Stone's hand.

Let E2 be Jack's information -- specifically, that Mr. Stone has the better hand.

Intuitively,

(1) E1 strongly supports C→W.

(2) E2 strongly supports ~(C→W).

(3) E1 doesn't strongly support ~E2.

(4) E2 doesn't strongly support ~E1.

But if we read "strongly support" as "making highly probable", then these four assumptions are probabilistically inconsistent. (The proof is left as an exercise.)
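For the record, here is one way to verify this. Read (1) and (2) as requiring a conditional probability of at least 0.8, and (3) and (4) as requiring that each piece of evidence leaves the other at least 0.5 probable (the same numbers as in the coin example below; the thresholds and the encoding are mine). Treating C→W as an unspecified proposition, a small linear-programming check in Python reports that no probability function satisfies all four constraints.

from itertools import product
import numpy as np
from scipy.optimize import linprog

# Eight "worlds": a truth value for the conditional C->W and for E1, E2.
worlds = list(product([0, 1], repeat=3))

def ind(cond):
    """Indicator vector of a condition over the eight worlds."""
    return np.array([1.0 if cond(w, e1, e2) else 0.0 for (w, e1, e2) in worlds])

E1    = ind(lambda w, e1, e2: e1)
E2    = ind(lambda w, e1, e2: e2)
W_E1  = ind(lambda w, e1, e2: w and e1)        # (C->W) & E1
nW_E2 = ind(lambda w, e1, e2: (not w) and e2)  # ~(C->W) & E2
E1_E2 = ind(lambda w, e1, e2: e1 and e2)       # E1 & E2

# Constraints, written as A_ub @ p <= b_ub over a probability vector p:
#   (1)  P((C->W) & E1)  >= 0.8 * P(E1)
#   (2)  P(~(C->W) & E2) >= 0.8 * P(E2)
#   (3)  P(E1 & E2)      >= 0.5 * P(E1)
#   (4)  P(E1 & E2)      >= 0.5 * P(E2)
#   plus P(E1) >= 0.01 and P(E2) >= 0.01
A_ub = np.array([0.8 * E1 - W_E1,
                 0.8 * E2 - nW_E2,
                 0.5 * E1 - E1_E2,
                 0.5 * E2 - E1_E2,
                 -E1,
                 -E2])
b_ub = [0, 0, 0, 0, -0.01, -0.01]

res = linprog(np.zeros(8), A_ub=A_ub, b_ub=b_ub,
              A_eq=np.ones((1, 8)), b_eq=[1.0], bounds=(0, 1))
print(res.status)   # 2: infeasible -- no probability function obeys all four

The 0.01 lower bounds on P(E1) and P(E2) merely ensure that the conditional probabilities in (1)-(4) are well defined.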

You might question (3) or (4). Here's a simpler example where (3) and (4) are not in doubt.

We toss two independent, fair coins. There are four possible outcomes: {H1, T1} x {H2, T2}.

Let Same be the proposition (H1 & H2) v (T1 & T2).

Let E1 be Same.

Let E2 be T2.

Let H1→Same be the conditional 'if H1 then Same'.

Intuitively,

(1) E1 strongly supports H1→Same: P(H1→Same/E1) > 0.8 (say).

(2) E2 strongly supports ~(H1→Same): P(~(H1→Same)/E2) > 0.8.

But the following is easily provable (see the quick check below):

(3) E1 doesn't strongly support ~E2: P(E2/E1) = 1/2.

(4) E2 doesn't strongly support ~E1: P(E1/E2) = 1/2.
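These values are easy to verify by enumerating the four equiprobable outcomes; here is a quick Python sketch (it also computes P(Same/H1), which will be needed below).

from itertools import product
from fractions import Fraction

# The four equiprobable outcomes of the two tosses, e.g. ('H', 'T').
outcomes = list(product("HT", repeat=2))

def p(event, given=lambda o: True):
    """Probability of `event` conditional on `given`, by counting outcomes."""
    pool = [o for o in outcomes if given(o)]
    return Fraction(sum(1 for o in pool if event(o)), len(pool))

def same(o): return o[0] == o[1]   # E1 = Same
def h1(o):   return o[0] == "H"
def t2(o):   return o[1] == "T"    # E2 = T2

print(p(t2, given=same))   # 1/2 -- claim (3): P(E2/E1)
print(p(same, given=t2))   # 1/2 -- claim (4): P(E1/E2)
print(p(same, given=h1))   # 1/2 -- P(Same/H1), used below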

(1)-(4) are probabilistically inconsistent. So (1) and (2) can't both be true: either E1 doesn't make H1→Same highly probable or E2 doesn't make ~(H1→Same) highly probable (or both).

The lesson is that our intuitions about whether some piece of evidence supports a given conditional cannot be trusted.

The usual contextualist responses to Gibbard's puzzle seem to be of no help here. The only way to block the lesson would be to give up probabilistic measures of evidential support. But even then we would retain the lesson that we can't trust intuitions about whether some evidence renders some conditional probable.

The lesson generalizes. If we can't trust these intuitions, then we also can't trust intuitions about the probability of a conditional in a given hypothetical scenario -- for that just is an intuition about the extent to which the assumptions of the scenario make the conditional probable. And then we plausibly also can't trust outright intuitions about the probability of a conditional, since that's the probability of the conditional given our total evidence.

The lesson is more or less the same as the lesson taught by Lewisian triviality results. But the Gibbard-Jackson route is different from Lewis's route. In particular, we have never assumed that the intuitive probability of a conditional is the corresponding conditional probability.

That said, there is also a way of turning the Gibbard-Jackson argument into an argument against "Stalnaker's Thesis", that for any rational credence function P, P(A→B) = P(B/A). Here is how.

Return to the coin toss scenario. It is easy to see that

(5) P(Same/H1) = 1/2,

(6) P(Same/H1 & Same) = 1, and

(7) P(Same/H1 & T2) = 0.

By Stalnaker's Thesis, it follows that

(8) P(H1→Same / Same) = 1 and

(9) P(H1→Same / T2) = 0,

since P(*/Same) and P(*/T2) are rational credence functions.

(8) and (9) are stronger versions of (1) and (2), and we know that these can't both be true. So Stalnaker's Thesis is also false.
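To spell out why (8) and (9) can't both hold, suppose (as the argument presupposes) that 'H1→Same' expresses a proposition over the four outcomes: (8) requires it to include the T1 & T2 world (a Same world), while (9) requires it to exclude that very world (a T2 world). A brute-force sketch in Python confirms the point, verifying (6) and (7) along the way and then searching all sixteen candidate propositions.

from itertools import product, chain, combinations
from fractions import Fraction

outcomes = list(product("HT", repeat=2))      # HH, HT, TH, TT, equiprobable

def p(event, given):
    """P(event/given) over the uniform distribution, with events as sets."""
    pool = [o for o in outcomes if o in given]
    return Fraction(sum(1 for o in pool if o in event), len(pool))

same = {o for o in outcomes if o[0] == o[1]}
h1   = {o for o in outcomes if o[0] == "H"}
t2   = {o for o in outcomes if o[1] == "T"}

print(p(same, h1 & same))   # 1 -- claim (6)
print(p(same, h1 & t2))     # 0 -- claim (7)

# Search all 16 candidate propositions for one satisfying both (8) and (9).
candidates = chain.from_iterable(combinations(outcomes, k) for k in range(5))
winners = [set(c) for c in candidates
           if p(set(c), same) == 1 and p(set(c), t2) == 0]
print(winners)   # [] -- no proposition can play the role of H1→Same

The result doesn't depend on the coarse-grained space of four outcomes: (8) implies P(~(H1→Same) & Same) = 0, while (9) implies P(~(H1→Same) & T1 & T2) = P(T1 & T2) = 1/4, and the first rules out the second however finely the worlds are individuated.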
