Imaging, counterfactuals, and expected conditional chance

In today's installment we take a look at the "imaging analysis" of subjunctive conditional probability. We will find that the analysis is fairly empty, and therefore fairly safe. In particular, it seems invulnerable to a worry that Robbie Williams recently raised in a comment on his blog. Let's begin with an example.

What if the government hadn't bailed out the banks? Some of them would almost certainly have gone bankrupt, and other companies would probably have followed.

Here we have some sort of conditional probabilities: "if A, then probably/almost certainly C". But they aren't ordinary conditional probabilities of the kind that go in the ratio formula, P(A/B) = P(AB)/P(B). I do not believe that if the government actually didn't bail out the banks (but only made everyone believe it did), then some of the banks went bankrupt. That is, my ordinary conditional probability in the bankruptcies given that there was no bailout is fairly low. Nevertheless, I believe that if the government hadn't bailed out the banks, some of them would probably have gone bankrupt. My subjunctive conditional probability in the bankruptcies given no-bailout is high.

At first glance, one might think that subjunctive conditional probability is the probability of a counterfactual: I am fairly confident that if the government hadn't bailed out the banks, then some of them would have gone bankrupt. Or one might think that a statement of subjunctive conditional probability expresses a counterfactual conditional with a probabilistic consequent: I believe that if the government hadn't bailed out the banks, then it would have been highly probable that some of them go bankrupt.

However, as Lewis argued in 1973, neither analysis is correct. Lewis instead proposed the imaging analysis of subjunctive conditional probability. On this analysis, the subjunctive probability of C given A is the probability of C after the probability of any non-A world has been moved to the closest A world(s). In a more general version due to Gärdenfors (and endorsed by Lewis and many others), the probability of non-A worlds may be divided among A worlds in proportion to their closeness.

More precisely, assume that for any world w1 and (suitable) proposition A there is a probability distribution Q_A(w1) such that Q_A(w1)(w2) specifies the fraction of w1's probability that lands on w2 when all probability is shoveled onto A worlds. The subjunctive probability of a world w2 given A is then analysed as \sum_w1 P(w1) Q_A(w1)(w2). The subjunctive probability of any proposition C given A is the sum of the probabilities for individual C worlds given A.
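To see the formula in action, here is a minimal Python sketch on a toy space of four worlds. Everything here -- the worlds, the prior P, and the transition family Q_A -- is made up for illustration; the function just implements the double sum above.

```python
# A toy model of imaging: four worlds, a prior P, and a made-up transition
# family Q_A for the proposition A = {w3, w4}. Each world's probability is
# shoveled onto the A worlds closest to it.

P = {"w1": 0.4, "w2": 0.3, "w3": 0.2, "w4": 0.1}

Q_A = {
    "w1": {"w3": 0.7, "w4": 0.3},   # w3 is stipulated to be closer to w1
    "w2": {"w3": 0.2, "w4": 0.8},
    "w3": {"w3": 1.0},              # A worlds keep their own probability
    "w4": {"w4": 1.0},
}

def imaged(P, Q_A, C):
    """Subjunctive probability of C given A: the sum over w1 of
    P(w1) Q_A(w1)(w2), summed over all C worlds w2."""
    return sum(P[w1] * q for w1 in P for w2, q in Q_A[w1].items() if w2 in C)

print(imaged(P, Q_A, {"w3"}))        # probability that lands on w3
print(imaged(P, Q_A, {"w3", "w4"}))  # all probability ends up on A worlds
```

Note that the probability of A itself after imaging is always 1, since every world's mass is moved onto A worlds.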

This is not much of an analysis unless more is said about the family Q_X of transition probabilities. Here are two silly ways of putting flesh on the skeleton that illustrate its skeletonosity.

  1. Define Q_A(w1)(w2) as P(w2/A). Then the subjunctive conditional probability of C given A equals the ordinary, indicative conditional probability P(C/A).
  2. Define Q_A(w1)(w2) as P(w2). Then the subjunctive conditional probability of C given A equals the unconditional probability of C.
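On a toy model, the first of these instantiations is easy to check numerically: with Q_A(w1)(w2) = P(w2/A), imaging collapses into the ratio formula. Again, all worlds and numbers below are hypothetical.

```python
# Checking silly instantiation (1) on a toy space: with Q_A(w1)(w2) = P(w2/A),
# the imaged probability of C given A coincides with the ratio-formula value
# P(C/A). Worlds and numbers are made up.

P = {"w1": 0.4, "w2": 0.3, "w3": 0.2, "w4": 0.1}
A = {"w3", "w4"}
C = {"w3"}

P_A = sum(P[w] for w in A)                            # P(A)
Q = {w1: {w2: P[w2] / P_A for w2 in A} for w1 in P}   # Q_A(w1)(w2) = P(w2/A)

imaged = sum(P[w1] * Q[w1][w2] for w1 in P for w2 in A & C)
ratio = sum(P[w] for w in A & C) / P_A                # P(C/A) by the ratio formula

print(imaged, ratio)  # the two values agree
```

The collapse is no accident: since Q_A(w1) is the same distribution for every w1, the outer sum over w1 just multiplies P(C/A) by 1.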

Robbie's worry was that the imaging analysis might clash with the idea that the subjunctive conditional probability of C given A equals the expectation of the conditional chance of C given A, i.e. \sum_x x P(ch(C/A) = x). However, consider the subjunctive conditional probability of a particular world w2 given A. On the chance account, this equals \sum_w1 P(w1) ch_w1(w2/A). On the imaging account, it equals \sum_w1 P(w1) Q_A(w1)(w2). So here is a third way of fleshing out the imaging analysis.

  3. Define Q_A(w1)(w2) as ch_w1(w2/A). Then the subjunctive conditional probability of C given A equals the expectation of the conditional chance of C given A.

Robbie actually has a good reason for his worry. In this paper, he assumes the chance account and effectively shows that if the imaging analysis of subjunctive conditional probability is correct, then a certain platitude about counterfactual conditionals comes out false. But since the chance account is itself a version of the imaging analysis, the intermediate "if" can be discharged: the chance account entails the imaging analysis, and hence also the falsehood of the platitude. So what the argument refutes is not the imaging analysis, but the chance account.

At least that's what I think is going on. I have only half-heartedly worked through the details.


Update 17 Feb: The details are actually quite interesting. So here's the full argument against the chance account.

First some definitions. Let -> be a binary operator so that A->C is true at world w1 iff C holds at all the closest A worlds to w1. More precisely, given a closeness measure Q that assigns to each world w1 and proposition A a probability distribution Q_A(w1), let A->C be true at w1 iff C holds at all worlds w2 such that Q_A(w1)(w2) > 0. To have a short notation, let P^A(C) be the subjunctive conditional probability of C given A on the imaging account; that is, P^A(C) = \sum_{w2 \in C} \sum_w1 P(w1) Q_A(w1)(w2) -- again, relative to some closeness measure Q.

Fact 1. P^A(C) >= P(A->C).

Proof sketch. To compute P(A->C), we go through all worlds w1 and add up P(w1) whenever C holds at all w2 with Q_A(w1)(w2) > 0. To compute P^A(C), we go through all worlds w1 and add up \sum_w2 P(w1)Q_A(w1)(w2) over all C worlds w2. If w1 is such that C holds at all w2 with Q_A(w1)(w2) > 0, then since Q_A(w1) sums to 1, this amounts to adding the whole of P(w1); so the result for P^A(C) must be at least as great as for P(A->C).
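For the computationally minded, Fact 1 can also be sanity-checked numerically. In the sketch below, the priors, propositions, and closeness measures are all randomly generated toys (nothing from Robbie's paper); the check simply confirms that the imaged probability never falls below the probability of the arrow.

```python
# Randomised sanity check of Fact 1: P^A(C) >= P(A->C), where A->C holds at
# w1 iff C holds at every w2 with Q_A(w1)(w2) > 0. All priors, propositions,
# and closeness measures below are randomly generated toys.

import random

random.seed(0)
worlds = list(range(6))
violations = 0

for _ in range(1000):
    raw = [random.random() for _ in worlds]
    P = {w: x / sum(raw) for w, x in zip(worlds, raw)}              # random prior
    A = set(random.sample(worlds, random.randint(1, len(worlds))))  # nonempty A
    C = {w for w in worlds if random.random() < 0.5}                # random C
    Q = {}                                                          # random Q_A
    for w1 in worlds:
        support = random.sample(sorted(A), random.randint(1, len(A)))
        raw_q = [random.random() for _ in support]
        Q[w1] = {w2: x / sum(raw_q) for w2, x in zip(support, raw_q)}
    imaged = sum(P[w1] * q for w1 in worlds for w2, q in Q[w1].items() if w2 in C)
    arrow = sum(P[w1] for w1 in worlds if all(w2 in C for w2 in Q[w1]))
    if imaged < arrow - 1e-9:
        violations += 1

print(violations)  # Fact 1 predicts 0
```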

Fact 2. If the expected chance of C given A is generally >= the probability of A->C, then the chance of C is generally >= the chance of A->C.

Proof: see Robbie's paper. The reasoning closely follows Lewis's triviality result about conditional probability. I have some reservations about the proof due to some of my opinions about chance, but set them aside for now.

Now we've seen that the expected chance account is a form of the imaging account -- namely the one for the closeness measure Q_A(w1)(w2) = ch_w1(w2/A). It thus follows from Fact 1 and Fact 2 that the chance of C is never less than the chance of A->C:

(*) ch(C) >= ch(A->C).

What does this say? By definition of the arrow, A->C says that C holds at all the closest A worlds. On the chancy closeness measure, A->w2 is true at w1 iff ch_w1(w2/A) = 1. And so the chance of A->w2 at w1 is the chance at w1 that ch_w1(w2/A) = 1. Can chance propositions themselves be chancy? Many say no. Then the chance at w1 of A->w2 is either zero or one. Assuming that propositions with chance 1 are true and propositions with chance 0 are false, the chance of A->w2 at w1 is 1 if ch_w1(w2/A) = 1, and otherwise 0.

So we have two cases. First, suppose the actual chance of A->w2 is 0. Then (*) says that the chance of w2 is not below 0, which is clearly unproblematic. Second, suppose the actual chance of A->w2 is 1, and hence ch(w2/A) = 1. Then (*) says that the chance of w2 must also be 1. Again, this is unproblematic. For instance, by the ratio formula, ch(w2/A) = ch(w2 & A)/ch(A), which can only be positive if w2 is an A world, in which case ch(w2/A) = ch(w2)/ch(A) = ch(w2)/(ch(w2) + ch(A minus w2)). If this is 1, ch(A minus w2) must be 0, and so ch(w2) must be 1. Just as (*) says.

(What if chance propositions can be chancy? What if conditional chance doesn't obey the ratio formula? What if propositions with chance 1 are sometimes false? It might be worth thinking through these variations, but the outcome is clear anyway: (*) cannot be absurd, because it is true.)

In Robbie's paper, -> is not some made-up connective, but our ordinary counterfactual conditional. On this reading, (*) is absurd. On the other hand, (*) is true if we assume that the ordinary counterfactual conditional A->C is true iff the conditional chance of C given A equals 1.

This yields the promised argument against the conditional chance analysis of subjunctive conditional probability. To bring it out more forcefully, let A=>C formalise "if A were the case then it would certainly be the case that C". Clearly A=>C is true iff the subjunctive probability of C given A is (near) 1. However, if subjunctive conditional probability equals expected conditional chance, then it follows from Fact 1 and Fact 2 that the chance of A=>C never exceeds the chance of C -- which is absurd (try A=C: A=>A is trivially true and so has chance 1, while ch(A) can be as low as you like). Hence subjunctive conditional probability does not equal expected conditional chance.
