The probability that if A then B

It has often been pointed out that the probability of an indicative conditional 'if A then B' seems to equal the corresponding conditional probability P(B/A). Similarly, the probability of a subjunctive conditional 'if A were the case then B would be the case' seems to equal the corresponding subjunctive conditional probability P(B//A). Trying to come up with a semantics of conditionals that validates these equalities proves tricky. Nonetheless, people keep trying, buying into all sorts of crazy ideas to make the equalities come out true.

I am puzzled about these efforts, for two reasons.

First, as Lewis and Kratzer pointed out in the 1970s and 80s, if-clauses often (according to Kratzer, always) function as restrictors of quantificational and modal operators. So when we see an if-clause in the vicinity of a modal like 'the probability that', the first thing we should consider is whether the if-clause restricts the modal.

How would an if-clause restrict a probability modal? Well, what is the probability of B restricted by A? An obvious answer is that it's the probability of B given A. So if the if-clause in 'the probability that if A then B' restricts the probability modal, then the expression denotes the conditional probability of B given A. Which is just what we find.
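Spelled out (a rough sketch, in notation of my own choosing: 'P_A' is the measure restricted by A, and I assume P(A) > 0):

\[
P_A(B) \;=\; \frac{P(A \wedge B)}{P(A)} \;=\; P(B/A).
\]

On the restrictor analysis, 'the probability that if A then B' simply denotes this value.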

In other words, there is independent evidence about if-clauses suggesting that 'the probability that if A then B' should be analysed as 'P(B/A)' rather than 'P(if A then B)'. If that's correct, then what's expressed by

(*) the probability that if A then B equals the conditional probability of B given A

is the trivial identity 'P(B/A) = P(B/A)'. There's no need to make a big effort trying to make (*) true.

The case of subjunctive conditionals is parallel. We have the intuition that

(**) the probability that if A were the case then B would be the case equals the conditional probability of B on the subjunctive supposition that A.

Again, the first thing we should check is whether the if-clause restricts the modal. And, plausibly, subjunctive if-clauses restrict probability modals by subjunctive supposition (aka imaging). And then (**) expresses the trivial 'P(B//A) = P(B//A)'.
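For definiteness, here is one standard way of spelling out imaging (a sketch; I assume, with Stalnaker, that each world w has a unique closest A-world, which I'll write f(w, A), and '[...]' is 1 if the enclosed condition holds and 0 otherwise):

\[
P^A(w') \;=\; \sum_{w} P(w)\,[f(w,A) = w'],
\qquad
P(B//A) \;=\; P^A(B) \;=\; \sum_{w} P(w)\,[f(w,A) \in B].
\]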

When people try to give a semantics motivated by (*) or (**), they practically never explain what's wrong with the simple and obvious explanation of (*) and (**) that I've just given.

That's one reason why I'm puzzled by these efforts. Here's a second reason.

For concreteness, let's look at subjunctive conditionals, which I'll write 'A > B'. As Lewis shows towards the end of "Probabilities of Conditionals and Conditional Probabilities", if you want to validate 'P(A > B) = P(B//A)', you have to assume a Stalnaker-type semantics for '>' on which, for any world w and any proposition A, there is a unique A-world that is "closest" to w; 'A > B' is true at w iff B is true at the closest A-world.
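One direction of the connection is easy to see (a sketch, re-using f and P^A from the imaging sketch above): on the Stalnaker-type semantics, 'A > B' is true at w iff f(w, A) is a B-world, so

\[
P(A > B) \;=\; \sum_{w} P(w)\,[f(w,A) \in B] \;=\; P^A(B) \;=\; P(B//A).
\]

The uniqueness of the closest A-world is what makes the indicator, and hence the equality, well defined.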

But if we assume a Stalnaker-type semantics of would counterfactuals 'A > B', then what should we say about might counterfactuals, 'if A were the case then B might be the case' -- for short, 'A *> B'?

Clearly, 'A *> B' can't be the dual of 'A > B': on a Stalnaker-type semantics, conditional excluded middle holds, so 'A > B' is equivalent to its own dual, and the might counterfactual would then be equivalent to the would counterfactual. The only option I can think of is to say, with Stalnaker, that 'A *> B' must be analysed as Might(A > B).

But that's unappealing, especially in the present context.

For one, the idea that 'A *> B' means 'Might(A > B)' is incompatible with a broadly Kratzerian treatment of 'if' and 'might'.

Moreover, syntactically, 'would' and 'might' seem to play similar roles in 'if A then would B' and 'if A then might B'. One would at least like to see some more evidence that 'might' scopes over the conditional and 'would' does not. Relatedly (as I mentioned in an earlier post), it seems to me that

What if A were the case? It might be that B

is equivalent to 'if A were the case then it might be that B'. But surely 'might' in the second sentence doesn't somehow scope over 'if' in the first.

Moreover, let's look at the probability of might counterfactuals. Assuming that 'Might' in 'Might(A > B)' is epistemic, 'Might(A > B)' is true relative to an information state s iff s is compatible with A > B. What is the probability that s is compatible with A > B, relative to s? Unless the information state is unsure about itself, it will be either 0 or 1. Specifically, we get the prediction that P(A *> B) = 1 if P(A > B) > 0 and P(A *> B) = 0 if P(A > B) = 0. But intuitively, 'the probability that if A then might B' is not always 1 or 0.
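A worked instance (the numbers are invented, purely for illustration): suppose the agent's state s assigns P(A > B) = 0.3. Then s is compatible with 'A > B', so 'Might(A > B)' is true at every world in s, and

\[
P(A \mathbin{*>} B) \;=\; P(\mathit{Might}(A > B)) \;=\; 1,
\]

not 0.3 or anything else in between.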

So what you have to say, if you want to analyse 'A *> B' as 'Might(A > B)', is that despite surface appearance, the expression 'the probability that if A then might B' does not denote the probability of the embedded might counterfactual 'if A then might B'. Perhaps the two epistemic modals merge and the expression denotes the probability of 'if A then would B'. Or whatever. But in the present context, it's funny that you have to say such a thing, given that your whole approach is motivated by your commitment to the idea that 'the probability that if A then would B' denotes the probability of the embedded would counterfactual.
