When experts disagree on probabilities

A coin is to be tossed. Expert A tells you that it will land heads with probability 0.9; expert B says the probability is 0.1. What should you make of that?

Answer: if you trust expert A to degree a and expert B to degree b (where a+b=1) and have no other relevant information, your new credence in heads should be a*0.9 + b*0.1. So if you give equal trust to both of them, your credence in heads should be 0.5: you should be neither confident that the coin will land heads, nor that it will land tails. Obviously, you shouldn't thereby take the objective chance of heads to be 0.5, contradicting both experts. Your credence of 0.5 is compatible with being certain that the chance is either 0.1 or 0.9. Credences are not opinions about objective chances.
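
To make the rule concrete, here is a minimal sketch in Python; the name `pool` and the particular trust weights are just illustrative, not part of the rule itself:

    # Linear pooling: a trust-weighted average of the experts' credences.
    def pool(credences, weights):
        assert abs(sum(weights) - 1.0) < 1e-9, "trust weights should sum to 1"
        return sum(c * w for c, w in zip(credences, weights))

    print(pool([0.9, 0.1], [0.5, 0.5]))  # equal trust: 0.5
    print(pool([0.9, 0.1], [0.8, 0.2]))  # mostly trusting A: about 0.74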

What if the two experts didn't mean objective chance, but subjective probability? That is, what if you learned that expert A is pretty confident that the coin will land heads, and expert B that it will land tails? Your response should be the same. If you trust them equally, your credence should be 0.5.

What if the two experts weren't talking about their credence, given their own evidence and priors, but about what your credence should be, given your evidence and your priors?

If you are ideally rational and know that you are, then you should dismiss their claims. For suppose that beforehand, you assigned to heads credence x, taking into account all your evidence. If some alleged expert now tells you that your credence, given that evidence, ought to be some other value, you know for certain that they are wrong. You should stick to whatever your old credence was. (I assume that the expert's claim about what is supported by your evidence doesn't affect the coin toss by your lights.) Notice that when dismissing the experts' claims, you are still applying the "weighted sum" rule, with a = b = 0: in its general form, the rule counts you as a further expert whose old credence x gets the remaining weight 1-a-b, so that your new credence is a*0.9 + b*0.1 + (1-a-b)*x.

What if you're not an ideal agent, and know it? Then you can't rule out that you have misinterpreted your own evidence, or otherwise leaped to an irrational conclusion. One expert says your evidence strongly supports heads, the other says it supports tails, but you can't tell who is right. Then, too, you should apply the "weighted sum" rule: you should be very confident neither in heads nor in tails. Expert A's claim is evidence that you have strong evidence for heads, and expert B's claim is evidence that you have strong evidence for tails. Since evidence for strong evidence for heads is itself evidence for heads, you end up with some evidence for heads and some for tails, and you should balance the two by their strength. Of course, if you believe that one of the experts is right, then you know that you should have a different credence, 0.9 or 0.1; you would have that credence if you were ideal. But since you're not ideal, you should at least respond properly to your evidence this time, rather than make another irrational leap of credence.

So the weighted sum rule seems correct in all cases.

Comments

# on 09 February 2008, 14:27

Your averaging rule has a peculiar consequence. Expert A thinks the coin flips are independent, with probability .9 of heads on each; expert B thinks they are independent, with probability .1 of heads on each. So they agree that the flips are independent. The coin is to be flipped twice. A assigns P(h1&h2) = .81 and B assigns P(h1&h2) = .01. You trust them equally, so you assign P(h1) = .5, P(h2) = .5 and P(h1&h2) = .41. That is, though the experts agree that the flips are independent, if you follow the averaging rule you end up disagreeing with them. Lehrer and Wagner suggested a more complicated averaging rule years ago with the same unwelcome consequence.
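
Spelling this out (equal trust assumed; the outcome labels are just illustrative):

    # Each expert's distribution over two flips (iid by their own lights),
    # averaged outcome by outcome with equal trust.
    pA = {"hh": 0.81, "ht": 0.09, "th": 0.09, "tt": 0.01}  # A: P(heads) = 0.9
    pB = {"hh": 0.01, "ht": 0.09, "th": 0.09, "tt": 0.81}  # B: P(heads) = 0.1
    pooled = {o: 0.5 * pA[o] + 0.5 * pB[o] for o in pA}

    p_h1 = pooled["hh"] + pooled["ht"]   # 0.5
    p_h2 = pooled["hh"] + pooled["th"]   # 0.5
    print(pooled["hh"], p_h1 * p_h2)     # 0.41 vs 0.25: independence fails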

# on 10 February 2008, 06:32

Thanks Barry! That's interesting. I have to admit that I don't find the consequence unwelcome. If the coin is tossed seven times, so that expert A gives all-heads a probability of roughly 0.5 and expert B roughly 0, and I trust them equally, it seems to me that I shouldn't side with expert B and bet heaps against all-heads; I certainly shouldn't assign it a probability as low as 0.5^7 ≈ 0.008.
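
Spelling out the arithmetic (equal trust, taking the experts' chance claims at face value):

    # Credence in all-heads over 7 tosses, pooling with equal trust:
    pooled = 0.5 * 0.9**7 + 0.5 * 0.1**7
    print(round(pooled, 3))   # 0.239, driven by A's 0.9**7, about 0.478
    print(round(0.5**7, 3))   # 0.008, what a flat chance of 0.5 would give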

Consider the case where expert A thinks the coin is double-headed and expert B thinks it is fair, and where it is tossed a million times. Should I conclude that it is practically certain to have landed tails at some point, despite the fact that I give half of my trust to the possibility that the coin doesn't have a tails side? That seems unreasonable to me.
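
A sketch of the numbers, reading "practically certain" as the credence in at least one tail:

    n = 10**6
    # If double-headed, tails never comes up; if fair, "no tails" has
    # probability 0.5**n (which underflows to 0.0 for n this large).
    p_no_tail = 0.5 * 1.0 + 0.5 * 0.5**n
    print(1 - p_no_tail)    # 0.5: a tail at some point is a toss-up, not a certainty

    # Forcing independence on the averaged marginal P(tail) = 0.25 instead:
    print(1 - 0.75**n)      # 1.0: "practically certain" to land tails at some point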

Would I disagree with the experts on whether the flips are independent by following the averaging rule? I don't think so, at least not in any obvious way. Otherwise the rule would be inconsistent, for it requires me to give credence 1 to any proposition to which the experts both give credence 1. On the most straightforward interpretation, "the flips are independent" means that the objective probability of H-H equals the square of the objective probability of H, and I can surely accept that, just as I can accept that the objective probability of H is not 0.5. If "the flips are independent" instead expresses an indexical claim about one's own subjective credence, the experts may accept it while I do not, but that is of course no serious disagreement.

# on 10 February 2008, 14:49

You are right. If my degrees of belief are exchangeable (as they will be on the averaging rule) then they represent my belief that the outcomes are independent, even though P(h1&h2) ≠ P(h1)×P(h2). I was thinking of Lehrer and Wagner's use of averaging to represent "consensus" or group belief. The consequence that the consensus is P(h1) = .5, P(h2) = .5, P(h1&h2) = .41 and that the flips are independent is peculiar. Sorry I foisted their application on you.
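
For concreteness, the exchangeability can be checked on the two-flip numbers above (illustrative code, equal trust as before):

    # Exchangeability: permuted outcome sequences get the same probability
    # under the equal-trust mixture of the two experts' iid distributions.
    p_ht = 0.5 * (0.9 * 0.1) + 0.5 * (0.1 * 0.9)   # P(h1 & t2) = 0.09
    p_th = 0.5 * (0.1 * 0.9) + 0.5 * (0.9 * 0.1)   # P(t1 & h2) = 0.09
    print(p_ht == p_th)                             # True: order doesn't matter
    # Yet P(h1 & h2) = 0.41 while P(h1) * P(h2) = 0.25: a mixture of iid
    # chance hypotheses is exchangeable but not itself independent.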
