Assessing the evidence differently

Alice is randomly selected from her population to be tested for a rare genetic disorder that affects about one in 10,000 people. The test is accurate 99 percent of the time, both among subjects that have the disorder and among subjects that don't. Alice's test comes back positive.

Call the information in the previous paragraph E, and suppose it's all you know about the situation. How confident are you that Alice has the disorder?

Letting our subjective probabilities be guided by the stated frequencies, we can use Bayes' Theorem to figure out that

    P(disorder | positive)
      = P(positive | disorder) * P(disorder) /
        (P(positive | disorder) * P(disorder) + P(positive | ~disorder) * P(~disorder))
      = 0.99 * 0.0001 / (0.99 * 0.0001 + 0.01 * 0.9999)
      ≈ 0.0098.

Assume then that your degree of belief is about 0.01.
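For a quick sanity check, here is the same calculation as a small Python sketch (the function and variable names are mine, purely for illustration):

    def posterior(prior, true_pos_rate, false_pos_rate):
        # Bayes' Theorem: P(disorder | positive test)
        numerator = true_pos_rate * prior
        return numerator / (numerator + false_pos_rate * (1 - prior))

    # 1 in 10,000 have the disorder; the test errs 1 percent of the time either way.
    print(posterior(prior=0.0001, true_pos_rate=0.99, false_pos_rate=0.01))
    # => 0.00980392..., i.e. about 0.0098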

Now suppose you learn that my degree of belief in the hypothesis that Alice has the disorder, based on the same evidence E, is 0.9. How should that affect your own confidence?

Note first that you now have a new piece of evidence about the situation that you didn't have before: the information that my degree of belief in Alice having the disorder, based on evidence E, is 0.9. Call this information E2. So we may ask to what extent the conjunction of E and E2 supports the hypothesis that Alice has the disorder. The answer, it seems to me, is clear: a rational prior credence function conditionalised on E & E2 would still assign probability ~0.01 to Alice having the disorder. The fact that I take E to strongly support the hypothesis merely shows that I have incorrectly assessed the evidence. It lends no support at all to the hypothesis.

On the other hand, you may not be certain that you yourself have assessed the evidence correctly. It seems to you that this is a case where Bayes' Theorem applies, and that it yields something around 0.0098, but you know that humans easily get confused in such cases, and for all you know you may even have made a calculation mistake.

If this is your situation, we have to distinguish two things: the result of the method by which you assessed the evidence, and your actual credence. Suppose, for example, you are only 90 percent confident that you have assessed the evidence correctly, and the remaining 10 percent of your confidence goes to the claim that the evidence makes the hypothesis probable to degree 0.9 (just as I claim). Then surely you would not bet at very high stakes against the hypothesis. Your actual credence will be nowhere near 0.01.

If this isn't obvious, consider a more extreme case where you take the evidence to entail a certain hypothesis, so that the probability of the hypothesis given the evidence is 1. Suppose again that you are not absolutely certain that you have assessed the evidence correctly. That is, you are not certain whether the evidence really does entail the hypothesis. It's a live possibility, you say, that the evidence is true and the hypothesis false. Then clearly your credence in the hypothesis is not 1.

If you're not certain whether your assessment of the evidence is correct, then your credence should be a mixture of the different possible assessments of the evidence, weighted by your confidence that they are the correct assessment. For example, if you are 90 percent confident that the evidence makes the hypothesis probable to degree 0.01, and 10 percent confident that it makes the hypothesis probable to degree 0.9, then your credence should be something like 0.9 * 0.01 + 0.1 * 0.9 = 0.099.
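In the same Python sketch, the mixture rule looks like this (the 90/10 weights are just the illustrative figures from above):

    # (confidence that this assessment is correct, P(hypothesis | E) under it)
    assessments = [(0.9, 0.01), (0.1, 0.9)]
    credence = sum(weight * prob for weight, prob in assessments)
    print(credence)  # => 0.099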

Now return to the original question. What happens when you hear that my degree of confidence in Alice having the disorder is 0.9? If you don't think that you are in general much better at assessing evidence than I am, this should lower your confidence that the correct assessment yields a probability of 0.01. Your credence should therefore rise from 0.099 to something closer to 0.9. How much it should rise depends on how likely it seems to you that I have assessed the evidence better than you -- i.e. on the extent to which the distribution of weights over assessments that underlies my credence gives higher weight to more correct assessments than your own distribution does.
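To make this concrete with made-up numbers: if learning of the disagreement drops your confidence in your own assessment from 0.9 to, say, 0.6, the mixture rule from above yields

    # After learning of the disagreement, weight shifts toward the rival assessment.
    assessments = [(0.6, 0.01), (0.4, 0.9)]
    print(sum(weight * prob for weight, prob in assessments))  # => 0.366

so your credence rises from 0.099 to 0.366; the more weight the disagreement transfers, the closer your credence gets to 0.9.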

So we have two answers: the new evidence is evidentially irrelevant to the hypothesis, and yet you ought to change your credence in response to it!

What's going on here is that we have a norm for ideal agents and a secondary norm for agents who fall short of that ideal. The norm for ideal agents is to "proportion their belief to the evidence". The norm for non-ideal agents is to be cautious about their own assessments and to weight the outcome of their assessment by their degree of confidence that they didn't make a mistake. This makes the question of whether they made a mistake relevant to the credence they assign to the hypothesis. Since evidence of disagreement supports the assumption that they made a mistake, such evidence becomes relevant.
