Preferring the less reliable method

Compare the following two ways of responding to the weather report's "probability of rain" announcement.

Good: Upon hearing that the probability of rain is x, you come to believe to degree x that it will rain.
Bad: Upon hearing that the probability of rain is x, you become certain that it will rain if x > 0.5, otherwise certain that it won't rain.

The Bad process seems bad, not just because it may lead to bad decisions. It seems epistemically bad to respond to a "70% probability of rain" announcement by becoming absolutely certain that it will rain. The resulting attitude would be unjustified and irrational.

Can we explain the comparative badness of the Bad process solely by appeal to the overriding epistemic goal of truth? It seems not.

Let's compare the objective reliability of the two methods. Roughly speaking, the reliability of a belief-generating process measures its tendency to generate true beliefs. How do we apply this to the Good process, which typically generates only partial beliefs? We don't want to count a belief of degree 0.3 that it will rain as true iff it will rain. A natural move is to measure reliability in terms of the distance between the degree of belief and the truth value of the relevant proposition (1 for true, 0 for false). So a reliable process would tend to generate high degrees of belief in true propositions, and low degrees in false propositions. Call the distance between degree of belief and actual truth value the belief's inaccuracy.

Using this measure, it turns out that under normal circumstances, the Bad process is more reliable than the Good process. To illustrate, consider 100 occasions on which the weather forecast announces a 70 percent probability of rain. Let's also assume that in 70 of these cases, the announcement is followed by rain, and in 30 it isn't. (It is a pretty good weather forecast.) The Bad method would make you certain that it rains on each occasion, so your distance from the truth is 0 in 70 cases and 1 in 30. Average inaccuracy: 0.3. The Good method would give you a belief of degree 0.7 that it will rain, so your distance from the truth is 0.3 in 70 cases and 0.7 in 30 cases. Average inaccuracy: 0.42. On average, the Bad process brings you closer to the truth.
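
Here's a minimal sketch of the arithmetic (a toy Python illustration; the helper names are mine), assuming absolute distance as the inaccuracy measure and exactly 70 rainy occasions out of 100:

```python
# Toy check of the 100-occasion scenario: "70% probability of rain" is
# announced each time, and it then rains on exactly 70 occasions.
occasions = [1] * 70 + [0] * 30          # 1 = it rains, 0 = it doesn't

def inaccuracy(credence, truth_value):
    """Absolute distance between degree of belief and truth value."""
    return abs(credence - truth_value)

good_credence = 0.7   # Good method: believe to degree 0.7 that it will rain
bad_credence = 1.0    # Bad method: become certain of rain, since 0.7 > 0.5

avg_good = sum(inaccuracy(good_credence, v) for v in occasions) / len(occasions)
avg_bad = sum(inaccuracy(bad_credence, v) for v in occasions) / len(occasions)

print(avg_good)   # ~0.42
print(avg_bad)    # ~0.3  -- the Bad method is closer to the truth on average
```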

Note that as a counterexample to reliabilism, this is quite different from thought-experiments involving far-fetched scenarios in which the reliability a process has in the imagined scenario comes apart from (what we take to be) its actual reliability. Even under perfectly normal and common conditions, the Bad process is more reliable than the Good one.

The above reasoning also shows that the Bad process has lower expected inaccuracy than the Good method from the point of view of an agent who uses the Good method. If you follow the Good method and hear the "70% chance of rain" announcement, you can calculate that the expected inaccuracy of the Bad method is 0.3, while the expected inaccuracy of your own method is 0.42. (If you follow the Bad method, you'll judge the Bad method to have 0 expected inaccuracy, compared to 0.3 for the Good method.)
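
The same numbers come out if we treat this as a straightforward expectation. A small sketch (again a toy illustration, with the agent's own credence in rain playing the role of the probability in the expectation):

```python
# Expected absolute-distance inaccuracy of adopting credence c in rain, as
# judged by an agent whose own credence in rain is p: with probability p it
# rains (inaccuracy 1 - c), with probability 1 - p it doesn't (inaccuracy c).
def expected_inaccuracy(p, c):
    return p * (1 - c) + (1 - p) * c

good, bad = 0.7, 1.0   # credences recommended by the Good and Bad methods

# By the lights of a Good-method agent (p = 0.7):
print(expected_inaccuracy(good, bad))    # ~0.3  -- the Bad method looks better
print(expected_inaccuracy(good, good))   # ~0.42

# By the lights of a Bad-method agent (p = 1):
print(expected_inaccuracy(bad, bad))     # 0.0
print(expected_inaccuracy(bad, good))    # ~0.3
```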

This is odd. Here we have two methods; we know that one of them is more reliable, that it is more likely to bring us closer to the truth; but still we think it would be irrational to use it. We judge that, from a purely epistemic perspective, one ought to use the other, less reliable method.

A while ago, I claimed that the appearance of truth as the unifying epistemic virtue might be an illusion based on the fact that any belief-forming method whatsoever will automatically be regarded as truth-conducive by agents who use it. It looks like we have a counterexample to this as well. At least it is not true that any belief-forming method automatically appears more accuracy-conducive than its rivals to agents who use it.

Four quick comments.

First, the present puzzle is closely related to the puzzle Allan Gibbard discusses in "Rational Credence and the Value of Truth" (2008); he should therefore get all the credit. Gibbard assumes that it is irrational to have degrees of belief which, by their own lights, are less accurate than certain other degrees of belief; and he argues that this requirement of rationality cannot be explained solely on the assumption that rational belief "aims at truth", although it can be explained on the assumption that rational belief aims at successful action.

Second, it is tempting to argue against the Bad method by considering its long-run accuracy. If you follow the Bad method and otherwise update by conditioning, you may easily end up with more inaccurate beliefs than if you follow the Good method. But if truth is your goal, why not combine a deviant response to weather forecasts with a deviant update process? It is easy to show that some such combinations have higher reliability and higher expected long-run accuracy than the sensible combination of the Good method with conditioning (see Gibbard's follow-up note "Aiming at Truth over Time" (2008), page 6).

Third, another problem with the Bad method is that it doesn't generalise well. Suppose the weather forecast says that there's a 30 percent chance of rain, a 40 percent chance of sunshine, and a 30 percent chance of neither rain nor sunshine. And suppose you apply the Bad method not just to the rain statement, but also to the two others. You'd end up being a) certain that it won't rain, b) certain that the sun won't shine, and c) certain that either it will rain or the sun will shine. But arguably, this is an impossible state of mind; the attitude ascriptions (a)-(c) are inconsistent. So if one could show that the Bad method, even restricted to rain forecasts, may (easily) lead to impossibilities like this, that would solve the puzzle. I don't think this can be shown. But another possibility is that the compensatory methods you have to use in addition to the Bad method in order to restore consistency will have a cost in expected accuracy. And then maybe any consistent package of methods containing the Bad process would end up less accuracy-conducive than the reasonable package containing the Good process. That would be nice.
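
For what it's worth, one can check by brute force that the three things the Bad method would make you certain of here cannot all be true together (a toy sketch, writing R for "it will rain" and S for "the sun will shine"):

```python
from itertools import product

# The three certainties the Bad method delivers on the three-way forecast:
#   (a) not R        30% rain     -> certain it won't rain
#   (b) not S        40% sunshine -> certain the sun won't shine
#   (c) R or S       30% neither  -> certain it will rain or the sun will shine
satisfying = [(R, S)
              for R, S in product([True, False], repeat=2)
              if (not R) and (not S) and (R or S)]

print(satisfying)   # [] -- no truth assignment makes (a), (b) and (c) all true
```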

(A lot of people think that there is nothing inconsistent about the attitude ascriptions (a)-(c), even though there is something inconsistent about the ascribed attitudes. We would then have to ask whether the rationality requirement of having consistent attitudes can be explained solely by appeal to the goal of truth or accuracy; see e.g. Jim Joyce's "Accuracy and coherence: Prospects for an alethic epistemology of partial belief" (2009) for the latest moves in this game. The short answer is that it can't be done. But as I said, I don't find this very problematic, because I'm sympathetic to Ramsey's view that the requirements of probabilistic coherence are analytic and therefore not in need of epistemic defense.)

Fourth, I've measured inaccuracy simply as the distance between belief and truth value. But there are other measures. If we use squared distance, the problem disappears. There may even be good reasons for using this measure, apart from solving the present problem (Joyce mentions some at the end of the paper just cited). But none of these reasons seem to be based on truth as the overriding epistemic goal. If all that matters for epistemic rationality is closeness to the truth, it is not clear why closeness should be measured by squared distance rather than absolute distance.
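
To see the problem disappear, rerun the expectation calculation from above with squared instead of absolute distance: by the lights of a Good-method agent, the Good method now has the lower expected inaccuracy. A minimal sketch:

```python
# Expected squared-distance inaccuracy of credence c in rain, judged by an
# agent whose own credence in rain is p.
def expected_sq_inaccuracy(p, c):
    return p * (1 - c) ** 2 + (1 - p) * c ** 2

# By the lights of a Good-method agent (p = 0.7):
print(expected_sq_inaccuracy(0.7, 0.7))   # ~0.21 -- Good method
print(expected_sq_inaccuracy(0.7, 1.0))   # ~0.3  -- Bad method
```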

On the other hand, it is also not clear why closeness should be measured by absolute distance. So I might have to qualify the claim in the other blog post: on one disambiguation of "accuracy-conduciveness" there are clear counterexamples to the hypothesis that epistemic goodness is a matter of accuracy-conduciveness. On this reading, accuracy (or truth) therefore doesn't appear to be the overriding epistemic goal. On another disambiguation, there is such an appearance, but it is an illusion based on the fact that any belief-forming method whatsoever automatically appears accuracy-conducive to agents who use it.

Comments

# on 23 November 2009, 19:22

Wow, great stuff. If only the great Dave would have his books published soon, so that I could skip this one.
Seriously, I am eagerly waiting for the English version, although that might make for a little bit of redundancy in more than one way.
Kölle Alaaf!

# on 15 December 2009, 16:37

Hi Wo,

Interesting stuff (incidentally, I think Maher has some earlier stuff defending the absolute distance metric, arguing against the Joyce 1998 paper).

One thought I have on this. Joyce's treatment of accuracy suggests that the methodology here is to do some conceptual analysis of accuracy, and arrive at various conclusions. The Maher/Gibbard worries do seem to respond to that---they try to develop measures appropriately described as "accuracy measures" that---it turns out---don't give Joyce what he wants.

If all we had to work with were the "caring about the truth" constraint on picking an accuracy measure, I'm not sure we'd be able to say much against this. Joyce's 1998 argument appeals to all sorts of resources beyond this---e.g. a "Cliffordian" attitude to shifts of credence (a sort of risk-averseness to error). I'm not sure I quite follow it, but it's interesting to think about what the status of these sorts of arguments would be, and their relationship to "caring about the truth".

Here's a different way to think about things. Suppose you have a community that are probabilists. What sort of alethic norm (i.e. what accuracy score) would make best sense of their practice? One idea would be: square distance, not absolute distance. Of course, a different community who were highly opinionated (1's and 0's all over the place) might need to be described differently. But in each case, we'd posit a tacit accuracy norm that would make sense of their respective epistemic practice in terms of caring about the truth.

This is interesting for me, in any case, since I want to figure out what the analogues to probabilism are in non-classical settings, and one way to do that is to figure out what rationalizes probabilism in the classical setting, and transfer that over. So it'd be philosophically significant (arguably) even if it isn't a stick to beat non-probabilists. (I'm not sure that the correct rationale for probabilism will end up persuading probabilism-sceptics, though it'd be nice if it did.)

It's natural to ask a different question, if one accepts the above picture. Can we give a deeper rationale for caring about truth in the probabilism-supporting style, rather than the probabilism-denying style? You might reach for all sorts of tools to tackle this question---e.g. the sort of pragmatic rationale Gibbard gives. The idea would be to argue that square-distance accuracy carers do better, all things considered, than absolute-distance carers.

Suppose that worked out (big supposition). Then the situation would be interestingly subtle. Suppose we're probabilists. Then there'd be a purely alethic-norm based argument for this---in terms of the conception of accuracy we tacitly adhere to (e.g. by Joyce's argument). But when you ask the question "why subscribe to those alethic norms" something pragmatic has to be said. That's not a straightforward pragmatic rationale for probabilism, as a naive reading of Dutch book arguments gives you. Nor is it a purely analytic rationale in terms of "caring about the truth", as you might have thought Joyce originally intended (and perhaps what you get from depragmatized Dutch books). But for me at least, it has some appeal.

# on 16 December 2009, 18:17

Hi Robbie,

I'm not sure I completely follow. Is this the idea? We have an epistemic norm of maximising squared accuracy, revealed somehow by our response to evidence etc. The norm is fundamental as an epistemic norm, and can be used to justify probabilism (for what it's worth) and other derivative norms. However, the practice of endorsing this fundamental norm can further be justified on pragmatic grounds, along Gibbardian lines.

I think I vaguely see the appeal of this, although it presupposes that I'm wrong about the emptiness of the accuracy norm. If every epistemic practice whatsoever automatically maximises expected squared accuracy from the point of view of people who use it, we don't have to observe people's practice to discover that they follow the norm of maximising squared accuracy.

# on 17 December 2009, 16:05

Hi Wo,

I think your first para summarizes what I was after nicely....

I wasn't quite sure about this claim: "every epistemic practice whatsoever automatically maximises expected squared accuracy from the point of view of people who use it". Is this really a commitment of your earlier position? And by "expected square accuracy" are we here talking about expected square distance from the truth? If so, I wasn't seeing why this specific kind of claim would ever be attractive. For example...

...it seems prima facie coherent to me to think that someone might have credence 0 in a proposition, and in its negation, when they have absolutely no information (as is suggested by the "agnostic" motivations for DS functions). Is this a kind of epistemic practice one might consider? Assuming it is, I'm not sure why this would minimize expected square distance from truth values even by its own lights. Arguing for this is a bit tricky, since it's not straightforward to think about how to define expectation within the DS framework. But to start with, by the Joyce-style arguments, it'll be a priori that there's a credence assignment that is necessarily more accurate than the DS description. So DS credences are accuracy-dominated in this sense. On probabilistic ideas about expectation, it's easy to see that it'll be impossible for any credence function that's accuracy-dominated to minimize expected inaccuracy (/maximize expected accuracy); but things may be more subtle when non-probabilistic conceptions of expectation are in play.
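
To make the domination point concrete with a toy case (assuming the squared-distance measure, summed over p and its negation): credence 0 in both p and not-p is further from the truth than credence 0.5 in each, however things turn out. A minimal sketch, with illustrative names:

```python
# Squared-distance inaccuracy of a credence pair (c_p, c_not_p), summed over
# p and its negation, in a world where p has truth value v (not-p has 1 - v).
def sq_inaccuracy(c_p, c_not_p, v):
    return (c_p - v) ** 2 + (c_not_p - (1 - v)) ** 2

ds_style = (0.0, 0.0)    # credence 0 in p and in not-p ("agnostic" DS-style state)
coherent = (0.5, 0.5)    # a probabilistically coherent alternative

for v in (1, 0):         # p true, p false
    print(sq_inaccuracy(*ds_style, v), sq_inaccuracy(*coherent, v))
# p true:  1.0 vs 0.5
# p false: 1.0 vs 0.5  -- (0.5, 0.5) is strictly more accurate either way
```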

Another case to consider is epistemic policies that might make it best, all things considered, to believe a contradiction (suppose you have apparently compelling evidence for theory T and theory T', and they are in contradiction). Again, we get accuracy domination (again, it's more subtle to see how this plays out with expectation).

I should think about your earlier post on these issues---I wasn't quite sure how it'd play out in these probabilistic contexts.

A different claim is that each epistemic practice maximizes (expected) accuracy by its own lights (not necessarily measured by squared distance). That seems a version of the sort of "emptiness" claim of your earlier post (or maybe I'm misunderstanding you here...).

It seemed to me there are two ways of reading this, depending on whether we let "by its own lights" bind the notion of accuracy in play. If the claim is that every epistemic practice maximizes expected accuracy *given how we think of accuracy* (perhaps, the square-distance version) then this seems false---practices like those just given will, I suspect, fail to maximize expected accuracy in that sense. But another reading is that every epistemic practice maximizes expected accuracy *according to its own conception of accuracy*. That has a chance of working, though I wouldn't want to endorse it without thinking through the details. It's not clear to me what way of measuring accuracy would support the practice of forming beliefs by DS methods in the absence of relevant information, or sometimes-believing-contradictory-things. I'd be super-interested to see such measures spelled out!

So on the picture I was playing with above, there's at least a chance that there's one sense in which accuracy-norms may be empty, to go along with the sense in which they aren't....

cheers
R


# on 17 December 2009, 23:10

I see, I think we're on slightly different tracks here. I'm inclined to take adherence to something more or less like the classical probability calculus as a precondition on having credences. From this point of view, the policies you describe are impossible. More cautiously, before thinking about a policy that recommends believing both a proposition and its negation, I would like to understand what it would mean to do that, and I only have a very vague idea of that. (Surely assenting to both a sentence and its negation is not enough; you could believe that "not" means "actually".) At any rate, the methods I was thinking about are not the rules of the probability calculus, but rather things like trusting weather reports and scientific instruments.

About the scope: I was thinking that every epistemic method maximizes expected accuracy, measured by squared distance, as estimated from the point of view of sufficiently reflective agents who employ that method. What I realised in this blog post is that the same is not true for absolute distance -- even for agents who endorse absolute distance as the right standard. I guess I need an argument why there isn't a similar counterexample to the squared distance claim.
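
Here is a minimal numerical sketch of the contrast for a single proposition (with p standing for the agent's own credence, and the function names merely illustrative): by the agent's lights, expected squared-distance inaccuracy is minimized by the credence p itself, whereas expected absolute-distance inaccuracy is minimized by an extreme credence.

```python
# Which credence c minimizes expected inaccuracy, as judged by an agent whose
# own credence in the proposition is p?
def exp_abs(p, c):
    return p * abs(1 - c) + (1 - p) * abs(c)      # absolute distance

def exp_sq(p, c):
    return p * (1 - c) ** 2 + (1 - p) * c ** 2    # squared distance

grid = [i / 100 for i in range(101)]              # candidate credences 0.00 .. 1.00

for p in (0.3, 0.7, 0.9):
    best_abs = min(grid, key=lambda c: exp_abs(p, c))
    best_sq = min(grid, key=lambda c: exp_sq(p, c))
    print(p, best_abs, best_sq)
# 0.3 -> 0.0 and 0.3;  0.7 -> 1.0 and 0.7;  0.9 -> 1.0 and 0.9
# Absolute distance rewards jumping to an extreme credence; squared distance
# rewards matching one's credence to the estimated chance.
```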

# on 28 May 2010, 00:10


Hi, Jonathan Speke Laudly here,

One could argue that "70% chance" has by experience been proven to mean definite rain more often than not, and that for practical purposes it is best to assume and prepare for rain (carrying an umbrella and so on). So experience is necessary to determine the actual meaning of "70% chance of rain"; the Good method doesn't start with such experience and so does not start with the correct meaning of "70%".
Thus, with experience, the Good method incorporates the Bad method.
The number of occasions of rain, given 70%, necessary before one can declare the rain definite is a matter of interpretation and of how "definite" is defined.
I mean really, "70% chance" is itself pretty ambiguous.
