The unity and disunity of epistemic values

Alvin Goldman has just given this year's summer school here in Cologne. When he put forward his view that what distinguishes good ways of belief formation from other ways is their truth-conduciveness, I found myself disagreeing and claiming that there is no general principle that distinguishes the good ways from the others. This is somewhat surprising, given that I've often claimed recently that the only epistemic criterion for evaluating belief-formation is truth-conduciveness. Here is how I think the two claims can go together.

For simplicity, I will focus on non-probabilistic inference rules as ways of forming beliefs. The relevant rules take zero or more propositions (or sentences) as input and output another proposition. I will also focus on subjects who can think about such rules and so might have an opinion about their reliability.

Suppose you reasonably believe that some such rule is not truth-conducive: that it is likely to lead from true premises to false conclusions. Then you ought not to apply it to your beliefs and endorse the conclusion. Otherwise you would end up endorsing something that you reasonably believe is probably false, and that's irrational. In the other direction, suppose you reasonably believe that a rule is truth-conducive. Then you ought to apply it -- at least in the weak sense of not rejecting its conclusion. Otherwise you would end up rejecting something that you reasonably believe is probably true, and that's irrational. (I ignore the cognitive effort of applying a rule.) Putting both directions together, we get that under normal circumstances,

1) you ought to: apply rule R iff you reasonably believe that R is truth-conducive.

Why 'normal circumstances'? Because even if you have evidence that R generally leads from true inputs to true outputs, you may also have evidence that the present situation is an exceptional one in which this connection fails. Until further notice, I assume you have no such extraordinary evidence, so we can leave the 'normal circumstances' implicit.

On the plausible assumption that the epistemic 'ought' operator satisfies the K principle, (1) is equivalent to

2) you ought to apply rule R iff you ought to reasonably believe that R is truth-conducive.
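
To make the role of the K principle explicit, here is a minimal sketch of the step from (1) to (2). It relies on the further assumption -- not stated above, but standard -- that the epistemic 'ought' operator O also obeys necessitation for logical truths. Abbreviate 'you apply rule R' as A and 'you reasonably believe that R is truth-conducive' as B:

\[
\begin{array}{lll}
\text{i.} & O(A \leftrightarrow B) & \text{premise, i.e. (1)} \\
\text{ii.} & O\bigl((A \leftrightarrow B) \rightarrow (A \rightarrow B)\bigr) & \text{necessitation applied to a tautology} \\
\text{iii.} & O(A \rightarrow B) & \text{from i, ii by K} \\
\text{iv.} & O(A) \rightarrow O(B) & \text{from iii by K} \\
\text{v.} & O(B) \rightarrow O(A) & \text{as in ii--iv, using } (A \leftrightarrow B) \rightarrow (B \rightarrow A) \\
\text{vi.} & O(A) \leftrightarrow O(B) & \text{from iv and v, which is (2)}
\end{array}
\]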

Moreover, the double modality on the right-hand side of (2) can arguably be reduced, so we have

3) you ought to apply rule R iff you ought to believe that R is truth-conducive.

This gives us a tight connection between epistemically good rules and truth-conduciveness. Indeed, the connection is so tight that it leaves no room for other, competing values: if you (reasonably) think that R is truth-conducive, you ought to apply it, no matter whether it lacks other values X, Y, or Z; and if you think that R is not truth-conducive, you ought not to apply it, no matter whether it has X, Y, and Z.

Now imagine we have made a list of good rules: of rules that rational people would apply under normal circumstances. Remember that we include zero-argument rules; these amount to propositions that rational people would normally accept. We can even focus entirely on such propositions, since by (3) every good rule R is represented on the list by the proposition that R is truth-conducive. Thus enumerative induction with predicates like 'green' is represented by the proposition that such inferences are truth-conducive.

So we have a list of propositions rational people would believe (under normal circumstances). What distinguishes these propositions from others that are not on the list? What is the difference-maker? Why would rational people take the unobserved to resemble the observed in terms of greenness and not in terms of grueness? Why would they assume that their senses are reliable? Why would they assume that the world is not cluttered by causally inert and imperceptible spirits? What is the unifying feature that makes these attitudes (and the corresponding rules) rational?

It is here that I think there is no interesting answer: there isn't any unifying feature. In particular, it is not truth. We might of course introduce a technical notion of rationality or justifiedness on which a belief is rational or justified iff it is true. But in the ordinary, pre-theoretic sense, believing rationally and justifiedly is not the same thing as believing truly.

Goldman suggests that the good rules are all and only those that are truth-conducive. Given (3), this amounts to suggesting that within a certain domain of propositions, namely about the reliability of inference rules, justifiedness coincides with truth.

We can see why Goldman's proposal looks attractive. It follows from (3) that people who take themselves to be rational will under normal circumstances judge all and only the good rules to be truth-conducive. But this is so no matter what distinguishes the good rules from the bad rules, and no matter whether there is any difference-maker at all. Hence the fact that we judge all and only the good rules to be truth-conducive reveals absolutely nothing about the underlying difference-maker.

To test Goldman's proposal, we have to consider non-normal circumstances. Imagine tomorrow God tells you that the world is full of epicycles and invisible spirits, and that Fox News is a reliable source of information -- despite all appearances to the contrary. Would you then judge that beliefs based on thorough scientific investigations were unjustified, whereas beliefs based on Fox News reports were justified? Would you say that our scientists formed their beliefs irrationally, and that those who trusted Fox News formed theirs rationally? I would not. So truth-conduciveness is not the criterion that guides my judgments about rationality and justification.

It is well known that Goldman's account has problems with hypothetical scenarios in which actually reliable rules are unreliable (or the other way around), and in which people have no evidence about this. But these are the only cases where the proposal can be tested!

Of course the fact that truth, or truth-conduciveness, is not the unifying epistemic value doesn't show that there is no other unifying value. And I don't really have a positive argument for the disunity of epistemic values. It's mainly that I can't see a pattern when I look through the list. Here is another consideration that might be relevant, though I'm not sure. It seems to me that my judgment about, say, the rationality of assuming that the unobserved resembles the observed (in the absence of evidence to the contrary) is independent of whether the unobserved really does resemble the observed. But if there is some non-empirical feature that makes my belief justified, shouldn't I be able to point it out and thereby defend myself against an inductive skeptic? What could that feature be? Once empirical considerations are excluded, there is nothing one can say in reply to an inductive skeptic.

So I side with the disunity of epistemic values. Nevertheless, I also accept that an epistemic rule is good if and only if it is truth-conducive. The two claims live happily together.

(Actually, I think there is a unified non-normative feature that, with analytical necessity, distinguishes the good rules from the bad ones, but it is not the kind of feature Goldman has in mind. Compare the list of things I desire: that I speak Russian, that ebay goes bankrupt, that Hannibal destroyed Rome, etc. Here, too, there doesn't seem to be a unifying property. For instance, it is not true that all these things are such that, if true, I would be happy. The hypothesis of unified egoistic motivation is popular because humans like to see patterns in heterogeneous lists, but it is false nonetheless. What characterises the things I ultimately desire is just that I happen to desire them. But this is not a primitive property. Suppose it is a complex causal/functional and relational property. Then it is not a metaphysical mystery why these things are on the 'desired' list and not others; there is a difference-maker, and it can be expressed in non-evaluative terms. A similar story, I think, should be true for epistemic values, but don't ask for details.)

Comments

# on 02 September 2009, 10:53

Hey Wo,
I'm not sure whether I'm on the right track, but as far as I can see, your conception of a good epistemic rule is somewhat internalist. It seems to depend on whether the person reasonably *believes* that the method applied or the belief formed is truth-conducive.
Goldman, on the other hand, seems to be interested only in whether the method applied is *in fact* truth-conducive.
So, with respect to your Fox News example: if one is a hard-core externalist, one might bite the bullet and say: if it is *actually true* that rule R (watching Fox News) is a reliable way of forming true beliefs, then one ought to follow R, whether or not one *believes* it to be truth-conducive.
So in this way, the unifying feature would be something along the lines of "being in fact truth-conducive in the actual world". Of course, this means that we sometimes (or maybe very often) do not know whether we are applying a good epistemic rule. So maybe the distinction to make is between "applying a good epistemic rule" and "believing that one applies a good epistemic rule"?

# on 02 September 2009, 13:54

Hi erik,

you're right about Goldman. He recommends rules that are truth-conducive, whether or not the subject knows it. My (1)-(3) may look like internalised versions of Goldman's proposal: you should apply rules that you *reasonably believe to be* truth-conducive. But I take this to be uninformative and uncontroversial -- unless more is said about which rules you should regard as truth-conducive. Goldman has a unified answer: you should regard those rules as truth-conducive that really are truth-conducive. As a consequence, he has to bite the bullet on the Fox News example. But not only that. His proposal gives the intuitively incorrect verdict in just about any scenario where it can be tested.

What I'm suggesting is that people have wrongly thought that Goldman's proposal is supported by the fact that we generally judge good methods to be truth-conducive and vice versa. But this fact is already explained by the uninformative principles (1)-(3), which are fully compatible with my view that there is no unifying epistemic value. To test Goldman's proposal, we have to consider cases like the Fox News scenario. And in such cases, he almost always gets things wrong.

# on 05 September 2009, 10:47

Hi Wo,

maybe there is also another way to use the Fox News case, since externalists in particular often distinguish between rationality and justification. So, maybe people who did base their beliefs on science in the Fox News case were rational in the sense that they formed their beliefs in a responsible way and/or such that they were coherent from their subjective perspective. Nevertheless, it seems, one can still say that their beliefs were unjustified because they were based on an unreliable method. The downside of this option is, of course, that it probably turns "justification" into a technical notion. Maybe this could be mitigated a bit by tying justification closely to knowledge, which arguably is a non-technical notion. Anyway, it is not yet clear to me that your conclusion - that epistemic values are disunified - fares overall better than the alternative conclusion - that rationality and justification are simply two different notions.

# on 05 September 2009, 11:34

I see. But that avoids the disunity worry only by changing the topic, right? Truth-conduciveness will be the unified principle for "justification" in the technical sense, but what is the principle for justification (or rationality) in the ordinary sense?

# on 05 September 2009, 12:45

Presumably, the unified principle for rationality would be a different one (e.g. Bayesian), if any. So, we would have at least two different dimensions of epistemic evaluation then - rationality and justification. However, each of them might by itself have a unified principle, so there could still in some sense be more theoretical unity than on the 'disunificationism' that you envisage in your post.
