Evidential externalism and evidence of irrationality

Let Ep mean that your evidence entails p. Let an externalist scenario be a scenario in which, for some proposition p, either Ep holds without EEp or ¬Ep holds without E¬Ep.

It is sometimes assumed, for example in Gallow (2021) and Isaacs and Russell (2022), that any externalist scenario is a scenario in which you have evidence that you don't rationally respond to your evidence. On the face of it, this seems puzzling. Why should there be a connection between evidential externalism and evidence of irrationality? But the assumption actually makes sense.

To begin, let's look at two popular examples of putative externalist scenarios.

First, the red wall. Compare a "good case" in which you're looking at a red wall with a subjectively indistinguishable "bad case" in which you're looking at a white wall illuminated by red light. To make this an externalist scenario, we assume that in the good case, but not in the bad case, your evidence entails that the wall is red. In the bad case, moreover, you can't tell that you're not in a good case: your evidence is compatible with looking at a red wall. The bad case is then an externalist scenario. Letting r be the proposition that the wall is red, we have ¬Er but not E¬Er.

Now where's the evidence of irrationality?

Your evidence, in the bad case, rules out neither being in a good case nor being in a bad case. It confers some probability on good-case scenarios and some on bad-case scenarios. What do you believe in these scenarios?

Suppose you proportion your beliefs to your evidence in the relevant good-case scenarios. (If not, we already have our conclusion: your evidence confers significant probability on scenarios in which you are irrational.) In the good-case scenarios, your evidence entails that the wall is red, so you are certain that the wall is red. Plausibly, you have the same beliefs in the relevant bad-case scenarios: your belief state isn't sensitive to the difference between the two kinds of scenario. In the bad-case scenarios, you are therefore also certain that the wall is red, even though your evidence there does not entail that the wall is red. In those scenarios, your beliefs are not proportioned to your evidence. You are, in that sense, irrational. Either way, then, your evidence confers significant probability on scenarios in which you are irrational.
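To make the structure explicit, here is a minimal sketch in Python. The two-scenario setup and the probability numbers are my illustrative assumptions, not part of the original case:

```python
# A minimal model of the red-wall reasoning. In the bad case, your evidence
# is compatible with two kinds of scenario; the probabilities are illustrative.
scenarios = {"good": 0.5, "bad": 0.5}   # evidential probabilities in the bad case

def certain_wall_is_red(scenario):
    # Your belief state is insensitive to which scenario you're in:
    # you're certain of red either way.
    return True

def evidence_entails_red(scenario):
    # Only in good-case scenarios does your evidence entail that the wall is red.
    return scenario == "good"

# You're irrational in a scenario if you're certain of something your
# evidence there doesn't entail.
prob_irrational = sum(
    pr for s, pr in scenarios.items()
    if certain_wall_is_red(s) and not evidence_entails_red(s)
)
print(prob_irrational)   # 0.5: significant evidential probability of irrationality
```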

Next, consider Williamson's clock, from Williamson (2011). You are looking at a clock with a single hand pointing at 3 minutes past 12. Due to your imperfect discriminatory powers, your evidence is compatible with nearby possibilities in which the hand is pointing at 2 or 4 (minutes past 12): your evidence is {2,3,4}. In situations in which the hand is pointing at 2, your evidence is similarly compatible with nearby possibilities in which the hand is pointing at 1 or 3. This is an externalist scenario because we have E{2,3,4} without EE{2,3,4}: your evidence leaves open possibilities (hand at 2 or at 4) in which your evidence is a different set, so it does not entail that your evidence entails {2,3,4}.
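Here is a small Python sketch of this evidence structure (the 60-minute dial and all names are illustrative assumptions, not anything from Williamson). It verifies that E{2,3,4} holds at the world where the hand points at 3, while EE{2,3,4} fails there:

```python
# Toy model of Williamson's clock. Worlds are hand positions (minutes past
# 12); your evidence at world w is {w-1, w, w+1}, reflecting your imperfect
# discriminatory powers.
WORLDS = set(range(60))

def evidence(w):
    """The evidence proposition (a set of worlds) you have at world w."""
    return {(w - 1) % 60, w, (w + 1) % 60}

def E(p, w):
    """Ep at w: your evidence at w entails the proposition p."""
    return evidence(w) <= p

p = {2, 3, 4}                          # "the hand points at 2, 3 or 4"
Ep = {w for w in WORLDS if E(p, w)}    # worlds where Ep holds: just {3}

print(E(p, 3))    # True:  at world 3 your evidence is {2, 3, 4}
print(E(Ep, 3))   # False: {2, 3, 4} doesn't entail Ep = {3}
```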

Where is the evidence of irrationality here?

It lies in your imperfect discriminatory powers. Presumably this means that your cognitive system is not perfectly sensitive to the true hand position. In other words, there is no reliable functional relationship between the precise hand position and your cognitive response. By assumption, however, there is a reliable functional relationship between the precise hand position and your evidence: if the hand points at 2, your evidence is {1,2,3}; if it points at 3, your evidence is {2,3,4}; etc. Since this mapping is one-to-one, a belief state that reliably tracked your evidence would thereby reliably track the hand position. It follows that there is no reliable functional relationship between your evidence and your belief state. It's easily possible that you proportion your beliefs to, say, {1,2,3} even though your actual evidence is {2,3,4}. That is, it's easily possible that you don't rationally proportion your beliefs to your actual evidence. Assuming that you have evidence about your imperfect discriminatory powers, it follows that your evidence confers significant probability on situations in which you are irrational.
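Continuing the toy model, here is a hedged simulation of this "misfire" possibility. The noise model and the error rate are arbitrary illustrative choices; the point is just that belief formation tracks a noisy reading of the hand position while evidence tracks the true position:

```python
import random

def evidence(w):
    """Your evidence at world w: the true hand position, give or take a minute."""
    return {(w - 1) % 60, w, (w + 1) % 60}

def noisy_reading(w, error=0.3):
    """Your cognitive response to position w: usually right, sometimes off by one."""
    r = random.random()
    if r < error / 2:
        return (w - 1) % 60
    if r < error:
        return (w + 1) % 60
    return w

# How often do you proportion your beliefs to something other than your
# actual evidence?
trials = 100_000
misfires = 0
for _ in range(trials):
    w = random.randrange(60)                 # the true hand position
    believed = evidence(noisy_reading(w))    # what you proportion your beliefs to
    if believed != evidence(w):              # ...differs from your actual evidence
        misfires += 1

print(misfires / trials)   # roughly 0.3: a significant probability of irrationality
```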

We can perhaps strengthen this argument by assuming, with Williamson, that evidence = knowledge. I said that when you're looking at the hand pointing at 3, you don't know that the hand is pointing at 3. But why? Not just because you don't have the relevant belief. If you became certain that the hand is pointing at 3, you still wouldn't know that it is pointing at 3. Why not? Presumably because that belief could too easily have been false. You could easily have arrived at the same belief even if the hand were pointing at 2 or 4. This strongly suggests that your cognitive system isn't perfectly sensitive to the precise hand position. Your belief formation can easily "misfire".

So much for the two examples. Here is a more general line of thought, suggested to me by Dmitri Gallow.

Suppose your cognitive system is perfectly sensitive to your evidence. In some sense, then, you could adopt any update process that maps evidence propositions to posterior credence functions. The optimal such process, in terms of expected accuracy and a range of other considerations (such as Dutch Book arguments), is what Hild (1998) calls "auto-epistemic conditionalisation". Here you conditionalise not on your evidence E but on the proposition TE that E is your total evidence.

In the clock example, auto-epistemic conditionalisation would reliably make you certain of the true hand position. If the hand is pointing at 3, E is {2,3,4} and TE is {3}. From a veritist perspective, this is obviously better than conditionalisation, which would leave you unsure about the hand position.
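As a sketch of the contrast, under illustrative assumptions (a uniform prior over the 60 hand positions; this is my reconstruction, not Hild's own presentation):

```python
# Ordinary conditionalisation on E versus auto-epistemic conditionalisation
# on TE, in the toy clock model with a uniform prior.
WORLDS = list(range(60))

def evidence(w):
    return {(w - 1) % 60, w, (w + 1) % 60}

def conditionalise(prior, p):
    """Conditionalise a prior (dict: world -> credence) on proposition p."""
    total = sum(cr for w, cr in prior.items() if w in p)
    return {w: (cr / total if w in p else 0.0) for w, cr in prior.items()}

prior = {w: 1 / len(WORLDS) for w in WORLDS}

true_world = 3
E = evidence(true_world)                        # {2, 3, 4}
TE = {w for w in WORLDS if evidence(w) == E}    # {3}: "my total evidence is E"

posterior_E = conditionalise(prior, E)     # spreads credence 1/3 over 2, 3, 4
posterior_TE = conditionalise(prior, TE)   # certainty in the true world

print(posterior_E[true_world], posterior_TE[true_world])   # 0.333..., 1.0
```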

But it's natural to think that in an externalist scenario you couldn't reliably update your beliefs by auto-epistemic conditionalisation. You can't tell where exactly the hand is pointing. In the wall scenario, you can't tell that the wall is white and illuminated by red light. Any argument against evidential internalism – that Ep entails EEp and ¬Ep entails E¬Ep – is plausibly also an argument against what Gallow (2021) calls certainty internalism – that Ep entails CEp and ¬Ep entails C¬Ep, where CA means that you can be rationally certain that A.

Also, if the optimal way of proportioning your beliefs to your evidence is to conditionalise not on E but on TE, then we should reconsider whether E is really your evidence. Doesn't TE play the evidence role better than E? But if TE is your real evidence then the supposedly externalist scenario isn't an externalist scenario after all: the possible propositions of the form TEi are guaranteed to form a partition (every world has exactly one total evidence proposition), and evidential externalism requires non-partitionality.

Here is another spin on this last argument, suggested by the discussion in Isaacs and Russell (2022).

Suppose your cognitive system is perfectly responsive to whether the world is in state E1 or E2 or E3, etc. That is, your cognitive system maps (or is capable of mapping) E1 states to credence function Cr1, E2 states to Cr2, etc. What if the world is in both state E1 and state E2? You can't have both Cr1 and Cr2 if these are different. So if your cognitive system is perfectly sensitive to whether the world is in E1 or E2 or E3, etc., and these possibilities don't form a partition, then your cognitive system must actually be sensitive to all combinations of truth-values of E1, E2, E3, etc. These combinations form a partition, and your cognitive system is responsive to which cell of that partition the world belongs to. If that is so, then what is your evidence? Arguably, we should identify your evidence with the relevant (actual) cell in that partition.
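Here is a small sketch of this construction. The three overlapping propositions are made up for illustration; the point is that the truth-value combinations always carve the worlds into a partition:

```python
# Three overlapping (non-partitional) candidate evidence propositions over
# six toy worlds.
WORLDS = range(6)
E1, E2, E3 = {0, 1, 2}, {1, 2, 3}, {2, 3, 4}

def cell(w):
    """The combination of truth-values of E1, E2, E3 at world w."""
    return (w in E1, w in E2, w in E3)

# Group worlds by their truth-value combination: the resulting cells are
# pairwise disjoint and jointly exhaustive, i.e. a partition of WORLDS.
partition = {}
for w in WORLDS:
    partition.setdefault(cell(w), set()).add(w)

for truth_values, worlds in sorted(partition.items()):
    print(truth_values, worlds)
# If the world is in cell c, the proposal is to identify your evidence with c.
```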

By contraposition, it follows that the candidate evidence propositions E1, E2, E3, etc. can fail to form a partition only if your cognitive system is not perfectly sensitive to which of these world states obtains.

Together, I think all these arguments make a strong case that externalist scenarios involve evidence of irrationality.

Admittedly, the case is not conclusive.

Return to the red wall case. I've assumed that you have the same beliefs in the good case and in the bad case. This could be denied. Why not assume that you are sure to proportion your beliefs to your evidence in either case, so that you are certain that the wall is red if and only if you're in a good case? In the bad case, you then don't know that you are in a bad case because you don't know what you believe. (If you knew that you are unsure about the wall's colour, and you are sure that you are rational, you could infer that you're in the bad case.) Perhaps that's not the most intuitive version of the red wall case, but why should it be impossible?

Similarly for the clock case. Couldn't there be a version of the clock scenario in which you are certain to proportion your beliefs to your evidence? In that scenario, you don't know the precise hand position because you don't know what you believe.

What about the argument that (a) if your cognitive system is sensitive to your actual evidence then you should update by auto-epistemic conditionalisation, and that (b) if evidential internalism is false then so is certainty internalism? Both premises could be rejected. Das (2022), for example, argues that auto-epistemic conditionalisation violates plausible assumptions about the connection between evidence and belief. He concludes that we should reject "instrumentalist" arguments from expected accuracy or diachronic Dutch Books, which support auto-epistemic conditionalisation in externalist scenarios. (One might also argue that auto-epistemic conditionalisation is not an available option, despite the perfect sensitivity of your cognitive system to your evidence. Bronfman (2014) suggests that if you don't know what your total evidence is then you can't adopt a rule to conditionalise on your total evidence.)

What about the argument that if your cognitive system is sensitive not just to E but to TE then we should regard TE as your real evidence? Here one might appeal to some other job description for evidence. Perhaps evidence is what your senses tell you, and we can simply stipulate that, for example, your senses in the clock scenario don't tell you the exact hand position, and that your senses in the red wall scenario don't tell you that the wall isn't red.

Why does it matter if externalist scenarios involve imperfect rationality?

It matters because the answer helps clarify how we should think about these scenarios. If the connection between externalist scenarios and imperfect rationality is real then ideally rational agents who know that they are ideally rational can never find themselves in an externalist scenario. Externalist scenarios can only arise for non-ideal agents, or at least for agents who have reason to think that they are not ideal. More specifically, they always involve agents whose belief state is not reliably sensitive to their evidence. The epistemic norms that apply in such a case are plausibly different from the norms of ideal rationality. (If you're unable to reliably adopt belief state CrE in response to evidence E, it's not helpful to say that this is what you should do.)

Bronfman, Aaron. 2014. “Conditionalization and Not Knowing That One Knows.” Erkenntnis 79 (4): 871–92. doi.org/10.1007/s10670-013-9570-0.
Das, Nilanjan. 2022. “Externalism and Exploitability.” Philosophy and Phenomenological Research 104 (1): 101–28. doi.org/10.1111/phpr.12742.
Gallow, J. Dmitri. 2021. “Updating for Externalists.” Noûs 55 (3): 487–516. doi.org/10.1111/nous.12307.
Hild, Matthias. 1998. “Auto-Epistemology and Updating.” Philosophical Studies 92 (3): 321–61. doi.org/10.1023/A:1004229808144.
Isaacs, Yoaav, and Jeffrey Sanford Russell. 2022. “Updating Without Evidence.” Manuscript.
Williamson, Timothy. 2011. “Improbable Knowing.” In Evidentialism and Its Discontents, edited by Trent Dougherty, 147–64. Oxford: Oxford University Press.
