Christensen on ideal rationality

I want to say something about a passage in Christensen (2023) that echoes a longer discussion in Christensen (2007).

Here's a familiar kind of scenario from the debate about higher-order evidence.

You have come to believe a complex logical truth P on the basis of some reasoning. Now you get evidence suggesting that the reasoning faculty you have employed is unreliable.

Christensen thinks that the evidence should reduce your confidence in P. I'm not sure about this, but I'm inclined to agree. Christensen also says something else that I don't think is true: that you should reduce your confidence in P even if you're ideally rational.

"I do not see how it could be ideally rational to be confident in P while thinking that one's reasoning to one's belief in P was likely unreliable."

This raises a (fairly obvious) puzzle. Whatever specific evidence you may have about your reasoning faculty, the evidence logically entails that your reasoning led to a correct conclusion when you derived P. That's because P is a logical truth and hence entailed by everything. But how could evidence which entails that your reasoning led to the correct conclusion call for a reduced credence in that very conclusion?
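To put the puzzle in Bayesian terms (my gloss, not anything Christensen commits to): since P is a logical truth, E ∧ ¬P is a contradiction for any evidence E, so every probabilistically coherent credence function Cr with Cr(E) > 0 satisfies

  Cr(P | E) = Cr(P ∧ E) / Cr(E) = Cr(E) / Cr(E) = 1.

Conditionalizing on the evidence therefore cannot lower a coherent agent's credence in P.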

As an ideal agent, you could have evidence that your reasoning faculties are unreliable in general. But you could not have evidence that any particular instance of your reasoning led to a false conclusion, assuming that you still remember the conclusion. Otherwise it would follow that you have evidence for something whose negation is entailed by that evidence, and how could that be?

I'm not sure what Christensen thinks about this. I suspect he bites the bullet and accepts that your evidence can support a proposition even though it entails its negation. In Christensen (2007, 16ff.), he argues that evidence about the general unreliability of your reasoning faculty must cast doubt on any particular instance in which you use the faculty. If you could be rationally certain for each instance that it led to a correct conclusion, then you could become rationally confident that your reasoning faculty is reliable simply by using it over and over. He thinks that this sort of bootstrapping would be problematic.

All this makes some sense, I think, if you're a non-ideal agent. Suppose you have found a subtly faulty proof of ¬P, and you don't spot the mistake. Then your evidence entails P (and that your proof is mistaken), but you should be somewhat confident that P is false. The problem is that you don't realise what your evidence supports, because you can't see through all its consequences. Ideal agents don't have this problem.

I suspect that what's leading Christensen astray is the false assumption that an ideal agent would base their beliefs in logical truths on some kind of reasoning.

An ideally rational agent has probabilistic credences (I think). Probabilistic coherence implies that logical truths have credence 1. As a probabilistically coherent agent, you are certain of P not because you have gone through a proof, or because you have a "special way of seeing clearly and distinctly that occurs when [you] contemplate claims like [P]" (Christensen (2007, 19)). No, you are certain of P simply because you are coherent.
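The derivation is standard, and doesn't depend on anything specific to this case: coherence requires Cr(⊤) = 1 for any tautology ⊤, and it requires logically equivalent propositions to receive the same credence. Since P is a logical truth, it is logically equivalent to ⊤, so

  Cr(P) = Cr(⊤) = 1.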

If your certainty in P is not the result of some cognitive process, then empirical evidence about the unreliability of your cognitive processes is obviously irrelevant to whether you should retain your certainty.

Suppose you nonetheless go through some a priori reasoning. You wouldn't do this in order to find out whether the conclusion is true, for you already know the answer to that question. But you might do it, for example, to find out whether your reasoning faculties are reliable – an empirical question whose answer you may not yet know. If, for example, you do a number of tableau proofs in your head, and you observe that they all lead to conclusions that you already knew, independently of the proofs, to be true, then you can reasonably infer that you are reliable at this kind of proof. There's nothing puzzling or problematic about this. It's not a kind of bootstrapping.
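Here is a toy version of this inference, with made-up numbers. Let R be the hypothesis that you are reliable at tableau proofs. Suppose Cr(R) = 0.5, and suppose each proof reaches a true conclusion with probability 0.99 given R and 0.5 given ¬R. If you then do ten proofs whose conclusions all check out against what you knew independently, Bayes' theorem gives

  Cr(R | 10 hits) = (0.5 · 0.99^10) / (0.5 · 0.99^10 + 0.5 · 0.5^10) ≈ 0.999.

The update is legitimate because the proofs' verdicts are tested against knowledge you have independently of the proofs.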

An ideally rational agent has no use for reasoning as a means of extending her knowledge. If something follows from what she knows, she already knows it; otherwise she wouldn't be ideally rational. That's why familiar models of ideal rationality have nothing to say about reasoning. Reasoning is a sign of cognitive imperfection.

Christensen, David. 2007. “Does Murphy’s Law Apply in Epistemology? Self-doubt and Rational Ideals.” Oxford Studies in Epistemology, 1.
Christensen, David. 2023. “Epistemic Akrasia: No Apology Required.” Noûs. https://doi.org/10.1111/nous.12441.
