DiPaolo on second best epistemology

Covid finally caught me, so I fell behind with everything. Let's try to get back to the blogging schedule. This time, I want to recommend DiPaolo (2019). It's a great paper that emphasizes the difference between ideal ("primary") and non-ideal ("secondary") norms in epistemology.

The central idea is that epistemically fallible agents are subject to different norms than infallible agents. An ideal rational agent would, for example, never make a mistake when dividing a restaurant bill. For them, double-checking the result is a waste of time. They shouldn't do it. We non-ideal folk, by contrast, should sometimes double-check the result. As the example illustrates, the "secondary" norms for non-ideal agents aren't just softer versions of the "primary" norms for ideal agents. They can be entirely different.

One might think that an agent is subject to secondary norms iff they can't satisfy the primary norms. Take Professor Procrastinate, from Jackson and Pargetter (1986), who is asked to review a manuscript. Ideally, he should accept the invitation and complete the task. If he can bring himself to do this, it's what he ought to do. If he can't, a secondary norm kicks in: he should decline the invitation.

DiPaolo rejects this diagnosis. He seems to argue (on p.2051) that ability constraints aren't relevant to when an agent is subject to secondary norms. I don't understand his argument. Perhaps his point is only that secondary norms can kick in even if the agent is able to conform to the ideal norms. In the bill example, one might argue that we should double-check even though we are able to correctly divide the bill: it's not impossible that we get the answer right. The secondary norm seems to kick in because there's a chance that we make a mistake.

Throughout the paper, DiPaolo's focus is on cases in which there's some such risk of noncompliance with ideal (primary) norms. Risk of noncompliance, he says, is fallibility – a disposition to make mistakes. And fallibility is enough to trigger secondary norms.

DiPaolo doesn't really argue for this claim. He seems to find it obvious. By way of analogy, he describes (on p.2048) a case in which a doctor doesn't know which of two drugs would cure her patient and which would kill him. Ideally, DiPaolo says, the doctor should administer the curing drug. But due to risk of noncompliance with this ideal norm, a secondary norm kicks in: the doctor should administer neither drug.

DiPaolo then goes on to apply these ideas to the problem of peer disagreement and higher-order evidence. Suppose you and I arrive at different beliefs, based on the same evidence: you are confident that the defendant is guilty (say), while I am confident that he is innocent. Conciliationism says that when you learn of my verdict, you should reduce your confidence in the defendant's guilt. The Right Reasons view says that your credence should be insensitive to the information about my verdict. In particular, if you assessed the evidence correctly, then you should stick to your assessment. DiPaolo suggests that the Right Reasons verdict is right about what you ideally ought to do. But there's a risk of noncompliance, due to your fallibility in assessing evidence. As a result, the secondary norm of conciliationism kicks in.

I'm inclined to agree with all this. (See here.) But I don't think DiPaolo is quite right about the conditions under which secondary norms kick in.

Return to the doctor who can't tell which drug would cure and which would kill her patient. Is it true that the doctor should ideally administer the curing drug? Is it true that due to her fallibility, she should administer neither drug? This isn't obvious to me. One might instead say that administering the curing drug is what she objectively ought to do, even given her fallibility, and that administering neither drug is the subjectively best option – best in light of the doctor's information. Alternatively, one might say that it is objectively wrong to impose a severe risk of death on the patient, so that administering the curing drug is in no sense the right thing to do. It isn't clear to me that this is a case in which primary and secondary norms come apart. Some of DiPaolo's epistemic cases arguably have a similar structure.

In general, DiPaolo seems to suggest that secondary norms kick in whenever the agent is disposed not to comply with the primary norms. This doesn't seem right. What if the agent has such a disposition only because they don't even try to comply with the primary norms? In that case, they arguably should comply with the primary norms.

It's also not clear whether what matters is objective risk of noncompliance with the ideal norms or subjective uncertainty about compliance. In many of DiPaolo's applications, the agent has evidence that she has made a mistake. This is compatible with the hypothesis that it is in fact impossible for the agent to make a mistake. If so, should the agent compensate for the merely subjective possibility of a mistake? Conversely, suppose an agent is disposed to make a mistake but is rationally confident that she won't make one. Should the agent compensate for the merely objective possibility of a mistake? Neither answer is obvious.

I'm also not convinced by the case against an ability-based account of when secondary norms kick in. If you're not sure whether you will correctly perform a calculation, then there's a sense in which it's not true that you can perform the calculation. This is the "transparent" sense that I wrote about in Schwarz (2020). Perhaps secondary norms kick in iff the agent can't transparently comply with the primary norms?

There's more work to be done on the question of when an agent is subject to secondary norms.

The issue is probably connected to substantive issues in epistemology. Consider our fallibility at telling red things apart from white things under red light. (We are disposed to make mistakes when we judge whether an object is red.) Many philosophers are attracted to the idea that we should be confident that we are seeing something red when in fact we are, even though this comes with a risk of believing something to be red that is actually white. The philosophers in question hold that this risk should be accepted, not compensated for by secondary norms.

The same could be said about peer disagreement. If you stick to your original assessment, there's a risk that you are badly wrong. Should you compensate for this risk, given your fallibility? I'm inclined to say yes, but it would be good to have an argument for this.

As it stands, DiPaolo offers us a way of reconciling apparently conflicting demands. In a case of peer disagreement, we can say, if we want, that there's a ("primary") sense in which you should believe whatever your evidence supports, and another ("secondary") sense in which you should not. That's great. But it would be even better if we could convince people who don't want to say this that it is the right thing to say.

DiPaolo, Joshua. 2019. “Second Best Epistemology: Fallibility and Normativity.” Philosophical Studies 176 (8): 2043–66. https://doi.org/10.1007/s11098-018-1110-y.
Jackson, Frank, and Robert Pargetter. 1986. “Oughts, Options, and Actualism.” The Philosophical Review 95: 233–55.
Schwarz, Wolfgang. 2020. “Ability and Possibility.” Philosophers’ Imprint 20.
