Higher-order evidence and non-ideal rationality

I've read around a bit in the literature on higher-order evidence. Two different ideas seem to go with this label. One concerns the possibility of inadequately responding to one's evidence. The other concerns the possibility of having imperfect information about one's evidence. I have a similar reaction to both issues, which I haven't seen in the papers I've looked at. Pointers very welcome.

I'll begin with the first issue.

Let's assume that a rational agent proportions her beliefs to her evidence. This can be hard. For example, it's often hard to properly evaluate statistical data. Suppose you have evaluated the data and reached the correct conclusion, but now receive misleading evidence that you've made a mistake. How should you react?

Some (e.g. Christensen (2010)) say you should reduce your confidence in the conclusion you've reached. Others (e.g. Tal (2021)) say you should remain steadfast and not reduce your confidence.

Before we look at these options, I think we should get clear about what exactly is going on in the relevant scenarios.

Let's model degree of evidential support by a conditional probability measure Pr: Pr(H/E) is the degree to which H is (absolutely) supported by E. If you proportion your beliefs to your evidence, and E is your total evidence, then Cr(H) = Pr(H/E), where Cr is your credence function.

Now, facts about evidential support are plausibly non-contingent. If E entails H, then this doesn't depend on what the world is like. Similarly, if the absolute level of support that H receives from E is x, then this doesn't vary from world to world.

If something is true at all worlds then it automatically has probability 1. So if Pr(H/E) = x, then Pr(Pr(H/E)=x) = 1, and Pr(Pr(H/E)=x / E') = 1 for any E' for which Pr(Pr(H/E)=x / E') is defined.
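Spelled out, as a minimal sketch (treating propositions as sets of worlds and assuming the ratio analysis of conditional probability; the point doesn't hang on that analysis): let A be the proposition that Pr(H/E) = x. Since A is true at every world, A just is the set W of all worlds, so

\[
\Pr(A) = \Pr(W) = 1,
\qquad
\Pr(A / E') = \frac{\Pr(A \wedge E')}{\Pr(E')} = \frac{\Pr(E')}{\Pr(E')} = 1
\quad \text{whenever } \Pr(E') > 0.
\]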

In words: it is not possible to have misleading evidence E' about the degree to which E supports H. But isn't this what's supposed to happen in the puzzle cases?

The problem is even more obvious in the commonly discussed case where someone has evidence that they have made a calculation error (e.g. Schoenfield (2018), Topey (2021)). Suppose you have correctly figured out that 23 times 17 is 391, say. Now you receive evidence that you've made a mistake. Such evidence would appear to be evidence that 23 times 17 is not 391. But there's no world at which 23 times 17 is not 391. And if H is true at no world, then Pr(H/E) is either zero or undefined. It's not possible to have evidence that 23 times 17 is not 391.

If you proportion your beliefs to your evidence, you can't be unsure about what your evidence supports (assuming you aren't unsure about what your evidence is, and this is not supposed to be the problem in our puzzle cases).

In the calculation error case, the assumption that your beliefs are proportioned to your evidence immediately entails that you must be steadfast. Pr(23*17=391 / E) is 1 for every E for which it is defined, so if Cr(23*17=391) = Pr(23*17=391 / E) then Cr(23*17=391) must be 1, no matter what evidence E you have.

There is a little more wiggle room in the statistics case.

For one thing, when you receive the "higher-order" evidence E' then your total evidence is no longer E. And while there is (arguably) no world at which Pr(H/E) is anything other than its actual value x, it is possible that Pr(H/E&E') is not x.

It's easy to come up with toy examples where this happens. Let H be the hypothesis that people rarely make mistakes when evaluating statistical data. Since E' suggests that you've made a mistake when evaluating the data E, E' is clearly relevant to H. But here the "higher-order" evidence is really first-order evidence. It's a data point that directly speaks to the generalisation H. Nobody can seriously think you should remain steadfast in light of such evidence. Let's set these cases aside.
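Here is a toy Bayesian model of that kind of case, with made-up numbers chosen purely for illustration (the first-order data E is left implicit; think of all probabilities as already conditioned on it). A colleague's report E' that you made a mistake lowers the probability of the generalisation H, because such reports are more likely if mistakes are common.

```python
# Toy model (made-up numbers): does a report E' ("you made a mistake")
# bear on H ("people rarely make mistakes when evaluating statistical data")?
from itertools import product

p_H = 0.5                       # prior probability that mistakes are rare
p_M_given = {True: 0.05,        # P(mistake this time | H)
             False: 0.5}        # P(mistake this time | not-H)
p_R_given = {True: 0.9,         # P(report E' | mistake)
             False: 0.1}        # P(report E' | no mistake)

# Build the joint distribution over (H, mistake, report) via the chain rule.
joint = {}
for h, m, r in product([True, False], repeat=3):
    p = p_H if h else 1 - p_H
    p *= p_M_given[h] if m else 1 - p_M_given[h]
    p *= p_R_given[m] if r else 1 - p_R_given[m]
    joint[(h, m, r)] = p

# Condition on the report E' and read off the probability of H.
p_report = sum(p for (h, m, r), p in joint.items() if r)
p_H_given_report = sum(p for (h, m, r), p in joint.items() if h and r) / p_report

print(f"P(H) = {p_H:.2f}, P(H / E') = {p_H_given_report:.2f}")
# -> P(H) = 0.50, P(H / E') = 0.22: the report counts against H.
```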

The puzzle cases are meant to be cases where the higher-order evidence E' is not directly relevant to H. But then it's hard to see how it could be indirectly relevant. When you (correctly) assess the statistical data, your conclusion is based on the relevant data E, not on the assumption that you haven't made a mistake. If you presented your reasoning, you wouldn't list the hypothesis that you haven't made a mistake as part of the evidence for your conclusion. How, then, could evidence that you have made a mistake be relevant?

In sum, if we assume that you proportion your beliefs to your evidence then (1) a common description of what happens in the puzzle cases – that you receive evidence about the evidential support relation – is false, and (2) remaining steadfast seems obviously correct.

But we should not assume that you (the agent in the puzzle cases) proportion your beliefs to your evidence.

Let's grant, for the moment, that ideal agents proportion their beliefs to their evidence. But if "you" are a real person then you are not ideal.

We real people fall short of ideal Bayesian rationality. We aren't logically omniscient. We aren't probabilistically coherent. We don't conditionalise on our evidence. We don't have infallible memories.

More importantly, our physiological limitations make it impossible to satisfy the Bayesian ideal. In some sense, I guess, it's still true that we should be ideal. We should be logically omniscient. We should be probabilistically coherent. We should have infallible memory. But this is a practically useless kind of 'should'. It can't guide us. We also need to know what we should do given that we're stuck with our limitations.

Return to a scenario in which you are presented with difficult-to-analyse statistical data E. Assume that the data, together with your background knowledge, strongly supports a surprising hypothesis H. In principle, you could figure this out, by stratifying on the right variables and crunching the numbers. But assume you are not able to do that, perhaps only because you don't have enough time. Should you become confident in H? Surely not. It follows that you should not proportion your beliefs to your evidence.
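As an aside, here is the kind of situation I have in mind, with made-up numbers (a toy sketch, not an example from any of the papers discussed): the pooled data suggest one conclusion, while stratifying on the right variable supports the opposite, more surprising one. Someone who can't do (or hasn't done) the stratified analysis is in no position to become confident in that surprising conclusion.

```python
# Toy illustration (made-up numbers): pooled data make the treatment look
# harmful, but stratifying by severity shows it helps in every subgroup
# (a Simpson's-paradox-style reversal).
groups = {
    # group: (recovered_treated, n_treated, recovered_untreated, n_untreated)
    "mild":   (8, 10, 70, 100),
    "severe": (40, 100, 3, 10),
}

def rate(recovered, n):
    return recovered / n

for name, (rt, nt, ru, nu) in groups.items():
    print(f"{name:>6}: treated {rate(rt, nt):.0%} vs untreated {rate(ru, nu):.0%}")

# Pool the two groups and compare again.
rt = sum(g[0] for g in groups.values()); nt = sum(g[1] for g in groups.values())
ru = sum(g[2] for g in groups.values()); nu = sum(g[3] for g in groups.values())
print(f"pooled: treated {rate(rt, nt):.0%} vs untreated {rate(ru, nu):.0%}")

#   mild: treated 80% vs untreated 70%
# severe: treated 40% vs untreated 30%
# pooled: treated 44% vs untreated 66%
```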

Similarly, we should all be open-minded about Goldbach's conjecture and other unsolved mathematical problems. It would be irrational for you or me to go around insisting that the conjecture is true. Again, it follows that we should not proportion our beliefs to our evidence, for it is almost certain that our evidence either renders Goldbach's conjecture certainly true or certainly false.

There is more to say about this, and I might say more if others haven't made the point. But let's return to the puzzle cases.

This time, let's not assume that you proportion your beliefs to your evidence. Let's not assume that you are logically omniscient and probabilistically coherent. In that case, you plausibly can receive evidence suggesting that 23 times 17 is not 391, or that the hypothesis H is not supported by the data E. This sort of thing happens all the time, at least to me.

What should you do, as a non-ideal agent, if you receive evidence that you've made a mistake on a particular occasion? In ordinary cases, I think you should reduce your confidence in the conclusions that you have reached.

To properly assess the question we would need a good model of non-ideal agents.

It's worth returning briefly to ideal agents. Suppose I'm right that non-ideal agents should not proportion their beliefs to their evidence. What, then, should you do if you're ideal but have good reason to think that you are not ideal?

Suppose, for example, that you have perfect memory but you also have strong evidence that your memory has been tampered with. Should you trust your memory? I'd say you should not.

Arguably, ideal agents who don't know that they are ideal should make adjustments to compensate for their possible non-ideality. (They should not be epistemically akratic.)

If that's right, then it doesn't matter if "you" in the puzzle cases are an ideal agent. Even as an ideal agent, you should not proportion your beliefs to your evidence.

That's where I disagree with most of the literature I've looked at. People simply assume that you should proportion your beliefs to your evidence, or conform to some other norm of ideal rationality.

Now a few words on the second topic that is associated with "higher-order evidence". Here we are interested in the possibility of getting non-trivial information about what one's (first-order) evidence consists in. This is only possible if one can be unsure about one's evidence.

What kinds of constraints does rationality impose on the connection between an agent's evidence and their beliefs about their evidence? Equivalently, assuming that rational beliefs are proportioned to the evidence: how is evidence of evidence related to evidence?

Let Ep mean that your evidence entails p. On a simple "internalist" picture, rational agents can never be unsure about their evidence: Ep entails EEp and ¬Ep entails E¬Ep. This assumption is arguably built into classical Bayesianism. As Hild (1998) pointed out, without internalism it is doubtful that rational agents should conditionalise (or Jeffrey-conditionalise) on their evidence. What should they do instead? Hild (1998) and Schoenfield (2017) suggest a simple alternative, but Gallow (2021) makes a strong case that it still presupposes internalism. Gallow (2021) presents a more complicated alternative, which is generalised (and further complicated) in Isaacs and Russell (2022).
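To fix ideas, here is a minimal toy model of a non-luminous evidence setup (my own construction, not taken from the papers just cited). Write E(w) for the agent's evidence at world w:

\[
\begin{aligned}
&W = \{w_1, w_2\}, \qquad E(w_1) = \{w_1, w_2\}, \qquad E(w_2) = \{w_2\}, \qquad p = \{w_2\}.\\
&\text{At } w_2\colon E(w_2) \subseteq p, \text{ so } Ep.\\
&\text{At } w_1\colon E(w_1) \not\subseteq p, \text{ so } \neg Ep; \text{ but } \neg Ep \text{ holds only at } w_1 \text{ and } E(w_1) \not\subseteq \{w_1\}, \text{ so } \neg E\neg Ep.
\end{aligned}
\]

So at w_1 your evidence doesn't entail p, yet you have no evidence that it doesn't: conditionalising on E(w_1), with any prior that doesn't rule out w_2, leaves you unsure whether your evidence entails p.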

All that is really nice and interesting. But do we need to go down this rabbit hole?

My own inclination is to stick with the classical Bayesian picture: ideal Bayesian agents are never unsure about their evidence. On the way I like to spell out the Bayesian picture, explained in Schwarz (2018), the evidence is always an "imaginary proposition". I'm inclined to build the relevant kind of internalism into the design template for ideal agents.

Admittedly, it is not obvious that we should do this. My main reasons are that (1) I like simple models, and externalist update rules aren't simple, (2) externalism rationalises some akrasia-type states that are intuitively irrational (see e.g. Dorst (2020)), and (3) I find the usual arguments against internalism (or "luminosity") unconvincing.

Point (3) brings me back to the topic of non-ideal rationality. Many arguments against evidential internalism turn on the observation that when we have a certain perceptual experience (which, in my model, corresponds to an imaginary proposition), we generally don't become certain that we have this particular experience. For all we know, says Williamson (2000), we could have a slightly different experience. We don't know, says Gallow (2019), how many stars are in our visual field when we look into the night sky.

That's true. In fact, I don't even know how many windows are on the East side of the library building just across from my office window, although I can clearly see all the windows. Real people don't conditionalise on the imaginary propositions given by their sensory input. The sensory input contains too much information, most of which is unimportant. Our cognitive system filters and compresses this information. I don't know how exactly this works. Nor do I know how it should work, given the constraints set by our physiology.

But these are good and important questions. To address them, it is likely that we would need to drop the simplifying assumption that our cognitive system represents the world by a simple, unique probability measure. Our brain has a variety of representational systems. There is sensory representation, short-term memory, and many different kinds of long-term memory. All of these make use of clever tricks and shortcuts for filtering, storing, and retrieving information. No sub-system is logically omniscient.

(I'm looking at a smaller building next to the library. I don't know how many windows it has, on the side that faces me: I'd need to count. But I see that the building has four floors. I also see that the rainwater downpipe divides that face of the building into two halves. The halves look almost identical. I see that there are the same number of windows on either side of the pipe, on each floor. The windows are arranged in a grid. I also see that there are three windows to the left of the pipe on the ground floor. What I see entails that there are 24 windows, but this is not something I see.)
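Spelling out the entailment, for what it's worth: three windows per half per floor, two halves, four floors, so

\[
3 \times 2 \times 4 = 24.
\]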

We need to distinguish between norms for ideal agents (who know that they are ideal) and norms for non-ideal agents like us (and agents who think they are such agents).

For ideal agents (who know that they are ideal), higher-order evidence, about what the first-order evidence is or about what it supports, is plausibly irrelevant. For non-ideal agents (and agents who don't know that they are ideal), such evidence plausibly matters. But it's hard to say how it should matter, without a good model of non-ideal rationality.

Christensen, David. 2010. “Higher-Order Evidence.” Philosophy and Phenomenological Research 81 (1): 185–215. doi.org/10.1111/j.1933-1592.2010.00366.x.
Dorst, Kevin. 2020. “Higher-Order Evidence.” The Routledge Handbook for the Philosophy of Evidence. Routledge.
Gallow, J. Dmitri. 2019. “Diachronic Dutch Books and Evidential Import.” Philosophy and Phenomenological Research 99 (1): 49–80. doi.org/10.1111/phpr.12471.
Gallow, J. Dmitri. 2021. “Updating for Externalists.” Noûs 55 (3): 487–516. doi.org/10.1111/nous.12307.
Hild, Matthias. 1998. “Auto-Epistemology and Updating.” Philosophical Studies 92 (3): 321–61. doi.org/10.1023/A:1004229808144.
Isaacs, Yoaav, and Jeffrey Sanford Russell. 2022. “Updating Without Evidence.” Manuscript.
Schoenfield, Miriam. 2017. “Conditionalization Does Not (in General) Maximize Expected Accuracy.” Mind 126 (504): 1155–87. doi.org/10.1093/mind/fzw027.
Schoenfield, Miriam. 2018. “An Accuracy Based Approach to Higher Order Evidence.” Philosophy and Phenomenological Research 96 (3): 690–715. doi.org/10.1111/phpr.12329.
Schwarz, Wolfgang. 2018. “Imaginary Foundations.” Ergo 29: 764–89.
Tal, Eyal. 2021. “Is Higher-Order Evidence Evidence?” Philosophical Studies 178 (10): 3157–75. doi.org/10.1007/s11098-020-01574-0.
Topey, Brett. 2021. “Higher-Order Evidence and the Dynamics of Self-Location.” Manuscript.
Williamson, Timothy. 2000. Knowledge and Its Limits. Oxford: Oxford University Press.

Comments

# on 28 April 2022, 16:58

Hey Wo, I agree with much of this. There may be some detailed differences between us, but I think my view is broadly congenial to yours, and vice versa. Here's my view in outline:

1. Facts about what your evidence is and what it supports are probabilistically luminous in the sense that they always have evidential probability 1. As a result, you cannot have misleading higher-order evidence about what your evidence is or what it supports, although you can have misleading higher-order evidence about whether your beliefs are properly based on your evidence.

2. There is no single answer to questions about what you "should" believe when you acquire misleading evidence of this latter kind. Deontic language is context-sensitive: one dimension of context-sensitivity concerns degrees of idealization. At a minimum, we need to distinguish what you should believe by ideal versus non-ideal standards of epistemic rationality or justification.

3. By ideal standards, you should always believe what your evidence supports. In particular, you should always be certain about what your evidence is and what it supports. But this ideal is unachievable for non-ideal agents: indeed, the policy of trying to satisfy ideal standards is counterproductive in the sense that it is liable to make you less epistemically rational than you would be by following non-ideal standards that you can more reliably follow.

4. By non-ideal standards, you shouldn't always believe what your evidence supports. In particular, you should sometimes be uncertain about what your evidence is and what it supports. When you acquire misleading higher-order evidence that you've responded improperly to your evidence, you should often "bracket" that evidence rather than steadfastly maintaining your beliefs. So conciliationism is right about non-ideal rationality, although steadfastness is right about ideal rationality.

For more details, check out Chapter 10 on "Higher-Order Evidence" in my book, The Epistemic Role of Consciousness (Oxford, 2019), which you can access on Oxford Scholarship Online. I'd also recommend my paper, "The Epistemic Function of Higher-Order Evidence" in Propositional and Doxastic Justification: New Essays on their Nature and Significance, edited by Paul Silva and Luis Oliveira (Routledge, 2022). There's also an earlier paper, "Ideal Rationality and Logical Omniscience" in Synthese from 2015, but the later work covers similar ground in what I hope is a much more detailed and comprehensive way. You can find all this stuff here: https://philpapers.org/s/Declan%20Smithies

I hope we get a chance to discuss all this at some point!

# on 29 April 2022, 03:27

You don't have any references from the Bayesian statistical literature. If you have accumulated higher-order evidence about the unreliability of a particular source of evidence, then I think the ideal observer has to recalculate the entire joint probability of all previous judgements so affected (i.e., revise the priors for the current observation).

Consider (random-effects) Bayesian meta-analysis, where one combines different sources of evidence about the same hypothesis: one uses estimated error distributions (which can be subjective prior distributions based on eyeballing simple quality scores for each data source) to down-weight contributions that one considers more unreliable. I don't see much difference between the mental processes of the idealized Bayesian decision maker and the idealized scientific knower carrying out causal inference using Bayesian networks, say, a la Pearl. I was going to recommend https://academic.oup.com/philmat/article-abstract/28/1/1/5650475?login=false as a philosophical example (counterfactuals in structural equation modelling), but it is not directly applicable (no "higher order" reliability information of the type I, at least, have to routinely include in the model).

# on 29 April 2022, 08:26

Thanks both!

@Declan: Your view does sound almost exactly like what I want to say. I'll definitely check out the texts you mentioned. Do you know why the view is so unpopular? Are there any good objections to it?

@David: Right, I haven't thought of this from a Bayesian statistics angle, and I agree this might be useful. It's not entirely obvious how the philosophical concept of evidence maps onto the data in Bayesian meta-analysis. I don't think a case in which we learn, say, that the sampling error for a certain study was higher than we originally assumed is a case of higher-order evidence in either of the two senses I have discussed, although it is higher-order evidence in some other sense. I need to think more about this.
