Belief downloaders and epistemic bribes

Greaves (2013) describes a case in which adopting a single false belief would (supposedly) be rewarded by many true beliefs.

Emily is taking a walk through the Garden of Epistemic Imps. A child plays on the grass in front of her. In a nearby summerhouse are n further children, each of whom may or may not come out to play in a minute. They are able to read Emily's mind, and their algorithm for deciding whether to play outdoors is as follows. If she forms degree of belief 0 that there is now a child before her, they will come out to play. If she forms degree of belief 1 that there is a child before her, they will roll a fair die, and come out to play iff the outcome is an even number. […]

Greaves assumes that Emily could maximize the expected accuracy of her belief state by becoming certain that there is no child before her and that all the other children will come out to play.
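
To get a feel for the arithmetic behind this claim, here is a rough back-of-the-envelope calculation (not Greaves's own; I'm assuming a quadratic "Brier" measure of inaccuracy, a simplified version of the case in which credence 0 in the child proposition guarantees that all n children come out, and that being unsure amounts to credence 1/2 in each proposition about the further children). If Emily keeps credence 1 in the true proposition that there is a child before her and credence 1/2 in each of the n further propositions, her inaccuracy is

$$(1-1)^2 + n \cdot (1/2 - v_i)^2 = n/4,$$

whatever the truth-values $v_i$ of the further propositions turn out to be. If she instead adopts credence 0 in the child proposition and credence 1 in each of the further propositions, the children come out, and her inaccuracy is

$$(0-1)^2 + n \cdot (1-1)^2 = 1.$$

For n > 4, the intuitively irrational state is therefore guaranteed to be more accurate.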

As she points out, our intuitive concept of rationality does not endorse these attitudes. Intuitively, Emily ought to remain in the less accurate state in which she is certain that there is a child before her and unsure about what the other children will do.

I'm interested in how the Emily case relates to the following case, brought up in Christensen (2000).

Suppose that I have a serious lay interest in fish, and have a fairly extensive body of beliefs about them. At a party, I meet a professional ichthyologist. […] I have a belief-downloader, which works as follows: If I turn it on, it scans both of our brains, until it finds some ichthyological proposition about which we disagree. It then replaces my belief with that of the ichthyologist, and turns itself off.

Christensen wonders whether epistemic rationality supports using the downloader. He intuits that the answer is yes. My intuitions are a little hazy, but I'm inclined to agree.

Why is Christensen allowed to trade his doxastic state for a more accurate one, but Emily is not?

Before we look at this question, we need to tidy up Christensen's scenario.

To make the scenario more concrete, let's assume that when activated, the downloader will detect the following disagreement: Christensen believes that emperor angelfish have green stripes; the ichthyologist believes they do not.

If Christensen is at all normal, he has many other beliefs related to his belief that emperor angelfish have green stripes. He believes that some angelfish have green stripes. If he is good at disjunction introduction, he might also believe that emperor angelfish have either green stripes or orange legs. Let's also assume that he remembers the basis for his belief about the colour of emperor angelfish: he remembers seeing a picture in a book with the caption 'emperor angelfish', showing a fish with green and yellow stripes.

Now what happens when Christensen's belief, that emperor angelfish have green stripes, gets "replaced" by the ichthyologist's belief that emperor angelfish do not have green stripes? Does Christensen retain all the other beliefs? Does he still believe that some angelfish have green stripes, even if that belief was based solely on his belief about emperor angelfish? Does he still believe that emperor angelfish have either green stripes or orange legs – so that he can now infer that they have orange legs? Does he still remember seeing the green-and-yellow fish picture in the book, and does he still believe that the picture was an accurate representation of an emperor angelfish?

If the downloader would leave Christensen in such an incoherent mess of a state, I don't see how using the downloader could be epistemically acceptable. Doing so would certainly not improve the accuracy of Christensen's belief state.

Let's assume, then, that the downloader would not only replace the relevant belief itself but would also make further adjustments to Christensen's belief state to restore consistency and coherence.

This process is likely to involve some trade-offs. Perhaps there really are angelfish with green stripes. Perhaps Christensen really saw that picture in that book (but, unbeknownst to him, the colour printing was flawed). The downloader would make him lose these true beliefs, along with a range of true beliefs formed by disjunction introduction. To compensate for this, we may assume that the downloader would give him a whole range of new true beliefs, not just about the colour of emperor angelfish but also about the colour and habits of other fish about which Christensen did not previously have an opinion.

Let's return to the question why the trade-off seems acceptable in Christensen's case but not in Emily's.

One superficial difference between the two cases is that Christensen doesn't know which beliefs will be replaced if he uses the downloader, nor what they will be replaced with.

But this doesn't seem relevant. Let's assume that the downloader, when activated, first displays the beliefs that will be removed and the new beliefs that will be added, offering a choice between continuing with the replacement and aborting. (If Christensen chooses to abort, he is put back into the belief state before he saw what's on the display.)

In our example, the "to be removed" list on the display might contain: "emperor angelfish have green stripes", "some angelfish have green stripes", "I saw a picture of a fish with green and yellow stripes in a book with the caption 'emperor angelfish'", and so on. The "to be added" list might contain things like "emperor angelfish have blue and yellow stripes" and "humphead parrotfish eat coral".

We may also imagine that the display indicates which of the beliefs on the two lists are true and which are false.

If it's rational to use the downloader without the display, it's hard to see how it could be irrational to use the improved downloader with the display. If you prefer a certain treatment to no treatment, and the treatment might unfold in different ways, then you can't rationally prefer no treatment to each of the specific ways in which the treatment might unfold.

So it would be epistemically OK for Christensen to choose 'continue'.

Another difference between the Christensen case and the Emily case is that Christensen is facing a practical choice. Emily is not. Could that make a difference?

Suppose Emily could, by the push of a button, bring about the belief state in which she is certain that there's no child in front of her and that the other children will come out to play. Would it be rational for her to push the button?

Here we assume, as we have all along in order to make sense of Christensen's question (whether he should use the downloader), that epistemic rationality has something to say about an agent's practical options. This isn't obvious, but it's also not obviously false. Some modes of inquiry are good means of gathering knowledge, others are not, and epistemic rationality arguably supports using the good ones.

So, would it be epistemically OK for Emily to push the button? I'm not sure.

It's time to clarify what exactly the button would do.

We know that pushing the button would cause Emily to become convinced that there is no child in front of her. Would she still believe that she has visual experiences as of a child, and that her experiences are trustworthy? Presumably not. The alternative belief state should not be an inconsistent mess.

What, then, is the alternative belief state we are meant to evaluate in the Emily scenario?

Here is one possibility. The alternative belief state is one in which she is certain that she has no visual experiences as of a child. Her belief state is just as it would be if she weren't looking at a child.

I think it would be irrational for Emily to be in this state.

If you have visual experiences as of a child, but you are convinced that there's no child before you and that you are not hallucinating, etc., then something's deeply wrong with you.

It is a well-known problem for coherentism that it can't explain this fact. Accuracy-first epistemology seems to inherit the problem.

In the present version of the story, we may agree that pushing the button would render Emily's belief state more accurate, but it would not make it epistemically better. Christensen's downloader, by contrast, would bring about a genuine improvement. It wouldn't cause a mismatch between experience and belief. This could explain the difference.

There is another difference between the two cases. In the Emily case, the accuracy of the alternative belief state depends on whether she adopts that belief state. If adopted, the belief state will be highly accurate because the other children will come out to play in response to Emily's adoption of the belief state. In the Christensen case, the truth-value of the potentially new beliefs about fish does not depend on whether he uses the downloader.

Greaves assumes that this is a crucial feature of the Emily case. So do Carr (2017), Konek and Levinstein (2019), and others who have discussed her puzzle.

But I don't think it matters.

The reason why Emily should not adopt the alternative belief state is that it would bring about an irrational mismatch between her beliefs and her experiences. This has nothing to do with whether adopting the belief state would affect the truth-value of her beliefs about the other children.

(Corollary: Savage's decision theory can't rescue accuracy-first epistemology.)

We can tell another version of the Emily story that does not involve a mismatch between experiences and beliefs.

Here, the alternative belief state that we need to evaluate is a state in which she correctly believes that she has experiences as of a child, but she also (falsely) believes that she has taken a drug that causes hallucinations of children, and that her present experience of the child is caused by the drug.

Let's assume, again, that this state is more accurate than Emily's actual (current) state. Would it be rational for Emily to be in the alternative state? Would it be rational for her to bring the state about by pushing a button?

My intuitions are not as clear as in the previous version, where the alternative state involves a mismatch between experience and belief. But I'm still inclined to say that it would be irrational for Emily to be in the alternative state, and also that it would be irrational for her to push the button.

Why?

Perhaps because Emily has no reason to think that she has taken a drug and that she might be hallucinating. If she were in the alternative state she would be certain of something that is not only false but that also isn't supported by any evidence she has ever received. That's irrational, and the irrationality can't be compensated for by extra true beliefs.

But isn't this also true for the Christensen case? Christensen has good reason to think that emperor angelfish have green stripes. In the display-free version of the scenario, he has never received any evidence suggesting that emperor angelfish have blue stripes. How could it be rational for him to be certain that they have blue stripes?

Well, Christensen knows what the downloader will do. If, after having used the downloader, Christensen finds himself convinced that emperor angelfish have blue stripes, without remembering an evidential basis for this conviction, he can infer that the conviction probably originates from the ichthyologist via the downloader, and therefore that it is likely to be accurate. The new beliefs are evidence of their own truth.

Not so in Emily's case. On the contrary, if Emily remembers pushing the button, she remembers causing herself to have an irrational belief about having taken a drug. How could she rationally hold on to that belief?

Perhaps we should assume that pushing the button would make Emily forget that she has pushed the button.

We can make the same assumption about the Christensen case. What if, after using the downloader, Christensen forgets that he has used it, and even that he had the option of using it? His new fish beliefs then aren't evidence of their own truth.

Still, the beliefs are true, and that might make a difference. Emily would become certain of something that is false and for which she has never had any evidence. This might be worse than becoming certain of something that is true but for which one has never had any evidence.

But couldn't the downloader also bring about the latter kind of state? Perhaps among the many fish beliefs the downloader would transfer is a strong belief in some falsehood. If Christensen uses the downloader, with or without the display, and forgets that he has been given the option, he will become certain of something that is false and for which he has never had any evidence.

It still seems OK to use the downloader, if that would also make him sure of many equally important truths.

"Equally important" points at one remaining difference between the two cases. Our beliefs about whether our senses are reliable are hugely important to our cognitive life. Beliefs about the colours and eating habits of different kinds of fish are not.

Many true beliefs about fish can perhaps make up for a single false belief about fish. But many true beliefs about whether some children will come out to play do not make up for the single false belief that one's senses are unreliable.

Carr, Jennifer Rose. 2017. “Epistemic Utility Theory and the Aim of Belief.” Philosophy and Phenomenological Research 95 (3): 511–34. doi.org/10.1111/phpr.12436.
Christensen, David. 2000. “Diachronic Coherence Versus Epistemic Impartiality.” The Philosophical Review 109 (3): 349–71.
Greaves, Hilary. 2013. “Epistemic Decision Theory.” Mind 122 (488): 915–52. doi.org/10.1093/mind/fzt090.
Konek, Jason, and Benjamin A. Levinstein. 2019. “The Foundations of Epistemic Decision Theory.” Mind 128 (509): 69–107. doi.org/10.1093/mind/fzw044.
