Realistic Newcomb Problems (EDC, ch.4)

Chapter 4 of Evidence, Decision and Causality considers whether there are any "realistic" Newcomb Problems – and in particular, whether there are any such cases in which EDT gives obviously wrong advice.

Arif goes through some putative examples and rejects most of them. The only realistic Newcomb Problems he admits are versions of the Prisoners' Dilemma (as suggested in Lewis (1979)). Here EDT recommends cooperation while CDT recommends defection. Neither answer is obviously wrong.

For what it's worth, I don't think even Newcomb's original Problem is all that unrealistic. To be sure, it's hard to find a suitable predictor. But we don't need that. As long as the subject believes that their choice has been predicted, they face Newcomb's Problem. And why couldn't you (rationally) come to have that belief? The subjects in Shafir and Tversky (1992) arguably did, although Arif disagrees. Here's a possibly even better setup:

You've just swallowed a pill. I enter the room, carrying two boxes, and tell you the following. "The pill you just swallowed blocks the formation of episodic memories for the next few minutes. Now look at these two boxes. The transparent box contains $10, as you can see. The opaque box contains either nothing or $1000. You can take just the opaque box or both. The money you find in your box(es) will go to your favourite charity. Yesterday, you were given the same choice, and the same information. (You also took the pill at the start; that's why you don't remember.) Based on what you did yesterday, we can reliably predict what you will do today. If you took both boxes yesterday, we put nothing in the opaque box today. If you took just the opaque box, we put $1000 into the box." You have seen me present other people with the same choice on many occasions. You have also seen me offer the choice to the same person on many successive days, and you've seen that I correctly foresaw their choices.

To set this up, the other subjects you observed could be paid actors. I don't know whether there is an effective way to temporarily block the formation of episodic memories. If there is, one could even run the setup on two successive days and actually use your choice in the first round to determine the content of the boxes. But as I said, it's enough if you are reasonably confident that this is what I did. Under suitable circumstances, I think many people in your position would assign a credence greater than 0.8 to the hypothesis that I'm telling the truth.

Also, isn't Resnik (1987) right that Calvinists who believe in divine predestination effectively face a real-life Newcomb Problem? (The case is discussed in chapter 1 of EDC, but not mentioned in the present chapter.)

Anyway, I don't think it's important whether Newcomb Problems could easily arise in a world like ours, as long as we can clearly understand what such a situation would involve. (Price (1992), for example, claims that we can't. According to Price, if you believe that the content of the opaque box evidentially depends on your choice, then you must also believe that the content of the box causally depends on your choice, even though it is stipulated that there is no such causal dependence.)

In the original Newcomb Problem, intuitions are notoriously divided. Some have argued that there are more realistic cases in which intuitions are clearer. The most famous examples are "medical Newcomb Problems":

Smoking Lesion: You are convinced that there is a strong correlation between smoking and lung cancer. But you don't believe that smoking causes cancer. Instead, you think the correlation arises from a common cause – a gene variant that causes both smoking and cancer, by separate causal mechanisms. You slightly prefer smoking to not smoking, and you strongly prefer not getting cancer to getting cancer. Should you smoke?

Here there seems to be an evidential connection between smoking and getting cancer – given your beliefs, it is more likely that you'll get cancer if you smoke than if you don't smoke. As a result, EDT appears to say that you should refrain from smoking. Almost everyone agrees that this is the wrong advice.
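
To make the two verdicts concrete, here is a toy calculation with made-up numbers (they are not from the book). Suppose your credence in getting cancer is 0.5 conditional on smoking and 0.1 conditional on not smoking; smoking is worth one extra unit of utility and cancer costs 100, so that u(smoke ∧ ¬cancer) = 1, u(smoke ∧ cancer) = −99, u(¬smoke ∧ ¬cancer) = 0, and u(¬smoke ∧ cancer) = −100. EDT evaluates an act A by its news value,

\[ V(A) = \sum_{S} Cr(S \mid A)\, u(A \wedge S), \]

which gives

\[ V(\text{smoke}) = 0.5 \cdot (-99) + 0.5 \cdot 1 = -49, \qquad V(\neg\text{smoke}) = 0.1 \cdot (-100) + 0.9 \cdot 0 = -10. \]

CDT instead weights the outcomes by credences that track only causal dependence. Since smoking has no causal influence on cancer, the relevant probability of cancer is the same for both acts, say p, and

\[ U(\text{smoke}) = p \cdot (-99) + (1-p) \cdot 1 = 1 - 100p > -100p = U(\neg\text{smoke}), \]

so CDT recommends smoking whatever the value of p.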

The standard response to this objection, which Arif endorses, is the "tickle defence". It begins by asking how the faulty gene is believed to cause smoking. The most plausible answer is that the gene causes smoking by causing a strong urge or desire to smoke. But you can plausibly tell whether you have such an urge or desire. If you feel the desire, you can infer that you probably have the gene. If you don't feel it, you can infer that you probably don't have the gene. Either way, the act of smoking then provides no further evidence about the gene and thereby about whether you'll get cancer. And so EDT advises you to smoke.
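
In probabilistic terms (this is my gloss, not Arif's formulation): let G be the hypothesis that you have the gene and D the proposition that you have the urge to smoke. The tickle defence assumes that, by your lights, D screens off the act from the gene,

\[ Cr(G \mid \text{smoke} \wedge D) = Cr(G \mid \neg\text{smoke} \wedge D) = Cr(G \mid D), \]

and likewise with ¬D in place of D. Since you know by introspection which of D and ¬D obtains, your credences are already conditionalised on that information, and so the cancer probabilities that enter V(smoke) and V(¬smoke) coincide. With utilities like the toy ones above, the two news values then differ only by the small bonus you attach to smoking itself, and EDT recommends smoking.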

The argument rests on two assumptions: you believe that the faulty gene causes smoking by causing a desire to smoke, and you have introspective knowledge of your desires. Plausible or not, both of these assumptions can be questioned.

With respect to the first, van Fraassen once suggested a version of the case in which the faulty gene affects not (or not only) your desires but also whether an intention to smoke actually leads to smoking behaviour.

Arif discusses this in footnote 12 on p.92. He argues that in such a case you don't really have a choice between not smoking and smoking but between something like not smoking and trying to smoke. While smoking may be evidence of the faulty gene, trying to smoke will not. EDT therefore says you should try to smoke. This seems right.

I'm more worried about the second assumption. I see no good reason to think that a rational decision-maker must have full information about her desires. Citing Horwich, Arif gestures at the idea that without full information about one's credences and utilities, one could not "apply" decision theoretic rules. But that's just confused. If an agent doesn't know her utilities, she may not be in a position to know the expected utility of her options. But decision theory doesn't require any such knowledge. Decision theory merely requires that the agent choose an act that maximises expected utility, relative to their actual beliefs and desires. It is easy to imagine an agent who systematically satisfies this requirement even though they are completely ignorant of their beliefs and desires.

In fact, earlier in the book Arif invited us to think of an agent's beliefs and desires as nothing but a "complicated amalgam of preferences", derived by Bolker's representation theorem. The Jeffrey-Bolker axioms do not imply that the agent has full knowledge of her beliefs and desires.

One might argue that cases in which your acts are driven by unknown desires are not "realistic". But why think that? Many popular psychological theories suggest otherwise.

On pp.104f., Arif suggests that even if you don't initially know about your desire to smoke, you would acquire that information through deliberation. Suppose making a decision to perform an act goes hand in hand with becoming confident that one will perform the act, as I suggested in the previous post. If you decide to smoke, you then become confident that you will smoke, and one might think that this informs you about the desire, breaking the evidential connection between smoking and getting cancer.

But this can't be right. True, if you become certain that you will smoke, then the hypothesis that you smoke doesn't carry any further bad news. (It doesn't carry any news at all.) But then you have already made up your mind. The evidential expected utility of not smoking has become undefined. If, on the other hand, you have not yet made up your mind and you are merely somewhat confident that you will smoke, then the hypothesis that you will smoke is still bad news.
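
Spelled out, on the standard ratio definition of conditional credence:

\[ Cr(S \mid \neg\text{smoke}) = \frac{Cr(S \wedge \neg\text{smoke})}{Cr(\neg\text{smoke})} \]

is undefined once Cr(¬smoke) = 0, so V(¬smoke) has no value. If instead 0 < Cr(smoke) < 1, both conditional credences are defined, and nothing rules out Cr(cancer | smoke) > Cr(cancer | ¬smoke).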

On pp.96-99, Arif suggests that the problematic second premise can be weakened. The argument with the weakened second premise looks rather unlike the standard tickle defence, but it's interesting nonetheless. It goes like this.

Suppose for reductio that Smoking Lesion is a Newcomb-type scenario in which EDT recommends not smoking. Assume also that in the scenario you know the following two facts: (1) smoking is worse news than not smoking; (2) if you refrain from smoking then it will be for this reason. If you now decide to refrain, then all you learn is that you did that because smoking is bad news. This might tell you that you are inclined to follow EDT, but it doesn't shed light on whether you have the gene. So smoking is evidentially irrelevant to whether you have the gene, contradicting (1). End of reductio.

I'm not sure I entirely follow this reasoning. But it's worth thinking through what the Smoking Lesion case would have to look like. Let's fill in the details, and check if conditions (1) and (2) are satisfied.

I will assume that you don't know by introspection whether you have a strong desire to smoke. This is not enough to make smoking evidence for the faulty gene. Somehow, you must think that you're more likely to smoke if you have the faulty gene than if you don't. How so? Let's take the two cases in turn.

What do you think happens if you have the gene? We're assuming that (according to your beliefs) the faulty gene causes smoking by causing a relevant desire. In principle, that desire might cause smoking behaviour in some arational manner, bypassing your rational faculties. But then we might just as well assume that the gene causes smoking without any detour through desire, and we've seen that this weakens the case. So let's assume that (according to your beliefs) having the gene makes it likely that you will smoke because it would make smoking rational.

What do you think happens if you don't have the faulty gene? Again, we don't want to assume that absence of the gene causes non-smoking in some arational manner, bypassing your deliberative faculties. Rather, absence of the faulty gene should make it rationally permissible not to smoke.

It's easy to see how these conditions could be satisfied if we use CDT standards for evaluating what's rationally permissible. But we shouldn't presuppose CDT at this point. Ideally, the scenario should be compatible with the assumption that (a) EDT is the correct theory of practical rationality, and (b) you believe that you are practically rational.

The following version of the story seems to work.

Smoking Lesion 2: You believe that there is a gene variant that causes both cancer and, by a separate causal route, an overwhelming desire to smoke. The desire is so strong that people with the faulty gene prefer smoking and getting cancer to not smoking and not getting cancer. People who don't have the gene (you believe) at most have a weak desire to smoke. In fact you don't have the gene, but you don't know this. You slightly prefer smoking to not smoking, and you strongly prefer not getting cancer to getting cancer. Should you smoke?

In this situation, not smoking appears to be good news. You are convinced that the faulty gene causes a practically irresistible desire to smoke. Everyone who has the gene ends up smoking. If you don't smoke, you can be highly confident that you don't have the gene and that you won't get cancer. By assumption, you prefer not smoking and not getting cancer to smoking and getting cancer. EDT therefore says that you should refrain from smoking. CDT says you should smoke.
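
For illustration, we can plug stipulated numbers into the earlier toy calculation, assuming for simplicity that having the gene guarantees cancer and lacking it guarantees no cancer, that Cr(gene) = 0.5, that everyone with the gene smokes (the desire is irresistible), and that a fifth of those without the gene smoke. Then

\[ Cr(\text{gene} \mid \text{smoke}) = \frac{1 \cdot 0.5}{1 \cdot 0.5 + 0.2 \cdot 0.5} = \frac{5}{6}, \qquad Cr(\text{gene} \mid \neg\text{smoke}) = 0, \]

so with the same utilities as before,

\[ V(\text{smoke}) = \tfrac{5}{6} \cdot (-99) + \tfrac{1}{6} \cdot 1 \approx -82.3, \qquad V(\neg\text{smoke}) = 0, \]

while CDT, which holds the presence or absence of the gene fixed, again favours smoking by one unit of utility whatever your credence in the gene.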

I'm not fully convinced that the scenario as described is coherent, but it looks OK to me. (Has it been discussed in the literature?) Notice that it isn't really a different case from the original Smoking Lesion scenario; I've only filled in some details.

Smoking Lesion 2 violates condition (1) in Arif's reductio argument. Since you don't know whether you have the gene, you don't know whether smoking is bad news. Perhaps Arif would say that this makes the case "unrealistic". I don't agree. But I also think this isn't the crucial question. As long as we have a coherently entertainable scenario in which EDT gives clearly bad advice, EDT is in trouble.

Lewis, David. 1979. “Prisoner’s Dilemma Is a Newcomb Problem.” Philosophy and Public Affairs 8: 235–40.
Price, Huw. 1992. “Agency and Causal Symmetry.” Mind 101: 501–20.
Resnik, Michael D. 1987. Choices: An Introduction to Decision Theory. University of Minnesota Press.
Shafir, Eldar, and Amos Tversky. 1992. “Thinking Through Uncertainty: Nonconsequential Reasoning and Choice.” Cognitive Psychology 24 (4): 449–74. https://doi.org/10.1016/0010-0285(92)90015-T.
