Why would you do that?

I'm generally happy with Causal Decision Theory. I think two-boxing is clearly the right answer in Newcomb's problem, and I'm not impressed by any of the alleged counterexamples to Causal Decision Theory that have been put forward. But one thing does worry me: what exactly the theory should say, and how it should be spelled out.

Suppose you face a choice between two acts A and B. Loosely speaking, to evaluate these options, we need to check whether the A-worlds are on average better than the B-worlds, where the "average" is weighted by your credence on the subjunctive supposition that you make the relevant choice. Even more loosely, we want to know how good the world would be if you were to choose A, and how good it would be if you were to choose B. So we need to know what else would be the case if you were to choose, say, A.
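As a very rough first pass, we might follow Gibbard and Harper and write (this is only a sketch, not an official statement of the theory):

$$U(A) \;=\; \sum_{w} Cr(A \,\square\!\!\rightarrow\, w) \cdot V(w),$$

where Cr is your credence function, V(w) is the value of world w, and A □→ w is the subjunctive conditional that w would be the case if you were to choose A; an option is choiceworthy if it maximizes U. Everything then hangs on the conditionals: on what would be the case if you were to choose A.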

The answer depends in part on what is the case in the actual world. If A is a normal kind of act and if it is actually raining in São Paulo, then it would still be raining in São Paulo if you were to choose A. A simple idea is that for each complete way the actual world might be there is a unique "closest A-world": a unique world that would be the case if you were to choose A. More generally, it is natural to assume that each world determines a (more or less objective) probability measure over A-worlds, which tells us how likely A-world w would be if A were the case. For example, if A is an act of tossing a coin, then the complete truth about the actual world might tell us that if you were to choose A, then there's an equal chance of getting heads and getting tails.
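In the same sketchy notation, this is essentially Lewis-style imaging: if ρ_w^A is the probability measure over A-worlds determined by world w, then your credence on the subjunctive supposition of A is

$$Cr^{A}(w') \;=\; \sum_{w} Cr(w) \cdot \rho^{A}_{w}(w'),$$

and the value of choosing A is the Cr^A-weighted average of V. In the simplest case, ρ_w^A puts all its weight on the unique closest A-world.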

But what do these A-worlds look like, more concretely?

Let's say you don't in fact choose A. Then what exactly are the most likely worlds in which you choose A?

One notorious problem arises if the world is deterministic. Any world in which you choose A then differs from the actual world either in its laws or in the distant past. Which is it? Would the laws be different if you were to choose A? Or would the distant past be different? Or would there be some chance of either? Every answer leads to trouble. I've talked about this problem in an earlier post.

Another problem, which I haven't seen discussed, concerns your motivation in the relevant A-worlds. Suppose you will choose B because you have every reason to do so. What would be the case if you were to choose A instead? Would you choose A despite having every reason to choose B, perhaps through some glitch in your brain? Or would you have different beliefs and desires that would give you reason to choose A? Again, every answer leads to trouble.

Let's start with the second horn, where we don't hold fixed your actual motivation. We then effectively assume that you have voluntary control over your beliefs and desires, which yields all sorts of wrong results.

For example, suppose you are afraid of heights and therefore (rationally) decide not to go to the cliff edge. If you had gone, then on the present approach, you presumably wouldn't have been afraid of heights. And you might well judge that worlds at which you go to the cliff edge and aren't afraid of heights are better than worlds at which you don't go and are afraid. So decision theory would wrongly say you should have gone to the cliff edge.

For another example, consider Professor Procrastinate, who is asked to write a review, knowing that if he accepts he will never complete the task. Having no control over his future motivation, he rationally declines. What would have happened if he had accepted? How could this choice have made sense in light of his beliefs and desires? Presumably the relevant worlds are worlds at which he believes he will complete the task. But why does he have this belief? If the belief itself is not a mysterious glitch, perhaps what's happening at the relevant worlds is that the Professor is really disposed to complete the review, because he somehow has a stronger desire to fulfil his duties. But then decision theory will wrongly say that he should accept.

Also, if we look only at scenarios in which the agent's choice coheres with her beliefs and desires in the sense that it maximizes expected utility, then it is guaranteed that whatever the agent chooses, she would maximize expected utility. That seems wrong. Surely it is sometimes in our power to make choices by which we wouldn't maximize expected utility.

The other horn of the dilemma is to hold fixed the agent's actual beliefs and desires. The relevant worlds where you choose A are then worlds at which you still have every reason to choose B.

This also risks making all sorts of wrong predictions.

Return to the example of the cliff edge. Suppose you have every reason to stay away from the edge, but through a glitch of your brain nonetheless move towards it. What would you do next? Presumably you would quickly turn around. You would also be puzzled and disturbed, and seek medical advice. Intuitively, these are not the kinds of situations we are interested in when we consider how good it would be if you went to the cliff edge.

We could mix the two horns: we could say that if you were to choose A, then there is some chance that your choice would cohere with your motivation and some chance that it would be a glitch. But that only combines the problems for the unmixed answers.

Comments

# on 09 August 2019, 18:38

Thanks for the interesting post.

I think this is an instance of a more general problem with our theories of subjunctive supposition. (I think the same of Ahmed's cases involving bets on the laws, though those raise a different general problem for such theories.)

Here's a puzzle that I learned from Ned Hall (I think it's in "Causation in the Sciences"). Consider this conditional: "Were I to have worn a pink shirt today, Bob wouldn't have noticed". If Bob doesn't pay much attention to my clothes, that could easily be true. But suppose that when we evaluate it, we imagine a world where there's an intervention or a tiny miracle to change the color of my shirt. Then, Bob certainly *would* notice, as my shirt would suddenly and unexpectedly change colors. He'd exclaim "How the hell did you do that???"

When we evaluate the conditional, we're not imagining that my shirt spontaneously changes color. Instead, we're imagining that my shirt was pink all day long, that I put on a pink shirt that morning, and so on and so forth.

Here's a gesture at my favored solution to that puzzle: when we make subjunctive suppositions like these, we typically do so against the backdrop of a host of propositions that we presuppose to be true, and which we hold fixed even under subjunctive supposition (call these 'subjunctive presuppositions'). For instance, we usually presuppose that my shirt doesn't change colors, and hold this fixed even under subjunctive supposition. So that my shirt won't change color is a subjunctive presupposition. Then, when we evaluate the conditional, we don't go to the 'closest' world in which the shirt is a different color; instead, we go to the 'closest' world in which my shirt is a different color and all of the subjunctive presuppositions are satisfied. So we go to a world in which I put on a different shirt this morning, and that shirt was pink.
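To put the truth conditions schematically (this is just a sketch of the framework): where P is the conjunction of the operative subjunctive presuppositions,

$$A \,\square\!\!\rightarrow\, C \ \text{ is true at } w \iff C \ \text{ holds at the closest } (A \wedge P)\text{-worlds to } w,$$

rather than at the closest A-worlds simpliciter.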

As yet, this is just a formal framework, and not a theory. I've not said where these subjunctive presuppositions come from. But I think there's a story we could tell (I won't go into it here).

I think this solution can be imported straightforwardly into the decision theory context, and I think that it handles these kinds of worries. Say that we subjunctively presuppose that decisions performed at the end of rational deliberation are performed for reasons which continue to motivate you to act even after deliberation ends, and we won't consider possibilities in which you immediately change your mind and get confused about why you were walking towards the cliff. Say that we subjunctively presuppose that your beliefs and desires don't change during deliberation, and we won't consider possibilities in which you end up maximizing expected utility when you choose other options. Say that we subjunctively presuppose that Professor Procrastinate is incapable of sticking to his plans, and we'll say that he wouldn't follow through if he accepts the assignment. And so on and so forth.

Of course, in the absence of some more general theory of where subjunctive presuppositions come from, this can seem a bit ad hoc. But some kind of theory like this is needed already for our theories of conditionals and causation. And once that work is done, it's easily imported into our theory of rational decision. It's incumbent on such a theory that conditionals like "Had you decided to walk over to the cliff, you'd have chosen irrationally, you'd have been scared, you wouldn't have become immediately disoriented and sought medical attention,..." come out true; so if the theory is adequate, it should deliver the right results in these kinds of cases.

# on 10 August 2019, 14:00

Hi Dmitri, thanks!

A lot of what you're saying sounds plausible to me. Have you told the story about subjunctive presupposition in print somewhere?

I do suspect the problem arises more sharply in the context of decision theory, in part because here we can't ignore what merely /might/ have happened, and in part because we can often make sense of ordinary counterfactuals by invoking "presuppositions" (in your terminology) that are illegitimate in a decision-theoretic context. For example, we can make presuppositions that render the counterfactuals backtracking, like in the shirt example. For another example, I think there's a reading of "if I had decided to go to the cliff edge, I would immediately have regretted that decision and turned around" on which it is true; but again that's not the right reading for decision theory.

I also don't quite see how your proposal avoids the dilemma I raised. Let's redefine the "closest worlds" to be the closest worlds among those that meet our subjunctive presuppositions. If you actually choose B but could have chosen A, are the closest A-worlds worlds where you act against your reasons or not? Either answer still leads to trouble.

Perhaps you're saying that different cases should come out differently? You seem to suggest that when we evaluate the cliff edge case, we presuppose that I would continue to be motivated to walk towards the edge, but when we evaluate the Procrastinate case, we do not presuppose that he would continue to be motivated to write the review. But doesn't the dilemma arise even within a single case?

# on 12 August 2019, 14:08

No, I've not written any of this up, unfortunately.

That's helpful. I guess I'm now inclined to agree that the problem has additional difficulties in the decision-theory context, for the reasons you say. There's the additional task of specifying which kinds of presuppositions are to be made when making subjunctive suppositions for the purposes of decision-making.

To clarify what I was thinking about the dilemma, let me talk about the cliff case. There, I take it the dilemma was: either we say that at the nearest possibility you act on the basis of what you have most reason to do, or we don't. If we say you do, then you would maximize expected utility were you to walk to the cliff's edge. If we say that you don't, then you'll immediately re-think the decision and turn around, confused about why you decided to walk that way in the first place, and seek medical attention.

I was thinking that, among the subjunctive presuppositions will be the following: 1) when you make a choice, your beliefs and desires don't spontaneously change; and 2) when you make a choice, you do so for reasons, and those reasons will continue to motivate you throughout the completion of the chosen act (though you have reason to make that choice, that choice needn't be the thing you have all-things-considered most reason to make).

So I was taking the second horn of the dilemma, but using the second subjunctive presupposition to block the conclusion that you'd immediately rethink the decision.

Wrt cases like Professor Procrastinate, I'm inclined to think that there are issues here about what we regard as 'one and the same' decision, and what we regard as a multi-stage, sequential decision. Perhaps, if the review is due in the next 30 minutes, we might feel more inclined to say that Professor Procrastinate is facing a decision in which he has available the option of sitting down and writing the review. In that case, when evaluating that option, we wouldn't want to consider possibilities in which he sits down, immediately gets bored, and ends up going on Facebook to complain about people who disagree with him about politics. On the other hand, we might just as well feel inclined to regard him as facing a decision in which he has the option to *start* writing the review, but where whether to continue is a separate decision he'll face in another 10 minutes. In that case, we would want to take those same possibilities seriously. I agree that's another interesting respect in which the decision theory case is more challenging than the general problem.

# on 13 August 2019, 10:31

@Dmitri: I see, thanks.

I still feel uneasy about the proposed solution. What if you have no reason at all for going to the cliff, and strong reasons not to?

On your proposal, the scenario in which you go is a scenario in which you go to the cliff for no reason and despite having strong reasons not to go. OK, you don't turn around, by stipulation. But what are you thinking about your chosen act? What do you say if someone asks why you're going to the cliff? Don't you notice that you have made a bizarre mistake? If you do, why doesn't that trouble you? If you don't, why not? Have you become incapable of reflecting on your choices, or deluded about your desires?
