More troublesome sequential choices

Two recent papers – Oesterheld and Conitzer (2021) and Gallow (2021) – suggest that CDT gives problematic recommendations in certain sequential decision situations.

Oesterheld and Conitzer discuss the following scenario.

Adversarial Offer With Opt-Out. In stage 1, you have the option of paying $0.20. In stage 2, nothing happens if you paid the $0.20 in stage 1. If you didn't pay, you are presented with two boxes of which you may purchase one for a price of $1. (You may also purchase neither.) A reliable predictor has put $3 in each box that she predicted you wouldn't buy.

According to Oesterheld and Conitzer, "orthodox CDT" says that if you are presented with the choice in stage 2, you should buy any box of which you were sufficiently confident, before making your choice, that you wouldn't buy it. Doing so has a negative payoff in expectation. You should therefore pay the $0.20 in stage 1, making a guaranteed loss. Agents who follow EDT, by contrast, would not pay in stage 1 and would take no box in stage 2, avoiding the loss.
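To make the two evaluations concrete, here is a rough Python sketch (the accuracy parameter and the way your pre-choice credence enters are my own simplifications, not from the paper):

    # Compare how "orthodox CDT" values buying box 1 with the actual expected
    # payoff. acc is the predictor's accuracy; cred_no_buy is your pre-choice
    # credence that you won't buy box 1.

    def cdt_value_of_buying(cred_no_buy, acc):
        # You think the box contains $3 iff she predicted you wouldn't buy it.
        p_full = cred_no_buy * acc + (1 - cred_no_buy) * (1 - acc)
        return 3 * p_full - 1

    def actual_value_of_buying(acc):
        # If you do buy it, she probably saw that coming and left it empty.
        return 3 * (1 - acc) - 1

    print(cdt_value_of_buying(0.9, 0.75))  # 1.1   -- looks great to CDT
    print(actual_value_of_buying(0.75))    # -0.25 -- an expected loss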

Oesterheld and Conitzer assume that "orthodox CDT" evaluates the options by the agent's pre-deliberation credences. The kind of CDT I prefer instead says that any option you choose should maximise expected utility at the time of choice. You then couldn't rationally choose to take one of the boxes in the stage 2 problem. Nor could you rationally choose to take none of them. You should remain undecided. More precisely, you should remain undecided if the predictor can't foresee how this state of indecision will be resolved. In expectation, you then make a profit, and you shouldn't pay the $0.20. You will outperform the EDTers.
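Here is a quick simulation of that suggestion, on my own modelling assumptions: you end up undecided between the two purchases, the tie is broken by a coin the predictor can't foresee, and she puts $3 into each box she (blindly) predicts you won't buy.

    import random

    def trial():
        predicted_buy = random.choice([1, 2])    # her best blind guess
        box = {i: (3 if i != predicted_buy else 0) for i in (1, 2)}
        actual_buy = random.choice([1, 2])       # unforeseeable resolution
        return box[actual_buy] - 1               # the box costs $1

    n = 100_000
    print(sum(trial() for _ in range(n)) / n)    # about +$0.50 per round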

But what if the predictor can foresee how a state of indecision gets resolved? Then the stage 2 problem has "no equilibrium": there is no stable choice, and no stable state of indecision. Intuitively, not buying any box is nonetheless better than buying one of the boxes. If that verdict is right, the problem again disappears.

I can't see any appeal in the idea that buying a box is the right choice in the stage 2 problem. But I could understand if someone says that the norms of rationality here fall silent, so that no option is permissible and none is forbidden. Then we really do get a case in which you should pay the $0.20 in stage 1 if, for some reason, you think you'll buy a box in stage 2.

I'm not sure how bad that would be. It would be somewhat problematic if we found that CDT licenses choices that together amount to a sure loss. But in this example, at least, I don't think CDT licenses any such choices. (We also don't get an interesting difference here between planning and implementing.)

Gallow (2021) discusses a more puzzling scenario.

Utility Cycle With Switching. In stage 1, you have to choose one of three boxes: A, B, or C. In stage 2, you can pay $60 to swap the box you have taken for the "next" box – meaning that if you took A, you can swap it for B; if you took B, you can swap it for C; and if you took C, you can swap it for A. A reliable predictor has predicted both choices. If she predicted that you'd end up with box A, she has put $0 into A, $100 into B, and $-100 into C. If she predicted that you'd end up with box B, she has put $0 into B, $100 into C, and $-100 into A. If she predicted that you'd end up with box C, she has put $0 into C, $100 into A, and $-100 into B.

Oddly, CDT says that you should switch in stage 2, whatever you did in stage 1.

The three options in stage 1 are completely symmetrical. So let's assume without loss of generality that you took box A. If you now choose to switch in stage 2, you can be confident that the predictor has predicted that you'll end up with box B. Box A then contains $-100 and box B $0. So switching is better. If, on the other hand, you choose not to switch then you can be confident that the predictor predicted that you'll end up with A. Box A then contains $0 and box B $100. Switching would have been better.
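As a quick check of this arithmetic (the cycle bookkeeping is my own):

    NEXT = {'A': 'B', 'B': 'C', 'C': 'A'}

    def contents(predicted_final):
        # $0 in the predicted final box, $100 in the next, $-100 in the other.
        return {predicted_final: 0,
                NEXT[predicted_final]: 100,
                NEXT[NEXT[predicted_final]]: -100}

    # You took A. If you choose to switch, the prediction is 'end up with B':
    c = contents('B')
    print(c['B'] - 60, 'vs keeping', c['A'])    # -60 vs keeping -100: switch
    # If you choose not to switch, the prediction is 'end up with A':
    c = contents('A')
    print(c['A'], 'vs switching', c['B'] - 60)  # 0 vs switching 40: switch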

Dmitri doesn't discuss what you should do in stage 1, and it doesn't seem relevant, but let's have a look. Knowing that you'll switch in stage 2, your choice in stage 1 looks like this (where 'Pred-A' means 'you are predicted to take box A in stage 1'):

       Pred-A   Pred-B   Pred-C
A      $-160    $40      $-60
B      $-60     $-160    $40
C      $40      $-60     $-160

(For example, if you've been predicted to take box A in stage 1 then you've been predicted to end up with B, so $-100 is in A, $0 in B, and $100 in C. In addition, you lose $60 in stage 2.)

The problem has no equilibrium. None of the pure options is stable, and there is no stable state of indecision, given your knowledge that the predictor can tell which box you will take. Suppose, for example, that you're perfectly undecided between the three options. Then you give credence 1/3 to taking box A, in which case you'll end up with $-160 (because the predictor will have foreseen your action); you give credence 1/3 to taking box B, in which case you'll also end up with $-160; same for taking box C. The state of indecision is worth $-160. It is unstable because all pure options are better.
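We can verify this by brute force, taking the matrix above as given:

    # Choosing an option makes you confident in the matching prediction; an
    # option is stable only if no alternative then looks better.
    U = {'A': {'A': -160, 'B': 40, 'C': -60},    # U[option][prediction]
         'B': {'A': -60, 'B': -160, 'C': 40},
         'C': {'A': 40, 'B': -60, 'C': -160}}

    for x in 'ABC':
        better = [y for y in 'ABC' if U[y][x] > U[x][x]]
        print(f'choose {x}: better alternatives {better}')   # never empty

    # Perfect indecision: worth the diagonal value -160 (the predictor
    # foresees the resolution), while each pure option is worth -60.
    for x in 'ABC':
        print(f'EU({x}) =', sum(U[x][p] for p in 'ABC') / 3)  # -60.0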

Due to the symmetry of the problem, no plausible decision rule could favour one of the three out-of-equilibrium options. I'm inclined to say that none is rationally permitted, and none is forbidden.

To retain the puzzle, we must then assume that the predictor can foresee your non-rational choice in stage 1. If she can't, the scenario appears to blow up into paradox. Suppose the predictor can't predict what you do in a decision problem without equilibria. If this is true for the problem in stage 1, then the contents of the boxes are uncorrelated with your stage 1 choice, so the $60 fee can't buy you anything; not switching is then best in stage 2, and with that settled the problem in stage 1 becomes solvable. That is, if you can't make a rational choice in stage 1, and the predictor can't foresee your non-rational choice, then you can make a rational choice in stage 1!

From a distance, the Utility Cycle case resembles the Adversarial Offer case, on a charitable interpretation of the latter. Both involve an unsolvable decision problem at one stage and, at the other stage, an opportunity to undo or prevent the choice made in the unsolvable problem.

But the Utility Cycle case looks more problematic. In Adversarial Offer, it seems OK to pay $0.20 if you know you are inclined to (stupidly) buy one of the boxes in stage 2. In Utility Cycle, by contrast, all the options in the unsolvable stage 1 problem are obviously on a par. It really seems odd that you would pay $60 to swap whatever box you chose in stage 1.

For one thing, by switching you will probably make a net loss of $60. If the predictor is infallible, you'll make a sure loss of $60. The loss is avoidable. Agents who follow EDT would not switch in stage 2 and break even. And they wouldn't make a different choice in stage 1.

In addition, if you take (say) box A and then switch, it would have been better to take box B and not switch. Whatever sequence of acts you choose is (causally) dominated by another. The best plan, it seems, would involve not switching in stage 2. If you switch, you therefore violate the principle that any rationally acceptable plan should be rationally implementable, as well as its converse. You also appear to violate the principle that the continuation of a rationally acceptable plan should remain rationally acceptable after some parts of it were implemented.

Finally, if you switch you appear to violate the principle of Preference Reflection. Consider your attitude towards switching in stage 2 before you make the choice in stage 1. Let's assume you are currently undecided between the three boxes. Switching then has lower expected utility than not switching. Yet after making the choice in stage 1, you suddenly prefer to switch.

Let's go through these issues in turn.

First, your avoidable loss of $60. Dmitri points out that CDTers are used to underperforming EDTers. In Newcomb's Problem, we CDTers complain that the EDTers were presented with a more favourable decision situation. Dmitri argues that the same cannot be said here. Instead, he suggests that the outcomes in sequential choice situations don't necessarily shed light on the rationality of the individual choices. Since our temporal parts are like separate agents, the fact that they can be led to predictable ruin is, he says, just an intrapersonal tragedy of the commons.

I agree. But I think a tragedy of the commons can never arise between utilitarian agents who only care about the total good in the community. The analogue of this condition is satisfied in Utility Cycle With Switching. In both stages, you only care about maximising your total payoff. So we can't have an intrapersonal tragedy of the commons here. Something else is going on.

I think the case is closer to Newcomb's Problem. The setup favours EDTers over CDTers, although in a more subtle manner than in Newcomb's Problem.

Consider stage 2. If you're a CDTer, you can be confident that whatever box you now have contains $-100, while the box for which you can trade it contains $0. If you're an EDTer, you can be confident that your box contains $0 and that the alternative contains $100. (Silly you, to reject the switch.)

One might say that if you're a CDTer, then you have inflicted this Newcomb-like situation upon yourself, by whatever you did in stage 1. Your bad options are your own fault. But that's not true.

Imagine we observe many repetitions of the scenario, with both EDTers and CDTers taking part. We see what's inside the boxes. First comes an EDTer who has been predicted to take box A in stage 1. Based on this prediction, box A has been filled with $0. Next comes a CDTer who has been predicted to take box A. Based on this prediction, box A has been filled with $-100. And so on. In general, the predictor has predicted what box the agent would choose in stage 1 and then put $0 into the box if the agent is an EDTer and $-100 if they are a CDTer.

The setup clearly disfavours CDTers, even though, strictly speaking, CDTers are not given worse options (in stage 1).

The second worry about switching in stage 2 was that it amounts to some kind of dynamic inconsistency.

Let's look at the possible plans. Here is the decision matrix for the hypothetical planning problem. ('A¬S' means taking box A in stage 1 and not switching in stage 2.)

       Pred-A¬S v Pred-CS   Pred-AS v Pred-B¬S   Pred-BS v Pred-C¬S
A¬S    $0                   $-100                $100
AS     $40                  $-60                 $-160
B¬S    $100                 $0                   $-100
BS     $-160                $40                  $-60
C¬S    $-100                $100                 $0
CS     $-60                 $-160                $40

The only equilibrium is perfect indecision between the three ¬S plans. According to the form of CDT that I like, there is no "choosable" plan. But three plans are "weakly acceptable" in the sense that you could rationally perform them through a resolution of your indecision.
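Here is a sketch that rebuilds the planning matrix from the scenario's rules and checks both the earlier dominance claim and the stability of this state of indecision (the helper functions are my own bookkeeping):

    NEXT = {'A': 'B', 'B': 'C', 'C': 'A'}

    def contents(final):
        # Box contents if the predictor expects you to end up with `final`.
        return {final: 0, NEXT[final]: 100, NEXT[NEXT[final]]: -100}

    PLANS = {'A¬S': ('A', False), 'AS': ('A', True),
             'B¬S': ('B', False), 'BS': ('B', True),
             'C¬S': ('C', False), 'CS': ('C', True)}

    def payoff(plan, predicted_final):
        box, switch = PLANS[plan]
        c = contents(predicted_final)
        return c[NEXT[box]] - 60 if switch else c[box]

    FINALS = 'ABC'   # the three columns, grouped by predicted final box
    for plan in PLANS:
        print(plan, [payoff(plan, f) for f in FINALS])

    # B¬S causally dominates AS (better under every prediction):
    print(all(payoff('B¬S', f) > payoff('AS', f) for f in FINALS))

    # Indecision between the ¬S plans is worth $0 (the predictor foresees
    # the resolution, so you keep the $0 box). No plan looks better:
    print({p: sum(payoff(p, f) for f in FINALS) / 3 for p in PLANS})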

The situation resembles Arif's Psycho Insurance case from this post. Here, too, we have a weakly acceptable plan that's no longer acceptable after its first stage has been (irrationally) implemented. I'm not worried about this.

A violation of Preference Reflection would worry me more. But arguably we don't have that. Suppose you can already choose whether you will switch before you make your choice in stage 1. One might think that you should prefer to not switch, since you give equal credence to all three predictions, which makes switching irrational. But we have to be careful.

Recall how the situation is set up. The predictor has predicted which box you will take in stage 1 and whether you will switch in stage 2. If she has predicted that you will switch she has put $-100 into your box (the one you're predicted to take) and $0 into the alternative. If she has predicted that you don't switch, she has put $0 into your box and $100 into the alternative. You can't change her prediction about whether you'd switch. And switching is better either way. So you should switch – even if you don't yet know what you'll do in stage 1.
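In numbers:

    # predicted to switch: your box holds $-100, the alternative $0.
    print(0 - 60, 'vs', -100)    # switching (-60) beats keeping (-100)
    # predicted not to switch: your box holds $0, the alternative $100.
    print(100 - 60, 'vs', 0)     # switching (40) beats keeping (0)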

Gallow, J. Dmitri. 2021. “Escaping the Cycle.” Mind. doi.org/10.1093/mind/fzab047.
Oesterheld, Caspar, and Vincent Conitzer. 2021. “Extracting Money from Causal Decision Theorists.” The Philosophical Quarterly 71 (4). doi.org/10.1093/pq/pqaa086.
