Dicing with death

In his "Dicing with Death" (2014), Arif Ahmed presents the following scenario as a counterexample to causal decision theory (CDT):

You are thinking about going to Aleppo or staying in Damascus. Death has predicted where you will be and is waiting for you there. For a small fee, you can delegate your choice to a coin toss the outcome of which Death can't predict.

Tossing the coin promises to reduce the chance of death from about 1 to 1/2. Nonetheless, CDT seems to suggest that you shouldn't toss the coin. To illustrate, suppose you are currently completely undecided and thus give equal credence to Death being in Aleppo and to Death being in Damascus. Then you're 50 percent confident that if you were to stay in Damascus, you would survive; similarly for going to Aleppo. You're also 50 percent confident that you would survive if you were to toss the coin, but in that case you'd have to pay the small fee. So it's not worth paying.
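To make the arithmetic explicit, here is a minimal sketch of the calculation (the 0/10 payoffs are the ones used later in the post; the particular value of the fee is my own illustrative assumption):

```python
# Causal expected utilities when you're exactly undecided, giving credence 1/2
# to Death being in Damascus and 1/2 to him being in Aleppo.
SURVIVAL, DEATH = 10, 0   # payoffs used later in the post
d = 0.1                   # the small fee for the coin toss (illustrative value)

p_damascus = 0.5          # credence that Death is in Damascus

eu_stay = p_damascus * DEATH + (1 - p_damascus) * SURVIVAL    # stay in Damascus
eu_go   = (1 - p_damascus) * DEATH + p_damascus * SURVIVAL    # go to Aleppo
eu_coin = 0.5 * SURVIVAL + 0.5 * DEATH - d                    # unpredictable coin, minus the fee

print(eu_stay, eu_go, eu_coin)   # 5.0 5.0 4.9 -- paying for the coin looks pointless
```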

Like almost every apparent counterexample to CDT, this one neglects the dynamics of rational deliberation. Suppose a causal decision-maker goes through the above line of thought and consequently becomes more inclined to stay in Damascus. This changes the probabilities on which the argument was based: the agent should no longer assign equal probability to Death being in Damascus and Death being in Aleppo. On the revised probabilities, going to Aleppo has the greatest (causal) expected utility.

So if you obey CDT, you can't rationally resolve to stay in Damascus. For parallel reasons, you can't resolve to go to Aleppo. The only deliberation equilibrium, it seems, is the point where you're exactly undecided between Damascus and Aleppo, and certain not to toss the coin. (Arif mentions this response in footnote 5, but misleadingly presents it as an alternative to CDT.)
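One crude way to see why exact indecision is the only stable point is to simulate the feedback loop. The following is not Skyrms's official dynamics, just an illustrative assumption of mine: your credence that Death is in Aleppo simply tracks your current inclination q to go there, and deliberation nudges q towards whichever city currently looks better.

```python
# Illustrative deliberation dynamics (not Skyrms's official model): credence
# that Death is in Aleppo tracks the current inclination q to go there, and q
# is nudged towards whichever city currently has higher causal expected utility.
SURVIVAL, DEATH, d = 10, 0, 0.1
rate = 0.02                       # step size of the nudge (assumed small)

q = 0.9                           # start out strongly inclined to go to Aleppo
for _ in range(60):
    eu_aleppo   = (1 - q) * SURVIVAL + q * DEATH     # Death is in Aleppo with probability q
    eu_damascus = q * SURVIVAL + (1 - q) * DEATH
    q += rate * (eu_aleppo - eu_damascus)            # lean towards the better-looking city
    q = min(max(q, 0.0), 1.0)

print(round(q, 3))                # 0.5 -- deliberation settles on exact indecision
print(0.5 * SURVIVAL - d)         # 4.9 -- the coin never beats the cities' 5.0 at that point
```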

If deliberation can end in a state of indecision, some non-deliberational mechanism must break the tie. From the agent's perspective, that mechanism will appear stochastic, breaking in favour of a given option precisely with the probability set by the agent's state of indecision. As William Harper once put it, the agent "will have reasoned herself into becoming a chance device".

Now Arif assumes that while Death cannot predict the outcome of the coin toss, he can predict the outcome of whatever breaks the tie in your state of indecision. It's worth going through the alternatives:

  1. Suppose Death can predict neither outcome. Then CDT requires indecision, with an average payoff of 5 (assuming death is worth 0, survival 10), while EDT allows for randomization, with an average payoff of 5-d (d=the fee). So here CDT beats EDT.
  2. Suppose Death can predict the outcome of the coin toss, but not the outcome of the indecision. Then CDT still requires indecision and scores 5 on average, while EDT allows deciding in favour of Aleppo, for an average score of 0. Again, CDT beats EDT.
  3. Suppose Death can predict the outcome of the indecision, but not the outcome of the coin toss. Then it seems that CDT requires indecision (although more on that in a minute) while EDT requires randomization, with average scores of 0 and 5-d, respectively. Here, EDT seems to beat CDT.
  4. Suppose Death can predict both outcomes. Then it seems that CDT requires indecision (more on that in a minute) while EDT allows any choice; both approaches will score 0.

So in two of the four scenarios, the case actually presents a counterexample to EDT rather than CDT; in one case, the two theories seem on a par; and in one, EDT seems to do better than CDT.
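For bookkeeping, here is the same tally in code, using the payoffs already stipulated (survival 10, death 0, fee d): an outcome Death can predict means certain death, one he can't predict means a 1/2 chance of survival.

```python
# Average payoffs of CDT's and EDT's recommendations in the four cases above.
SURVIVAL, d = 10, 0.1
unpredicted = SURVIVAL / 2   # 1/2 chance of survival when Death can't predict the outcome
predicted = 0                # certain death when Death predicts correctly

payoffs = {                              # case: (CDT, EDT)
    1: (unpredicted, unpredicted - d),   # CDT: indecision; EDT: coin toss
    2: (unpredicted, predicted),         # CDT: indecision; EDT: decides on Aleppo
    3: (predicted, unpredicted - d),     # CDT: indecision (but see below); EDT: coin toss
    4: (predicted, predicted),           # Death predicts everything
}
for case, (cdt, edt) in payoffs.items():
    print(f"case {case}: CDT {cdt}, EDT {edt}")   # 5/4.9, 5/0, 0/4.9, 0/0
```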

Still, what should we say about the troublesome case 3?

Let's have a closer look. Does CDT really recommend indecision between Damascus and Aleppo? Arguably not. On Brian Skyrms's classical models of rational deliberation, deliberation involves comparing the expected utility of each (pure) option with the ("virtual") expected utility of the present state of indecision; if the expected utility of some option exceeds that of the present indecision, the agent becomes more inclined towards that option. Normally, the (virtual) expected utility of a state of indecision is equal to the corresponding mixture of the expected utilities of the pure options. But arguably that is not always true. For example, if indecision is punished (say, it causes severe pain to the agent), then states of indecision can be worse than the corresponding mixture of pure states.

Something similar happens in case 3. Here, if you're undecided between Aleppo and Damascus, you can be certain that you will face death. Then randomizing looks clearly better; so does going to Aleppo, and so does staying in Damascus. Thus indecision between Aleppo and Damascus is not a rational equilibrium. A state of indecision is a rational equilibrium only if its ("virtual") expected utility is at least as great as that of every pure option. And this one falls short.

So on reasonable assumptions about rational deliberation, in case 3 (and 4) CDT does not in fact recommend perfect indecision between Damascus and Aleppo.

What then does it recommend? In any state where you give some credence to going to Aleppo, going to Damascus has greater expected utility than the state itself; so no such state is a rational equilibrium. By symmetry, neither is any state in which you give some credence to going to Damascus. But if you're certain that you'll randomize, then (as Arif points out) going straight to Aleppo has slightly greater expected utility. So that isn't an equilibrium either.
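To check this, here is a small sketch under two assumptions that go beyond the post: a deliberational state is summarized by a triple of probabilities (for deciding on Aleppo, deciding on Damascus, and tossing the coin), and when Death predicts that you'll toss the coin his choice of city is uninformative, so that from such a state either city offers a 1/2 chance of survival. A state is a Skyrms-style equilibrium just in case its virtual expected utility is at least as great as that of every option; the sketch scans a grid and finds none.

```python
# Case 3: Death predicts the resolution of indecision, but not the coin toss.
# A deliberational state is a triple (a, dam, r): the probabilities of ending up
# deciding on Aleppo, deciding on Damascus, or tossing the coin.
SURVIVAL, d = 10, 0.1

def virtual_eu(a, dam, r):
    # Tie-break lands on a city -> Death has predicted it -> certain death.
    # Tie-break lands on the coin -> 1/2 chance of survival, minus the fee.
    return r * (SURVIVAL / 2 - d)

def option_eus(a, dam, r):
    # From within the state: Death is in Aleppo with probability a + r/2
    # (assumption: his guess is uninformative when he predicts the coin).
    p_death_in_aleppo = a + r / 2
    eu_aleppo   = (1 - p_death_in_aleppo) * SURVIVAL
    eu_damascus = p_death_in_aleppo * SURVIVAL
    eu_coin     = SURVIVAL / 2 - d
    return eu_aleppo, eu_damascus, eu_coin

steps = 100
equilibria = []
for i in range(steps + 1):
    for j in range(steps + 1 - i):
        a, dam = i / steps, j / steps
        r = 1 - a - dam
        if virtual_eu(a, dam, r) >= max(option_eus(a, dam, r)):
            equilibria.append((a, dam, r))

print(equilibria)   # [] -- no state on the grid, pure or mixed, is an equilibrium
```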

So we seem to have a case where no choice, and no state of indecision, is licensed by CDT. (For a simpler case of this kind, remove the randomization option, but still assume that Death predicts the resolution of indecision.)

Admittedly that's a little awkward. But to me, as a friend of CDT, it does have intuitive appeal. Arif assumes that randomizing is obviously the rational choice; according to EDT, randomizing would be rational even if the fee were quite significant. So suppose the cost of randomizing is that you'll lose your arms. If you're fairly certain that you will randomize, then you're fairly certain that Death's whereabouts don't track where you will actually end up, since he can't predict the outcome of the coin toss; but then going straight to Aleppo looks like a clearly better option: it gives you a fair chance of survival without losing your arms. So if you're confident that you'll randomize, then it looks like randomizing isn't the best idea after all. Of course, deciding to go straight to Aleppo or Damascus would be even worse. Perhaps that's what drives the intuition that one should randomize: the implicit assumption that one of the options must be rational, and the realization that Aleppo and Damascus clearly aren't. But if the alternative is that none of the options is rational, then it's hardly an argument for randomization to point out that neither going to Aleppo nor staying in Damascus is rational.

The general problem brought out by case 3 is that there may be no rational deliberation equilibrium. In that case, should we say that every state of decision or indecision is wrong? That seems odd. Perhaps we should rather say that decision theory here falls silent. Or perhaps we should add a further rule to select among non-equilibrium states.

At the very least, we can say that some non-equilibrium states are better or worse than others, as measured by their virtual expected utility. On that measure, deciding to randomize is the optimal state.
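On the same toy model as above (and with the same caveats), here is how the candidate states rank by virtual expected utility; resolving to toss the coin comes out on top.

```python
# Virtual expected utility in case 3: only the probability r of ending up with
# the coin contributes, since Death predicts every other resolution.
SURVIVAL, d = 10, 0.1

def virtual_eu(a, dam, r):
    return r * (SURVIVAL / 2 - d)

states = {
    "resolved on Aleppo":     (1.0, 0.0, 0.0),
    "resolved on Damascus":   (0.0, 1.0, 0.0),
    "undecided between them": (0.5, 0.5, 0.0),
    "resolved to randomize":  (0.0, 0.0, 1.0),
}
for name, state in sorted(states.items(), key=lambda kv: -virtual_eu(*kv[1])):
    print(f"{name}: {virtual_eu(*state)}")   # randomizing tops the list at 4.9
```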
