## More troublesome sequential choices

Two recent papers – Oesterheld and Conitzer (2021) and Gallow (2021) – suggest that CDT gives problematic recommendations in certain sequential decision situations.

In my recent posts on decision theory, I've assumed that friends of CDT should accept a "ratifiability" condition according to which an act is rationally permissible only if it maximises expected utility conditional on being chosen.

Sometimes no act meets this condition. In that case, I've assumed that one should be undecided. More specifically, I've assumed that one should be in a "stable" state of indecision in which no (pure) option is preferable to one's present state of indecision. Unfortunately, there are decision problems in which no act is ratifiable and no state of indecision is stable. I'm not sure what to say about such cases. And I wonder if whatever we should say about them also motivates relaxing the ratifiability condition for certain cases in which there are ratifiable options.
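To illustrate the ratifiability condition, here is a minimal sketch using a toy Death-in-Damascus-style matrix. The payoffs and the perfectly reliable predictor are illustrative assumptions, not taken from the post:

```python
# An act is ratifiable iff it maximises expected utility
# conditional on being chosen.

# utility[act][state] -- toy payoffs: you die (0) where the predictor
# predicted you'd go, and survive (1) otherwise.
utility = {
    "Damascus": {"predicted_Damascus": 0, "predicted_Aleppo": 1},
    "Aleppo":   {"predicted_Damascus": 1, "predicted_Aleppo": 0},
}

# P(state | act chosen): a perfectly reliable predictor (assumption).
prob_given_choice = {
    "Damascus": {"predicted_Damascus": 1.0, "predicted_Aleppo": 0.0},
    "Aleppo":   {"predicted_Damascus": 0.0, "predicted_Aleppo": 1.0},
}

def expected_utility(act, given_choice):
    """EU of performing `act`, with credences conditional on choosing `given_choice`."""
    return sum(p * utility[act][state]
               for state, p in prob_given_choice[given_choice].items())

def ratifiable(act):
    """True iff `act` maximises EU conditional on being chosen."""
    eu_self = expected_utility(act, given_choice=act)
    return all(eu_self >= expected_utility(alt, given_choice=act)
               for alt in utility)

print({act: ratifiable(act) for act in utility})
# {'Damascus': False, 'Aleppo': False} -- no act is ratifiable
```

Conditional on choosing either city, the other city looks better, so neither act is ratifiable; this is the kind of case where one would expect a state of indecision instead.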

The eighth and final chapter of *Evidence, Decision and Causality* asks whether the actions over which we deliberate should be evidentially independent of the past. It also presents a final alleged counterexample to CDT.

A few quick comments on the first topic.

It is often assumed that there can be evidential connections between what acts we will choose and what happened in the past. In Newcomb's Problem, for example, you can be confident that the predictor foresaw that you'd one-box *if* you one-box, and that she foresaw that you'd two-box *if* you two-box. Some philosophers, however, have suggested that deliberating agents should regard their acts as evidentially independent of the past. If they are right then even EDT recommends two-boxing in Newcomb's Problem.

One might intuit that any rationally choosable plan should be rationally implementable. In the previous post, I discussed a scenario in which some forms of CDT violate that principle. In this post, I have some more thoughts on how this can happen. I also consider some nearby principles and look at the conditions under which they might hold.

Throughout this post I'll assume that we are dealing with ideally rational agents with stable basic desires. We're interested in the attitudes such agents should take towards their options in simple, finite sequential choice situations where no relevant information about the world arrives in between the choice points.

Pages 201–211 and 226–233 of *Evidence, Decision and Causality* present two great puzzles showing that CDT appears to invalidate some attractive principles of dynamic rationality.

First, some context. The simplest argument for two-boxing in Newcomb's Problem is that doing so is guaranteed to get you $1000 more than one-boxing would. The general principle behind this argument might be expressed as follows:

Could-Have-Done-Better (CDB): You should not choose an act if you know that it would make you worse off than some identifiable alternative.
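For concreteness, the dominance reasoning behind CDB can be spelled out with the standard Newcomb payoffs (a $1,000,000 opaque box and a $1000 transparent box, as usually stipulated):

```python
# Statewise payoffs in Newcomb's Problem. The states (box full / box
# empty) are fixed by the predictor before you choose.
payoff = {
    "one-box": {"box_full": 1_000_000, "box_empty": 0},
    "two-box": {"box_full": 1_001_000, "box_empty": 1_000},
}

def dominates(a, b):
    """True iff act `a` is better than act `b` in every state."""
    return all(payoff[a][s] > payoff[b][s] for s in payoff[a])

print(dominates("two-box", "one-box"))
# True -- two-boxing gains exactly $1000 in each state
```

Since the state is settled before the choice, two-boxing makes you better off than one-boxing whatever the predictor did; CDB then says one-boxing should not be chosen.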

Why should you take both boxes in Newcomb's Problem? The simplest argument is that you are then guaranteed to get $1000 more than you would get by taking only one box. A more subtle argument is that there is information – about the content of the opaque box – of which you know that if you had that information, then you would rationally prefer to take both boxes. Let's have a closer look at this second argument, and at what Arif says about it in chapter 7 of *Evidence, Decision and Causality*.

The argument is sometimes presented in terms of an imaginary friend. Imagine you have a friend who has inspected the content of the opaque box. No matter what the friend sees in the box, she would advise you to two-box. You should do what your better-informed friend advises you to do. In the original Newcomb scenario, you don't have such a friend. But you don't need one, for you already know what she would say.

Chapter 7 of *Evidence, Decision and Causality* looks at arguments for one-boxing or two-boxing in Newcomb's Problem. It's a long and rich chapter. I'll take it in two or three chunks. In this post, I will look at the main argument for one-boxing – the only argument Arif discusses at any length.

The argument is that one-boxing has a foreseeably better return than two-boxing. If you one-box, you can expect to get $1,000,000. If you two-box, you can expect $1000. In repeated iterations of Newcomb's Problem, most one-boxers end up rich and most two-boxers (comparatively) poor.
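Under the evidential reading, these expected returns can be computed directly. The predictor's 99% reliability here is an assumed figure for illustration; any high reliability gives the same verdict:

```python
reliability = 0.99  # assumed predictor accuracy, for illustration

# Evidential expected values: condition the box's content on the choice.
# One-boxers almost certainly find the opaque box full; two-boxers
# almost certainly find it empty.
ev_one_box = reliability * 1_000_000 + (1 - reliability) * 0
ev_two_box = reliability * 1_000 + (1 - reliability) * 1_001_000

print(round(ev_one_box))  # 990000
print(round(ev_two_box))  # 11000
```

So long as the predictor is reliable enough, one-boxing has the higher evidential expectation, which is why most one-boxers end up rich in repeated iterations.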

Chapter 6 of *Evidence, Decision and Causality* presents another alleged counterexample to CDT, involving a bet on the measurement of entangled particles.

The setup is Bohm's version of the Einstein, Podolsky, Rosen experiment, as described in Mermin (1981) (see esp. pp.407f.).

We have prepared a "source" S that, when activated, emits two entangled spin-1/2 particles, travelling towards causally isolated detectors A and B. The detectors contain Stern-Gerlach magnets whose orientation is controlled by a switch with three settings (1, 2, 3). When the switches on the two detectors are on the same setting, the magnets have the same orientation. Detector A flashes 'y' if the measured spin is along the magnetic field and 'n' otherwise. Detector B uses the opposite convention, flashing 'n' if the measured spin is along the magnetic field.
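A quick sketch of the standard quantum predictions for this setup, assuming (as in Mermin's device) that the three magnet orientations are 120° apart. For singlet-state particles, the probability of opposite spin results along axes at relative angle θ is cos²(θ/2); detector B's reversed convention turns "opposite spins" into "same flashes":

```python
import math

def p_same_flash(theta_degrees):
    """P(both detectors flash alike), given magnet axes theta degrees apart.

    Standard QM singlet-state prediction, with B's reversed convention:
    same flash <-> opposite spin results, probability cos^2(theta/2).
    """
    return math.cos(math.radians(theta_degrees) / 2) ** 2

# Same switch setting: magnets aligned, flashes always agree.
print(p_same_flash(0))             # 1.0

# Different settings, 120 degrees apart (assumed geometry).
print(round(p_same_flash(120), 4))  # 0.25

# Switches set independently at random: 1/3 same setting, 2/3 different.
overall = (1/3) * p_same_flash(0) + (2/3) * p_same_flash(120)
print(round(overall, 4))           # 0.5
```

The puzzle Mermin draws out is that no local assignment of pre-set answers can reproduce both the perfect agreement at equal settings and the 50% overall agreement rate.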

Chapter 5 of *Evidence, Decision and Causality* presents a powerful challenge to CDT (drawing on Ahmed (2013) and Ahmed (2014)).

Imagine you have strong evidence that a certain deterministic system S is the true system of laws in our universe. You also believe that questions about what you should do are meaningful even in a deterministic world. Now consider the following two decision problems.

Chapter 4 of *Evidence, Decision and Causality* considers whether there are any "realistic" Newcomb Problems – and in particular, whether there are any such cases in which EDT gives obviously wrong advice.

Arif goes through some putative examples and rejects most of them. The only realistic Newcomb Problems he admits are versions of the Prisoners' Dilemma (as suggested in Lewis (1979)). Here EDT recommends cooperation while CDT recommends defection. Neither answer is obviously wrong.
