
Sobel's strictly causal decision theory

In his papers on decision theory, Jordan Howard Sobel generally defines the (causal) expected utility of an act in terms of a special conditional that he calls "causal" or "practical". Concretely, he suggests that

\[ (1)\quad EU(A) = \sum_{w} Cr(A\; \Box\!\!\to w)V(w), \]

where 'A □→ B' is the special conditional that is true iff either (i) B is the case and would remain the case if A were the case, or (ii) B is not the case but would be the case as a causal consequence of A if A were the case (see e.g. Sobel (1986), pp.152f., or Sobel (1989), pp.175f.).
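As a toy illustration (with the standard Newcomb payoffs; the particular calculation is mine, not Sobel's), let c be your credence that the predictor has predicted one-boxing and filled the opaque box. Since the prediction is causally independent of your choice, your credence that taking both boxes would (causally) land you in a world with a full opaque box is simply c, and likewise for the other combinations of act and prediction. (1) then gives

\[ EU(\text{two-box}) = c \cdot 1{,}001{,}000 + (1-c) \cdot 1{,}000, \qquad EU(\text{one-box}) = c \cdot 1{,}000{,}000 + (1-c) \cdot 0, \]

so two-boxing comes out $1000 ahead whatever the value of c.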

Counterexamples to Good's Theorem

Good (1967) famously "proved" that the expected utility of an informed decision is always at least as great as the expected utility of an uninformed decision. The conclusion is clearly false. Let's have a look at the proof and its presuppositions.

Suppose you can either perform one of the acts A1…An now, or learn the answer to some question E and afterwards perform one of A1…An. Good argues that the second option is always at least as good as the first. The supposed proof goes as follows.
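In outline (this is a reconstruction in my notation; the details of Good's own presentation differ): write E1, …, Em for the possible answers, and assume that the acts carry no information about which answer is true and that learning the answer is free. Then

\[ \max_i EU(A_i) = \max_i \sum_j Cr(E_j)\, EU(A_i \mid E_j) \;\le\; \sum_j Cr(E_j)\, \max_i EU(A_i \mid E_j), \]

where the left-hand side is the best you can do by choosing now and the right-hand side is the expected utility of first learning the answer and then choosing.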

A plan you shouldn't follow (even if you think you will)

Here is a case where a plan maximises expected utility, you are sure that you are going to follow the plan, and yet the plan tells you to do things that don't maximise expected utility.

Middle Knowledge. In front of you are two doors. If you go through the left door, you come into a room with a single transparent box containing $7. If you go through the right door, you come into a room with two opaque boxes, one black, one white. Your first choice is which door to take. Then you have to choose exactly one box from the room in which you find yourself. A psychologist has figured out which box you would take if you found yourself in the room with the two boxes. She has put $10 into the box she thinks you would take, and $0 into the other.
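Just to do the bookkeeping (assuming, as the story suggests, that the psychologist's prediction is certain or at least highly likely to be correct): if her reliability is r, any plan of the form "take the right door, then take such-and-such box" has an expected return of

\[ r \cdot 10 + (1-r) \cdot 0 \;>\; 7 \quad \text{whenever } r > 0.7, \]

while the left-door plan guarantees exactly $7.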

More troublesome sequential choices

Two recent papers – Oesterheld and Conitzer (2021) and Gallow (2021) – suggest that CDT gives problematic recommendations in certain sequential decision situations.

Decision problems without equilibria

In my recent posts on decision theory, I've assumed that friends of CDT should accept a "ratifiability" condition according to which an act is rationally permissible only if it maximises expected utility conditional on being chosen.
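Spelled out (in my notation, along the lines of Jeffrey's formulation of ratifiability), the condition says that an act A is ratifiable just in case

\[ EU(A \mid \text{I choose } A) \;\ge\; EU(B \mid \text{I choose } A) \quad \text{for every alternative act } B, \]

where the expected utilities are causal expected utilities computed from credences conditioned on the news that A is chosen.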

Sometimes no act meets this condition. In that case, I've assumed that one should be undecided. More specifically, I've assumed that one should be in a "stable" state of indecision in which no (pure) option is preferable to one's present state of indecision. Unfortunately, there are decision problems in which no act is ratifiable and no state of indecision is stable. I'm not sure what to say about such cases. And I wonder if whatever we should say about them also motivates relaxing the ratifiability condition for certain cases in which there are ratifiable options.

A Dutch book against CDT? (EDC, ch.8)

The eighth and final chapter of Evidence, Decision and Causality asks whether the actions over which we deliberate should be evidentially independent of the past. It also presents a final alleged counterexample to CDT.

A few quick comments on the first topic.

It is often assumed that there can be evidential connections between what acts we will choose and what happened in the past. In Newcomb's Problem, for example, you can be confident that the predictor foresaw that you'd one-box if you one-box, and that she foresaw that you'd two-box if you two-box. Some philosophers, however, have suggested that deliberating agents should regard their acts as evidentially independent of the past. If they are right then even EDT recommends two-boxing in Newcomb's Problem.

More on dynamic consistency in CDT

One might intuit that any rationally choosable plan should be rationally implementable. In the previous post, I discussed a scenario in which some forms of CDT violate that principle. In this post, I have some more thoughts on how this can happen. I also consider some nearby principles and look at the conditions under which they might hold.

Throughout this post I'll assume that we are dealing with ideally rational agents with stable basic desires. We're interested in the attitudes such agents should take towards their options in simple, finite sequential choice situations where no relevant information about the world arrives in between the choice points.

Dynamic Causal Decision Theory (EDC, chs. 7 and 8)

Pages 201–211 and 226–233 of Evidence, Decision and Causality present two great puzzles showing that CDT appears to invalidate some attractive principles of dynamic rationality.

First, some context. The simplest argument for two-boxing in Newcomb's Problem is that doing so is guaranteed to get you $1000 more than what one-boxing would get you. The general principle behind this argument might be expressed as follows:

Could-Have-Done-Better (CDB): You should not choose an act if you know that it would make you worse off than some identifiable alternative.
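For reference, with the usual stakes ($1,000,000 in the opaque box iff one-boxing was predicted, $1000 in the transparent box), the payoffs are:

\[ \begin{array}{l|cc} & \text{\$1M placed} & \text{nothing placed} \\ \hline \text{one-box} & 1{,}000{,}000 & 0 \\ \text{two-box} & 1{,}001{,}000 & 1{,}000 \end{array} \]

In each column, two-boxing is $1000 ahead; by CDB, you therefore shouldn't one-box.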

Preference Reflection (EDC, ch.7, part 2)

Why should you take both boxes in Newcomb's Problem? The simplest argument is that you are then guaranteed to get $1000 more than what you would get if you took one box. A more subtle argument is that there is information – about the content of the opaque box – such that you know that if you had this information, you would rationally prefer to take both boxes. Let's have a closer look at this second argument, and at what Arif says about it in chapter 7 of Evidence, Decision and Causality.

The argument is sometimes presented in terms of an imaginary friend. Imagine you have a friend who has inspected the content of the opaque box. No matter what the friend sees in the box, she would advise you to two-box. You should do what your better-informed friend advises you to do. In the original Newcomb scenario, you don't have such a friend. But you don't need one, for you already know what she would say.

Why ain'cha rich? (EDC, ch.7, part 1)

Chapter 7 of Evidence, Decision and Causality looks at arguments for one-boxing or two-boxing in Newcomb's Problem. It's a long and rich chapter. I'll take it in two or three chunks. In this post, I will look at the main argument for one-boxing – the only argument Arif discusses at any length.

The argument is that one-boxing has a foreseeably better return than two-boxing. If you one-box, you can expect to get $1,000,000. If you two-box, you can expect $1000. In repeated iterations of Newcomb's Problem, most one-boxers end up rich and most two-boxers (comparatively) poor.
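To make the comparison explicit (the numbers are only illustrative; assume the standard payoffs and a predictor of accuracy p): one-boxing has the greater expected return,

\[ p \cdot 1{,}000{,}000 \;>\; p \cdot 1{,}000 + (1-p) \cdot 1{,}001{,}000, \]

whenever p > 0.5005, so even a modestly reliable predictor makes the one-boxers richer on average.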
