Dynamic Causal Decision Theory (EDC, chs. 7 and 8)

Pages 201–211 and 226–233 of Evidence, Decision and Causality present two great puzzles in which CDT appears to invalidate some attractive principles of dynamic rationality.

First, some context. The simplest argument for two-boxing in Newcomb's Problem is that doing so is guaranteed to get you $1000 more than one-boxing would. The general principle behind this argument might be expressed as follows:

Could-Have-Done-Better (CDB): You should not choose an act if you know that it would make you worse off than some identifiable alternative.
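To see what CDB says here, the following sketch (a minimal illustration in Python, using the standard Newcomb payoffs) runs the state-by-state comparison, holding fixed what the predictor has already put in the opaque box:

```python
# A minimal sketch of the dominance reasoning behind CDB in
# Newcomb's Problem. The opaque box's content is settled before
# you choose; the transparent box always holds $1000.

payoffs = {
    # (act, opaque-box content) -> total payout in dollars
    ("one-box", 1_000_000): 1_000_000,
    ("one-box", 0): 0,
    ("two-box", 1_000_000): 1_001_000,
    ("two-box", 0): 1_000,
}

for content in (1_000_000, 0):
    gain = payoffs[("two-box", content)] - payoffs[("one-box", content)]
    print(f"opaque box holds ${content:,}: two-boxing gains ${gain:,}")
# In either state, two-boxing yields exactly $1000 more than
# one-boxing, so by CDB one-boxing should not be chosen.
```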

Preference Reflection (EDC, ch.7, part 2)

Why should you take both boxes in Newcomb's Problem? The simplest argument is that you are then guaranteed to get $1000 more than you would get by taking one box. A more subtle argument is that there is information – about the content of the opaque box – such that you know that, if you had it, you would rationally prefer to take both boxes. Let's have a closer look at this second argument, and at what Arif says about it in chapter 7 of Evidence, Decision and Causality.

The argument is sometimes presented in terms of an imaginary friend. Imagine you have a friend who has inspected the content of the opaque box. No matter what the friend sees in the box, she would advise you to two-box. You should do what your better-informed friend advises you to do. In the original Newcomb scenario, you don't have such a friend. But you don't need one, for you already know what she would say.

Why ain'cha rich? (EDC, ch.7, part 1)

Chapter 7 of Evidence, Decision and Causality looks at arguments for one-boxing or two-boxing in Newcomb's Problem. It's a long and rich chapter. I'll take it in two or three chunks. In this post, I will look at the main argument for one-boxing – the only argument Arif discusses at any length.

The argument is that one-boxing has a foreseeably better return than two-boxing. If you one-box, you can expect to get $1,000,000. If you two-box, you can expect $1000. In repeated iterations of Newcomb's Problem, most one-boxers end up rich and most two-boxers (comparatively) poor.
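To put rough numbers on this, here is a sketch of the expected returns, assuming for concreteness a predictor who is right 99% of the time (the accuracy figure is my own illustration; nothing in the argument hangs on it, as long as the predictor is reliable):

```python
# Expected returns in Newcomb's Problem, assuming (for illustration)
# a predictor accuracy of 0.99.
accuracy = 0.99

# One-boxers were probably predicted to one-box, so for them the
# opaque box probably contains $1,000,000.
ev_one_box = accuracy * 1_000_000 + (1 - accuracy) * 0

# Two-boxers were probably predicted to two-box, so for them the
# opaque box is probably empty, leaving only the transparent $1000.
ev_two_box = accuracy * 1_000 + (1 - accuracy) * 1_001_000

print(f"expected return of one-boxing: ${ev_one_box:,.0f}")  # $990,000
print(f"expected return of two-boxing: ${ev_two_box:,.0f}")  # $11,000
```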

Betting on collapse (EDC, ch.6)

Chapter 6 of Evidence, Decision and Causality presents another alleged counterexample to CDT, involving a bet on the measurement of entangled particles.

The setup is Bohm's version of the Einstein–Podolsky–Rosen experiment, as described in Mermin (1981) (see esp. pp. 407f.).

We have prepared a "source" S that, when activated, emits two entangled spin-1/2 particles, travelling towards causally isolated detectors A and B. The detectors contain Stern-Gerlach magnets whose orientation is controlled by a switch with three settings (1, 2, 3). When the switches on the two detectors are on the same setting, the magnets have the same orientation. Detector A flashes 'y' if the measured spin is along the magnetic field and 'n' otherwise. Detector B uses the opposite convention, flashing 'n' if the measured spin is along the magnetic field.
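For reference, the quantum-mechanical statistics for this device can be sketched as follows, assuming (as in Mermin's presentation) that the three switch settings orient the magnets 120° apart and that the source emits singlet-state pairs:

```python
# Quantum predictions for the two-detector device, assuming the
# three settings correspond to magnet orientations 120 degrees
# apart and the particles are in the singlet state (both details
# are from Mermin (1981), not stated in the excerpt above).
import math
from itertools import product

angles = {1: 0.0, 2: 120.0, 3: 240.0}  # magnet orientation per setting

def p_same_flash(a_setting: int, b_setting: int) -> float:
    """Probability that detectors A and B flash the same symbol."""
    theta = math.radians(angles[a_setting] - angles[b_setting])
    # Singlet pairs are perfectly anti-correlated along equal axes;
    # with B's inverted convention this gives agreement probability
    # cos^2(theta / 2).
    return math.cos(theta / 2) ** 2

for a, b in product((1, 2, 3), repeat=2):
    print(f"settings ({a},{b}): P(same flash) = {p_same_flash(a, b):.3f}")
# Equal settings: agreement with probability 1. Unequal settings:
# probability 1/4. With settings chosen at random, overall agreement
# is 1/3 * 1 + 2/3 * 1/4 = 1/2, which no local hidden-variable
# model can reproduce.
```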

Fixing the Past and the Laws (EDC, ch.5)

Chapter 5 of Evidence, Decision and Causality presents a powerful challenge to CDT (drawing on Ahmed (2013) and Ahmed (2014)).

Imagine you have strong evidence that a certain deterministic system S is the true system of laws in our universe. You also believe that questions about what you should do are meaningful even in a deterministic world. Now consider the following two decision problems.

Realistic Newcomb Problems (EDC, ch.4)

Chapter 4 of Evidence, Decision and Causality considers whether there are any "realistic" Newcomb Problems – and in particular, whether there are any such cases in which EDT gives obviously wrong advice.

Arif goes through some putative examples and rejects most of them. The only realistic Newcomb Problems he admits are versions of the Prisoners' Dilemma (as suggested in Lewis (1979)). Here EDT recommends cooperation while CDT recommends defection. Neither answer is obviously wrong.
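To see how the two verdicts come apart, here is a sketch of a twin Prisoners' Dilemma; the payoff numbers and the 0.95 correlation between your act and your twin's are my own illustration, not figures from the book:

```python
# A twin Prisoners' Dilemma sketch. "C" = cooperate, "D" = defect.
payoff = {
    # (my act, twin's act) -> my payoff in utils
    ("C", "C"): 3, ("C", "D"): 0,
    ("D", "C"): 4, ("D", "D"): 1,
}
match = 0.95  # assumed P(twin chooses as I do | my choice)

for act in ("C", "D"):
    other = "D" if act == "C" else "C"
    # EDT weights outcomes by the news value of my act.
    ev = match * payoff[(act, act)] + (1 - match) * payoff[(act, other)]
    print(f"EDT value of {act}: {ev:.2f}")
# EDT: C -> 2.85, D -> 1.15, so EDT recommends cooperating.
# CDT holds the twin's act fixed: whatever the twin does, defecting
# pays exactly 1 util more, so CDT recommends defecting.
```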

CDT for reflective agents (EDC, ch.3)

Chapter 3 of Evidence, Decision and Causality is called "Causalist objections to CDT". It addresses arguments suggesting that while there is an important connection between causation and rational choice, that connection is not adequately spelled out by CDT.

Arif discusses two such arguments. One is due to Hugh Mellor, who rejects the very idea of evaluating choices by the lights of the agent's beliefs and desires. I'll skip over this part because I agree with Arif's response.

The other argument is more important, because it touches on an easily overlooked connection between rational choice and rational credence.

Consider the "Psycho Button" case from Egan (2007).

Reading Evidence, Decision and Causality

How odd. I'm in the office. I'm not terribly exhausted. I have some time to read and think and write. Where do I start?

Here's a book that I've long wanted to read carefully, but never got around to: Arif Ahmed's Evidence, Decision and Causality (Ahmed (2014)). I'll work my way through it, and post my reactions. This first post covers the preface, the introduction, and the first two chapters.

The book is an extended defence of Evidential Decision Theory. When I read a text with whose conclusion I disagree, I often find that the discussion already starts off on the wrong foot, with dubious presuppositions about the topic and how to approach it. Not so here. I'm largely on board with how Arif frames the disagreement between Evidential Decision Theory (EDT) and Causal Decision Theory (CDT). I like his broader philosophical outlook – his positivism, his distrust of metaphysics, his conviction that decision-makers should see themselves as part of the natural world. It should be interesting to see where we'll end up disagreeing.

Two Puzzles About Truthfulness

1. Suppose you have strong evidence that L is the true system of laws of nature, where L is deterministic. You also have strong evidence that the universe started in the exact microstate P. You have a choice of either affirming or denying the conjunction of L and P. You want to speak truly. What should you do?

Intuitively, you should affirm. But what would happen if you denied?

Since L is deterministic and P specifies the exact initial state, L & P settles everything that ever happens – including whether you affirm. So L & P either logically entails that you affirm, or it logically entails that you don't affirm. Let's consider both possibilities.
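Spelling out the two possibilities (with A abbreviating the proposition that you affirm L & P; the formulation is mine):

```latex
\textbf{Case 1:} $(L \wedge P) \vDash A$. \\
If you deny, then $\neg A$ holds, so $L \wedge P$ is false
(it entails $A$, which fails). Your denial would be true.

\textbf{Case 2:} $(L \wedge P) \vDash \neg A$. \\
If you deny, then $\neg A$ holds, which is just what $L \wedge P$
predicts. Your denial would be true iff $L \wedge P$ is false,
which your evidence tells against.
```

So in the first case your denial would be guaranteed to be true, despite your strong evidence for L & P.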

Counterpart theory in the SEP

Until recently, the Stanford Encyclopedia of Philosophy didn't have anything on counterpart theory. The editors decided that the topic isn't worth an entry of its own, but at least it now has a section in the entry on "David Lewis's Metaphysics". This isn't ideal, since counterpart-theoretic approaches to intensional constructions are best seen as metaphysically non-committal. But it's better than nothing.

I also wrote an "appendix" to the entry with an overview of counterpart-theoretic interpretations of quantified modal logic. It explains some unusual features of counterpart-theoretic logics, how they arise, and how they could be avoided.
