IDT again

In my recent post on Interventionist Decision Theory, I suggested that causal interventionists should follow Stern and move from a Jeffrey-type definition of expected utility to a Savage-Lewis-Skyrms type definition. I also suggested that, if they make this move, they can avoid various problems arising from the concept of an intervention by construing the agent's actions as ordinary events. In conversation, Reuben Stern convinced me that things are not so easy.

Let's look at Newcomb's Problem. Here Savage and Lewis and Skyrms would distinguish two relevant causal hypotheses (two cells of the K partition): according to K1, the opaque box is empty, one-boxing yields $0, and two-boxing $1K; according to K2, the opaque box contains $1M, one-boxing yields $1M, and two-boxing $1M + $1K. We could shoehorn these hypotheses into hypotheses about objective probabilities in causal graphs. The two hypotheses would share the same causal structure, but K1 would give probability 1 to the opaque box being empty and K2 probability 1 to the opaque box containing $1M. But if these node values have probability 1, then they plausibly also have probability 1 conditional on any values of their ancestors. And that would make the graph violate not only the 'Faithfulness Condition' (that d-connected nodes are probabilistically dependent), but also the 'Minimality Condition', that no proper subgraph of the DAG satisfies the Causal Markov Condition. The Minimality Condition is widely taken as axiomatic for causal models.
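
To make the Savage-Lewis-Skyrms calculation concrete, here is a minimal sketch in Python. The payoffs are the ones just listed; the credence in K1 is a made-up placeholder, since the point is only that the verdict doesn't depend on it.

```python
# Savage-Lewis-Skyrms expected utility: weight each act's payoff under each
# dependency hypothesis by one's unconditional credence in that hypothesis.
# Payoffs as in the text; cr_k1 is a placeholder credence in K1.

payoffs = {
    "K1": {"one-box": 0,         "two-box": 1_000},      # opaque box empty
    "K2": {"one-box": 1_000_000, "two-box": 1_001_000},  # opaque box contains $1M
}

def expected_utility(act, cr_k1=0.5):
    return cr_k1 * payoffs["K1"][act] + (1 - cr_k1) * payoffs["K2"][act]

for act in ("one-box", "two-box"):
    print(act, expected_utility(act))

# Two-boxing comes out $1K ahead for every value of cr_k1, because the
# credences in K1 and K2 are not conditioned on the act.
```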

To avoid the clash with Minimality, we'd have to say that in K1 the probability of the opaque box being empty non-trivially depends on its causal ancestor, the predictor's prediction, even though the probability of the opaque box being empty is 1. That's not entirely unreasonable. But now we arguably get a probabilistic dependence between the Newcomb agent's choice and the content of the box, which we don't want: one-boxing increases the probability of the predictor having predicted one-boxing, which increases the probability of the box containing $1M. To avoid this, we would have to say that the dependence is asymmetrical: conditional on the predictor having predicted one-boxing, both one-boxing and the box containing $1M are probable, but not the other way around: conditional on one-boxing, the probability of the predictor having predicted one-boxing is still low (in K1). Again, I don't think that's an entirely unreasonable thing to say, but it means we're now dealing with very unorthodox conditional probabilities that don't fit what's usually assumed in causal models. We're effectively building causal relations into the conditional probabilities. No surprise, then, that we get a causal decision theory without also having to postulate interventions.
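
The unwanted chain of dependence can be made vivid with a quick back-of-the-envelope calculation. The numbers below are made up, and I assume the prediction screens off the act from the box content; nothing here is from Stern's paper.

```python
# Hypothetical illustration of the chain described above: if one-boxing raises the
# probability of the prediction of one-boxing, and that prediction raises the
# probability of the $1M, then (with the prediction screening off act from box
# content) one-boxing raises the probability of the $1M.
pr_predict1_given_onebox = 0.9    # made-up number
pr_million_given_predict1 = 0.99  # made-up number
pr_million_given_predict2 = 0.01  # made-up number

pr_million_given_onebox = (pr_million_given_predict1 * pr_predict1_given_onebox
                           + pr_million_given_predict2 * (1 - pr_predict1_given_onebox))
print(pr_million_given_onebox)  # ~0.89: the act has become evidence for the money
```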

So if we want to use orthodox causal models as dependency hypotheses, we arguably have to model Newcomb's problem with a single dependency hypothesis. (At least if the predictor's success rate is known.)

But then it's hard to see how two-boxing could come out as rational on the interventionist account. The problem is that the Newcomb agent's decision can hardly be an error term in the causal graph, as the agent should be confident that whatever she actually decides to do has been predicted by the predictor.
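
To see why this matters, here is a toy single-graph model (my own illustration with made-up numbers, not anything from Stern or Meek & Glymour): a common cause, the agent's disposition, drives both the prediction and the act. If the act is treated as an exogenous error term and evaluated by intervention, two-boxing wins; if it is an ordinary node evaluated by conditioning, one-boxing wins, because the act carries news about the disposition.

```python
# Toy single-dependency-hypothesis model of Newcomb's problem (hypothetical numbers):
# Disposition -> Prediction and Disposition -> Act; the box contains $1M exactly
# when one-boxing was predicted.
import itertools

pr_disp_one = 0.5   # prior credence that the agent is disposed to one-box
acc = 0.99          # how reliably prediction and act track the disposition

def joint(disp, pred, act):
    """Joint probability of (disposition, prediction, act) in the toy graph."""
    p = pr_disp_one if disp == "one" else 1 - pr_disp_one
    p *= acc if pred == disp else 1 - acc   # prediction tracks the disposition
    p *= acc if act == disp else 1 - acc    # the act also tracks the disposition
    return p

def payoff(act, pred):
    return (1_000_000 if pred == "one" else 0) + (1_000 if act == "two" else 0)

def eu_conditioning(act):
    """Evaluate the act as ordinary evidence (Jeffrey-style conditioning)."""
    states = list(itertools.product(("one", "two"), repeat=2))
    total = sum(joint(d, p, act) for d, p in states)
    return sum(joint(d, p, act) * payoff(act, p) for d, p in states) / total

def eu_intervention(act):
    """Evaluate the act as an intervention: cut the Disposition -> Act edge."""
    return sum((pr_disp_one if d == "one" else 1 - pr_disp_one)
               * (acc if p == d else 1 - acc) * payoff(act, p)
               for d, p in itertools.product(("one", "two"), repeat=2))

for a in ("one", "two"):
    print(f"{a}-box: conditioning {eu_conditioning(a):,.0f}, "
          f"intervention {eu_intervention(a):,.0f}")
# Conditioning favours one-boxing; intervening favours two-boxing. The worry in the
# text is that, in the single-hypothesis model, the act cannot plausibly be treated
# as such an exogenous intervention point.
```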

So even Stern's 'Interventionist Decision Theory' recommends acting on spurious correlations in Newcomb-type problems. That makes me wonder how much IDT really gains over the Meek & Glymour theory, which uses Jeffrey's definition of expected utility.

The upshot is that I'm even more reserved now about the prospects of employing causal models in decision theory. On any way of spelling out the resulting theory, it seems to recommend acting on merely evidential correlations under certain circumstances.
