One-boxing and objective consequentialism

I've been reading about objective consequentialism lately. It's interesting how pervasive and natural the use of counterfactuals is in this context: what an agent ought to do, people say, is whichever available act would lead to the best outcome (if it were chosen). Nobody thinks that an agent ought to choose whichever act will lead to the best outcome (if it is chosen). The reason is clear: the indicative conditional is information-relative, but the 'ought' of objective consequentialism is not supposed to be information-relative. (That's the point of objective consequentialism.) The 'ought' of objective consequentialism is supposed to take into account all facts, known and unknown. But while it makes perfect sense to ask what would happen under condition C given the totality of facts @, even if @ does not imply C, it arguably makes no sense to ask what will happen under condition C given @, if @ does not imply C.

So the 'ought' of objective consequentialism evaluates acts "causally", rather than "evidentially". This provides some (intuitive) motivation for using a causal evaluation for the decision-theoretic 'ought' as well. Can we strengthen this observation? How bad would it be to combine objective consequentialism with evidential decision theory?

Here's one attempt to bring out a tension. Imagine an agent whose personal utility function orders possible states of the world in just the way some form of objective consequentialism does, giving highest utility to the "best" states and lowest to the "worst" ones. Suppose also the agent has perfect information about which state would result from each of the options presently available to her. Intuitively, what this agent ought to do in light of her beliefs and desires is precisely what she ought to do according to objective consequentialism. That is, the subjective 'ought' of decision theory and the objective 'ought' of objective consequentialism should here coincide.

In fact, however, the two oughts plausibly do coincide even in evidential decision theory. That's because, as Lewis pointed out in "Causal Decision Theory", conditional on any particular dependency hypothesis (about what the available options would bring about), evidential expected utility and causal expected utility are plausibly equivalent.
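
In a little more detail, and roughly in Lewis's notation (with the $K$s ranging over dependency hypotheses and $V$ the agent's value function), evidential and causal expected utility are

$$V(A) = \sum_K Cr(K \mid A)\, V(A \wedge K), \qquad U(A) = \sum_K Cr(K)\, V(A \wedge K).$$

If the agent is certain of one particular dependency hypothesis $K^*$, then $Cr(K^* \mid A) = 1$ for any option $A$ with positive credence, and both quantities reduce to $V(A \wedge K^*)$. So a divergence between the two theories requires uncertainty about the dependency hypotheses.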

So we need a different case to bring out the tension. Here's such a case, inspired by Jack Spencer and Ian Wells.

Consider a Newcomb Problem in which the outcomes are measured not in dollars but in consequentialist utilities. As before, assume the agent facing the problem has subjective utilities that match the consequentialist utilities.

It is clear what the agent ought to do, from the perspective of objective consequentialism: she ought to take both boxes. (Recall that the 'ought' of objective consequentialism evaluates acts causally, by looking at the outcomes the acts would bring about, given all relevant facts about the world -- known and unknown. One relevant fact is the content of the opaque box. If the opaque box is in fact empty, then one-boxing would lead to 0 units of consequentialist utility and two-boxing to 1,000; if the opaque box is full, then one-boxing would lead to 1,000,000 units and two-boxing to 1,001,000. Either way, two-boxing would lead to the better state.)
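
In tabular form, with the same numbers:

|         | opaque box empty | opaque box full |
|---------|------------------|-----------------|
| one-box | 0                | 1,000,000       |
| two-box | 1,000            | 1,001,000       |

Whichever column the world puts us in, two-boxing yields 1,000 more units of utility than one-boxing: it dominates.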

Now here we have an agent with perfectly consequentialist values who knows that she ought to two-box, in the objective sense. Yet evidential decision theory says it would be irrational for her to two-box! That's not a logical contradiction. But it surely sounds unappealing. It would be better to have a decision theory on which it can't happen that a morally perfect agent is irrational for choosing an act she knows she morally ought to choose.
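
To spell out the evidential verdict: suppose, for illustration, that the agent's credence in the predictor being right is 0.99 (the exact number is my assumption; any sufficiently high reliability gives the same result). Then

$$V(\text{one-box}) = 0.99 \times 1{,}000{,}000 + 0.01 \times 0 = 990{,}000,$$

$$V(\text{two-box}) = 0.99 \times 1{,}000 + 0.01 \times 1{,}001{,}000 = 11{,}000,$$

so evidential decision theory recommends one-boxing by a wide margin, even though the agent knows that two-boxing would bring about the objectively better state.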

The argument generalizes. For one thing, it generalizes beyond evidential decision theory to other decision theories that recommend one-boxing, such as "timeless decision theory", "disposition-based decision theory", Spohn's recent spin on causal decision theory, and whatever decision theory Teddy Seidenfeld thinks is right.

The argument also generalizes beyond objective consequentialism, given that almost every (sensible) moral theory can be consequentialised. In general, if you think the notion of an objective moral ought is coherent, you probably shouldn't say that one-boxing is the rational choice in Newcomb's Problem.

Comments

# on 03 January 2020, 00:15

Doesn't Wolpert and Benford's 2013 paper "The lesson of Newcomb’s paradox" give a satisfying answer here? They present the apparent conflict between dominance and expected utility principles as being due to different assumptions about the underlying probabilistic structure.

Further, they show (Sect. 3.4) that Newcomb's problem is time-reversal invariant: the "prediction" could just as well occur after you choose whether to one-box or two-box. In this time-reversed variant, people who one-box think that their antagonist can observe their choice with high accuracy (just as, in the normal Newcomb's problem, the antagonist predicts it with high accuracy); people who two-box think they can conceal their choice (just as, in the normal Newcomb's problem, they think they can make their choice such that their antagonist's prediction is independent of it). I mention this because you stress that the "'ought' of objective consequentialism evaluates acts causally", but then how is a causal evaluation possible in a time-reversal-invariant problem?

If whether one-boxing or two-boxing is correct simply depends on the exact interpretation of the problem (the underlying probabilistic structure), then your rational agent whose subjective utilities coincide with objective utilities might very well one-box depending on what game she thinks she is playing.
