## What are our options?

Lewis, in "Causal Decision Theory" (1981, p.308):

Suppose we have a partition of propositions that distinguish worlds where the agent acts differently ... Further, he can act at will so as to make any one of these propositions hold, but he cannot act at will so as to make any proposition hold that implies but is not implied by (is properly included in) a proposition in the partition. ... Then this is the partition of the agent's alternative options.

That can't be right. Assume I "can act at will so as to make hold" the proposition P that I raise my hand. Let Q be an arbitrary fact over which I have no control, say, that Julius Caesar crossed the Rubicon. Then I can also act at will so as to make P & Q true. (By raising my hand, I make it true, by not raising it I make it false.) So, by Lewis's definition, P is not an option, since I can act at will so as to make a more specific proposition P & Q true (a proposition that implies but is not implied by P). By the same reasoning, all my options must entail Q. So they don't form a partition: they don't cover regions of logical space where Q is false.
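The problem can be checked mechanically. Here is a minimal sketch (my own illustration, not Lewis's) that models propositions as sets of worlds, with my act and the Rubicon fact as the only world-making features:

```python
from itertools import product

# A world settles what I do and whether Caesar crossed the Rubicon.
WORLDS = frozenset(product(("raise", "lower"), (True, False)))

def prop(pred):
    """A proposition is the set of worlds at which pred holds."""
    return frozenset(w for w in WORLDS if pred(w))

P = prop(lambda w: w[0] == "raise")   # I raise my hand
Q = prop(lambda w: w[1])              # Caesar crossed the Rubicon

def implies(a, b):
    return a <= b   # every a-world is a b-world

# P & Q implies but is not implied by P, so Lewis's clause disqualifies P:
assert implies(P & Q, P) and not implies(P, P & Q)

# And a candidate option that entails Q leaves the ~Q worlds uncovered,
# so such candidates cannot jointly partition logical space:
assert (P & Q) & (WORLDS - Q) == frozenset()
```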

How could we fix Lewis's definition?

One might argue that P & Q isn't something that I "can make true" in the appropriate sense, perhaps because in that sense (i) making true a conjunction requires making true both conjuncts, and (ii) truth-making implies causal relevance. Clause (i) entails that I can only make true P & Q if I can make true Q, and clause (ii) entails that I can't make true Q, because my actions are not causally relevant to Caesar's crossing the Rubicon.

This line of thought is a dead-end, I think. For one thing, in terms of truth-conditions, the proposition P (that I raise my hand) is also a conjunction. For example, it is the conjunction of the proposition H that I have a hand and the material conditional ~H v P. Let's assume that intuitively, my choice is between raising and lowering my hand, so that it is not causally relevant to H. On the present proposal, P is then not an option -- at most ~H v P is. By the same reasoning, the proposition P' that I lower my hand is not an option -- at most ~H v P' is. But then my options again don't form a partition: they are both true in ~H worlds.

(We could go hyperintensional and distinguish between P and the logically equivalent H & (~H v P). But, first, this is certainly not what Lewis had in mind; second, it clashes with the intensionality of decision theory; and third, the problem raised by P & Q is also raised by propositions whose hyperintensional form isn't that of a conjunction.)

In any case, the real problem with Lewis's definition, brought out by the P & Q example, is arguably that it ignores the agent's epistemic state. This is what an adequate response should address.

Notice that it doesn't really matter if all my options entail that Caesar crossed the Rubicon, as long as I assign credence 1 to that proposition. My options don't really need to form a partition of logical space; they should only form a partition of my doxastic space. Lewis's definition goes wrong if I'm not sure whether Caesar crossed the Rubicon. In that case I should consider the outcome of my options both under the assumption that Caesar crossed the Rubicon and under the assumption that he didn't. For example, suppose I'm in a quiz show where raising my hand would amount to betting that Caesar crossed the Rubicon. When calculating the expected utility of raising my hand, we should not individuate my option as entailing that Caesar crossed the Rubicon -- otherwise it would trivially come out as best.
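The trivialization can be seen in a toy calculation (illustrative numbers of my choosing: credence 1/2 that Caesar crossed, and a bet that pays +1 if he did and -1 otherwise):

```python
cr_Q = 0.5                          # illustrative credence that Caesar crossed
payoff = {True: 1.0, False: -1.0}   # raising my hand = betting that he did

# Option individuated as "I raise my hand": weigh both ways Q could go.
eu_raise = cr_Q * payoff[True] + (1 - cr_Q) * payoff[False]

# Option individuated as "I raise my hand & Caesar crossed": every world
# compatible with the option is a Q-world, so the bet can't be seen to lose.
eu_raise_and_Q = payoff[True]

assert eu_raise == 0.0             # a fair bet
assert eu_raise_and_Q == 1.0       # trivially comes out best
```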

This leads to a general constraint.

Constraint 1: If some proposition Q is outside the agent's control, and the agent doesn't know (with certainty) that it is true, then none of her options should entail Q.

It seems to me that Lewis's proposal can be fixed by adding that constraint: options are maximally specific propositions that satisfy Constraint 1.

It may seem that Constraint 1 is too weak. What if Q is not outside the agent's control, but it is unaffected by some particular option P? For example, suppose my shoelaces are undone, but I'm not sure that they are. Let Q be the proposition that my shoelaces will be undone in a minute. I could prevent Q, but instead I raise my hand. In this case, we want to consider the utility of raising my hand both in Q scenarios and in not-Q scenarios. But Constraint 1 doesn't seem to cover that, because Q is under my control.

However, on reflection, it looks like Constraint 1 does cover the case. To simplify, suppose the only alternative to raising my hand (P) is to tie my shoelaces (P'), which would prevent Q. Then the disjunction (P & Q) v (P' & ~Q) is outside my control: it is true, and nothing I can do would render it false. Moreover, it has intermediate credence, since I'm not sure that my shoelaces are undone. By Constraint 1, none of my options may entail the disjunction. This rules out (P & Q) as an option, as desired.
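The two claims about the disjunction -- true whatever I do, yet of intermediate credence -- can be verified in a small worlds model (again my own sketch, with the laces' actual state as the only act-independent feature):

```python
from itertools import product

# A world settles my act and whether my laces are in fact undone right now.
WORLDS = frozenset(product(("raise", "tie"), (True, False)))

def prop(pred):
    return frozenset(w for w in WORLDS if pred(w))

P  = prop(lambda w: w[0] == "raise")           # I raise my hand
P_ = prop(lambda w: w[0] == "tie")             # I tie my shoelaces
Q  = prop(lambda w: w[1] and w[0] != "tie")    # laces undone in a minute

D = (P & Q) | (P_ & (WORLDS - Q))              # (P & Q) v (P' & ~Q)

# My laces are in fact undone, so D holds whichever act I perform:
assert all((act, True) in D for act in ("raise", "tie"))

# But D has intermediate credence: I give some credence to the laces being
# tied already, and in those worlds raising my hand falsifies D.
assert ("raise", False) not in D
```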

Constraint 1 is not the only way to take into account the agent's epistemic state. A more popular alternative, proposed for example by Jeffrey and Sobel, is this:

Constraint 2: If P is an option for an agent, then the agent must be certain that she can make P true.

Presumably this is motivated by the thought that if decision theory is to be action-guiding, then the space of options must be accessible to the agent.

Like Constraint 1, Constraint 2 would fix the problem with Lewis's definition: P & Q does not count as an option because I'm not sure that I can make it true.

However, it seems to me that Constraint 1 is better justified, as it does not rely on elusive intuitions about what must be accessible to the decision-maker.

Note also that Constraint 1 already implies that the disjunction D of all options has probability 1. For D is true no matter what the agent chooses; if it had intermediate credence, then by Constraint 1 no option would entail D; since every option entails D, it must therefore have probability 1.

One might still worry that Constraint 1 is too "externalistic",
since it considers what the agent can actually control, rather than
what she *believes* she can control. What if Q is under my
control but I don't know that it is? Suppose one of my options P would
bring about Q while another option P' would bring about ~Q, but I
don't know this. Instead, I give positive credence to each of P & Q, P
& ~Q, P' & Q, and P' & ~Q. But then consider the
disjunction (P & Q) v (P' & ~Q). This is outside my control:
it is true no matter what I do, and it has intermediate credence. By
Constraint 1, none of my options entails (P & Q) v (P' &
~Q). So P & Q can't be an option -- as desired.

What about the other direction: I can't control Q but believe I do?
Let's say I am certain that choosing P would bring about Q and P'
would bring about ~Q, while in fact Q is true no matter what. Then I
give zero credence to P & ~Q, as well as to P' & Q.
Constraint 1 says that P & Q is not an option, for it entails Q,
and Q is outside my control. However, it is not clear why P & Q,
as opposed to P, *should* count as an option. The expected
utility calculations come out the same either way.

I'm not suggesting that Constraint 1 entails Constraint 2. All I'm suggesting is that the "internalist" considerations that support Constraint 2 are generally satisfied also under Constraint 1.

Unlike Constraint 2, Constraint 1 has the further advantage that it doesn't allow for options which the agent can't actually make true. If an agent falsely believes that she can fly, and we only consider her beliefs about what she can bring about, decision theory might say that the right choice for her is to fly -- even though she can't. That seems wrong.

There's another constraint one could have used instead of Constraint 1 or Constraint 2. (Like Constraint 1, this is one I haven't seen in the literature.)

Constraint 3: If P is an option for an agent, then it would be rational for her to become certain that P is true merely by choosing P.

As Anscombe said, decisions provide "knowledge without observation". Constraint 3 is supported by Skyrms's model of rational deliberation, on which making a choice goes along with becoming certain that the relevant proposition is true. And it also helps with the original P & Q problem: by Constraint 3, the conjunction of my raising my hand and Caesar's having crossed the Rubicon is not an option, because it would not be rational for me to become certain that Caesar crossed the Rubicon merely by raising my hand.

What is the relationship between Constraint 1 and Constraint 3?
Suppose P satisfies Constraint 1. If the agent ends up making P true,
can she become rationally certain that it is true (so that P satisfies
Constraint 3)? Suppose I toss a six with a die, but my choice alone
can't make me certain that I'll toss a six; that is, before looking at
the outcome, I should assign positive credence to other outcomes. If,
before reaching my choice, I had been certain that the die would land
six if I made the choice, then presumably it *would* have been
rational for me to become certain that the die lands six (at least if
something like Skyrms's model is correct). Since that didn't happen,
we can infer that I wasn't sure how the die would land. So the
disjunction (P & Get 6) v (~P & ~Get 6) had intermediate credence, where P is a narrower description of what I chose. Plausibly, the disjunction is true no matter what. By
Constraint 1, it follows that P & Get 6 was not an option. So from
the fact that the proposition doesn't satisfy Constraint 3 we can work
out that it doesn't satisfy Constraint 1.

Again, I don't say that Constraint 1 always entails Constraint 3. But it does look as if -- somewhat mysteriously -- Constraint 1 can do at least most of the work of Constraint 3, just as it can do most of the work of Constraint 2.

This time, I don't have an objection against Constraint 3. However, its justification rests on a model of deliberation that is not universally accepted. So it seems better to use the safer Constraint 1.

So that's how I think we should fix Lewis's definition.

To spell it out in full, let's use some abbreviations. A
proposition P is **unknown fixed** if (i) P is true and the agent
cannot act at will so as to make P false; (ii) P does not have
probability 1. A proposition P is **open** if (i) P entails no
unknown fixed proposition, and (ii) the agent can act at will so as to
make P true. Then

The options in a decision situation are the maximally specific open propositions; i.e., the open propositions that are not implied by any other open proposition.
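For concreteness, the definition can be run on the quiz-show toy model. This is a sketch under strong simplifying assumptions of my own: four worlds, credence spread over all of them, and an objective reading of "can act at will" on which an act settles the act-coordinate while the act-independent fact keeps its actual value. The key checks: Caesar's crossing comes out unknown fixed, P & Q is not open (hence not an option), no option entails Q, and the disjunction of the options covers the doxastic space.

```python
from itertools import product, combinations

# A world settles my act and whether Caesar crossed the Rubicon.
ACTS = ("raise", "lower")
WORLDS = frozenset(product(ACTS, (True, False)))
ACTUAL = ("raise", True)    # what I in fact do; Caesar in fact crossed
SUPPORT = WORLDS            # worlds with positive credence: I'm unsure about Caesar

# All nonempty propositions over the four worlds.
PROPS = [frozenset(c) for r in range(1, 5)
         for c in combinations(sorted(WORLDS), r)]

def can_make_true(X):
    """Some act guarantees X, the act-independent fact held at its actual value."""
    return any((act, ACTUAL[1]) in X for act in ACTS)

def can_make_false(X):
    return any((act, ACTUAL[1]) not in X for act in ACTS)

def unknown_fixed(X):
    # (i) true and the agent can't act at will to make it false;
    # (ii) not probability 1 (some positive-credence world falls outside X).
    return ACTUAL in X and not can_make_false(X) and not SUPPORT <= X

def is_open(X):
    # X entails no unknown fixed proposition, and the agent can make X true.
    return can_make_true(X) and not any(X <= U for U in PROPS if unknown_fixed(U))

OPEN = [X for X in PROPS if is_open(X)]
OPTIONS = [X for X in OPEN if not any(Y < X for Y in OPEN)]

P = frozenset(w for w in WORLDS if w[0] == "raise")
Q = frozenset(w for w in WORLDS if w[1])

assert unknown_fixed(Q)                         # Caesar's crossing is unknown fixed
assert not is_open(P & Q)                       # so P & Q is not an option
assert all(not X <= Q for X in OPTIONS)         # no option entails Q (Constraint 1)
assert frozenset().union(*OPTIONS) == SUPPORT   # their disjunction has probability 1
```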