Decision-making under determinism
Suppose you have a choice between two options, say, raising your arm and lowering your arm. To evaluate these options, we should compare their outcomes: what would happen if you raise your arm, what if you don't? But we don't want to be misled by merely evidential correlations. Your raising your arm might be evidence that your twin raised their arm in a similar decision problem yesterday, but since you have no causal control over other people's past actions, we should hold them fixed when evaluating your options. Similarly, your choice might be evidentially relevant to hypotheses about the laws of nature, but you have no causal control over the laws, so we should hold them fixed. But now we have a problem. The class of facts outside your causal control is not closed under logical consequence. On the contrary, if the laws are deterministic then facts about the distant past together with the laws logically settle what you will do. We can't hold fixed both the past and the laws and vary your choice.
Arif Ahmed assumes that Causal Decision Theory advises us to hold fixed the past but not the laws. It is easy to see that this leads to trouble.
Imagine you have strong evidence that some proposition L expresses the fundamental laws of nature, where L is deterministic. As part of a group of scientists, you are asked for your opinion on L. By raising your arm you can signal that L is false; by lowering it, that L is true. You want to give a true signal. Intuitively, you should lower your arm. But if L is indeed true and you lower your arm, then any possible situation in which you raise your arm, and in which the distant past is just as it actually is, must be a situation in which L is false. And in any such situation you signal a truth by raising your arm.
The problem can be brought into sharper focus by assuming a combination of Gibbard and Harper's (1978) formulation of decision theory and Lewis's (1979, 1981) account of counterfactuals. Let P be the hypothesis that the physical state of the early universe together with L entails that you'll lower your arm. According to Lewis, if L is true and in fact you're going to lower your arm, then what would have been the case if you had raised your arm is that P would still have been true but L false, and so you would have signalled something true. Indeed, according to Lewis, in every possible situation in which you would give a true signal by lowering your arm, you would also give a true signal by raising it. (If the closest Raise worlds to w are L worlds, then w is an L world, and then the closest Lower worlds are not-L worlds.) So lowering your arm would never yield a better outcome than raising it. On the other hand, since you are not absolutely certain that L is true, you give non-zero credence to not-L situations in which raising your arm would signal something true and lowering something false. (Assuming that in these not-L situations, L would still have been false whichever option you had chosen, which should be true for at least almost all of them.) Thus if you evaluate your options by considering what would be the case if you were to choose them (a la Gibbard and Harper 1978), it looks like you should raise your arm.
(This is the "counterexample to Causal Decision Theory" discussed in Ahmed 2013.)
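The Gibbard-Harper verdict can be made concrete with a toy computation. The credences and the counterfactual signal values below are stipulated for illustration, following the Lewisian reasoning just sketched; nothing here is from Ahmed 2013.

```python
# Toy Gibbard-Harper expected utility (illustrative assumptions only).
# Per the Lewisian reasoning in the text: in L worlds either option
# would yield a true signal (a law-breaking "miracle" makes the Raise
# signal true); in not-L worlds only raising would.
credence = {"L": 0.99, "not-L": 0.01}  # hypothetical credences

# would your signal be true, per world type and option?
true_signal = {
    ("L", "Raise"): True,       # had you raised, L would have been false
    ("L", "Lower"): True,       # you signal L, and L is true
    ("not-L", "Raise"): True,   # L would still have been false
    ("not-L", "Lower"): False,  # you signal L, but L is false
}

def gh_utility(option):
    """Credence-weighted value of what would be the case if the option
    were chosen (1 = true signal, 0 = false signal)."""
    return sum(cr * true_signal[(w, option)] for w, cr in credence.items())

print(gh_utility("Raise") > gh_utility("Lower"))  # True: Raise wins
```

With these (hypothetical) numbers, Raise weakly dominates: it gets utility 1 in both world types, while Lower loses in the not-L worlds.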
Note that if you go ahead and raise your arm, then you're signalling something of which you're confident that it is false. Imagine a colleague who hasn't raised their arm wonders, "Don't you accept L, given all our evidence?" -- Your response would have to be something like this: "I do believe in L. But since I'm raising my hand, L would have been false if I hadn't raised it. So I had no choice but to signal something false. (Incidentally, you should be grateful, for if I hadn't raised my hand, your signal would have been false.)" Your colleague will hardly be satisfied with this bizarre defence.
The problem is that our Gibbard-Harper-Lewis theory wrongly assumes that the laws of nature are under our control. How can we fix this? The most obvious alternative is to change the standards for the relevant counterfactuals in such a way that the laws are privileged over the distant past. In this case, if L is true and you lower your arm, then what would have been the case if you had raised your arm is that L would still have been true but P false.
But this also looks problematic. P is a proposition about the intrinsic physical state of the world in the distant past, and intuitively this is not affected by your present choice. Worse, it looks like we can recover our problem by considering a situation in which you have strong independent evidence for P (perhaps God told you) and in which raising your arm would signal that P is false. Here presumably you shouldn't raise your arm. But on the revised Gibbard-Harper-Lewis account, if you had raised your arm, it is likely that P would have been false, so you would still have signalled something true. More generally, there is no world with deterministic laws in which lowering your arm is better than raising it: if the closest Lower worlds are P worlds then the closest Raise worlds are not-P worlds. On the other hand, in some deterministic worlds P is false no matter what you do, and then you're better off raising your arm. So if your credence is concentrated on deterministic worlds, it looks like you should raise your hand and signal that P is false, despite your strong evidence in favour of P.
We can tighten the knot. Assume you have strong independent evidence for both L and P. By raising or lowering your arm you can signal whether you accept or reject their conjunction. (We may also assume that your evidence is not misleading: L and P are in fact true.) Intuitively, you should signal acceptance by lowering your arm. But note that raising your arm is (not just counterfactually, but logically) incompatible with L and P. Thus on the supposition that you raise your arm, the conjunction of L and P must be false, no matter how exactly the supposition works -- even if it works not subjunctively but by indicative conditionalization, as "Evidential Decision Theory" suggests. In general, on the supposition that you raise your arm it is logically guaranteed that you signal truly. Not so under the supposition that you lower your arm. So on this way of evaluating options you should raise your arm.
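The evidential evaluation can also be sketched numerically. The worlds and credences below are illustrative assumptions; the only structural constraint taken from the text is that no world combines Raise with L & P.

```python
# Toy evidential evaluation: assess an option by conditionalizing on it.
# Worlds and credences are hypothetical; since L & P entails Lower,
# no world pairs Raise with L & P.
worlds = [
    # (L & P true?, act, prior credence)
    (True,  "Lower", 0.98),
    (False, "Raise", 0.01),
    (False, "Lower", 0.01),
]

def signal_true(lp, act):
    # Raise signals that L & P is false; Lower signals that it is true
    return (not lp) if act == "Raise" else lp

def edt_value(option):
    """Probability of signalling truly, conditional on choosing the option."""
    total = sum(cr for lp, act, cr in worlds if act == option)
    return sum(cr * signal_true(lp, act)
               for lp, act, cr in worlds if act == option) / total

print(edt_value("Raise"))  # 1.0: given Raise, the signal is guaranteed true
print(edt_value("Lower"))  # just under 1
```

The point survives any choice of numbers: conditional on Raise, a true signal is logically guaranteed; conditional on Lower, it is not.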
What shall we make of all this? Perhaps the lesson is that we shouldn't evaluate options by looking at possible situations in which you choose them. Then we don't face the choice of holding fixed either the laws or the past.
Consider Savage's 1954 formulation of Causal Decision Theory. Here the space of possibilities is partitioned into states which together with any option determine an outcome. In this framework, we could take L & P as a state, relative to which the option Raise leads to the outcome Signal falsely (in all three problems mentioned above) while Lower leads to Signal truly. Since most of your credence lies on the L & P state, and you want to signal truly, Savage's theory then says that you should choose Lower.
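A minimal sketch of this Savage-style representation, with hypothetical credences (the state and outcome labels follow the text; the second state and its outcomes are stipulated for illustration, not from Savage 1954):

```python
# Savage-style evaluation: states combine with options, via a table,
# to determine outcomes. Numbers and the "other" state are assumptions.
credence = {"L & P": 0.98, "other": 0.02}  # hypothetical credences

outcome = {
    ("L & P", "Raise"): "Signal falsely",
    ("L & P", "Lower"): "Signal truly",
    ("other", "Raise"): "Signal truly",    # stipulated for illustration
    ("other", "Lower"): "Signal falsely",  # stipulated for illustration
}

value = {"Signal truly": 1, "Signal falsely": 0}

def savage_eu(option):
    # expected utility: credence-weighted value of the outcome the
    # option yields in each state
    return sum(cr * value[outcome[(state, option)]]
               for state, cr in credence.items())

print(max(["Raise", "Lower"], key=savage_eu))  # Lower
```

Since almost all credence sits on the L & P state, in which Lower is mapped to Signal truly, Lower comes out best.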
Of course, more needs to be said about what makes this the correct representation of your decision problem. In particular, why is the option Raise adequately represented by a function that maps the L & P state (which logically entails that you signal truly!) to the outcome Signal falsely?
One somewhat attractive way of rendering this more plausible is to assume that (necessarily) the laws of nature specify (implicitly or explicitly) the results of various possible "interventions". Then the causal structure represented by L & P might entail that (1) you will lower your hand, but also that (2) if the Lower event were replaced by a Raise event, a Signal falsely event would occur, where (2) is an "interventionist counterfactual" in the style of (say) Pearl 2000.
Obviously, the hypothetical Signal falsely event is not a causal consequence of the Raise intervention. The counterfactual (2) is a non-causal counterfactual. But we might still suggest that in the L & P world, the "variable" Signal truly is fixed to equal the "variable" Lower, due to the robust convention that Lower means to signal L and the fact that L is indeed true. Fortunately, the interventionist counterfactuals required for this application all have quite specific antecedents, representing an agent's choice. So we don't need to enter the tricky issue of how to interpret interventionist counterfactuals with unspecific antecedents.
We might relabel and generalize the interventionist counterfactuals as special kinds of "conditional chance" statements, making contact with Skyrms's 1984 formulation of Causal Decision Theory. Here the expected utility of an option A is computed as
EU(A) = \sum_K Cr(K) \sum_C Ch_K(C / A) V(C),
where K ranges over complete chance hypotheses and C over outcomes. Now intuitively, the laws alone don't fix the chances -- we also need boundary conditions. L and P together certainly do fix the chances. What do they say about the chance of Signal falsely conditional on Raise? Presumably, Raise will have chance zero, so the conditional chance is not defined by the usual ratio formula. But conditional chances are better taken as primitive anyway. And then, by the same sketchy reasoning as above -- that Raise means Signal not-L and that L is true -- we might argue that Ch_K(Signal falsely / Raise) = 1, where K is the chance hypothesis captured by L & P. Since most of your credence goes to this hypothesis, Skyrms's formula says that you should Lower.
(The problem of unspecific antecedents emerges here as the problem that conditional chances are only defined for rather specific conditions. Again, we can plausibly avoid this problem.)
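Skyrms's formula can likewise be spelled out in a toy computation. The conditional chances here are taken as primitive and simply stipulated, following the sketchy reasoning above; the second chance hypothesis K' and all the numbers are illustrative assumptions.

```python
# Toy version of EU(A) = \sum_K Cr(K) \sum_C Ch_K(C / A) V(C),
# with primitive conditional chances. All values are assumptions.
cr = {"L & P": 0.98, "K'": 0.02}  # credence over chance hypotheses

# primitive conditional chances Ch_K(outcome / option): under L & P,
# Raise itself has chance zero, but Ch(Signal falsely / Raise) = 1
# all the same, per the reasoning in the text
ch = {
    ("L & P", "Raise"): {"Signal truly": 0.0, "Signal falsely": 1.0},
    ("L & P", "Lower"): {"Signal truly": 1.0, "Signal falsely": 0.0},
    ("K'",    "Raise"): {"Signal truly": 1.0, "Signal falsely": 0.0},
    ("K'",    "Lower"): {"Signal truly": 0.0, "Signal falsely": 1.0},
}

v = {"Signal truly": 1.0, "Signal falsely": 0.0}

def skyrms_eu(option):
    return sum(cr[k] * sum(ch[(k, option)][c] * v[c] for c in v)
               for k in cr)

print(skyrms_eu("Lower") > skyrms_eu("Raise"))  # True: the formula says Lower
```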
It would be nice if we could be a little less sketchy. Skyrms in fact offers an informative analysis of chance and conditional chance. On this account, the chance of A at world w relative to a given agent is the agent's prior credence Cr_0 conditional on w's cell within a certain partition (which is determined by symmetries in Cr_0). Accordingly, the conditional chance of B given A at w is Cr_0 of B conditional on the conjunction of w's cell and A. To get Ch_K(Signal falsely / Raise) = 1, we would need to assume that the cell of the L & P world contains Raise worlds that verify L. But then those worlds won't verify P, and we'll run into similar problems as with our revised Gibbard-Harper-Lewis account.
Another aspect of these proposals that bothers me is that they make use of outcomes that are either incomplete or inconsistent. Return to the interventionist counterfactual: if Raise then Signal falsely. One wants to ask what else would be the case if Raise. Would L still be true? Would P still be true? Would Raise be true? If the answer is "yes" each time, then contradictory things would be true (for L & P entails Lower). But how do we assign values to contradictory outcomes? Or, how do we calculate the value of option A under condition K if the counterfactual consequences are not closed under conjunction? (A similar problem arises for Lewis because he rejects the Limit Assumption.)