## Gustafsson on decision-making under ignorance

Decision theory textbooks often distinguish between *decision-making under risk* and *decision-making under uncertainty* or *ignorance*. The former is supposed to arise in situations where the agent can assign probabilities to the relevant states, the latter in situations where they can't.

I've always found this puzzling. Why would a decision maker be unable to assign probabilities (even vague or indeterminate ones) to the states? I don't think there are any such situations.

I haven't looked at the history of this distinction, but I suspect it comes from von Neumann, who (I suspect) had no concept of subjective probability. If the only relevant probabilities are objective, then of course it may happen that an agent can't make their choice depend on the probability of the states because these probabilities may not be known.

Anyway, while I don't think there are real situations in which a decision-maker can't assign probabilities to the states, I think it is nonetheless useful to study such situations. It is useful because it helps explain *why* we – and any minimally rational agent – can always assign probabilities to the states. The reason is that there is no good way to make decisions without them. All rules for decision-making under ignorance are terrible.

For example, consider the most famous such rule: Maximin. It says to choose the option with the best worst-case outcome. Imagine, for example, that you and I are walking in a park, and we come across a discarded plastic bag. It looks like something might be inside. You know that I have no more information about the bag than you. I offer you a deal: if the bag contains a red tetrahedron with the letters 'R.H.S.' inscribed on it in green ink, you have to give me a penny, otherwise I will give you a million pounds. Maximin says that you should reject the offer. It's a terrible rule.
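The bag example can be made concrete with a minimal sketch of Maximin (the option names and payoffs below are just my rendering of the case, with the penny and the million pounds in pounds):

```python
# A minimal sketch of the Maximin rule: choose the option whose
# worst-case outcome is best. The states are "the bag contains the
# inscribed tetrahedron" and "it doesn't".

def maximin(options):
    """Return the option whose minimum payoff across states is largest."""
    return max(options, key=lambda name: min(options[name].values()))

offer = {
    "accept": {"tetrahedron": -0.01, "no tetrahedron": 1_000_000},
    "reject": {"tetrahedron": 0.0, "no tetrahedron": 0.0},
}

print(maximin(offer))  # → reject
```

Accepting has a worst case of losing a penny; rejecting has a worst case of the status quo. So Maximin rejects, forgoing the near-certain million.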

OK, but this just shows that Maximin is terrible. There are infinitely many rules for decision-making under ignorance. How do I know that all of them are terrible?

Good question.

Gustafsson (2023) provides some steps towards an answer. He shows that the following five conditions are unsatisfiable. (Maximin violates the third.)

*Transitivity*: If option x is at least as preferred as option y and y is at least as preferred as option z, then x is at least as preferred as z.

*Expansion Consistency*: Whether option x is at least as preferred as option y does not change if another option is added to the situation.

*Strong Statewise Dominance*: If the outcome of option x is at least as preferred as the outcome of option y in every possible state of nature and the outcome of x is preferred to the outcome of y in some possible state of nature, then x is preferred to y.

*Pairwise State Symmetry*: If x and y are the only available options and the outcome of x is just like the outcome of y except for a permutation of two states of nature, then x and y are equally preferred.

*State-Individuation Invariance*: Whether option x is at least as preferred as option y does not depend on whether a state of nature is split into two states.

The first three conditions are standard assumptions about rational choice. The last two pertain specifically to decision-making under ignorance. Pairwise State Symmetry, in particular, is obviously absurd if the agent has probabilities over the states. In the absence of any such probabilities, however, it looks plausible.

One might have thought that an adequate rule for decision-making under ignorance should lead to preferences that satisfy all five conditions. Gustafsson shows that this is not possible.

The force of this observation depends on the plausibility of the five conditions. Unfortunately, several of them have been contested. For example, it has been argued that risk-averse agents don't need to comply with Strong Statewise Dominance (Buchak 2013), and that polite agents don't need to comply with Expansion Consistency (Sen 1993).

We can strengthen Gustafsson's challenge by noting that his result doesn't depend on the agent's values. Consider a simplistic agent who doesn't care about risk or politeness. This agent only cares about their immediate degree of pleasure (say). One might think that in a case of "ignorance" the agent's preferences should satisfy all five conditions. And that's impossible, no matter which decision rule the agent uses.

Admittedly, this doesn't show that every rule for decision-making under ignorance is *terrible*. It's not obvious that giving up State-Individuation Invariance, for example, is always terrible.

In fact, giving up this condition is precisely what Gustafsson suggests. His starting point is *Laplace's Rule*. The rule says that in a situation of ignorance one should give equal probability to each state of nature and then maximize expected utility. This violates State-Individuation Invariance. But that's OK, Gustafsson says, because we can describe a privileged partition of states to which the rule should be applied:

State S should be distinguished from S' iff there is a possible option for which it is permissible to prefer its outcome in S to its outcome in S'.
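Once some privileged partition is fixed, the rule itself is simple. Here is a minimal sketch, reusing the payoffs from the bag example above (the partition is just assumed as given; nothing in the code settles how states are individuated):

```python
# A minimal sketch of Laplace's Rule, assuming a privileged partition
# of states has already been fixed: give every state equal probability
# and maximize expected utility.

def laplace_choice(options):
    """options: dict mapping option name -> list of utilities, one per state."""
    def expected_utility(utilities):
        return sum(utilities) / len(utilities)  # uniform probabilities
    return max(options, key=lambda name: expected_utility(options[name]))

# With the two bag states, Laplace's Rule accepts the offer:
offer = {"accept": [-0.01, 1_000_000], "reject": [0.0, 0.0]}
print(laplace_choice(offer))  # → accept
```

Unlike Maximin, the rule here trades the penny's risk against the million-pound upside, which is why everything turns on how the states are carved up.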

I've explained in this post why I think it's a bad idea to impose rational constraints on the individuation of outcomes, as Gustafsson here does. But let that pass. The important point is that one can try to identify a privileged partition for applying Laplace's indifference requirement. See, for example, Mikkelson (2004), White (2010), and Weisberg (2011) for proposals that don't involve constraints on the individuation of outcomes.

Depending on how we choose the partition, Laplace's Rule may or may not be terrible. I'm not sure if Gustafsson's version of the rule is coherent, but if it is then I suspect that it is fairly terrible.

To see why, imagine you have seen 1000 ravens, all of which were black. You now have to choose between deal A and deal B. Deal A gives you £1 if the next raven is black, deal B gives you £1 if it is not. The problem is that there are more ways for the next raven to be non-black than for it to be black (informally speaking). We have to treat all these ways as different because it is easy to imagine possible options that lead to relevantly different outcomes depending on the colour distribution over as yet unobserved ravens. Gustafsson's version of Laplace's Rule therefore suggests that you should choose deal B. Having seen 1000 black ravens you would, in effect, bet that the next raven isn't black. That's terrible advice.
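The worry can be put in toy numbers. Suppose (purely for illustration) that Gustafsson's criterion distinguishes one "next raven is black" state from ten "next raven is non-black" states; the exact count doesn't matter, only that there is more than one non-black state:

```python
# Toy illustration of the raven worry: if the privileged partition
# contains one black state and several non-black states, Laplace's
# uniform probabilities favour betting against black.

n_nonblack_states = 10
n_states = 1 + n_nonblack_states
p_black = 1 / n_states  # equal probability for each state

ev_deal_A = p_black * 1        # £1 if the next raven is black
ev_deal_B = (1 - p_black) * 1  # £1 if it is not

print(ev_deal_A, ev_deal_B)  # deal B wins whenever there are >= 2 non-black states
```

The 1000 observed black ravens never enter the calculation, which is the problem.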

(In fact, all the cardinalities involved are probably infinite and I'm not sure how Gustafsson's rule would then apply. That's why I said that I'm not sure it is coherent.)

But I must admit that other versions of Laplace's Rule look better. Suppose we combine the Rule with Roger White's way of identifying the privileged partition. White (2010) suggests that two states should be given equal probability iff the evidence is neutral between them. White isn't interested in decision rules, so his "states" are just propositions. We're going to need something more fine-grained. But it's possible that this can be made to work.

For example, let's follow Lewis (1981) and identify outcomes with value-level propositions and states with dependency hypotheses. We may hope that each dependency hypothesis contains infinitely many fine-grained possible worlds, so that we can divide the dependency hypotheses until we have a partition over which the evidence is neutral.

If this works, the relevant form of Laplace's Rule looks OK. It's a non-terrible rule for decision-making under ignorance.

But of course it's not *really* a rule for decision-making under ignorance. The rule tells an agent who doesn't have degrees of belief that they should, first, adopt certain degrees of belief and then act in accordance with them.

I don't know how to properly delineate rules for decision-making under ignorance. When I say that all such rules are terrible, I don't have in mind rules like the Laplace-White rule that effectively introduce degrees of belief.

Buchak, Lara (2013). *Risk and Rationality*. Oxford: Oxford University Press.

Gustafsson, Johan E. (2023). "Decisions Under Ignorance and the Individuation of States of Nature". *Thought: A Journal of Philosophy*. doi.org/10.5840/tht202331416.

Lewis, David (1981). "Causal Decision Theory". *Australasian Journal of Philosophy* 59: 5–30.

Mikkelson, Jeffrey M. (2004). "Dissolving the Wine/Water Paradox". *British Journal for the Philosophy of Science* 55 (1).

Sen, Amartya (1993). "Internal Consistency of Choice". *Econometrica* 61 (3): 495–521. doi.org/10.2307/2951715.

Weisberg, Jonathan (2011). "Varieties of Bayesianism". In *Handbook of the History of Logic*, edited by Dov Gabbay, Stephan Hartmann, and John Woods, 10:477–551.

White, Roger (2010). "Evidential Symmetry and Mushy Credence". In *Oxford Studies in Epistemology*, vol. 3.