Is it ever rational to calculate expected utilities?

Decision theory says that faced with a number of options, one should choose an option that maximizes expected utility. It does not say that before making one's choice, one should calculate and compare the expected utility of each option. In fact, if calculations are costly, decision theory seems to say that one should never calculate expected utilities.

Informally, the argument goes as follows. Suppose an agent faces a choice between a number of straight options (going left, going right, taking an umbrella, etc.), as well as the option of calculating the expected utility of all straight options and then executing whichever straight option was found to have greatest expected utility. Now the straight option that the calculation would select (whichever it is) could also be taken directly. And if calculations are costly, taking that option directly has greater expected utility than taking it as a result of the calculation.

Let's fill in the details by working through an example, "without loss of generality".

There are two straight options, Left and Right. In addition, there's the option Calculate, which eventually leads to either Left or Right, depending on which of these is found to maximize expected utility. (If they are tied, let's stipulate that Calculate leads to Left.) In a non-trivial decision problem, the outcome of choosing Left and of choosing Right depends on the state of the world. We assume there are two relevant states, Bright and Dark. The expected utility of the straight options is given by (1) and (2). Here I've abbreviated Bright and Dark as 'B' and 'D', respectively; 'BL' denotes the outcome that results from going Left in a Bright world (similarly for 'DL', 'BR', and 'DR'). It will be useful to think of outcomes as collections of features the agent cares about.

(1) EU(Left) = P(B) U(BL) + P(D) U(DL).
(2) EU(Right) = P(B) U(BR) + P(D) U(DR).
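
To make the formulas concrete, here is a minimal Python sketch of (1) and (2). All numbers are made up purely for illustration:

```python
# Hypothetical credences over the two states and hypothetical utilities for
# the four outcomes; the numbers are purely illustrative.
P = {"B": 0.5, "D": 0.5}
U = {"BL": 10, "DL": 0, "BR": 2, "DR": 6}

# (1) EU(Left) = P(B) U(BL) + P(D) U(DL)
eu_left = P["B"] * U["BL"] + P["D"] * U["DL"]

# (2) EU(Right) = P(B) U(BR) + P(D) U(DR)
eu_right = P["B"] * U["BR"] + P["D"] * U["DR"]

print(eu_left, eu_right)  # 5.0 4.0
```

With these particular numbers, Left has the greater expected utility, so Calculate would end in going Left.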

What if Left is taken as a result of Calculate? In principle, completely different outcomes could come about. For example, there might be a state of the world in which the agent gets richly rewarded if she goes Left as a result of Calculating, but punished if she goes Left without Calculating. But clearly that's not the kind of case we're interested in, and it's not a case where calculating expected utilities is (known to be) costly. We're interested in cases where the outcomes that may result from going Left as a result of Calculate coincide with those that may result from going Left directly except for one feature in which they are worse, reflecting the cost of calculation.

So Calculate can lead to four possible outcomes which coincide with the four outcomes in (1) and (2) except for one respect in which they are worse. I'll therefore abbreviate these outcomes as 'BL-', 'DL-', 'BR-', and 'DR-', respectively. Thus BL- is the outcome of going Left in a Bright world as a result of Calculating, which is somewhat worse than BL (going Left in a Bright world without Calculating).

Let's keep track of the present assumption about the intrinsic cost of Calculating:

(3) U(BL-) < U(BL); U(DL-) < U(DL); U(BR-) < U(BR); U(DR-) < U(DR).

We have four possible outcomes because the result of Calculate depends not only on whether the world is Bright or Dark (B or D), but also on whether Calculate leads to Left or to Right (CL or CR). So we need to extend our state space from { B, D } to the product of { B, D } with { CL, CR }. Then:

(4) EU(Calculate) = P(B & CL)U(BL-) + P(D & CL)U(DL-) + P(B & CR)U(BR-) + P(D & CR)U(DR-).
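
Continuing the illustrative sketch from above, here is one way to fill in (4). Two modelling choices here go beyond the text and are assumptions of the sketch: the cost of calculation is represented as a fixed amount subtracted from each outcome's utility (one simple way to secure (3)), and the joint credences over the extended state space are again made-up numbers:

```python
# Same illustrative credences and utilities as in the sketch above.
P = {"B": 0.5, "D": 0.5}
U = {"BL": 10, "DL": 0, "BR": 2, "DR": 6}

# Assumption of the sketch: the cost of Calculating is a fixed amount
# subtracted from each outcome's utility, so U(BL-) = U(BL) - cost, etc.,
# which guarantees (3).
cost = 1.0

# Made-up joint credences over the extended state space {B, D} x {CL, CR}.
P_joint = {
    ("B", "CL"): 0.375, ("D", "CL"): 0.375,
    ("B", "CR"): 0.125, ("D", "CR"): 0.125,
}

# (4) EU(Calculate) = P(B & CL) U(BL-) + P(D & CL) U(DL-)
#                   + P(B & CR) U(BR-) + P(D & CR) U(DR-)
eu_calculate = (P_joint[("B", "CL")] * (U["BL"] - cost)
                + P_joint[("D", "CL")] * (U["DL"] - cost)
                + P_joint[("B", "CR")] * (U["BR"] - cost)
                + P_joint[("D", "CR")] * (U["DR"] - cost))

print(eu_calculate)  # 3.75, compared with EU(Left) = 5.0 and EU(Right) = 4.0
```

The joint credences here happen to treat B/D and CL/CR as independent, with P(CL) = 0.75 and P(CR) = 0.25, which is exactly the assumption introduced next; the result, 3.75, is just the corresponding mixture of EU(Left) - 1 and EU(Right) - 1.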

Now we need another assumption, namely that the immediate result of Calculate (going Left or going Right) is probabilistically independent of whether the world is Bright or Dark:

(5) P(B & CL) = P(B)P(CL); P(B & CR) = P(B)P(CR); P(D & CL) = P(D)P(CL); P(D & CR) = P(D)P(CR).

This may not always be the case, but the exceptions seem highly unusual. After all, our agent knows that the immediate result of Calculate is not sensitive to the external state of the world: it is fixed by her own probabilities and utilities.

Using (5), we can rearrange (4) as (6).

(6) EU(Calculate) = P(CL)[P(B)U(BL-) + P(D)U(DL-)] + P(CR)[P(B)U(BR-) + P(D)U(DR-)].

So EU(Calculate) is a mixture of EU(Left) and EU(Right), except that each term is made worse by the cost of calculation, as per (3). As a result, EU(Calculate) is always less than the greater of EU(Left) and EU(Right), and so never maximal. So -- with the possible exception of cases where assumption (5) fails -- Calculate is never a rational option. QED.
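
As a sanity check on the general claim (and not just on the particular numbers above), here is a rough sketch that draws random credences and utilities, enforces (3) by subtracting a positive cost from each outcome's utility and (5) by building the joint credences as a product, and confirms that EU(Calculate), computed as in (6), always lands below the greater of EU(Left) and EU(Right):

```python
import random

def check_once():
    # Random credences over {Bright, Dark} and over the result of Calculate;
    # multiplying the marginals below enforces the independence assumption (5).
    pB = random.random()
    pD = 1 - pB
    pCL = random.random()
    pCR = 1 - pCL

    # Random utilities for BL, DL, BR, DR, plus a strictly positive cost of
    # calculation, so that each '-' outcome is strictly worse (assumption (3)).
    uBL, uDL, uBR, uDR = (random.uniform(-10, 10) for _ in range(4))
    cost = random.uniform(0.01, 5)

    eu_left = pB * uBL + pD * uDL                                  # (1)
    eu_right = pB * uBR + pD * uDR                                 # (2)
    eu_calc = (pCL * (pB * (uBL - cost) + pD * (uDL - cost))
               + pCR * (pB * (uBR - cost) + pD * (uDR - cost)))    # (6)

    assert eu_calc < max(eu_left, eu_right)

for _ in range(100_000):
    check_once()
print("EU(Calculate) stayed below max(EU(Left), EU(Right)) in every sample.")
```

The underlying reason is visible in (6): EU(Calculate) is a weighted average of EU(Left) minus the cost and EU(Right) minus the cost, and a weighted average can never exceed the larger of its ingredients.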

What shall we make of this strange result? Here are two lines of response, not necessarily exclusive.

First, perhaps it's wrong to model the agent's options as Left, Right, and Calculate. Instead, we should distinguish between genuine act options, Left and Right, and process options such as Calculate. Calculate is a process option because it's a possible way of reaching a decision between the act options. Alternative process options are, for example, trusting one's instincts, or calculating which option has the best worst-case outcome and then going ahead with that option. Arguably you can't go Left without choosing some process option. You have to either follow your instinct, calculate expected utility, or use some other process. So it's wrong to compare Calculate with Left and Right. We should rather compare Calculate with other process options like trusting your instinct. Doing that, we'd probably get the intuitive result that it's sometimes rational to calculate expected utilities (to varying levels of precision), and sometimes to trust one's instincts.

The main problem with this line of response (I think) is that it's far from clear that one can't choose Left without first choosing a process for choosing between Left and Right. For how does one choose a process? By first choosing a process for choosing a process? The regress this starts is clearly absurd: when we make a decision, we don't go through an infinite sequence of choosing processes for choosing processes etc. And if the regress can stop at one level, why can't it also stop at the level before? Why can't one simply choose Left, without choosing any process for choosing between Left and Right?

That's not just a theoretical worry. When you come to a fork in the road, it really seems that you can do three things (among others): go left, go right, or sit down and calculate the expected utilities. Each of these is a genuine option. Of course, whatever you end up doing, there will be a psychological explanation of why you did it. Perhaps you did it out of habit, or out of instinct, or as the result of some further computation. But that's equally true for all three options. So I'm not convinced by the first line of response, although I'm also not convinced it can't be rescued.

Here's the second line of response. Calculating expected utilities is a form of a priori (mathematical) reasoning, and there's a well-known problem of making sense of such reasoning in the standard model of Bayesian agents.

More concretely, consider what the agent in the above example should believe about CL and CR. If she knows her own probabilities and utilities, and she knows (as we can assume) that Calculate would lead to choosing an option with greatest expected utility (or to Left in case of ties), then she must also know either that Calculate would lead to Left or that Calculate would lead to Right, for this follows from what she knows, and probability 1 is closed under logical consequence. And of course you shouldn't sit down and go through a costly calculation if you already know the result! From a strict Bayesian perspective, a priori reasoning is always a waste of time because the result is always already known.

When we think about whether an agent should calculate expected utilities, the agent we have in mind does not already know the answer. That seems to leave two possibilities: either the agent does not know her own probabilities and utilities, or she is not probabilistically coherent. But if the agent doesn't know her probabilities and utilities, it is unclear how calculating expected utilities is supposed to help. Moreover, intuitively the kind of agent we have in mind need not be uncertain about her own beliefs and basic desires. So it would seem that she must be probabilistically incoherent. But if we're dealing with incoherent agents, it's no longer clear that expected utility maximization is the right standard of choice. We can't assume that the agent should calculate expected utilities iff doing so would maximize expected utility.

The general point is that when we think about whether it's rational to calculate expected utilities, we have implicitly left behind the domain of perfect Bayesian rationality and turned to bounded rationality. Contrary to widespread thought, perfect Bayesian agents don't always calculate expected utility. They never calculate anything, because they already know the result. Before we can say what agents with bounded rationality should do -- including whether and when they should calculate expected utilities -- we need a good model of such agents.

Comments

# on 04 January 2017, 14:51

Have you seen Joe Halpern's work on these issues? The papers about "costly computation" and "resource-bounded agents" on his papers page are I think relevant.

# on 04 January 2017, 15:20

Hi Brian, yes. I need to think a bit more about how the issues relate. As I remember, Halpern's models assume that before making an ordinary choice, agents need to choose an algorithm (a Turing machine). Then it's not hard to see how it may sometimes be rational and sometimes irrational to fully compute EUs. So that sounds like a version of the first line of response. But I'm not convinced it gets at the heart of the problem. Halpern et al. briefly mention that the agents in question are uncertain about the outcome of the various algorithms, which requires non-trivial credences over impossible worlds. I think poking a little more at that point might be fruitful. Anyway, thanks for the reminder.

# on 08 February 2017, 16:26

Two related resources that anticipate / dispute these points:

Howard Raiffa's "Decision Analysis: Introductory Lectures" has a section in chapter 9, the Art of Implementation, addressing this.
I'm not going to retype it here, but see https://books.google.com/books?id=TDwdAQAAMAAJ&focus=searchwithinvolume&q=%22can+you+do+a+decision+analysis+of%22

The other related resource is Chris Sims's "Why There Are No True Bayesians", which is funny, and is available in full here - http://sims.princeton.edu/yftp/Bayes250/NoRealBayesians.pdf
"That is, Bayesian decision theory pays no attention to costs of computation or to the possibility that we can be uncertain about something just because we don’t know how to perform a calculation in the available time."

# on 08 February 2017, 18:19

Thanks David, I didn't know either of these.

# on 14 March 2017, 14:42

Could you please make an article on philosophy and artificial intelligence? I would like to see how we could possibly teach philosophy to computer programs.

# on 15 March 2017, 13:54

This whole article makes me think of genetic algorithms in gaming or more general forms of computation, and tradeoffs in biology or just plain intuition. Nature has shown us a great heuristic: "A good enough decision right now, is better than a perfect decision too late to do any [expletive deleted] good." (I believe that's the quote)

This is obviously because in the real world, you have many, many possibly related choices, and also competition over algorithms to find some shortest path to a decision, bound by optimizations of time/step versus benefit/step. If you want examples of "good approximate solutions" in nature, you will find plenty in biology. Interestingly, this issue comes up in video games, since you often need challenging, non-cheating AIs in order to improve the so-called "fun units" without annoying your players too much. If the AI is less stupid (but not so perfect that it effectively always out-optimizes you no matter your skill level), it makes the game a bit more natural to play and entertaining! Nothing breaks suspension of disbelief like bots running into walls, shooting randomly, or no-clipping their way across the level.

PS: You obviously were leading us on to thinking about AI, as mentioned by a previous commenter.
