Values and consequences in economics and quantum mechanics

One of the novelties in Richard Jeffrey's "Logic of Decision" (1965) was to unify the space over which probabilities and values are defined: both probability and desirability are distributed over the space of possible worlds, of ways things might be. By contrast, in earlier theories like that of Savage, probabilities were defined over states (or events) and utilities over consequences, which were taken to be distinct kinds of things. Technically, this difference between Savage and Jeffrey isn't terribly important as long as anything an agent may care about can be found in the set of 'consequences'. However, the distinction and the labeling in Savage's treatment carry the danger of overlooking the complexity of human values. This has, I believe, led to a number of serious mistakes.

Here are some things one might value, or desire: getting 10 Euros, eating ice-cream, rolling down a hill, being healthy, being poor, owning a donkey.

Here are some other things: having learnt Latin as a child, having discovered the incompleteness theorems, not descending from apes, getting 10 Euros next month, being paid as much as one deserves, there being trees in two hundred years, the world being created by an intelligent being.

Savage-style theories, which still dominate in economics, tend to ignore desires of the second type. Thus if I prefer 10 Euros over 10 apples today, but 10 apples over 10 Euros tomorrow, then many economic models would say that my preferences have changed. However, let A- be the proposition that I receive 10 apples now while not having received 10 Euros yesterday; let A+ be the proposition that I receive 10 apples now while having received 10 Euros yesterday. Suppose my preference order both today and tomorrow is A+ > 10 Euros > A-. Then I will take the 10 Euros today and the 10 apples tomorrow, and my preferences won't have changed. Similarly, as Robert Aumann pointed out to Savage, it can make a big difference to the desirability of receiving $100 whether one's wife survives a dangerous operation. In general, the 'consequences' to which real people assign values aren't just simple, temporary events ('receiving $100'), but entire histories that may involve the distant past and future and the well-being of other people. They are entire ways things might be. The alleged distinction between states and consequences therefore becomes insubstantial (as Aumann also noticed).
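
Here is a toy restatement of this example in code; the utility numbers are of course invented, and 'A+' and 'A-' abbreviate the history-propositions just defined:

```python
# Toy restatement of the euros/apples example (the utility numbers are invented).
# Preferences attach to history-propositions, not to momentary goods.

utility = {
    "A+":    8,  # 10 apples now, having received 10 Euros yesterday
    "euros": 5,  # receiving 10 Euros now
    "A-":    2,  # 10 apples now, not having received 10 Euros yesterday
}

# Today (no Euros were received yesterday): taking the apples would realise A-.
print(utility["euros"] > utility["A-"])  # True: take the Euros today
# Tomorrow (Euros were received yesterday): taking the apples realises A+.
print(utility["A+"] > utility["euros"])  # True: take the apples tomorrow
# One unchanged preference order, A+ > 10 Euros > A-, explains both choices.
```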

The misconception that values are attached only to simple, temporary events shows up (for example) in the Discounted Utility model, the currently dominant framework for policy evaluation. Suppose values are only assigned to things like 'getting 10 Euros', irrespective of what happens earlier or later. How then should we make decisions whose consequences extend far into the future (say, whether to act on climate change or not)? A natural thought is to aggregate the costs and benefits for our successors and choose the option with the highest (expected) aggregated utility. If we care more about closer than about distant successors, we might add a 'discount rate', giving less weight to the costs and benefits for more distant successors. This is the Discounted Utility model, and it is supposed to reflect our actual judgements regarding long-term decisions.
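
As a rough sketch of how the model works (the discount rate and the utility streams for the two hypothetical policies are made up):

```python
# A minimal sketch of the Discounted Utility model (all numbers are made up).
# An option is valued by the discounted sum of the per-period utilities it brings about.

def discounted_utility(per_period_utilities, discount_rate=0.03):
    """Return sum_t u_t / (1 + discount_rate)**t."""
    return sum(u / (1 + discount_rate) ** t
               for t, u in enumerate(per_period_utilities))

# Two hypothetical climate policies, as streams of per-generation utilities:
act_now    = [-5, 2, 4, 6, 8]     # upfront costs, growing future benefits
do_nothing = [0, 0, -2, -6, -10]  # no costs now, mounting costs later

# The model recommends whichever option has the higher discounted sum.
print(discounted_utility(act_now), discounted_utility(do_nothing))
```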

Since in reality our values are not restricted to 'getting n Euros', it is unsurprising that the Discounted Utility model systematically fails in empirical tests (see Frederick et al. 2002 for a survey). The things we value include entire histories, and there is no reason why the value we attach to a history should be proportional to the aggregate of goods we receive over this history. Many things I value about histories (like an end to the Darfur war) have nothing to do with goods I receive; some even concern times long after I'm dead. As for the distribution of goods in my life, my preferences are not at all determined by (discounted) aggregation: some goods I'd rather have now, others later, and often I'd prefer an even distribution over getting everything tomorrow.

Like other expected utility theories, the Discounted Utility model can be derived from 'qualitative' axioms about preferences, in this case preferences over histories. Such axioms were found by Tjalling Koopmans (1960). Economists tend to find them "intuitively compelling" (e.g. Rick and Loewenstein 2008, p.144), though they clearly reveal the misconception about values. For instance, Koopmans's axiom of stationarity holds that agents prefer history h1 over h2 iff they prefer the extended history a,h1 (extended by prefixing a) over the extended history a,h2. But couldn't I prefer having a glass of wine before going to bed over having water, but have the reverse preference if both alternatives are prefixed by drinking a bottle of vodka? (Koopmans, it should be mentioned, was well aware of these limitations, and had little faith in the Discounted Utility model.)
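
For illustration, here is the wine/vodka case written out with invented utilities over whole evening-histories; the preferences violate stationarity without any obvious irrationality:

```python
# Invented utilities over whole histories, violating Koopmans's stationarity axiom.
# Stationarity: h1 is preferred to h2 iff the prefixed history (a, h1) is preferred to (a, h2).

utility = {
    ("wine",):           5,   # a glass of wine before bed
    ("water",):          3,   # a glass of water before bed
    ("vodka", "wine"):  -10,  # a bottle of vodka, then wine
    ("vodka", "water"):   1,  # a bottle of vodka, then water
}

def prefers(h1, h2):
    return utility[h1] > utility[h2]

print(prefers(("wine",), ("water",)))                  # True: wine over water
print(prefers(("vodka", "wine"), ("vodka", "water")))  # False: the vodka prefix reverses it
```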

Next, the interpretation of quantum mechanics. On the Everett ('many worlds') account, every physically possible result of a quantum measurement actually occurs, but each on its own 'branch' of the universe. What is traditionally understood as the chance of a particular outcome is now represented as the weight of the relevant branch. The problem then is to explain why these weights can play the role of probabilities in theory confirmation and rational action. David Deutsch (1999) and David Wallace (in several papers) claim to have found this explanation, by proving that rational agents who know that their world is about to fission into several distinct successors s, each with a particular quantum weight w(s), will choose actions that maximize the average of the utilities on the individual branches, weighted by the branch weights. Branch weights therefore play exactly the role of subjective probability in guiding rational action. (See Greaves 2006 for an introduction to the Deutsch-Wallace program.)

We don't need to follow the proof very closely to see where it rests on Savage's misconception about human values. The basic idea is to start with axioms concerning qualitative preferences among measurements and their consequences -- much like Koopmans's axioms concerning preferences among histories -- and show that these axioms are uniquely represented by a distribution of probabilities and values in which the probabilities match the branch weights. (I find the proof in Wallace 2003 particularly clear; see Wallace 2002, Wallace 2005, and Greaves 2004 for variations.) Somewhat more precisely, let a game be a triple of a physical state, an operator (on the relevant Hilbert space -- think of something like momentum or position), and a function returning monetary values depending on the operator's value for the state. A physical process realises a game <s,o,p> iff it consists in performing a measurement of o on s and handing out money in accordance with p and the measurement result (on each branch). Suppose now that rational agents have a preference order over games. Under apparently harmless assumptions, it can be shown that this order is uniquely represented by a utility function that identifies the utility of a game with the expectation of its payoff relative to the branch weights, i.e. with \sum_c w(c)c, where w(c) is the sum of the weights of branches with payoff c. This means that in choosing between games, rational agents will act as if they were uncertain about which outcome would occur, and as if they distribute their credence in accordance with the branch weights.
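
In code, the represented utility of a game is just the payoff expectation relative to the branch weights; the weights and payoffs below are invented for illustration:

```python
# The branch-weighted expectation sum_c w(c)*c for a quantum 'game'
# (branch weights and payoffs are invented for illustration).

def game_utility(branches):
    """branches: list of (weight, payoff) pairs, with weights summing to 1."""
    return sum(weight * payoff for weight, payoff in branches)

# A measurement with two outcomes: 10 Euros on a branch of weight 0.8,
# nothing on a branch of weight 0.2.
game = [(0.8, 10), (0.2, 0)]
print(game_utility(game))  # 8.0 -- the agent acts as if 0.8 were her credence in that outcome
```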

There are two problems with this, both due to the Savage-style division between states and consequences (payoffs). To see the first, note that agents don't actually face choices between 'games' (triples of a state, an operator and a function), but only between actions -- physical processes that at most realise a game. Deutsch (implicitly) and Wallace (explicitly) assume that the utility of such an action equals the utility of any game it realises. The specification of the game must therefore contain everything the agent may care about in a given realisation. Why should that be so? Because it is assumed that all the agent cares about is the payoff, which is specified in the game.

As long as all we care about are things like 'receiving 10 Euros', this is fine. But what if I care about what kind of process made me receive 10 Euros -- say, how much carbon dioxide it produced, or whether it killed my entire family? Then the desirability of an action is not determined by what games it realises. When Wallace (2003:23) claims that "there is no rational justification" for preferences among games with the same payoff, he overlooks history-dependent values. (What if we replace the 'payoffs' with everything an agent might care about? A game is then, in effect, a centered specification of an entire branching universe, and multiple realisation becomes impossible. Unfortunately, the 'apparently harmless' axioms required to derive the probability rule then cease to make sense.)

The other problem is the assumption that the present value of a bet equals the expected payoff we will receive from it later. If this were true then, in order to determine the value of a quantum bet, we'd first have to figure out what we should think about the payoff we can expect to receive in cases where we're about to undergo branching. Proponents of the Deutsch-Wallace program therefore spend a lot of ink debating issues about personal identity and expectations in the face of fission. However, in a Jeffrey-style theory, the value of a bet is not the expectation of its later payoff. The value of a bet -- just like the value of any other action -- is the expectation of its current utility. This works because utilities are defined for things like 'receiving 10 Euros tomorrow'.
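
A rough contrast sketch, with invented numbers: on the Jeffrey-style picture we weight present utilities of whole propositions about the future, not anticipated later utilities of payoffs.

```python
# Contrast sketch with invented numbers: two ways of valuing a 50/50 bet.

credence = 0.5

# Payoff-expectation picture: value = expected utility of the *later* payoff.
later_payoff_utility = {"win": 10, "lose": 0}
payoff_value = sum(credence * u for u in later_payoff_utility.values())

# Jeffrey-style picture: value = expected *present* utility of whole propositions,
# which can depend on history and on times when I no longer exist.
present_utility = {
    "I receive 10 Euros tomorrow": 6,
    "I receive nothing tomorrow":  1,
}
jeffrey_value = sum(credence * u for u in present_utility.values())

print(payoff_value, jeffrey_value)  # 5.0 vs 3.5 -- the two valuations can come apart
```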

So let's look at the present utilities I might assign to hypotheses involving branching. I might give high marks to scenarios where I receive money on branches with high weight. But I might just as well prefer receiving money on branches with low weight. As mentioned above, I might also care about what sort of event has caused the branching. And I might care about what happens on branches on which I don't even exist. There is no reason to assume that rational preferences are determined by whether or not I will exist on a branch, and what payoffs I will receive there. To think that today's value of 'receiving 10 Euros tomorrow' is simply a matter of tomorrow's value of 'receiving 10 Euros' is to repeat the mistake of the Discounted Utility model.

Comments

# on 08 January 2009, 15:43

Hi wo,

Interesting and thought-provoking post. I'd like to jump in at this stage:

"When Wallace (2003:23) claims that "there is no rational justification" for preferences among games with the same payoff, he overlooks history-dependent values. (What if we replace the 'payoffs' with everything an agent might care about? A game is then, in effect, a centered specification of an entire branching universe, and multiple realisation becomes impossible. Unfortunately, the 'apparently harmless' axioms required to derive the probability rule then cease to make sense.)"

My reading of Wallace is that he is thinking of payoffs as including 'everything an agent might care about', but assuming that this is restricted to events which lie futurewards of the end of the game in the branching structure. So whether or not CO2 is produced by the process which gives you 10 euros is factored into the payoff; the payoffs should be thought of as 'lots of CO2 plus 10 euros for me' vs 'less CO2 but no euros for me'.

In particular, the assumption is that completely transient features of games - events which occur during the 'playing' of the game but records of which are completely erased once the game is over - aren't relevant to the assessment of the game. For example, if a game involves killing your family, then tossing a coin, then restoring your family to perfect health and removing their memories (and yours) of the killing, it should have the same value as a game which simply involves tossing the coin. Wallace explicitly discusses this 'Erasure' assumption, and if we make it I think the CO2 and family-killing examples can be deflected.

But your argument seems more general than this. In particular, I get the impression you want to allow for strongly non-local preferences, those not restricted to events futurewards in the branching structure of the game being considered. An example due to Elga in an Oxford seminar a few years back (in a talk which I think is still unpublished) is the desire for diversity among worlds. I might prefer being an astronaut to being a philosopher, but prefer being an astronaut on one branch and a philosopher on another. These kinds of preferences, if coherent, would lead to big problems for the Deutsch-Wallace argument. I guess these are the problems you allude to in the passage I quoted.

My understanding is that Wallace would treat such desires as Elga's alleged desire for branch diversity as incoherent - that they shouldn't count as desires at all, because one criterion for being a desire is that there be some physically possible process you can carry out to satisfy it. Wallace cites as an example of an incoherent desire the desire to date someone with an odd number of molecules in her body. But I'm guessing this response is controversial.

One other thought. I'm toying with an approach to Everett where each agent exists (strictly and literally) in only one 'branch', and actuality is indexical. The picture is of a set of spatio-temporally isolated worlds, structurally similar to Lewisian modal realism. We can use counterpart theory to make sense of modal talk, quantifying over counterparts in other 'branches'. If we make the assumption (perhaps a strong one) that we only care about actual events, this immediately rules out the Elga-style preferences that make trouble for the Deutsch-Wallace argument. But it seems quite likely that this picture of Everett breaks the argument in other places.

Cheers,
Alastair

# on 11 January 2009, 05:01

Thanks Alastair, that's very helpful!

You're right that Wallace could handle the CO2 and killed-family cases by looking at their consequences at the end of the game (dead family, more CO2 in the atmosphere). I should have used examples with 'erasure': I would certainly object to a game in the course of which I and my family are tortured, with all traces of the torturing erased at the end. I don't think there is anything wrong or incoherent with this preference.

And yes, I also worry about non-local preferences such as Elga's, though his example strikes me as strangely artificial. There are many more natural examples of this kind. For instance, I have strong preferences among setups that are certain to leave me dead, but have different implications for the rest of the world. I'm sure this is quite common: why else would we have a convention of writing wills? I also think it is common to have physically unsatisfiable desires, like a desire to have learned Latin as a child, or to not have said certain things.

I see that ruling out desires about 'non-actual' branches would avoid the second type of case on the account you're toying with (though only by stipulation). But the other problem seems to remain: if we care about entire branches, not only their future at the end of the game, the 'erasure' argument doesn't go through. I'm not sure what that does to Wallace's argument: I find his assumptions about payoff preferences hard to understand on this more relaxed (and realistic) view of desires.

# on 12 January 2009, 14:13

> I would certainly object to a game in the course of which I and my family are tortured, with all traces of the torturing erased at the end.

Fair point. But perhaps Wallace can restrict the argument to cases where the erasure is physically possible - which would rule out cases of torture, and indeed probably all cases with a long enough time-scale for any experience to be possible. If the argument only applies to games which involve no experience, the erasure assumption seems more plausible. And isn't it enough if the D-W argument justifies the Born rule in cases of simple microscopic chances, given that macro-chances supervene on micro-chances? Maybe I've overlooked something obvious, as this move seems suspiciously easy.

> For instance, I have strong preferences among setups that are certain to leave me dead, but have different implications for the rest of the world.

Why is this a problem for Wallace? The payoffs he considers can include anything futurewards of the branching interaction, not only those events which occur within the lifetime of the agent.

# on 14 January 2009, 02:33

I thought a quantum game involves setting up and carrying out a measurement, which may well involve, say, torturing a cat. At least that's how Wallace defines games. But maybe that can be factored into the 'payoff'. In general, this is what I think should be done: a game's payoffs should be just the situations at its end, including their entire histories and futures. And we should allow agents to care about arbitrary aspects of such situations. I'm not actually sure what this does to the D-W argument:

We could use the desirability of outcome situations, measured in reals, in place of the cash payoffs in the D-W argument. But then these values are not entirely determined by the measured eigenvalue. That is, the payoff function is not a function from eigenvalues to reals, and this blocks the 'Payoff Equivalence' (PE) lemma.

Alternatively, we could replace the eigenvalues by entire outcomes (centred worlds); the payoff function would then simply be the desirability function. But then most of the formulas, PE for example, become unintelligible as they require the eigenvalues to be numbers. Similar problems arise if we leave the eigenvalues in place but replace the payoff values with entire outcomes.

We could replace the eigenvalues by the desirability of the corresponding outcomes, and use the identity function as the payoff function. This allows us to skip PE, but the proof still stalls, because it requires utilities to be defined for games where the payoff function is not identity (e.g. in the applications of 'Additivity' and 'General Equivalence').

Even if the argument can be fixed to accommodate history-dependent values, it still assumes that the present value of a game equals the expectation of its *future* value -- implicitly assumed e.g. in the derivation of equation (38) in Wallace 2003. I think that's an unreasonable constraint on rational desire. I guess my worry about futures where I no longer exist is related to this: if on either outcome I won't survive, my present utility for a game can't be the expectation of my future utility.
