Objects of revealed preference

A common assumption in economics is that utilities are reducible to choice dispositions. The story goes something like this. Suppose we know what an agent would choose if she were asked to pick one from a range of goods. If the agent is disposed to choose X, and Y is an available alternative, we say that the agent prefers X over Y. One can show that if the agent's choice dispositions satisfy certain formal constraints, then they are "representable" by a utility function in the sense that whenever the agent prefers X over Y, the function assigns greater value to X than to Y. This utility function is assumed to be the agent's true utility function, telling us how much the agent values the relevant goods.
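
As a rough illustration (not part of the economists' story; the goods and the ordering are made up), here is a minimal Python sketch of the ordinal version of this construction: if the revealed preferences form a complete, transitive ordering, any rank-based numbering represents them.

```python
# Toy ordinal representation: goods and ordering are invented for illustration.
goods = ["banana", "apple", "pear"]

# Revealed preference ordering, worst to best (assumed complete and transitive).
ordering = ["banana", "pear", "apple"]

# Assign utilities by rank: higher rank = greater utility.
utility = {good: rank for rank, good in enumerate(ordering)}

def prefers(x, y):
    """The agent's revealed preference: x is chosen over y."""
    return ordering.index(x) > ordering.index(y)

# Representation condition: x is preferred to y iff u(x) > u(y).
for x in goods:
    for y in goods:
        assert prefers(x, y) == (utility[x] > utility[y])

print(utility)  # {'banana': 0, 'pear': 1, 'apple': 2}
```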

In some parts of economics, it is assumed that agents are fully informed about which goods they would get when making a choice. We then only need an "ordinal" utility function, and fairly weak constraints on choice dispositions. In other parts of economics, ignorance and uncertainty are allowed. We then need a "cardinal" utility function and stronger constraints on choice dispositions. In particular, we can't just look at how the agent is disposed to choose between the goods themselves. We must also look at how she would choose between various "gambles" involving these goods. Such gambles are usually represented either as probability measures over goods (following von Neumann) or as functions from possible "states of the world" to goods (following Savage).
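
To make the two representations of gambles concrete, here is a small sketch with invented utilities and probabilities: on the von Neumann picture a gamble is a probability measure over goods, on the Savage picture it is a function from states to goods evaluated with the agent's probabilities over states.

```python
# Two toy representations of a gamble. Utilities and probabilities are made up;
# the cardinal utility function is simply assumed to be given.
utility = {"apple": 10, "wax_apple": -5}

# von Neumann style: a gamble is a probability measure over goods.
lottery = {"apple": 0.8, "wax_apple": 0.2}
eu_lottery = sum(p * utility[good] for good, p in lottery.items())

# Savage style: a gamble is a function from states to goods, and the agent
# has a probability measure (credence) over the states.
credence = {"object_is_apple": 0.8, "object_is_wax": 0.2}
act = {"object_is_apple": "apple", "object_is_wax": "wax_apple"}
eu_act = sum(credence[s] * utility[act[s]] for s in credence)

print(eu_lottery, eu_act)  # both 7.0 in this toy example
```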

This way of constructing a utility function from choice dispositions is known as "revealed preference theory". It has many problems. I want to talk about one of them. The problem is how the concrete acts between which an agent can choose should be converted into "goods" or suitable gambles over goods.

The problem has a number of sub-problems. One is to identify the goods. Suppose you can pick an apple or a banana, and you choose the apple. Can we conclude that you prefer having an apple over having a banana? Arguably not -- not if we want our assignment of utility to predict other things you might do. Perhaps you chose the apple because you wanted to leave the banana for me. So perhaps you really prefer having an apple and leaving the banana for Wolfgang over having a banana and leaving the apple for Wolfgang. Or perhaps the banana happened to be placed to the right of the apple, and you prefer taking the thing on the left over taking the thing on the right. Or perhaps the choice was offered on a Monday afternoon in Paris, and you really prefer taking an apple on a Monday afternoon in Paris when offered a choice between an apple and a banana over taking a banana on a Monday afternoon in Paris when offered a choice between an apple and a banana. And so on. If we don't want to prejudge what people care about, we arguably need to use very fine-grained descriptions of the available acts, but then the revealed-preference method of determining utilities breaks down completely, because the same option will never figure in different choice situations.

The standard economics solution to this problem is to sharply restrict the kinds of things people are allowed to value. Often it is simply assumed that utilities pertain to "commodity bundles". So you're allowed to prefer having an apple over having a banana, but you're not allowed to care about what is left for me, about where the choice is offered, what other objects are available, and so on.

Since real people don't just care about commodity bundles, the result is that the economists' utility functions often misrepresent people's values, which leads to a lot of confusion and misunderstanding about the scope and limits of expected utility theory.

In any case, if we're interested in how to determine an agent's true values or desires, then the economists' response is untenable. So the problem remains, and I know of no good solution.

Here's a different problem, raised most prominently by Daniel Hausman (for example in his book Preference, Value, Choice, and Welfare). The problem is that people can be mistaken about their options. Even if what you ultimately care about are simple commodity bundles, and you prefer having an apple to having a banana, you may not choose the apple if you falsely believe that it is a wax apple.

Hausman concludes that the connection between choice behaviour and preference is mediated by the agent's beliefs about the available options. Along similar lines, Johanna Thoma suggests that if we want to read off preferences from choice dispositions, "the description of options should be consistent with the agent's beliefs about the nature and consequences of the actions open to her".

I'm not quite sure what this is supposed to mean. Let's assume you face a choice between petrol and water; you know that these are your options, but you're wrong about which cup contains the water and which contains the petrol. So you take the petrol. If we describe your options as taking a cup of water and taking a cup of petrol, isn't this "consistent with the agent's beliefs about the nature and consequences of the actions open to her"? But it would obviously be wrong to conclude that you prefer petrol.

In any case, the proposal (both Hausman's and Thoma's) looks insufficiently general. What if you merely suspect that the apple (in the apple-banana scenario) is a wax apple? The concept of all-or-nothing belief is too coarse-grained to predict your choices. Whether you will choose the apple plausibly depends on just how confident you are that the apple is a wax apple, and how bad it would be (by your lights) to take a wax apple.

Relatedly, Hausman seems to think his problem arises for both the "ordinal" and the "cardinal" strand of revealed preference theory. But it seems to me that it really only arises for the cardinal strand. Doesn't the ordinal approach stipulate that agents have full knowledge about which goods their choices would get them?

It is also not clear to me that the problem really can arise in the cardinal strand. Remember that here the agent's options are generally not identified with simple things like taking a banana, but with various "gambles". If you're unsure whether the apple is a wax apple or a real apple, the obvious Savage-style representation of the relevant act is a function that leads to having an apple in a state in which the object is an apple and to having a wax apple in a state in which the object is a wax apple. This representation also handles the original case in which you "believe" (whatever that means) that the apple is a wax apple. In Savage's framework, ignorance or uncertainty about the available options is not ignored; it is represented by the agent's probability measure over the relevant states.
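
Here is a sketch of how the Savage-style treatment handles both the "mere suspicion" and the "belief" versions of the wax apple case (all utilities and credences are invented). The uncertainty shows up as a probability over states, and whether the apple act beats the banana act depends on how confident you are that the object is wax.

```python
# Toy Savage-style comparison. Utilities and credences are made up.
utility = {"apple": 10, "wax_apple": -5, "banana": 6}

# The two acts, represented as functions from states to outcomes.
apple_act  = {"object_is_apple": "apple",  "object_is_wax": "wax_apple"}
banana_act = {"object_is_apple": "banana", "object_is_wax": "banana"}

def expected_utility(act, credence):
    return sum(credence[s] * utility[act[s]] for s in credence)

# Mere suspicion that it's wax vs. near-certainty that it's wax.
for p_wax in (0.1, 0.9):
    credence = {"object_is_apple": 1 - p_wax, "object_is_wax": p_wax}
    print(p_wax,
          expected_utility(apple_act, credence),
          expected_utility(banana_act, credence))
# With p_wax = 0.1 the apple act wins (8.5 > 6); with p_wax = 0.9 it loses (-3.5 < 6).
```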

Still, I think Hausman is onto a real problem. (A problem to which neither his nor Thoma's answer is a satisfactory response.)

Return to our main question: how should we represent the acts between which an agent can choose as functions from states to goods (or "outcomes", in Savage's terminology)? Given your Monday afternoon choice of an apple over a banana, how do we know that the apple option should be represented by a function that maps "apple" states to "have apple" outcomes and "wax apple" states to "have wax apple" outcomes?

I've already set aside the problem of how to identify the relevant goods or outcomes. We still need to figure out the relevant states. Then we need to associate the actually available acts with suitable functions from states to outcomes.

Lewis (in "Causal Decision Theory") identifies a state (a "dependency hypothesis") as "a maximally specific proposition about how the things [the agent] cares about do and do not depend causally on his present actions". On this proposal, states are effectively functions from options to outcomes. But then we can't sensibly go on to define options as functions from states to outcomes.

We could, however, distinguish two kinds of options (i.e., two roles for the concept of an option). We could accept that the objects of the preference relation ("options", in one sense) are Savage-style functions from states to outcomes, while using a different conception of options to define the states, à la Lewis.

This may seem strange, but I think there are independent reasons to make such a distinction.

To see why, let's set aside (for now) the problem of how to specify the relevant states. We now have states and outcomes, so we also have all sorts of functions from the former to the latter. We need to associate the concrete acts an agent might choose with some of these functions. That is, we need rules that explain why your choice of the apple is adequately represented by a function from "apple" states to "have apple" outcomes and from "wax apple" states to "have wax apple" outcomes, and why the alternative choice of taking the banana is not adequately represented by this function.

Intuitively, the explanation should go roughly like this: The option you choose is to take the thing that looks like an apple; choosing this option will (a) get you an apple in states where the thing that looks like an apple is an apple, and it will (b) get you a wax apple in states where the thing that looks like an apple is a wax apple.

If that is on the right track, we need to pick out the option you choose as something like "taking the thing that looks like an apple" -- not as "taking an apple", and not as a Savage-style function.

In general, when we associate a concrete act with a function from states to outcomes, we need to evaluate what the act would bring about in any given state. And that requires some way of (re-)identifying the act across the various states. In particular, it requires an individuation of the act that is logically compatible with all the states. We can't individuate your choice as "taking an apple", because that would rule out the wax apple state. (Consequently, we couldn't figure out what the Savage-style function we're trying to construct should return for the wax apple state.)

By way of another illustration, consider the function that maps "apple" states to "get apple" outcomes and "wax apple" states to "get banana" outcomes. This is a perfectly well-defined function. (Let's call it "the apple/banana function".) But it does not represent one of your options, in the apple-banana scenario. Why not? The only plausible answer I can think of appeals to a special way of describing the acts you can actually perform. The acts you can actually perform are taking an apple and taking a banana (since the apple is not in fact a wax apple). The relevant "special description" is something like "taking the thing that looks like an apple" or "taking the thing on the left". The reason why the apple/banana function does not represent one of your options is that no such description of your acts leads, when conjoined with the relevant states, to the outcomes of the apple/banana function.
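
Here is a toy reconstruction of that answer (the state space, outcome labels, and "special descriptions" are all invented for illustration, not the official Savage machinery). Each available act is picked out by a description that is compatible with every state; conjoining the description with a state determines an outcome, and that is what generates the Savage-style function. The apple/banana function is generated by no such description, which is why it doesn't count as one of your options.

```python
# Toy version of the two-step procedure: descriptions of available acts,
# plus states, generate Savage-style functions from states to outcomes.
states = ["object_is_apple", "object_is_wax"]

def outcome(description, state):
    """What performing the act, under this description, would bring about
    in the given state. All cases here are invented for illustration."""
    if description == "take the thing that looks like an apple":
        return "have apple" if state == "object_is_apple" else "have wax apple"
    if description == "take the thing that looks like a banana":
        return "have banana"
    raise ValueError("not a description of an available act")

descriptions = [
    "take the thing that looks like an apple",
    "take the thing that looks like a banana",
]

# Each description generates a function from states to outcomes; these are
# the objects over which preferences are defined.
options = {d: {s: outcome(d, s) for s in states} for d in descriptions}

# The apple/banana function is well defined but generated by no description
# of an available act, so it does not represent one of your options.
apple_banana = {"object_is_apple": "have apple", "object_is_wax": "have banana"}
print(apple_banana in options.values())  # False
```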

So to go from choice behaviour to a preference relation over Savage-style gambles, we arguably need two steps: we first need to find a suitable "special description" of the available acts; then we need to use this description to identify the relevant functions from states to outcomes -- assuming the states and outcomes are magically given.

This brings me back to Lewis's definition of states as (more or less) functions from options to outcomes. If we have to use the two-step procedure just described, we might as well follow Lewis's definition, taking the "options" to be the "special descriptions" of the available acts. Then the only completely unresolved problem is that of determining the outcomes.

(Notice that we couldn't define preferences directly in terms of the intermediary "special descriptions". Your preferences between these things will strongly depend on the context of choice (even if your utility function remains the same). So your choice dispositions between them are almost guaranteed to fail the formal constraints required to determine a utility function.)

Well, one other problem remains. Often, it seems, there is no adequate "special description" of the available acts. Suppose you give positive credence to a skeptical scenario in which your arms have just become paralysed. In that state, you can't take the apple, nor can you take the thing that looks like an apple. But the state should still figure in the calculation of expected utilities. What's the "special description" of your apple choice that is compatible with this state? And whatever it is, couldn't there be another state in which you can't even do that?

This is the real problem that looks a bit like Hausman's.
