Iterated prisoner dilemmas

There's something odd about how people usually discuss iterated prisoner dilemmas (and other such games).

Let's say you and I each have two options: "cooperate" and "defect". If we both cooperate, we get $10 each; if we both defect, we get $5 each; if only one of us cooperates, the cooperator gets $0 and the defector $15.

This game might be called a monetary prisoner dilemma, because it has the structure of a prisoner dilemma if utility is measured by monetary payoff. But that's not how utility is usually understood.

According to the "revealed preference" account of orthodox economics, the utility an agent assigns to an outcome is determined by her choice dispositions. Thus if you wouldn't choose to defect in the above game (and your choice can't be explained away as a slip), then the game isn't a true prisoner dilemma. A true prisoner dilemma is a game in which defection dominates, even though mutual cooperation has greater utility for both players than mutual defection.

In philosophy, utility is more commonly understood as measuring the extent to which an agent desires the relevant outcome, which is not assumed to be equivalent to a statement about choice dispositions. The connection between utility and choice is rather supposed to be normative. In particular, if a certain act strictly dominates all others (in terms of utility), then a rational agent ought to choose that act. So in a true prisoner dilemma, both agents ought to defect.

All this should be familiar and uncontroversial. But people oddly seem to forget it when turning to iterated prisoner dilemmas.

An iterated prisoner dilemma is a sequence of prisoner dilemmas played between the same agents. Let's say there are 100 rounds, and let's assume this is common knowledge, as are the utilities of both players, and their rationality. What will they do?

If each round is a true prisoner dilemma, then obviously both players will defect in each round. Recall that on the revealed preference account, a game isn't a true prisoner dilemma unless the players defect. And on the more realist account, a game isn't a true prisoner dilemma unless each player ought to defect.

We don't need backward induction to reach this conclusion. We also don't need the assumptions about common knowledge. Whatever you think about the other player, or about future rounds – if you're in a true prisoner dilemma, you ought to defect.

Suppose, for example, you are convinced your opponent plays tit-for-tat: she cooperates in the first round and from then on copies your previous move. It may then seem as if your best choice is to cooperate in every round except the last. In the monetary game above, you'd thereby get $1005 in total, compared to $510 for always defecting. But if it is best to cooperate, then you're not playing a true prisoner dilemma!
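
Here's a quick check of that arithmetic, reusing the payoff table from the sketch above (tit-for-tat as just described):

    PAYOFF = {("C", "C"): 10, ("D", "D"): 5, ("C", "D"): 0, ("D", "C"): 15}

    def my_total(my_moves):
        """My total monetary payoff when she plays tit-for-tat."""
        total, her_move = 0, "C"          # tit-for-tat opens with cooperation
        for my_move in my_moves:
            total += PAYOFF[(my_move, her_move)]
            her_move = my_move            # next round she copies this move
        return total

    print(my_total("C" * 99 + "D"))  # cooperate except in the last round: 1005
    print(my_total("D" * 100))       # always defect: 510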

Whatever you think about the other player in the monetary scenario, if your goal is to maximize total payoff, then you are not playing an iterated true prisoner dilemma. For there is a logically possible hypothesis about the other player (namely, that they play tit-for-tat) under which cooperating is the best choice. And that wouldn't be true if you were in a true prisoner dilemma.

There's nothing interesting to say about iterated versions of a true prisoner dilemma.

People tend to assume without argument that the players want to maximize the total utility across all rounds. But that doesn't make sense on either of the two interpretations of utility I mentioned. Of course, we can stipulate that certain agents only care about the total amount of money they will get across a sequence of games. This turns the 100-round monetary prisoner dilemma into an interesting puzzle. But the stipulation should be made explicit, the payoffs in each round shouldn't be called "utilities", and the rounds shouldn't be described as "prisoner dilemmas".

Admittedly, there's a third tradition in which "utility" is understood as a temporally local quantity whose discounted sum agents try to maximize. This approach is especially popular in computer science, but I'm not sure what the relevant local quantity ("reward") is meant to be.
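
Whatever the reward is supposed to be, the quantity such agents are said to maximize is clear enough. A minimal sketch, plugging in the monetary payoffs from the tit-for-tat scenario as "rewards" and a discount factor of 0.9 (both choices are mine, purely for illustration):

    # Discounted sum of per-round rewards r_0, r_1, ..., with discount
    # factor gamma in (0, 1].
    def discounted_return(rewards, gamma=0.9):
        return sum(gamma ** t * r for t, r in enumerate(rewards))

    # Per-round monetary payoffs from the tit-for-tat scenario above:
    print(discounted_return([10] * 99 + [15]))  # cooperate till the end: ~100.0
    print(discounted_return([15] + [5] * 99))   # always defect: ~60.0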

In section 16.2 of the most-cited (and generally excellent) textbook on artificial intelligence, Russell and Norvig 2010, utility is defined via preferences (à la von Neumann and Morgenstern), which are assumed to be closely related to choice dispositions. But how is this supposed to give us local "rewards"?

The ultimate culprit, I suspect, is (as so often) localism about outcomes: people assume that the bearers of utility are temporally local events, such as the amount of money an agent receives as a direct consequence of the present choice. Rational agents are assumed to have coherent preferences over such outcomes; these preferences can be represented by a utility function, which is then used to assign utilities to the outcomes in an iterated prisoner dilemma.

But preferences over local outcomes are generally ill-defined. An agent's preferences over sequences of events must have a very specific structure for them to be determined by preferences over local, individual events: they must be "separable". Separability is somewhat plausible for monetary payoff: all else equal, you might always prefer $10 to $5. But for many other things it isn't. All else equal, do you prefer water or wine? Depends on what "all else" is.
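
A toy example (mine, purely for illustration): suppose the value a drink adds for an agent depends on what came before it. Then no assignment of context-independent values to individual drinks recovers the agent's preferences over sequences:

    # Hypothetical agent: wine is worth 5, water 3, except that wine
    # immediately after wine is only worth 1.
    def sequence_value(drinks):
        value = 0
        for i, drink in enumerate(drinks):
            if drink == "wine" and i > 0 and drinks[i - 1] == "wine":
                value += 1
            else:
                value += 5 if drink == "wine" else 3
        return value

    print(sequence_value(["wine", "wine"]))    # 6
    print(sequence_value(["water", "water"]))  # 6
    print(sequence_value(["wine", "water"]))   # 8
    # Separability would require u(wine) + u(wine) = 6 and
    # u(water) + u(water) = 6, hence u(wine) = u(water) = 3,
    # contradicting u(wine) + u(water) = 8.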

What if we understand utility as fitness, as in evolutionary game theory? Here, too, temporal separability looks highly implausible. Fitness can't be decomposed into a temporally local measure whose aggregate is maximized by evolutionarily successful phenotypes. Nonetheless, I agree that we can learn something about the evolution of cooperation from so-called iterated prisoner dilemmas (and similar games). The reason is that in certain contexts, one can find a local quantity (e.g., amount of food) whose aggregate over time is correlated with fitness in the relevant population. But again it would be better not to call that quantity "utility", and to emphasize that an "iterated prisoner dilemma" is not a sequence of (true) prisoner dilemmas.
