
Time consistency and stationarity

Suppose you prefer $100 today to $105 tomorrow. You also prefer $105 in 11 days to $100 in 10 days. During the next 10 days, your basic preferences don't change, so at the end of that period (on day 10) you still prefer $100 right away to $105 the next day. Your future self then disagrees with your earlier self about whether it's better to get $105 on day 11 or $100 on day 10.

In economics jargon, your preferences are called time inconsistent. Time inconsistency is supposed to be a failure of ideal rationality.
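One familiar model that generates this pattern is hyperbolic discounting. Here is a small illustrative sketch (the discount function and the rate k = 0.08 are my own choices, not anything from the post):

def hyperbolic_value(amount, delay_in_days, k=0.08):
    """Present value of a payment received after the given delay (in days)."""
    return amount / (1 + k * delay_in_days)

# Seen from today: $100 now beats $105 tomorrow ...
print(hyperbolic_value(100, 0) > hyperbolic_value(105, 1))    # True
# ... yet $105 in 11 days beats $100 in 10 days.
print(hyperbolic_value(105, 11) > hyperbolic_value(100, 10))  # True

# On day 10 the delays have shrunk to 0 and 1, so the first comparison
# recurs: the day-10 self prefers the $100, reversing the earlier ranking
# of the day-10 and day-11 payments.
print(hyperbolic_value(100, 0) > hyperbolic_value(105, 1))    # True

With an exponential discount function, by contrast, the ranking of two dated payments never changes merely because time passes.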

Decision theory notes

Over the last four months I've written a draft of a possible textbook on decision theory. Here it is.

I've used these notes as the basis for my honours/MSc course "Belief, Desire, and Rational Choice". They're tailored to my own use, but they might be useful to others as well.

The main difference from other textbooks is that I talk at length about the structure and interpretation of subjective probabilities and utilities. In part, this is because it makes a great difference to the plausibility of the expected utility norm whether, say, utilities are defined in terms of individual welfare, defined in terms of choice dispositions, or taken as primitive. But I also think these are independently interesting philosophical topics.

Three kinds of preference

The decision-theoretic concept of preference is linked to the concepts of subjective probability and utility by the expected utility principle:

(EUP) A rational agent prefers X to Y iff the expected utility of X exceeds the expected utility of Y.

Economists usually take preference to be the more basic concept and interpret the EUP as an implicit definition of the agent's utilities (and sometimes also her probabilities).
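For concreteness, on one standard formulation (the notation here is mine, not from the post), the expected utility of an option X is a probability-weighted average of the utilities of its possible outcomes:

\[
EU(X) = \sum_i P(S_i)\, U(X \wedge S_i),
\]

where P is the agent's subjective probability function, U her utility function, and the S_i a partition of relevant states. Read left to right, the EUP then says that a rational agent prefers X to Y iff EU(X) > EU(Y); read in the economists' direction, the same constraint is used to extract U (and perhaps P) from observed preferences.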

Justification at a distance

According to a popular picture, some beliefs are justified by "seemings": under certain conditions, if it seems to you that P, then you are justified in believing that P, without the assistance of other beliefs. So seemings provide a kind of foundation for belief, albeit a fallible one.

But most of our beliefs are not justified by seemings (or by beliefs which are justified by seemings, etc.). I once learned that Luanda is the capital of Angola and I've retained this belief for many years, although I rarely think about Angola and thus rarely experience any relevant seemings that could justify the belief.

Lewis on possible worlds and the grounds of modality

Friends of primitive powers and dispositions often contrast their view with an alternative view, usually attributed to Lewis, on which modal facts about powers, dispositions, laws, counterfactuals etc. are grounded in facts about other possible worlds. But Lewis never held that alternative view – nor did anyone else, as far as I know. The allegedly mainstream alternative is entirely made of straw. The real alternative that should be addressed is the reductionist view that powers and dispositions are reducible to ultimately non-modal elements of the actual world.

Dicing with death

In his "Dicing with Death" (2014), Arif Ahmed presents the following scenario as a counterexample to causal decision theory (CDT):

You are thinking about going to Aleppo or staying in Damascus. Death has predicted where you will be and is waiting for you there. For a small fee, you can delegate your choice to a coin toss whose outcome Death can't predict.

Tossing the coin promises to reduce the chance of death from about 1 to 1/2. Nonetheless, CDT seems to suggest that you shouldn't toss the coin. To illustrate, suppose you are currently completely undecided and thus give equal credence to Death being in Aleppo and to Death being in Damascus. Then you're 50 percent confident that if you were to stay in Damascus, you would survive; similarly for going to Aleppo. You're also 50 percent confident that you would survive if you were to toss the coin, but in that case you'd have to pay the small fee. So it's not worth paying.
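To make the calculation explicit, here is a small sketch of the causal expected utilities described above (the utility scale, with survival at 1 and death at 0, and the fee of 0.01 are arbitrary choices of mine):

SURVIVE, DIE = 1.0, 0.0
FEE = 0.01  # the "small fee" for delegating the choice to the coin

# Being completely undecided, you give credence 1/2 to Death waiting in each city.
cr_death_in_damascus = 0.5
cr_death_in_aleppo = 0.5

# Causal expected utilities: hold fixed where Death already is.
eu_stay = cr_death_in_aleppo * SURVIVE + cr_death_in_damascus * DIE
eu_go   = cr_death_in_damascus * SURVIVE + cr_death_in_aleppo * DIE

# The coin's outcome is unpredictable, so it gives a genuine 1/2 chance
# of escaping Death, but the fee is paid either way.
eu_toss = 0.5 * SURVIVE + 0.5 * DIE - FEE

print(eu_stay, eu_go, eu_toss)  # 0.5 0.5 0.49

So computed, the coin toss comes out strictly worse than either pure option, even though refusing to toss in fact means near-certain death.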

Acting under a description

Bob's favourite piano piece is Beethoven's Moonlight Sonata. Alice would like to play Bob's favourite piece, and she can play the Moonlight Sonata, but she doesn't know that it is Bob's favourite piece, nor can she find out that it is. Can Alice play Bob's favourite piano piece?

In one sense yes, in another no. It's a kind of de re/de dicto ambiguity. Alice can play what is in fact Bob's favourite piece, but she can't play it "under that description", loosely speaking.

What are our options? (again)

In decision theory, the available options are often glossed informally as the acts the agent can perform, or the propositions she can make true. But this yields implausible results in cases where an agent has doubts about what she can do.

For example, suppose Bob suspects that the button in front of him functions as a light switch, as in fact it does. Then Bob can turn on the light by pressing the button. But since he is not certain that the button is a light switch, decision theory should also take into account what happens if he presses the button and it has some other function. So turning on the light by pressing the button should not count as an option.

The Galilean equivalence

It is tempting to think that there is nothing more to physical quantities than their nomic role: that to have a certain mass just is to behave in such-and-such a way under such-and-such conditions.

But it is also tempting to think that the "Galilean equivalence" of inertial mass and gravitational mass is a true identity; i.e., that

Inertial mass = gravitational mass.

However, the role associated with "inertial mass" is completely different from the role associated with "gravitational mass". So if having such-and-such inertial mass is having the relevant dispositions associated with "inertial mass", and likewise for gravitational mass, then the Galilean equivalence could not be an identity. It would rather state an empirical law, according to which two distinct quantities always have the same value.
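To see how different the two roles are, recall the textbook derivation (my gloss, not part of the post): Newton's second law together with the law of gravitation gives

\[
m_i\, a = F = \frac{G\, m_g\, M}{r^2}
\quad\Rightarrow\quad
a = \frac{m_g}{m_i} \cdot \frac{G M}{r^2}.
\]

Inertial mass measures a body's resistance to acceleration by any force; gravitational mass measures the strength of its gravitational attraction. Only if the ratio of the two is the same for all bodies do all bodies fall with the same acceleration, which is Galileo's observation and the empirical content of the equivalence.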

Effective Altruism and ethical consumerism

In chapter 8 of Doing Good Better, William MacAskill argues that we should not make a great effort to reduce our carbon emissions, to buy Fairtrade coffee, or to boycott sweatshops. The reason is that these actions have at best a small impact on improving other people's lives, so the cost and effort would be better spent elsewhere.

From a strictly utilitarian perspective, there is nothing to complain about here. But strict utilitarianism is a highly counterintuitive position. In fact, MacAskill himself rejects it when he says that it would not be OK to consume meat from factory farms and "offset" this by donating to animal welfare organisations, even if the net result would be less animal suffering. I agree. Whether a course of action is right or wrong is not just a matter of the net difference it makes to the amount of suffering in the world. But then we also have to reconsider MacAskill's conclusions about carbon offsetting, Fairtrade, and sweatshops.
