
Simplicity and indifference

According to the Principle of Indifference, alternative propositions that are similar in a certain respect should be given equal prior probability. The tricky part is to explain what should count as similarity here.

Van Fraassen's cube factory nicely illustrates the problem. A factory produces cubes with side lengths between 0 and 2 cm, and consequently with volumes between 0 and 8 cm^3. Given this information, what is the probability that the next cube that will be produced has a side length between 0 and 1 cm? Is it 1/2, because the interval from 0 to 1 is half of the interval from 0 to 2? Or is it 1/8, because a side length of 1 cm means a volume of 1 cm^3, which is 1/8 of the range from 0 to 8?
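A quick Monte Carlo sketch brings out the conflict. It is only an illustration: the two priors, uniform over side length and uniform over volume, are just the two candidate ways of applying indifference described above.

    import random

    N = 100_000

    # Candidate prior 1: side length uniform on (0, 2) cm.
    sides = [random.uniform(0, 2) for _ in range(N)]
    p_by_side = sum(s <= 1 for s in sides) / N                 # converges to 1/2

    # Candidate prior 2: volume uniform on (0, 8) cm^3, side = volume^(1/3).
    volumes = [random.uniform(0, 8) for _ in range(N)]
    p_by_volume = sum(v ** (1/3) <= 1 for v in volumes) / N    # converges to 1/8

    print(p_by_side, p_by_volume)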

Strengthening the prejacent

Sometimes, when we say that someone can (or cannot, or must, or must not) do P, we really mean that they can (cannot, must, must not) do Q, where Q is logically stronger than P. By what linguistic mechanism does this strengthening come about?

Example 1. My left arm is paralysed. 'I can't lift my (left) arm any more', I tell my doctor. In fact, though, I can lift the arm, in the way I can lift a cup: by grabbing it with the other arm. When I say that I can't lift my left arm, I mean that I can't lift the arm actively, using the muscles in the arm. I said that I can't do P, but what I meant is that I can't do Q, where Q is logically stronger than P.

Long-run arguments for maximizing expected utility

Why maximize expected utility? One supporting consideration that is occasionally mentioned (although rarely spelled out or properly discussed) is that maximizing expected utility tends to produce desirable results in the long run. More specifically, the claim is something like this:

(*) If you always maximize expected utility, then over time you're likely to maximize actual utility.

Since "utility" is (by definition) something you'd rather have more of than less, (*) does look like a decent consideration in favour of maximizing expected utility. But is (*) true?

What i and -i could not be

According to realist structuralism, mathematics is the study of structures. Structures are understood to be special kinds of complex properties that can be instantiated by particulars together with relations between these particulars. For example, the field of complex numbers is assumed to be instantiated by any suitably large collection of particulars in combination with four operations that satisfy certain logical constraints. (The four operations correspond to addition, subtraction, multiplication, and division.)
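To give a flavour of the "logical constraints" in question, here are a few of the field axioms that the addition and multiplication operations have to satisfy (subtraction and division are recovered from the additive and multiplicative inverses); this is only a sample, not a full characterisation of the complex field:

$$ x + y = y + x, \qquad x \cdot (y + z) = x \cdot y + x \cdot z, \qquad x \neq 0 \rightarrow \exists y\, (x \cdot y = 1). $$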

Might counterfactuals

A might counterfactual is a statement of the form 'if so-and-so were the case then such-and-such might be the case'. I used to think that there are different kinds of might counterfactuals: that sometimes the 'might' takes scope over the entire conditional, and other times it does not.

For example, suppose we have an indeterministic coin that we don't toss. In this context, I'd say (1) is true and (2) is false.

(1) If I had tossed the coin it might have landed heads.
(2) If I had tossed the coin it would have landed heads.

These intuitions are controversial. But if they are correct, then the might counterfactual (1) can't express that the corresponding would counterfactual is epistemically possible. For we know that the would counterfactual is false. That is, the 'might' here doesn't scope over the conditional. Rather, the might counterfactual (1) seems to express the dual of the would counterfactual (2), as Lewis suggested in Counterfactuals: 'if A then might B' seems to be equivalent to 'not: if A then would not-B'.
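In Lewis's symbols, where the box-arrow expresses the would counterfactual and the diamond-arrow the might counterfactual, the proposed duality is:

$$ A \mathbin{\Diamond\!\!\to} B \;\equiv\; \neg (A \mathbin{\Box\!\!\to} \neg B). $$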

A few links

I stumbled across a few interesting free books in the last few days.

1. Tony Roy has a 1051-page introduction to logic on his homepage, which proceeds slowly and evenly from formalising ordinary-language arguments all the way to proving Gödel's second incompleteness theorem. All entirely mainstream and classical, but it looks nicely presented, with lots of exercises.

2. Ariel Rubinstein has made his six books available online (in exchange for some personal information): Bargaining and Markets, A Course in Game Theory, Modeling Bounded Rationality, Lecture Notes in Microeconomics, Economic Fables, and the intriguing Economics and Language, which applies tools from economics to the study of meaning.

Ifs and cans

Is 'can' information-sensitive in an interesting way, like 'ought'?

An example of uninteresting information-sensitivity is (1):

(1) If you can lift this backpack, then you can also lift that bag.

Informally speaking, the if-clause takes wide scope in (1). The truth-value of the consequent 'you can lift that bag' varies from world to world, and the if-clause directs us to evaluate the statement at worlds where the antecedent is true.
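Roughly, and glossing over details of the modal semantics, the wide-scope reading can be sketched like this, with w ranging over the relevant worlds:

$$ \forall w\, \bigl( \mathsf{can}_w(\text{lift this backpack}) \rightarrow \mathsf{can}_w(\text{lift that bag}) \bigr). $$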

Should you rescue the miners?

Many accounts of deontic modals that have been developed in response to the miners puzzle have a flaw that I think hasn't been pointed out yet: they falsely predict that you ought to rescue all the miners.

The miners puzzle goes as follows.

Ten miners are trapped in a shaft and threatened by rising water. You don't know whether the miners are in shaft A or in shaft B. You can block the water from entering one shaft, but you can't block both. If you block the correct shaft, all ten will survive. If you block the wrong shaft, all of them will die. If you do nothing, one miner will die.

Let's assume that the right choice in your state of uncertainty is to do nothing. In that sense, then, (1) is true.
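A quick expected-value calculation supports this assumption, if we measure utility by the number of lives saved and give each shaft a credence of 1/2; both are simplifying assumptions made only for illustration.

    # Credence 1/2 that the miners are in shaft A, 1/2 that they are in shaft B.
    credence = {"A": 0.5, "B": 0.5}

    # Lives saved by each act, depending on where the miners actually are.
    lives_saved = {
        "block A":    {"A": 10, "B": 0},
        "block B":    {"A": 0,  "B": 10},
        "do nothing": {"A": 9,  "B": 9},
    }

    for act, outcomes in lives_saved.items():
        expected = sum(credence[loc] * outcomes[loc] for loc in credence)
        print(act, expected)   # block A: 5.0, block B: 5.0, do nothing: 9.0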

Iterated prisoner dilemmas

There's something odd about how people usually discuss iterated prisoner dilemmas (and other such games).

Let's say you and I each have two options: "cooperate" and "defect". If we both cooperate, we get $10 each; if we both defect, we get $5 each; if only one of us cooperates, the cooperator gets $0 and the defector $15.

This game might be called a monetary prisoner dilemma, because it has the structure of a prisoner dilemma if utility is measured by monetary payoff. But that's not how utility is usually understood.
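A small check confirms that the monetary payoffs have the dilemma structure: defecting strictly dominates cooperating, yet mutual cooperation pays more than mutual defection. The dictionary encoding below is just one convenient way of writing the game down.

    # My dollar payoff, indexed by (my move, your move): C = cooperate, D = defect.
    payoff = {
        ("C", "C"): 10, ("C", "D"): 0,
        ("D", "C"): 15, ("D", "D"): 5,
    }

    # Defecting strictly dominates cooperating against either move...
    dominates = all(payoff[("D", you)] > payoff[("C", you)] for you in ("C", "D"))
    # ...yet mutual cooperation pays more than mutual defection.
    dilemma = payoff[("C", "C")] > payoff[("D", "D")]

    print(dominates, dilemma)   # True True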

Time consistency and stationarity

Suppose you prefer $100 today to $105 tomorrow. You also prefer $105 in 11 days to $100 in 10 days. During the next 10 days, your basic preferences don't change, so that at the end of that period (on day 10), you still prefer $100 now (on day 10) to $105 the next day. Your future self then disagrees with your earlier self about whether it's better to get $100 on day 10 or $105 on day 11.

In economics jargon, your preferences are called time inconsistent. Time inconsistency is supposed to be a failure of ideal rationality.
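One standard way to generate exactly this pattern of preferences is quasi-hyperbolic ("beta-delta") discounting; the sketch below uses beta = 0.9 and delta = 1, which are purely illustrative values.

    def value(amount, delay, beta=0.9, delta=1.0):
        """Quasi-hyperbolic (beta-delta) discounted value, as judged from 'now'."""
        return amount if delay == 0 else beta * delta**delay * amount

    # On day 0: $100 today beats $105 tomorrow ...
    print(value(100, 0) > value(105, 1))    # True  (100 > 94.5)
    # ... but $105 on day 11 beats $100 on day 10.
    print(value(105, 11) > value(100, 10))  # True  (94.5 > 90)

    # On day 10, with the same discount function, the delays are now 0 and 1,
    # so the day-10 self prefers $100 right away to $105 the next day.
    print(value(100, 0) > value(105, 1))    # True  -> disagreement with the day-0 self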
