
On Functional Decision Theory

I recently refereed Eliezer Yudkowsky and Nate Soares's "Functional Decision Theory" for a philosophy journal. My recommendation was to accept resubmission with major revisions, but since the article had already undergone a previous round of revisions and still had serious problems, the editors (understandably) decided to reject it. I normally don't publish my referee reports, but this time I'll make an exception because the authors are well-known figures from outside academia, and I want to explain why their account has a hard time gaining traction in academic philosophy. I also want to explain why I think their account is wrong, which is a separate point.

Duals of knowledge and belief

On the modal analysis of belief, 'S believes that p' is true iff p is true at all possible worlds compatible with S's belief state. So 'believes' is a necessity modal. One might expect there to be a dual possibility modal, a verb V such that 'S Vs that p' is true iff p is true at some worlds compatible with S's belief state. But there doesn't seem to be any such verb in English (or German). Why not?

What do we use if we want to say that something is compatible with someone's beliefs? Suppose at some worlds compatible with Betty's belief state, it is currently snowing. We could express this by "Betty does not believe that it is not snowing". But (for some reason) that's really hard to parse.
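
To make the duality explicit, here is the standard possible-worlds rendering, with B_S the set of worlds compatible with S's belief state (the notation is just the usual modal bookkeeping, not anything beyond what's said above):

```latex
\begin{align*}
\text{S believes that } p &\iff \forall w \in B_S:\ p \text{ is true at } w \\[4pt]
\text{the missing dual ``S Vs that } p\text{''} &\iff \exists w \in B_S:\ p \text{ is true at } w \\
 &\iff \neg\,(\text{S believes that } \neg p)
\end{align*}
```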

Gibbard and Jackson on the probability of conditionals

Gibbard's 1981 paper "Two recent theories of conditionals" contains a famous passage about a poker game on a riverboat.

Sly Pete and Mr. Stone are playing poker on a Mississippi riverboat. It is now up to Pete to call or fold. My henchman Zack sees Stone's hand, which is quite good, and signals its content to Pete. My henchman Jack sees both hands, and sees that Pete's hand is rather low, so that Stone's is the winning hand. At this point, the room is cleared. A few minutes later, Zack slips me a note which says "If Pete called, he won," and Jack slips me a note which says "If Pete called, he lost." I know that these notes both come from my trusted henchmen, but do not know which of them sent which note. I conclude that Pete folded.

One puzzle raised by this scenario is that it seems perfectly appropriate for Zack and Jack to assert the relevant conditionals, and neither Zack nor Jack has any false information. So it seems that the conditionals should both be true. But then we'd have to deny that 'if p then q' and 'if p then not-q' are contraries.

One-boxing and objective consequentialism

I've been reading about objective consequentialism lately. It's interesting how pervasive and natural the use of counterfactuals is in this context: what an agent ought to do, people say, is whichever available act would lead to the best outcome (if it were chosen). Nobody thinks that an agent ought to choose whichever act will lead to the best outcome (if it is chosen). The reason is clear: the indicative conditional is information-relative, but the 'ought' of objective consequentialism is not supposed to be information-relative. (That's the point of objective consequentialism.) The 'ought' of objective consequentialism is supposed to take into account all facts, known and unknown. But while it makes perfect sense to ask what would happen under condition C given the totality of facts @, even if @ does not imply C, it arguably makes no sense to ask what will happen under condition C given @, if @ does not imply C.

The probability that if A then B

It has often been pointed out that the probability of an indicative conditional 'if A then B' seems to equal the corresponding conditional probability P(B/A). Similarly, the probability of a subjunctive conditional 'if A were the case then B would be the case' seems to equal the corresponding subjunctive conditional probability P(B//A). Trying to come up with a semantics of conditionals that validates these equalities proves tricky. Nonetheless, people keep trying, buying into all sorts of crazy ideas to make the equalities come out true.
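
To get a feel for why this is tricky, here is a toy computation (the die example and numbers are my own illustration): on the naive reading of 'if A then B' as the material conditional, the equality already fails.

```python
from fractions import Fraction

# A fair six-sided die; each outcome has probability 1/6.
outcomes = range(1, 7)
P = Fraction(1, 6)

def A(n): return n % 2 == 0   # A: the die lands even
def B(n): return n == 6       # B: the die lands six

p_A = sum(P for n in outcomes if A(n))                      # 1/2
p_A_and_B = sum(P for n in outcomes if A(n) and B(n))       # 1/6
p_material = sum(P for n in outcomes if not A(n) or B(n))   # P(not-A or B)

print("P(B/A) =", p_A_and_B / p_A)                   # 1/3
print("P(A -> B), material reading =", p_material)   # 2/3 -- the two come apart
```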

Spelling out a Dutch Book argument

Dutch Book arguments are often used to justify various epistemic norms – in particular, that credences should obey the probability axioms and that they should evolve by conditionalization. Roughly speaking, the argument is that if someone were to violate these norms, then they would be prepared to accept bets which amount to a guaranteed loss, and that seems irrational.

But it's hard to spell out how exactly the argument is meant to go. In fact, I'm not aware of any satisfactory statement. Here's my attempt.
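
For concreteness, here is a toy sketch (numbers are my own illustration) of the exploitable situation the argument trades on: an agent whose credence in p is 0.6 and whose credence in not-p is also 0.6, and who regards a bet paying 1 on X as fair at a price equal to their credence in X.

```python
# Toy Dutch Book sketch. The agent buys each bet at a price equal to their
# credence, so they pay 0.6 + 0.6 = 1.2 in total, but at most one bet can win.
credence_p = 0.6
credence_not_p = 0.6
stake = 1.0  # each bet pays 1 if it wins, 0 otherwise

for p_is_true in (True, False):
    payoff_bet_on_p = stake if p_is_true else 0.0
    payoff_bet_on_not_p = stake if not p_is_true else 0.0
    net = (payoff_bet_on_p - credence_p) + (payoff_bet_on_not_p - credence_not_p)
    print(f"p is {p_is_true}: agent's net gain = {net:+.2f}")  # -0.20 either way
```

The hard part, of course, is saying why being exploitable in this way shows that the credences themselves are irrational.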

Imaginary Foundations

My paper "Imaginary Foundations" has been accepted at Ergo (after rejections from Phil Review, Mind, Phil Studies, PPR, Nous, AJP, and Phil Imprint). The paper has been in the making since 2005, and I'm quite fond of it.

The question I address is simple: how should we model the impact of perceptual experience on rational belief? That is, consider a particular type of experience – individuated either by its phenomenology (what it's like to have the experience) or by its physical features (excitation of receptor cells, or whatever). How should an agent's beliefs change in response to this type of experience?
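
For readers who want a formal foil: one familiar background model (not the paper's own proposal, and the numbers below are purely hypothetical) is Jeffrey conditionalization, on which experience directly shifts the probability of some proposition E and beliefs in other propositions adjust accordingly.

```python
# Jeffrey conditionalization sketch: experience sets P(E) to new_p_E, while
# the conditional probabilities P(H/E) and P(H/not-E) stay fixed.
def jeffrey_update(p_H_given_E, p_H_given_not_E, new_p_E):
    """P(H) after an experience that shifts P(E) to new_p_E."""
    return p_H_given_E * new_p_E + p_H_given_not_E * (1 - new_p_E)

# Hypothetical numbers: H = 'the wall is red', E = 'the lighting is normal'.
print(jeffrey_update(p_H_given_E=0.9, p_H_given_not_E=0.3, new_p_E=0.8))  # 0.78
```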

Simplicity and indifference

According to the Principle of Indifference, alternative propositions that are similar in a certain respect should be given equal prior probability. The tricky part is to explain what should count as similarity here.

Van Fraassen's cube factory nicely illustrates the problem. A factory produces cubes with side lengths between 0 and 2 cm, and consequently with volumes between 0 and 8 cm^3. Given this information, what is the probability that the next cube that will be produced has a side length between 0 and 1 cm? Is it 1/2, because the interval from 0 to 1 is half of the interval from 0 to 2? Or is it 1/8, because a side length of 1 cm means a volume of 1 cm^3, which is 1/8 of the range from 0 to 8?
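
A quick simulation (my own, just to make the conflict vivid) shows that the two answers correspond to two equally natural ways of being "indifferent": uniform over side length, or uniform over volume.

```python
import random

N = 100_000
random.seed(0)

# Reading 1: the side length is uniform on (0, 2) cm.
sides = [random.uniform(0, 2) for _ in range(N)]
frac_by_side = sum(s <= 1 for s in sides) / N      # about 1/2

# Reading 2: the volume is uniform on (0, 8) cm^3.
# (Volume <= 1 cm^3 is the very same event as side length <= 1 cm.)
volumes = [random.uniform(0, 8) for _ in range(N)]
frac_by_volume = sum(v <= 1 for v in volumes) / N  # about 1/8

print(f"uniform over side length: {frac_by_side:.3f}")
print(f"uniform over volume:      {frac_by_volume:.3f}")
```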

Strengthening the prejacent

Sometimes, when we say that someone can (or cannot, or must, or must not) do P, we really mean that they can (cannot, must, must not) do Q, where Q is logically stronger than P. By what linguistic mechanism does this strengthening come about?

Example 1. My left arm is paralysed. 'I can't lift my (left) arm any more', I tell my doctor. In fact, though, I can lift the arm, in the way I can lift a cup: by grabbing it with the other arm. When I say that I can't lift my left arm, I mean that I can't lift the arm actively, using the muscles in the arm. I said that I can't do P, but what I meant is that I can't do Q, where Q is logically stronger than P.

Long-run arguments for maximizing expected utility

Why maximize expected utility? One supporting consideration that is occasionally mentioned (although rarely spelled out or properly discussed) is that maximizing expected utility tends to produce desirable results in the long run. More specifically, the claim is something like this:

(*) If you always maximize expected utility, then over time you're likely to maximize actual utility.

Since "utility" is (by definition) something you'd rather have more of than less, (*) does look like a decent consideration in favour of maximizing expected utility. But is (*) true?
