
A desire that thwarts decision theory

Suppose we want our decision theory to not impose strong constraints on people's ultimate desires. You may value personal wealth, or you may value being benevolent and wise. You may value being practically rational: you may value maximizing expected utility. Or you may value not maximizing expected utility.

This last possibility causes trouble.

If not maximizing expected utility is your only basic desire, and you have perfect and certain information about your desires, then arguably (although the argument isn't trivial) every choice in every decision situation you can face has equal expected utility; so you are bound to maximize expected utility no matter what. Your desire can't be fulfilled.
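
To see the shape of the argument, here is a toy reconstruction with just two acts (a sketch, not the full argument). Suppose your only options are A and B, and your utility is 1 for choosing an act that fails to maximize expected utility, 0 otherwise. Since you are certain about your desires, the expected utility of choosing an act is just the utility of choosing it. Then:

    EU(A) > EU(B) \;\Rightarrow\; u(A) = 0,\ u(B) = 1 \;\Rightarrow\; EU(A) = 0 < 1 = EU(B),

which is a contradiction; by symmetry, EU(B) > EU(A) is ruled out as well. So EU(A) = EU(B): both acts maximize expected utility, both get utility 0, and your desire goes unfulfilled whichever you choose.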

Why would you do that?

I'm generally happy with Causal Decision Theory. I think two-boxing is clearly the right answer in Newcomb's problem, and I'm not impressed by any of the alleged counterexamples to Causal Decision Theory that have been put forward. But there's one thing I worry about: what exactly the theory should say, that is, how it should be spelled out.

Suppose you face a choice between two acts A and B. Loosely speaking, to evaluate these options, we need to check whether the A-worlds are on average better than the B-worlds, where the "average" is weighted by your credence on the subjunctive supposition that you make the relevant choice. Even more loosely, we want to know how good the world would be if you were to choose A, and how good it would be if you were to choose B. So we need to know what else would be the case if you were to choose, say, A.
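
In symbols, the rule is EU(A) = \sum_w Cr_A(w) \cdot V(w), where Cr_A is your credence under the subjunctive supposition that you choose A and V measures how good a world is. Here is a minimal Python sketch of the computation (the worlds, values, and credences are all invented for illustration):

    # Expected utility as a credence-weighted average of world values,
    # with credences taken under the subjunctive supposition that the
    # act is performed.

    def expected_utility(cr_if_act, value):
        """EU(act) = sum over worlds w of Cr_act(w) * V(w)."""
        return sum(cr_if_act[w] * value[w] for w in cr_if_act)

    value   = {"w1": 10, "w2": 0, "w3": 5}       # how good each world is
    cr_if_A = {"w1": 0.7, "w2": 0.2, "w3": 0.1}  # Cr(w) supposing you choose A
    cr_if_B = {"w1": 0.1, "w2": 0.3, "w3": 0.6}  # Cr(w) supposing you choose B

    print(expected_utility(cr_if_A, value))  # 7.5: the A-worlds are better on average
    print(expected_utility(cr_if_B, value))  # 4.0

The hard part, of course, is saying where the subjunctive credences come from: that is where the theory needs to be spelled out.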

Objects of revealed preference

A common assumption in economics is that utilities are reducible to choice dispositions. The story goes something like this. Suppose we know what an agent would choose if she were asked to pick one from a range of goods. If the agent is disposed to choose X when Y is an available alternative, we say that the agent prefers X over Y. One can show that if the agent's choice dispositions satisfy certain formal constraints, then they are "representable" by a utility function, in the sense that whenever the agent prefers X over Y, the function assigns greater value to X than to Y. This utility function is assumed to be the agent's true utility function, telling us how much the agent values the relevant goods.
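
As a toy illustration of the representability claim (the goods and dispositions below are invented, and real representation theorems concern richer objects, such as choices among lotteries): if the dispositions determine a complete and transitive ordering over finitely many goods, a representing utility function can simply be read off the ranking.

    # From pairwise choice dispositions to a representing utility function.
    # Assumes the dispositions are complete and transitive over a finite
    # set of goods.
    from functools import cmp_to_key

    goods = ["tea", "coffee", "water"]

    def prefers(x, y):
        # Stand-in for observed choice dispositions: a fixed ranking.
        ranking = ["water", "tea", "coffee"]  # worst to best
        return ranking.index(x) > ranking.index(y)

    # Sort so that preferred goods come later, then let utility = rank.
    ordered = sorted(goods, key=cmp_to_key(lambda x, y: 1 if prefers(x, y) else -1))
    utility = {g: rank for rank, g in enumerate(ordered)}

    # Whenever the agent prefers X over Y, utility[X] > utility[Y].
    print(utility)  # {'water': 0, 'tea': 1, 'coffee': 2}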

Lewis on magnetism: Reply to Janssen-Lauret and Macbride

In my 2014 paper "Against Magnetism", I argued that the meta-semantics Lewis defended in "Putnam's Paradox" and on pp. 45–49 of "New Work" (a) is unattractive, (b) does not fit what Lewis wrote about meta-semantics elsewhere, and (c) was never Lewis's considered view.

In a paper forthcoming in the AJP, Frederique Janssen-Lauret and Fraser Macbride (henceforth, JL&M) disagree with my point (b), and present what they call "decisive evidence" against (c). Here's my response. In short, I'm not convinced.

Refereeing as a Service

There should be a website (or app) that helps with the following kinds of issues.

  • I recently wrote a paper on ability modals in which I sketch some ideas for how a certain linguistic phenomenon might be compositionally derived. I'm really unsure about that part of the paper, because I'm not an expert in the relevant areas of formal semantics. I'd like to get advice from an expert, but none of my friends are experts in this area, and I don't want to bother people I don't know.
  • I once wrote a paper on decision-theoretic methods in non-consequentialist ethics. But I don't know much about ethics. I'd need someone to tell me how non-consequentialists typically think about decisions under uncertainty, who has already tried to sell decision-theoretic methods for that purpose, and what key papers I need to read.
  • When I submit papers to journals, I often get rejections pointing out problems that are easy to fix. It would have been good if someone had pointed out these problems to me before I submitted the paper.
  • I think many of my drafts and papers are a little hard to understand, but I'm not sure why. I'd like someone to give me feedback on which passages are confusing, where a reader might get lost, etc.

Basically, I'd like to hire (different kinds of) referees to look over my drafts and give me constructive feedback.

Lewis's empiricism

Last week, I gave a talk in Manchester at a (very nice) workshop on "David Lewis and His Place in the History of Analytic Philosophy". My talk was on "Lewis's empiricism". I've now written it up as a paper, since it got too long for a blog post.

The paper is really about hyperintensional epistemology. The question is how we can make sense of the kind of metaphysical enquiry Lewis was engaged in if we accept his models of knowledge and belief, which leave no room for substantive investigations into non-contingent matters.

From Sensor Variables to Phenomenal Facts

I wrote this short piece for a special issue of the Journal of Consciousness Studies on Chalmers's "The Meta-Problem of Consciousness" (2018). Much of my paper rehashes ideas from section 5 of my "Imaginary Foundations" paper, but here I try to present these ideas more simply and directly, without the Bayesian background.

How to serve two epistemic masters

In this 2018 paper, J. Dmitri Gallow shows that it is difficult to combine multiple deference principles. The argument is a little complicated, but the basic idea is surprisingly simple.

Suppose A and B are two weather forecasters. Let r be the proposition that it will rain tomorrow, let A=x be the proposition that A assigns probability x to r; similarly for B=x. Here are two deference principles you might like to follow:
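
Filling in the schema in the natural way, the two principles treat each forecaster as an expert:

    \text{(Defer to A)}\quad Cr(r \mid A{=}x) = x, \text{ for all } x
    \text{(Defer to B)}\quad Cr(r \mid B{=}x) = x, \text{ for all } x

Gallow's argument shows that, except in special cases, no credence function can follow both.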

Relativism and absolutism in deontic logic

Consider a world w where eating doughnuts is illegal and where everyone thinks it is OK to torture animals for fun. Suppose that, at w, Norman is eating doughnuts while torturing his pet kittens. Is he violating the laws? Is he doing something immoral?

In one sense, yes, in another, no. His doughnut eating violates the laws of w, but not the laws of our world. Conversely, his kitten torturing violates a moral code accepted at our world, but not a code accepted at w.
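
One standard way to regiment the contrast uses a Kripke-style deontic logic, with O\varphi for "it ought to be that \varphi" (a sketch of the textbook semantics, to fix ideas):

    \text{Relativist:}\quad w \models O\varphi \text{ iff } \varphi \text{ holds at every world conforming to the code operative at } w
    \text{Absolutist:}\quad w \models O\varphi \text{ iff } \varphi \text{ holds at every world conforming to the code operative at the actual world}

On the relativist reading, Norman's doughnut eating comes out forbidden and his kitten torturing does not; on the absolutist reading, it is the other way around.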

On Functional Decision Theory

I recently refereed Eliezer Yudkowsky and Nate Soares's "Functional Decision Theory" for a philosophy journal. My recommendation was to invite resubmission with major revisions, but since the article had already undergone a previous round of revisions and still had serious problems, the editors (understandably) decided to reject it. I normally don't publish my referee reports, but this time I'll make an exception, because the authors are well-known figures from outside academia and I want to explain why their account has a hard time gaining traction in academic philosophy. I also want to explain why I think their account is wrong, which is a separate point.
