I've spent some time this summer upgrading my tree prover. The
new version is here. What's new:
- support for some (normal) modal logics
- better detection of invalid formulas
- faster proof search
- nicer user interface and nicer trees
- cleaner source
I hope there aren't too many new bugs. Let me know if you find one!
Suppose we want our decision theory not to impose strong constraints on
people's ultimate desires. You may value personal wealth, or you may value being
benevolent and wise. You may value being practically rational: you may value
maximizing expected utility. Or you may value not maximizing expected utility.
This last possibility causes trouble.
If not maximizing expected utility is your only basic desire, and you
have perfect and certain information about your desires, then arguably (although
the argument isn't trivial) every choice in every decision situation you can
face has equal expected utility; so you are bound to maximize expected utility
no matter what. Your desire can't be fulfilled.
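Here's a toy version of the argument in Python. Everything in it is my own simplification: utilities are binary (1 if the chosen act fails to maximize expected utility, 0 otherwise), there are three acts, and "perfect and certain information" becomes the requirement that the assignment of expected utilities be consistent with itself:

```python
from itertools import product

# Toy model: your only basic desire is *not* to maximize expected utility.
# An act's outcome has utility 1 if that act fails to maximize EU, else 0.
# Since you know your desires with certainty, each act's EU must equal the
# utility it would yield. So a consistent EU assignment e must satisfy:
#   e[i] == (1 if e[i] < max(e) else 0)   for every act i.

n_acts = 3
consistent = [
    e for e in product([0, 1], repeat=n_acts)
    if all(ei == (1 if ei < max(e) else 0) for ei in e)
]
print(consistent)  # [(0, 0, 0)] -- the only consistent assignment
```

The only consistent assignment gives every act the same expected utility, which is the point: whatever you do, you maximize, and the desire goes unfulfilled.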
I'm generally happy with Causal Decision Theory. I think two-boxing is
clearly the right answer in Newcomb's problem, and I'm not impressed by any of
the alleged counterexamples to Causal Decision Theory that have been put
forward. But there's one thing that worries me: what exactly the theory
should say, and how it should be spelled out.
Suppose you face a choice between two acts A and B. Loosely speaking, to
evaluate these options, we need to check whether the A-worlds are on average
better than the B-worlds, where the "average" is weighted by your credence on
the subjunctive supposition that you make the relevant choice. Even more
loosely, we want to know how good the world would be if you were to choose A,
and how good it would be if you were to choose B. So we need to know what else
would be the case if you were to choose, say, A.
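As a rough illustration, here is the weighted-average calculation in Python. The worlds, credences, and values are made up for the example; nothing here is the official formulation of the theory:

```python
# Schematic sketch: evaluate an act by averaging the value of worlds,
# weighted by your credence under the subjunctive supposition that you
# perform the act.

cr_given_A = {"w1": 0.7, "w2": 0.3}  # credence in w, supposing I were to choose A
cr_given_B = {"w1": 0.2, "w2": 0.8}  # credence in w, supposing I were to choose B
value = {"w1": 10, "w2": 4}          # how good each world is (made-up numbers)

def expected_value(cr):
    return sum(cr[w] * value[w] for w in cr)

print(expected_value(cr_given_A))  # 8.2
print(expected_value(cr_given_B))  # 5.2 -> A looks better
```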
A common assumption in economics is that utilities are reducible to choice
dispositions. The story goes something like this. Suppose we know what an agent
would choose if she were asked to pick one from a range of goods. If the agent
is disposed to choose X when Y is an available alternative, we say that the
agent prefers X over Y. One can show that if the agent's choice
dispositions satisfy certain formal constraints, then they are "representable"
by a utility function in the sense that whenever the agent prefers X over Y,
the function assigns greater value to X than to Y. This utility function is
assumed to be the agent's true utility function, telling us how much the agent
values the relevant goods.
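Here's a minimal sketch of the construction in Python, with a made-up ranking standing in for the agent's dispositions. (In the general theorem, representability follows from completeness, transitivity, and further constraints; ranking goods by "number of alternatives beaten", as below, only works for small finite cases like this one.)

```python
# Toy reduction of utility to choice dispositions.

def chooses(menu):
    # Stand-in for the agent's dispositions: what she'd pick from a menu.
    ranking = ["wealth", "wisdom", "doughnuts"]
    return min(menu, key=ranking.index)

goods = ["wealth", "wisdom", "doughnuts"]

# The agent prefers X over Y iff she is disposed to pick X from {X, Y}.
prefers = {(x, y) for x in goods for y in goods
           if x != y and chooses([x, y]) == x}

# Assign each good a number so that preferred goods get greater values.
utility = {x: sum(1 for y in goods if (x, y) in prefers) for x in goods}
print(utility)  # {'wealth': 2, 'wisdom': 1, 'doughnuts': 0}

# The "representation": preference matches the numerical order.
assert all(utility[x] > utility[y] for (x, y) in prefers)
```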
In my 2014 paper "Against Magnetism", I
argued that the meta-semantics Lewis defended in "Putnam's Paradox" and pp.45-49
of "New Work" is (a) unattractive, (b) does not fit what Lewis wrote about
meta-semantics elsewhere, and (c) was never Lewis's considered view. In a
paper forthcoming in the AJP, Frédérique Janssen-Lauret and Fraser MacBride
(henceforth, JL&M) disagree with my point (b), and present what they call
"decisive evidence" against (c). Here's my response. In short, I'm not
convinced.
There should be a website (or app) that helps with the following kinds of issues.
- I recently wrote a paper on ability modals in which I sketch some ideas for
how a certain linguistic phenomenon might be compositionally derived. I'm really
unsure about that part of the paper, because I'm not an expert in the relevant
areas of formal semantics. I'd like to get advice from an expert, but none of my
friends are, and I don't want to bother people I don't know.
- I once wrote a paper on decision-theoretic methods in non-consequentialist
ethics. But I don't know much about ethics. I'd need someone to tell me how
non-consequentialists typically think about decisions under uncertainty, who
has already tried to sell decision-theoretic methods for that purpose,
and what key papers I need to read.
- When I submit papers to journals, I often get rejections pointing out
problems that are easy to fix. It would have been good if someone had pointed
out these problems to me before I submitted the paper.
- I think many of my drafts and papers are a little hard to understand, but
I'm not sure why. I'd like someone to give me feedback on which passages are
confusing, where a reader might get lost, etc.
Basically, I'd like to hire (different kinds of) referees to look over my drafts
and give me constructive feedback.
Last week, I gave a talk in Manchester at a
(very nice) workshop on "David Lewis and His Place in the History of Analytic
Philosophy". My talk was on "Lewis's empiricism". I've now written it up as a
paper, since it got too long for a blog post.
The paper is really about hyperintensional epistemology. The question is how we
can make sense of the kind of metaphysical enquiry Lewis was engaged in if we
accept his models of knowledge and belief, which leave no room for substantive
investigations into non-contingent matters.
I wrote this short
piece for a special issue of the Journal of Consciousness Studies on
Chalmers's "The Meta-Problem
of Consciousness" (2018). Much of my paper rehashes ideas from section 5 of
my "Imaginary Foundations" paper, but here I try to present these ideas more simply and
directly, without the Bayesian background.
The central claim I try to defend is that the hard problem of consciousness
arises from a particular method by which our brain processes sensory input.
Agents whose brains use that method can be expected to be puzzled about
phenomenal consciousness, even if they live in a purely physical world.
The story is meant to answer the "meta-problem" of what gives rise to our
puzzlement about consciousness, but it is also meant to dissolve the first-order
problem: once we understand the source of the puzzlement, we should no longer
be puzzled.
In a 2018 paper, J. Dmitri Gallow shows that it is difficult to combine
multiple deference principles. The argument is a little complicated,
but the basic idea is surprisingly simple.
Suppose A and B are two weather forecasters. Let r be the
proposition that it will rain tomorrow, let A=x be the proposition
that A assigns probability x to r; similarly for B=x. Here are two
deference principles you might like to follow:
(1) Cr(r / A=x) = x.
(2) Cr(r / B=x) = x.
Now conceivably, A and B might issue different forecasts. So what
should you believe on the assumption that A=x and B=y, where x and y
are different? One natural idea is to split the difference:
(3) Cr(r / A=x & B=y) = (x+y)/2.
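But splitting the difference clashes with deferring to each forecaster individually. Here's a small numerical check in Python. The joint distribution of forecasts is made up; any prior on which A's announcement leaves you uncertain about B's would make the same point:

```python
# Suppose the two forecasters' announcements are independent, each saying
# 0.2 or 0.8 with equal probability, and suppose your credences obey the
# split-the-difference rule (3). Then principle (1) fails.

forecasts = [0.2, 0.8]
pr_forecast = {x: 0.5 for x in forecasts}  # marginal for B's announcement

def cr_rain_given_A(x):
    # Cr(r / A=x) = sum over y of Cr(B=y / A=x) * Cr(r / A=x & B=y)
    return sum(pr_forecast[y] * (x + y) / 2 for y in forecasts)

for x in forecasts:
    print(x, cr_rain_given_A(x))
# 0.2 -> 0.35, 0.8 -> 0.65: not x, so (1) is violated.
```

Conditional on A=0.2, your credence in rain is 0.35 rather than 0.2, because A's low forecast doesn't drag down your expectation of B's forecast. So you can't follow (1), (2), and (3) at once.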
Consider a world w where eating doughnuts is illegal and where everyone
thinks it is OK to torture animals for fun. Suppose Norman at w is
eating doughnuts while torturing his pet kittens. Is he violating the
laws? Is he doing something immoral?
In one sense, yes, in another, no. His doughnut eating violates the
laws of w, but not the laws of our world. Conversely,
his kitten torturing violates a moral code accepted at our world, but
not a code accepted at w.
In general, when we ask whether people at other worlds do what they
ought to do, we can evaluate their actions relative to their
norms, or we can evaluate them relative to our norms. Both
perspectives make sense. But they lead to different deontic logics.
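Here's a minimal sketch of the two perspectives in Python. Encoding a world's norms as a set of prohibited act-types is my simplification, but it makes the contrast vivid:

```python
# Two ways to evaluate an agent's acts: against the norms accepted at the
# agent's own world, or against the norms accepted at the actual world.

norms = {
    "w":      {"eating doughnuts"},   # prohibited at w
    "actual": {"torturing animals"},  # prohibited at our world
}
normans_acts = {"eating doughnuts", "torturing animals"}

def violations(acts, world):
    # Acts that count as violations relative to the given world's norms.
    return acts & norms[world]

print(violations(normans_acts, "w"))       # {'eating doughnuts'}
print(violations(normans_acts, "actual"))  # {'torturing animals'}
```

Relative to w's norms, Norman's doughnut eating is the violation; relative to ours, it's the kitten torturing. A deontic logic has to decide which of these "ought" operators it is modelling.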