
Baccelli and Mongin (and others) on redescribing the outcomes

There are many alleged counterexamples to expected utility theory: Allais's Paradox, Ellsberg's Paradox, Sen's (1993) polite agent who prefers the second-largest slice of cake, Machina's (1989) mother who prefers fairness when giving a treat to her children, and so on. In all these cases, the preferences of seemingly reasonable people appear not to rank the options by their expected utility.

Those who make these claims generally assume that utility is a function of material goods. In Allais's Paradox, for example, the possible "outcomes" (of which utility is a function) are assumed to be amounts of money. As has often been pointed out, the apparent violations of expected utility theory all go away if the outcomes are individuated more finely – if, for example, we distinguish between an outcome of getting $1000 as the result of a risky gamble and an outcome of getting a sure $1000. See, for example, Weirich (1986), or Dreier (1996).
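The fine-graining move can be made concrete with a toy calculation. The following sketch uses the standard Allais choice pairs, with illustrative utility numbers of my own (the specific values, and the "regret" label, are assumptions for the example, not claims from the literature): once "getting $0 after turning down a sure $1M" counts as a different outcome from plain "$0", the usual Allais preferences maximize expected utility.

```python
# Expected utility with fine-grained outcomes in Allais's Paradox.
# Outcomes are (payoff, context) pairs rather than bare amounts of
# money; all utility numbers below are illustrative assumptions.

def expected_utility(lottery, u):
    """Expected utility of a lottery given as {outcome: probability}."""
    return sum(p * u[outcome] for outcome, p in lottery.items())

# Utility is sensitive to how the money is obtained: getting $0 after
# forgoing a sure $1M carries regret, so it is a distinct outcome.
u = {
    ("$0", "plain"): 0,
    ("$0", "regret"): -20,   # hypothetical regret penalty
    ("$1M", "plain"): 1,
    ("$5M", "plain"): 2,
}

# First choice: a sure $1M (A) vs a gamble (B).
A = {("$1M", "plain"): 1.0}
B = {("$1M", "plain"): 0.89, ("$5M", "plain"): 0.10, ("$0", "regret"): 0.01}

# Second choice: no sure option on the table, so $0 carries no regret.
C = {("$1M", "plain"): 0.11, ("$0", "plain"): 0.89}
D = {("$5M", "plain"): 0.10, ("$0", "plain"): 0.90}

# The common pattern (A over B, D over C) now ranks by expected utility.
print(expected_utility(A, u) > expected_utility(B, u))  # True
print(expected_utility(D, u) > expected_utility(C, u))  # True
```

With coarse outcomes (bare dollar amounts) no utility assignment makes both of these preferences come out as expected-utility maximizing; with the finer individuation, many do.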

On Lipsey and Lancaster and Wiens on the theory of second best

If some ideal is impossible to reach, should we get as close to the ideal as we can?

It's easy to come up with apparent counterexamples. Lipsey and Lancaster (1956) are sometimes said to have proved that getting as close to the ideal as we can is not the best option. Have they really?

Wiens (2020) helpfully summarizes the main result of Lipsey and Lancaster and explains how it applies outside economics. (The Lipsey and Lancaster paper is all about tariffs and taxes and Paretian conditions.)

On Brown on the composition of value

A few thoughts on Brown (2014) and Brown (2020) and the composition of value.

Some propositions (or properties, but let's run with propositions) have value. They are reasons to act one way rather than another. We may ask how this kind of value distributes over the space of propositions.

Since logically equivalent propositions plausibly have the same value, we can picture the propositions as regions in logical space – sets of possible worlds. Now how is the value of a region related to the value of other regions – to its subregions, for example? This is the question Campbell Brown raises in Brown (2014) and Brown (2020).

Is value additive?

When something is good, or desirable, or a reason, then this is usually because it has some good (desirable, etc.) features. The thing may also have bad features, but if the thing is good then the good features outweigh the bad features. How does this weighing work? I'd like to say that the total goodness of a thing is always the sum of the goodness of its features. This "additive" view seems to be unpopular in both ethics and economics. I'll try to defend it.

I first need to state the view more precisely.

To begin, I assume that there are ultimate bearers of value. If we're talking about personal desire, this means that there are some things an agent desires "intrinsically" or "non-derivatively". Being free from pain might be a common example. If you desire to be free from pain then this is typically not because you really desire something else, and you think being free from pain is either a means to the other thing or evidence for the other thing. You simply desire being free from pain, and that's the end of the story.
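The additive view stated above can be put in one line: the total goodness of a thing is the sum of the values of its features. A minimal sketch, with a made-up example (the feature names and numbers are mine, purely for illustration):

```python
# A minimal sketch of the additive view: the total goodness of a thing
# is the sum of the (positive or negative) values of its features.
# All feature names and values are illustrative assumptions.

def total_value(features):
    """Sum the values of a thing's good and bad features."""
    return sum(features.values())

# A job offer with one bad feature outweighed by two good ones.
offer = {
    "interesting work": 5,
    "friendly colleagues": 3,
    "long commute": -4,
}

print(total_value(offer))  # 4: on balance good
```

The thing is good overall just in case the sum is positive; the "weighing" of good against bad features is nothing over and above addition.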

On Gómez Sánchez on naturalness and laws

Gómez Sánchez (2023) asks an important and, in my view, unsolved question: what kinds of properties may figure in the laws of "special science" (chemistry, genetics, etc.)?

For the most part, the patterns captured in special science laws are not entailed by the fundamental laws of physics, nor by the intrinsic powers and dispositions of the relevant objects. Some kind of best-systems account looks appealing: the Weber-Fechner law, the laws of population dynamics, the laws of folk psychology etc. are useful summaries of pervasive and robust regularities in their respective domains. They are the "best systematisation" of the relevant facts, in terms of desiderata like simplicity and strength.

On Smithies, Lennon, and Samuels on irrational belief

I've decided to write somewhat regular short pieces on interesting papers I've recently read. This one is about Smithies, Lennon, and Samuels (2022).

Smithies, Lennon, and Samuels (henceforth, SLS) criticise the view that there are a priori connections between having a belief with a certain content and other states that would be rational given this belief. A simple example of the target view says that believing P is being disposed to act in a way that would bring one closer to satisfying one's desires if P were true. A more complicated example of the target view, on which SLS focus, is Lewis's. According to Lewis, for a mental state to be a belief state with such-and-such content, the state must, under normal conditions, be connected in a certain way to behaviour, perceptual experiences, and other propositional attitudes. SLS deny this.

The subjective Bayesian answer to the problem of induction

Some people – important people, like Richard Jeffrey or Brian Skyrms – seem to believe that Laplace and de Finetti have solved the problem of induction, assuming nothing more than probabilism. I don't think that's true.

I'll try to explain what the alleged solution is, and why I'm not convinced. I'll pick Skyrms as my adversary, mainly because I've just read Diaconis and Skyrms's Ten Great Ideas about Chance, in which Skyrms presents the alleged solution in a somewhat accessible form.
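The centerpiece of the standard Laplace–de Finetti story is the rule of succession: a uniform prior over the unknown chance of success, combined with exchangeable trials, yields a posterior probability of (s+1)/(n+2) for the next success after s successes in n trials. A quick sketch (the uniform-prior setup is the textbook reconstruction, not a claim about how Skyrms states it):

```python
from fractions import Fraction

# Laplace's rule of succession: with a uniform prior over the unknown
# chance of success and exchangeable trials, the posterior probability
# that the next trial succeeds is (s + 1) / (n + 2).

def rule_of_succession(successes, trials):
    """Posterior predictive probability of success on the next trial."""
    return Fraction(successes + 1, trials + 2)

# Before any evidence, the next trial gets probability 1/2.
print(rule_of_succession(0, 0))  # 1/2

# After nine successes in nine trials, it gets 10/11: the probabilist
# "learns from experience", which is the advertised answer to induction.
print(rule_of_succession(9, 9))  # 10/11
```

The question the post presses is whether this really assumes "nothing more than probabilism": the uniform prior (or, in de Finetti's version, the exchangeability assumption) is doing substantive inductive work.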

The problem of metaphysical omniscience

There's a striking tension in Lewis's philosophy. His epistemology and philosophy of mind, on the one hand, leave no room for (non-trivial) a priori knowledge or a priori inquiry. Yet for most of his career, Lewis was engaged in just this kind of inquiry, wondering about the nature of causation, the ontology of sets, the extent of logical space, the existence of universals, and other non-contingent matters. My paper "The problem of metaphysical omniscience" explores some options for resolving the tension. The paper has just come out in a volume, Perspectives on the Philosophy of David K. Lewis, edited by Helen Beebee and A.R.J. Fisher.

Evidential externalism as an antidote to skepticism?

A popular idea in recent (formal) epistemology is that an externalist conception of evidence is somehow useful, or even required, to block the threat of skepticism. (See, for example, Das (2019), Das (2022), and Lasonen-Aarnio (2015). The trend was started by Williamson (2000).)

Negative exhaustification?

Here's an idea that might explain a number of puzzling linguistic phenomena, including neg-raising, the homogeneity presupposition triggered by plural definites, the difficulty of understanding nested negations, and the data often assumed to support conditional excluded middle.

An utterance of

(1a) We will not have PIZZA tonight

conveys two things. Unsurprisingly, it conveys that we will not have pizza tonight. But it also conveys, due to the focus on 'PIZZA', that we will have something else. By comparison,
