
Nair on adding up reasons

Often there are many reasons for and against a certain act or belief. How do these reasons combine into an overall reason? Nair (2021) tries to give an answer.

Nair's starting point is a little more specific. Nair intuits that there are cases in which two equally strong reasons combine into a reason that is twice as strong as each of the individual reasons. In other cases, however, the combined reason is just as strong as the individual reasons, or even weaker.

To make sense of this, we need to explain (1) how strengths of reasons can be represented numerically, and (2) under what conditions the strengths of different reasons add up.
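
As a rough sketch of what such a representation might look like (the notation is mine, not Nair's): suppose every reason \(r\) for a given conclusion gets a numerical weight \(w(r)\). Nair's intuition is then that for some pairs of equally weighty reasons,

\[ w(r_1 \oplus r_2) \;=\; w(r_1) + w(r_2) \;=\; 2\,w(r_1), \]

where \(r_1 \oplus r_2\) is the combined reason, while for other pairs \(w(r_1 \oplus r_2) = w(r_1)\), or even \(w(r_1 \oplus r_2) < w(r_1)\).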

Binding and pre-emptive binding in Newcomb's Problem

When I recently taught Newcomb's Problem in an undergraduate class, opinions were – of course – divided. Some students were one-boxers, some were two-boxers. But few of the one-boxers were EDTers. I hadn't realised this in earlier years. Many of them agreed, after some back and forth, that their reasoning also supports one-boxing in a variation of Newcomb's Problem in which both boxes are transparent. In this version of the case, EDT says that you should two-box.

The argument students gave in support of one-boxing is that committing to one-boxing would make it likely that a million dollars is put into the opaque box.

This line of thought is most convincing if we assume that you know in advance that you will face Newcomb's Problem, before the prediction is made. It is uncontroversial that if you can commit yourself to one-boxing at this point, then you should do it.
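
To see why, suppose (these numbers are mine, for illustration) that the opaque box contains $1,000,000 just in case the predictor foresees one-boxing, that the transparent box contains $1,000, and that the predictor is 99% reliable. If you bind yourself to one-boxing before the prediction is made, your expected payoff is roughly

\[ 0.99 \times \$1{,}000{,}000 + 0.01 \times \$0 = \$990{,}000. \]

If you stay uncommitted and later two-box, it is roughly

\[ 0.99 \times \$1{,}000 + 0.01 \times \$1{,}001{,}000 = \$11{,}000. \]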

By "committing", I mean what Arntzenius, Elga, and Hawthorne (2004) call "binding". By committing yourself to one-box, you would effectively turn your future self into an automaton that is sure to one-box. Your future self would no longer make a decision, based on their information and goals at the time. They would simply execute your plan.

Isaacs and Russell on updating without evidence

Isaacs and Russell (2023) propose a new way of thinking about evidence and updating.

The standard Bayesian picture of updating assumes that an agent has some ("prior") credence function Cr and then receives some (total) new evidence E. The agent then needs to update Cr in light of E, perhaps by conditionalizing on E. There is no room, in this picture, for doubts about E. The evidence is taken on board with absolute certainty.
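
Conditionalizing on E means that the new credence in any hypothesis H is

\[ Cr_{new}(H) \;=\; Cr(H \mid E) \;=\; \frac{Cr(H \wedge E)}{Cr(E)}, \]

which in particular sets \( Cr_{new}(E) = 1 \).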

The standard picture thereby assumes that the agent's cognitive system is perfectly sensitive to a certain aspect of the world: if E is true, the agent is certain to update on E; if E is false, the agent is certain to not update on E.
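
In symbols (my gloss, not the authors' notation): where P measures the reliability of the agent's update mechanism, the standard picture presupposes that

\[ P(\text{the agent conditionalizes on } E \mid E) = 1 \quad\text{and}\quad P(\text{the agent conditionalizes on } E \mid \neg E) = 0. \]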

Srinivasan's "radical externalism"

Internalism about justification is often supported by intuitions about cases. Srinivasan (2020) argues that these intuitions can't be trusted, because there are analogous cases in which they go in the opposite direction. I'll explain why I'm not convinced.

I should say that I'm not sure what this debate is about. Are we talking about some pre-theoretic folk concept of justification? Or about a concept that plays some important theoretical role? Srinivasan acknowledges (in footnote 10) that there might not be a single, precise folk concept of justification. I agree. To clarify her topic, she says that she is interested in the kind of justification that is a precondition for knowledge. This doesn't really help me. I think that 'knowledge' is context-dependent, and that it sometimes means no more than 'true belief'. There is no interesting justification condition that is present in every case of knowledge.

The three pilots problem for CDT

In a comment on an old blog post, a person called "D" brought up a nice puzzle for Causal Decision Theory. Here's (my version of) the scenario.

You have just taken over as one of three pilots on a spaceship that is on its way to Betelgeuse. The spaceship's flight operations are largely automated. The only input needed from the pilots is the destination. At present, the only available destinations are Betelgeuse and a service station on a nearby moon. (Other destinations could not be safely reached with current fuel levels.)

Nencha on counterpart semantics

Informal talk about de re necessity is sometimes "weak" and sometimes "strong", in Kripke's terminology. When I say, 'Elizabeth II could not have failed to be the daughter of George VI', I mean – roughly – that Elizabeth is George's daughter at every world at which she exists. By contrast, when I say, 'Elizabeth II could not have failed to exist', I don't just mean that Elizabeth exists at every world at which she exists. My claim is that she exists at every world whatsoever. The former usage is "weak", the latter "strong".

When people give a semantics for the language of Quantified Modal Logic (QML), they typically treat the box as strong. '\( \Box Fx \)' is assumed to say that x is F at every accessible world, not just at every accessible world at which x exists.
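
Ignoring accessibility, and writing \(Exw\) for 'x exists at w' and \(Fxw\) for 'x is F at w', the two readings of '\( \Box Fx \)' roughly come to this (my gloss):

\[ \text{weak:}\ \forall w\,(Exw \to Fxw) \qquad\qquad \text{strong:}\ \forall w\,Fxw. \]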

Blumberg and Hawthorne on weakening desire

Long ago, in 2007, I expressed sympathy for the idea that desire can be analysed in terms of expected value: 'S desires p' is true iff p worlds are on average better, by S's standards, than not-p worlds, where the "average" is computed with S's credence function. As I mentioned at the time, this has the interesting consequence that 'S desires p' and 'S desires q' do not jointly entail 'S desires p and q'.
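
Spelled out a little more (this is my formalization of the idea): 'S desires p' is true iff

\[ \sum_w Cr(w \mid p)\,V(w) \;>\; \sum_w Cr(w \mid \neg p)\,V(w), \]

where V measures how good a world is by S's standards. A toy example, with made-up numbers, shows how the entailment can fail: let S's credences in \(p \wedge q\), \(p \wedge \neg q\), \(\neg p \wedge q\), and \(\neg p \wedge \neg q\) be 0.01, 0.3, 0.3, and 0.39, with values \(-100\), 10, 10, and 0 respectively. Then the expected value given p is about 6.5, compared to about 4.3 given \(\neg p\) (and likewise for q), so S desires p and desires q. But the expected value given \(p \wedge q\) is \(-100\), far below the roughly 6.1 given \(\neg(p \wedge q)\), so S does not desire \(p \wedge q\).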

Blumberg and Hawthorne (2022) make the same observation, and argue that it is a serious problem for the expected-value analysis. Intuitively, they say, 'Bill wants Ann to attend' and 'Bill wants Carol to attend' entail 'Bill wants Ann or Carol to attend'. In general, they claim, the following principle of Weakening is valid:

Harsanyi's trick

Harsanyi (1955) famously showed that a few seemingly harmless assumptions, when combined, entail the utilitarian doctrine that the goodness of a state of the world is the sum of the state's goodness for each individual. In other words, moral value is additive across people.

Recently, I've argued that value is additive on the grounds that its components are "separable", in the sense that if two states s and s' differ only with respect to some components, then the betterness ranking of s and s' does not depend on the respects in which s and s' agree. Debreu (1960) showed that, under some modest further assumptions, separability entails additive representability. I've never had a close look at Debreu's theorem, since the result isn't surprising.
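
In symbols (my gloss): think of a state as a tuple \(s = (s_1, \dots, s_n)\) of components. Additive representability means that there are functions \(v_1, \dots, v_n\) such that

\[ s \succeq s' \;\iff\; \sum_i v_i(s_i) \ge \sum_i v_i(s'_i). \]

Separability only says that if \(s\) and \(s'\) agree on some components, their ranking does not depend on what that shared part looks like; Debreu's theorem shows that, given his further assumptions, this is enough for a representation of the above form.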

Keiser on metasemantics

There are many conceptions of linguistic meaning. One approach, which I like, assumes that the semantic values we assign to sounds and scribbles function somewhat like the numbers we assign to certain pieces of paper and plastic when we say that they are a "5 pound note" or a "10 pound note": they are a compact summary of the kinds of activities people can perform with the relevant objects. With a 5 pound note you can buy certain kinds of goods. With the sounds 'it is raining' you can inform people that it is raining.

When people like Lewis (1975) spell out this use-based conception of semantics, they generally focus on assertion and information exchange. Roughly, the semantic value assigned to a declarative sentence is identified with the information that is conventionally conveyed by an utterance of the sentence.

Is there a dynamic argument for expected utility maximisation?

Why should you maximise expected utility? A well-known answer – discussed, for example, in McClennen (1990), Cubitt (1996), and Gustafsson (2022) – goes as follows.
