
DiPaolo on second best epistemology

Covid finally caught me, so I fell behind with everything. Let's try to get back to the blogging schedule. This time, I want to recommend DiPaolo (2019). It's a great paper that emphasizes the difference between ideal ("primary") and non-ideal ("secondary") norms in epistemology.

The central idea is that epistemically fallible agents are subject to different norms than infallible agents. An ideal rational agent would, for example, never make a mistake when dividing a restaurant bill. For them, double-checking the result is a waste of time. They shouldn't do it. We non-ideal folk, by contrast, should sometimes double-check the result. As the example illustrates, the "secondary" norms for non-ideal agents aren't just softer versions of the "primary" norms for ideal agents. They can be entirely different.

Cariani on the modal future

I've been reading Fabrizio Cariani's The Modal Future (Cariani (2021)). It's great. I have a few comments.

This book is about the function of expressions like 'will' or 'gonna' that are typically used to talk about the future, as in (1).

(1) I will write the report.

Intuitively, (1) states that a certain kind of writing event takes place – but not right here and now. 'Will' is a displacement operator, shifting the point of evaluation. Where exactly does the writing event have to take place in order for (1) to be true?

Here's a natural first idea. (1) is true as long as a relevant writing event takes place at some point in the future. This yields the standard analysis of 'will' in tense logic:
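
In Priorean terms, with $t$ the time of evaluation, the clause is roughly:

$$\textit{will}\,\phi \text{ is true at } t \;\iff\; \exists t' > t : \phi \text{ is true at } t'.$$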

Dietrich and List on reasons

Let's return to my recent explorations into the formal structure of reasons. One important approach that I haven't talked about yet is that of Dietrich and List, described in Dietrich and List (2013a), Dietrich and List (2013b), and Dietrich and List (2016).

Gallow on causal counterfactuals without miracles and backtracking

Gallow (2023) spells out an interventionist theory of counterfactuals that promises to preserve two apparently incompatible intuitions.

Suppose the laws of nature are deterministic. What would have happened if you had chosen some act that you didn't actually choose? The two apparently incompatible intuitions are:

(A1) Had you chosen differently, no law of nature would have been violated.

(A2) Had you chosen differently, the initial conditions of the universe would not have been changed.

Rejecting one of these intuitions is widely thought to spell trouble for Causal Decision Theory. Gallow argues that they can both be respected. I'll explain how. Then I'll explain why I'm not convinced.

Kocurek on chance and would

A lot of rather technical papers on conditionals have come out in recent years. Let's have a look at one of them: Kocurek (2022).

The paper investigates Al Hájek's argument (e.g. in Hájek (2021)) that "chance undermines would". It begins with a neat observation.

Sher on the weight of reasons

A few thoughts on Sher (2019), which I found advertised in Nair (2021).

This (long and rich) paper presents a formal model of reasons and their weight, with the aim of clarifying how different reasons for or against an act combine.

Sher's guiding idea is to measure the weight by which a reason supports an act in terms of the effect that coming to know the reason would have on the act's desirability.
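
As a rough gloss (mine, not Sher's official definition, which is considerably more involved), the idea is something like

$$w(r, a) \;=\; D(a \mid r) - D(a),$$

where $D(a)$ is the act's desirability before learning the reason $r$, and $D(a \mid r)$ its desirability after coming to know $r$. A reason with positive weight makes the act look better; a reason with negative weight counts against it.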

Kammerer on acquaintance and certainty

Many experiences have phenomenal properties: there is something it is like to have them. A puzzling fact about these properties is that we appear to know about them in a special, direct fashion: we are "acquainted" with the phenomenal properties of our experiences. Another, related puzzle is that we appear to know about these properties with absolute certainty: if you have an experience as of looking at a red wall, you can conclusively rule out the possibility that you have an experience as of looking at a green wall.

In Schwarz (2018), I put forward a tentative explanation of these facts. I argued that it would be useful for an agent in a world like ours to have a credence function defined over a space that includes special "imaginary" propositions that are causally tied to stimulations of their sense organs in such a way that any given stimulation makes the agent certain of a corresponding imaginary proposition. What we conceptualise as propositions about phenomenal properties (of our experience), I argued, might be such imaginary propositions.
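
Here is a small toy model of the proposal – my own illustrative sketch, not the model from Schwarz (2018) – in which a stimulation of the sense organs triggers conditionalisation on an imaginary proposition, making the agent certain of it while leaving its ordinary worldly beliefs merely probable:

```python
# A toy illustration (my sketch, not the actual model in Schwarz 2018):
# credences are defined over pairs of an ordinary world-state and an
# "imaginary" proposition; a stimulation of the sense organs triggers
# conditionalisation on the matching imaginary proposition.

credence = {
    ("red_wall", "RED_EXPERIENCE"): 0.45,
    ("red_wall", "GREEN_EXPERIENCE"): 0.05,
    ("green_wall", "RED_EXPERIENCE"): 0.05,
    ("green_wall", "GREEN_EXPERIENCE"): 0.45,
}

# Hypothetical wiring from stimulations to imaginary propositions.
stim_to_imaginary = {
    "red_light_on_retina": "RED_EXPERIENCE",
    "green_light_on_retina": "GREEN_EXPERIENCE",
}

def receive(credence, stimulation):
    """Conditionalise on the imaginary proposition tied to the stimulation."""
    prop = stim_to_imaginary[stimulation]
    total = sum(p for (w, i), p in credence.items() if i == prop)
    return {(w, i): (p / total if i == prop else 0.0)
            for (w, i), p in credence.items()}

updated = receive(credence, "red_light_on_retina")
# The agent is now certain of RED_EXPERIENCE (those cells sum to 1),
# but only 0.9 confident that it faces a red wall.
```

The point of the construction is that certainty attaches to the imaginary proposition, not to any ordinary hypothesis about the environment.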

Revised decision theory notes

I have revised the lecture notes for my "Belief, Desire, and Rational Choice" course, making lots of small improvements (I hope) here and there. The revised notes are here, and the LaTeX source is on GitHub.

Nair on adding up reasons

Often there are many reasons for and against a certain act or belief. How do these reasons combine into an overall reason? Nair (2021) tries to give an answer.

Nair's starting point is a little more specific. He intuits that there are cases in which two equally strong reasons combine into a reason that is twice as strong as either of them. In other cases, however, the combined reason is just as strong as the individual reasons, or even weaker.

To make sense of this, we need to explain (1) how the strength of a reason can be represented numerically, and (2) under what conditions the strengths of different reasons add up.
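
A toy illustration (my numbers, not Nair's): suppose the fact that the food is good and the fact that the company is good are each reasons of strength 2 to attend a dinner. If the reasons are independent, it is natural to say they combine to strength 2 + 2 = 4. But if the second "reason" is just a redescription of the first – 'the host invited me' and 'I was invited by the host' – the combined strength plausibly stays at 2. A good account should tell us when we get the first pattern and when the second.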

Binding and pre-emptive binding in Newcomb's Problem

When I recently taught Newcomb's Problem in an undergraduate class, opinions were – of course – divided. Some students were one-boxers, some were two-boxers. But few of the one-boxers were EDTers. I hadn't realised this in earlier years. Many of them agreed, after some back and forth, that their reasoning also supports one-boxing in a variation of Newcomb's Problem in which both boxes are transparent. In this version of the case, EDT says that you should two-box.
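
To see the contrast, here is a toy expected-value calculation with made-up numbers (a 99%-reliable predictor, $1,000,000 in the opaque box if one-boxing was predicted, $1,000 always in the transparent box), showing why EDT one-boxes in the standard case:

```python
# Toy EDT calculation for the standard Newcomb Problem.
# Assumed numbers (not from the post): a 99%-reliable predictor,
# $1,000,000 in the opaque box if one-boxing was predicted,
# $1,000 always in the transparent box.

reliability = 0.99
million, thousand = 1_000_000, 1_000

# EDT weighs outcomes by their probability conditional on the act.
edt_one_box = reliability * million
edt_two_box = reliability * thousand + (1 - reliability) * (million + thousand)

print(f"EDT value of one-boxing: {edt_one_box:,.0f}")  # 990,000
print(f"EDT value of two-boxing: {edt_two_box:,.0f}")  # 11,000
```

In the transparent variant, by contrast, you can see what is in the boxes, so conditioning on your choice no longer shifts your credence about their contents; taking both boxes then always nets you an extra thousand dollars, which is why EDT recommends two-boxing there.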

The argument students gave in support of one-boxing is that committing to one-boxing would make it likely that a million dollars is put into the opaque box.

This line of thought is most convincing if we assume that you know in advance that you will face Newcomb's Problem, before the prediction is made. It is uncontroversial that if you can commit yourself to one-boxing at this point, then you should do it.

By "committing", I mean what Arntzenius, Elga, and Hawthorne (2004) call "binding". By committing yourself to one-box, you would effectively turn your future self into an automaton that is sure to one-box. Your future self would no longer make a decision, based on their information and goals at the time. They would simply execute your plan.
