
Pettigrew on epistemic risk and the demands of rationality

Pettigrew (2021) defends a type of permissivism about rational credence inspired by James (1897), on which different rational priors reflect different attitudes towards epistemic risk. I'll summarise the main ideas and raise some worries.

(There is, of course, much more in the book than what I will summarise, including many interesting technical results and some insightful responses to anti-permissivist arguments.)

Mackay on counterfactual epistemic scenarios

An interesting new paper by David Mackay (2022) raises a challenge to popular ideas about the semantics of modals. Mackay presents some data that look incompatible with classical two-dimensional semantics. But the data fit classical two-dimensionalism nicely if we combine it with a flexible form of counterpart semantics.

Before I discuss the data, here's a reminder of some differences between epistemic modals and non-epistemic ("metaphysical") modals.

Decreasing accuracy through learning

Last week I gave a talk in which I claimed (as an aside) that if you update your credences by conditionalising on a true proposition then your credences never become more inaccurate. That seemed obviously true to me. Today I tried to quickly prove it. I couldn't. Instead I found that the claim is false, at least on popular measures of accuracy.

The problem is that conditionalising on a true proposition typically increases the probability of true propositions as well as false propositions. If we measure the inaccuracy of a credence function by adding up an inaccuracy score for each proposition, the net effect is sensitive to how exactly that score is computed.
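To see how this can happen, here is a toy counterexample (the numbers are mine, for illustration): three worlds, inaccuracy measured by the Brier score summed over all eight propositions in the algebra, and a prior heavily concentrated on a false world.

```python
from itertools import chain, combinations

worlds = ["w1", "w2", "w3"]   # w1 is the actual world
actual = "w1"

def propositions():
    """Every proposition, i.e. every subset of the set of worlds."""
    return chain.from_iterable(
        combinations(worlds, r) for r in range(len(worlds) + 1)
    )

def brier_inaccuracy(cr):
    """Sum, over all propositions, of the squared gap between credence and truth value."""
    total = 0.0
    for p in propositions():
        credence = sum(cr[w] for w in p)
        truth = 1.0 if actual in p else 0.0
        total += (credence - truth) ** 2
    return total

def conditionalize(cr, e):
    """Conditionalize the credence function cr on proposition e (a set of worlds)."""
    cr_e = sum(cr[w] for w in e)
    return {w: (cr[w] / cr_e if w in e else 0.0) for w in cr}

prior = {"w1": 0.01, "w2": 0.98, "w3": 0.01}
e = {"w1", "w2"}   # a true proposition, since the actual world w1 is in e

posterior = conditionalize(prior, e)
print(brier_inaccuracy(prior))      # ≈ 3.8812
print(brier_inaccuracy(posterior))  # ≈ 3.9196: higher, despite learning a truth
```

The mechanism is just the one described above: conditionalising on the true proposition {w1, w2} pushes the credence in the false proposition {w2} from 0.98 up to roughly 0.99, and the accuracy gained on propositions like {w3} and {w1, w2} doesn't make up for it.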

Sobel's strictly causal decision theory

In Jordan Howard Sobel's papers on decision theory, he generally defines the (causal) expected utility of an act in terms of a special conditional that he calls "causal" or "practical". Concretely, he suggests that

\[ (1)\quad EU(A) = \sum_{w} Cr(A\; \Box\!\!\to w)V(w), \]

where 'A □→ B' is the special conditional that is true iff either (i) B is the case and would remain the case if A were the case, or (ii) B is not the case but would be the case as a causal consequence of A if A were the case (see e.g. Sobel (1986), pp.152f., or Sobel (1989), pp.175f.).
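With (1) in hand, computing an act's expected utility is a simple weighted sum once the credences in the practical conditionals are fixed. A minimal sketch, with the conditional credences and world values stipulated by hand (they are not from Sobel):

```python
# Hypothetical inputs: credences in the practical conditionals "A □→ w",
# one for each world w (summing to 1), and the values V(w) of the worlds.
cr_cond = {"w1": 0.7, "w2": 0.2, "w3": 0.1}   # Cr(A □→ w)
value = {"w1": 10.0, "w2": 0.0, "w3": -5.0}   # V(w)

def sobel_eu(cr, v):
    """EU(A) as in (1): the Cr(A □→ w)-weighted sum of the values of worlds."""
    return sum(cr[w] * v[w] for w in cr)

print(sobel_eu(cr_cond, value))   # 0.7*10 + 0.2*0 + 0.1*(-5) = 6.5
```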

Counterexamples to Good's Theorem

Good (1967) famously "proved" that the expected utility of an informed decision is always at least as great as the expected utility of an uninformed decision. The conclusion is clearly false. Let's have a look at the proof and its presuppositions.

Suppose you can either perform one of the acts A1…An now, or learn the answer to some question E and afterwards perform one of A1…An. Good argues that the second option is always at least as good as the first. The supposed proof goes as follows.
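Here is a small numerical illustration (all numbers hypothetical) of the comparison under the theorem's usual presuppositions: a fixed joint distribution over states and answers, and updating by conditionalisation on the true answer before choosing.

```python
# Toy check of Good's inequality under its standard presuppositions.
# States s, question E with answers e, acts a with utilities u[a][s].
states = ["s1", "s2"]
answers = ["e1", "e2"]
acts = ["a1", "a2"]

p_state = {"s1": 0.5, "s2": 0.5}   # prior over states
p_answer_given_state = {           # likelihoods P(e | s)
    "s1": {"e1": 0.9, "e2": 0.1},
    "s2": {"e1": 0.2, "e2": 0.8},
}
u = {"a1": {"s1": 10, "s2": 0}, "a2": {"s1": 0, "s2": 10}}

# Option 1: choose now, maximizing prior expected utility.
eu_now = max(sum(p_state[s] * u[a][s] for s in states) for a in acts)

# Option 2: learn the answer first, then choose the posterior-best act.
eu_learn = 0.0
for e in answers:
    p_e = sum(p_state[s] * p_answer_given_state[s][e] for s in states)
    posterior = {s: p_state[s] * p_answer_given_state[s][e] / p_e for s in states}
    eu_learn += p_e * max(sum(posterior[s] * u[a][s] for s in states) for a in acts)

print(eu_now, eu_learn)   # 5.0 vs 8.5: learning first is weakly better here
```

In this setup the informed option always comes out at least as good, because the expectation of a maximum is at least the maximum of the expectations. The counterexamples arise when the proof's presuppositions fail.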

A plan you shouldn't follow (even if you think you will)

Here is a case where a plan maximises expected utility, you are sure that you are going to follow the plan, and yet the plan tells you to do things that don't maximise expected utility.

Middle Knowledge. In front of you are two doors. If you go through the left door, you come into a room with a single transparent box containing $7. If you go through the right door, you come into a room with two opaque boxes, one black, one white. Your first choice is which door to take. Then you have to choose exactly one box from the room in which you find yourself. A psychologist has figured out which box you would take if you found yourself in the room with the two boxes. She has put $10 into the box she thinks you would take, and $0 into the other.

More troublesome sequential choices

Two recent papers – Oesterheld and Conitzer (2021) and Gallow (2021) – suggest that CDT gives problematic recommendations in certain sequential decision situations.

Decision problems without equilibria

In my recent posts on decision theory, I've assumed that friends of CDT should accept a "ratifiability" condition according to which an act is rationally permissible only if it maximises expected utility conditional on being chosen.

Sometimes no act meets this condition. In that case, I've assumed that one should be undecided. More specifically, I've assumed that one should be in a "stable" state of indecision in which no (pure) option is preferable to one's present state of indecision. Unfortunately, there are decision problems in which no act is ratifiable and no state of indecision is stable. I'm not sure what to say about such cases. And I wonder if whatever we should say about them also motivates relaxing the ratifiability condition for certain cases in which there are ratifiable options.
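For concreteness, here is a sketch of the ratifiability test on a Death-in-Damascus-style matrix (payoffs hypothetical): an act passes iff it maximises expected utility computed with credences conditional on its own choice. In this symmetric matrix no pure act passes, though a 50/50 state of indecision would still be stable; the harder cases break that as well.

```python
# Ratifiability check. u[a][s]: utility of act a in state s.
# cr_given_choice[a][s]: credence in state s conditional on choosing a.
acts = ["stay", "flee"]
states = ["death_here", "death_there"]

u = {
    "stay": {"death_here": 0, "death_there": 10},
    "flee": {"death_here": 10, "death_there": 0},
}
# A reliable predictor: choosing an act is strong evidence that
# death awaits wherever that act takes you.
cr_given_choice = {
    "stay": {"death_here": 0.9, "death_there": 0.1},
    "flee": {"death_here": 0.1, "death_there": 0.9},
}

def eu(act, chosen):
    """EU of act, with credences conditional on having chosen `chosen`."""
    return sum(cr_given_choice[chosen][s] * u[act][s] for s in states)

# An act is ratifiable iff no alternative does better, conditional on its choice.
ratifiable = [a for a in acts if all(eu(a, a) >= eu(b, a) for b in acts)]
print(ratifiable)   # []: conditional on either choice, the other act looks better
```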

A Dutch book against CDT? (EDC, ch.8)

The eighth and final chapter of Evidence, Decision and Causality asks whether the actions over which we deliberate should be evidentially independent of the past. It also presents a final alleged counterexample to CDT.

A few quick comments on the first topic.

It is often assumed that there can be evidential connections between what acts we will choose and what happened in the past. In Newcomb's Problem, for example, you can be confident that the predictor foresaw that you'd one-box if you one-box, and that she foresaw that you'd two-box if you two-box. Some philosophers, however, have suggested that deliberating agents should regard their acts as evidentially independent of the past. If they are right then even EDT recommends two-boxing in Newcomb's Problem.
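To make the evidential connection concrete, here is EDT's news-value calculation in Newcomb's Problem, with standard payoffs and a stipulated 0.99-reliable predictor (my numbers, for illustration):

```python
# EDT evaluates each act with credences conditional on that very act.
reliability = 0.99          # Cr(predictor foresaw one-boxing | you one-box), etc.
million, thousand = 1_000_000, 1_000

# One-boxing: probably the opaque box is full; two-boxing: probably it's empty.
eu_one_box = reliability * million + (1 - reliability) * 0
eu_two_box = reliability * thousand + (1 - reliability) * (million + thousand)

print(eu_one_box, eu_two_box)   # roughly 990000 vs 11000
```

If, on the other hand, deliberating agents should regard their acts as evidentially independent of the predictor's past behaviour, both conditional credences collapse to the same unconditional value, and EDT's verdict coincides with CDT's.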

More on dynamic consistency in CDT

One might intuit that any rationally choosable plan should be rationally implementable. In the previous post, I discussed a scenario in which some forms of CDT violate that principle. In this post, I have some more thoughts on how this can happen. I also consider some nearby principles and look at the conditions under which they might hold.

Throughout this post I'll assume that we are dealing with ideally rational agents with stable basic desires. We're interested in the attitudes such agents should take towards their options in simple, finite sequential choice situations where no relevant information about the world arrives in between the choice points.
