I've read around a bit in the literature on higher-order evidence. Two different ideas seem to go with this label. One concerns the possibility of inadequately responding to one's evidence. The other concerns the possibility of having imperfect information about one's evidence. I have a similar reaction to both issues, one that I haven't seen in the papers I've looked at. Pointers are very welcome.

I'll begin with the first issue.

Let's assume that a rational agent proportions her beliefs to her evidence. This can be hard. For example, it's often hard to properly evaluate statistical data. Suppose you have evaluated the data and reached the correct conclusion, but now receive misleading evidence that you've made a mistake. How should you react?

Some (e.g. Christensen (2010)) say you should reduce your confidence in the conclusion you've reached. Others (e.g. Tal (2021)) say you should remain *steadfast* and not reduce your confidence.

I've been teaching a course called *Logic 2: Modal Logics* for the past few years. It's an intermediate logic course for third-year Philosophy students, all of whom have taken intro logic. I'm not entirely convinced that a second logic course should focus on modal logic, but it works OK.

One nice aspect of modal propositional logic is that models, proofs, soundness, completeness, etc. are not as trivial as in classical propositional logic, but easier than in classical predicate logic. I also like the many philosophical applications. I spend a week on epistemic logic, another on deontic logic, one on temporal logic, and one on conditionals.

Anyway, I've just uploaded my lecture notes to GitHub, in case anyone is interested. The LaTeX source is there as well.

If a certain hypothesis entails that N percent of all observers in the universe have a certain property, how likely is it that *we* have that property – conditional on the hypothesis, and assuming we have no other relevant information?

Answer: It depends on what else the hypothesis says. If, for example, the hypothesis says that 90 percent of all observers have three eyes, and also that we ourselves have two eyes, then the probability that we have three eyes conditional on the hypothesis is zero.
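The mechanism can be made explicit with a toy calculation. (This is just an illustrative sketch; the function name, the ten-observer population, and the numbers are mine.)

```python
from fractions import Fraction

def p_we_have_property(observers, property_holds, we_are=None):
    """Credence that *we* have the property, conditional on a hypothesis
    describing the observers.

    observers: list of observer descriptions (here: eye counts)
    property_holds: predicate on a description
    we_are: index of the observer the hypothesis says we are, or None
            if the hypothesis is silent about who we are (in which case
            we spread credence uniformly over all observers)."""
    if we_are is not None:
        return Fraction(int(property_holds(observers[we_are])), 1)
    hits = sum(1 for o in observers if property_holds(o))
    return Fraction(hits, len(observers))

# 10 observers: 9 with three eyes, 1 (observer 0) with two eyes.
pop = [2] + [3] * 9
three_eyed = lambda eyes: eyes == 3

# H1: "90 percent of observers have three eyes", silent about us.
print(p_we_have_property(pop, three_eyed))            # 9/10
# H2: H1 plus "we are the two-eyed observer".
print(p_we_have_property(pop, three_eyed, we_are=0))  # 0
```

The uniform spread over observers is, of course, itself a substantive anthropic assumption; the point here is only that self-locating information in the hypothesis overrides it.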

This effect is easy to miss because many hypotheses that appear to be just about the universe as a whole secretly contain special information about us. Consider the following passage from Carroll (2010), cited in Arntzenius and Dorr (2017):

In the previous post I argued that rational priors must favour some possibilities over others, and that this is a problem for Richard Pettigrew's model of Jamesian permissivism. It also points towards an alternative model that might be worth exploring.

I claim that, in the absence of unusual evidence, a rational agent should be confident that observed patterns continue in the unobserved part of the world, that witnesses tell the truth, that rain experiences indicate rain, and so on. In short, they should give low credence to various skeptical scenarios. How low? Arguably, our epistemic norms don't fix a unique and precise answer.

Pettigrew (2021) defends a type of permissivism about rational credence inspired by James (1897), on which different rational priors reflect different attitudes towards epistemic risk. I'll summarise the main ideas and raise some worries.

(There is, of course, much more in the book than what I will summarise, including many interesting technical results and some insightful responses to anti-permissivist arguments.)

An interesting new paper by David Mackay, Mackay (2022), raises a challenge to popular ideas about the semantics of modals. Mackay presents some data that look incompatible with classical two-dimensional semantics. But the data nicely fit classical two-dimensionalism if we combine it with a flexible form of counterpart semantics.

Before I discuss the data, here's a reminder of some differences between epistemic modals and non-epistemic ("metaphysical") modals.

Last week I gave a talk in which I claimed (as an aside) that if you update your credences by conditionalising on a true proposition then your credences never become more inaccurate. That seemed obviously true to me. Today I tried to quickly prove it. I couldn't. Instead I found that the claim is false, at least on popular measures of accuracy.

The problem is that conditionalising on a true proposition typically increases the probability of true propositions as well as false propositions. If we measure the inaccuracy of a credence function by adding up an inaccuracy score for each proposition, the net effect is sensitive to how exactly that score is computed.
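Here is a toy illustration of how this can go wrong, using the popular Brier measure (squared distance from truth value) summed over all propositions. The three-world model and the numbers are my own.

```python
from itertools import chain, combinations

worlds = ["w1", "w2", "w3"]   # w1 is the actual world
prior = {"w1": 0.2, "w2": 0.7, "w3": 0.1}

def propositions(ws):
    """All subsets of the set of worlds."""
    return chain.from_iterable(combinations(ws, r) for r in range(len(ws) + 1))

def brier_inaccuracy(cr, actual):
    """Sum, over all propositions, of the squared distance between the
    credence in the proposition and its truth value at the actual world."""
    total = 0.0
    for prop in propositions(worlds):
        credence = sum(cr[w] for w in prop)
        truth = 1.0 if actual in prop else 0.0
        total += (credence - truth) ** 2
    return total

# Conditionalise on the true proposition E = {w1, w2}.
E = {"w1", "w2"}
pE = sum(prior[w] for w in E)
posterior = {w: (prior[w] / pE if w in E else 0.0) for w in worlds}

print(brier_inaccuracy(prior, "w1"))      # ≈ 2.28
print(brier_inaccuracy(posterior, "w1"))  # ≈ 2.42: inaccuracy went up
```

Conditionalising on E raises the credence in the false propositions {w2} and {w2, w3} along with the true ones, and under the Brier score the losses here outweigh the gains.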

In Jordan Howard Sobel's papers on decision theory, he generally defines the (causal) expected utility of an act in terms of a special conditional that he calls "causal" or "practical". Concretely, he suggests that

\[
(1)\quad EU(A) = \sum_{w} Cr(A\; \Box\!\!\to w)V(w),
\]

where 'A □→ B' is the special conditional that is true iff either (i) B is the case and would remain the case if A were the case, or (ii) B is not the case but would be the case *as a causal consequence of A* if A were the case (see e.g. Sobel (1986), pp.152f., or Sobel (1989), pp.175f.).
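Definition (1) is easy to state in code. The following sketch applies it to a Newcomb-style toy case; the world labels, payoffs, and the particular credences over the causal conditionals are my own illustration, not Sobel's.

```python
def causal_eu(cr, value):
    """Sobel-style causal expected utility of an act:
    EU(A) = sum over worlds w of Cr(A □→ w) * V(w),
    where cr[w] is the credence that w would obtain
    if the act were performed."""
    return sum(cr[w] * value[w] for w in value)

# Worlds individuated by the payoff you end up with:
# M = opaque box is full, T = you also take the transparent $1000.
value = {"M+T": 1_001_000, "M": 1_000_000, "T": 1_000, "zero": 0}

p = 0.9  # credence that the predictor foresaw your actual choice

# Cr(A □→ w): the boxes are already filled, so what you would get
# depends only on the (unknown) prediction, not on what you now do.
one_box = {"M+T": 0.0, "M": p,   "T": 0.0,   "zero": 1 - p}
two_box = {"M+T": p,   "M": 0.0, "T": 1 - p, "zero": 0.0}

print(causal_eu(one_box, value))  # ≈ 900000
print(causal_eu(two_box, value))  # ≈ 901000: two-boxing maximises EU
```

Because the credences attach to the causal conditionals rather than to ordinary conditional probabilities, the definition delivers the two-boxing verdict characteristic of causal decision theory.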

Good (1967) famously "proved" that the expected utility of an informed decision is always at least as great as the expected utility of an uninformed decision. The conclusion is clearly false. Let's have a look at the proof and its presuppositions.

Suppose you can either perform one of the acts A_{1}…A_{n} now, or learn the answer to some question E and afterwards perform one of A_{1}…A_{n}. Good argues that the second option is always at least as good as the first. The supposed proof goes as follows.

Here is a case where a plan maximises expected utility, you are sure that you are going to follow the plan, and yet the plan tells you to do things that don't maximise expected utility.

*Middle Knowledge.* In front of you are two doors. If you go through the left door, you come into a room with a single transparent box containing $7. If you go through the right door, you come into a room with two opaque boxes, one black, one white. Your first choice is which door to take. Then you have to choose exactly one box from the room in which you find yourself. A psychologist has figured out which box you would take if you found yourself in the room with the two boxes. She has put $10 into the box she thinks you would take, and $0 into the other.