## How to serve two epistemic masters

In this 2018 paper, J. Dmitri Gallow shows that it is difficult to combine multiple deference principles. The argument is a little complicated, but the basic idea is surprisingly simple.

Suppose A and B are two weather forecasters. Let r be the proposition that it will rain tomorrow, let A=x be the proposition that A assigns probability x to r; similarly for B=x. Here are two deference principles you might like to follow:

(1) Cr(r / A=x) = x.

(2) Cr(r / B=x) = x.

Now conceivably, A and B might issue different forecasts. So what should you believe on the assumption that A=x and B=y, where x and y are different? One natural idea is to split the difference:

(3) Cr(r / A=x & B=y) = 0.5 x + 0.5 y.

But as Gallow proves, that's impossible, unless you're sure that x=y. Here's the informal reason.

Consider the range of possible x values. Take a surprisingly low one -- say, 0.01. By (1), conditional on A=0.01, your credence in r is 0.01.

Since A and B might disagree, B's probability y might be higher or lower than 0.01. Intuitively, y is more likely to lie above 0.01 than below, even conditional on A=0.01. By (3), this implies that your credence in r would have to be greater than 0.01, because your credence in r conditional on A=0.01 is the weighted average of your credences in r conditional on A=0.01 & B=y, weighted by the relevant credences in B=y conditional on A=0.01. In other words, (3) requires that, conditional on A=x, your expectation of B is x. But that is implausible (and in fact mathematically impossible) if we consider especially high or low values of x, assuming A and B can disagree.
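To see the averaging argument with concrete numbers, here's a minimal sketch. The conditional credences for B are made up for illustration: conditional on A=0.01, B is just as likely to say 0.5 as to agree.

```python
# Sketch of the averaging argument, with made-up numbers.
# Conditional on A=0.01, suppose B is as likely to be well above
# 0.01 as to agree with it (B=0.01 or B=0.5, each with credence
# 0.5). Principle (3) then forces Cr(r / A=0.01) > 0.01,
# contradicting deference principle (1).

def split_difference(x, y):
    """Principle (3): Cr(r / A=x & B=y) = 0.5x + 0.5y."""
    return 0.5 * x + 0.5 * y

x = 0.01
cr_B_given_A = {0.01: 0.5, 0.5: 0.5}  # Cr(B=y / A=x), stipulated

# Law of total probability:
# Cr(r / A=x) = sum over y of Cr(B=y / A=x) * Cr(r / A=x & B=y)
cr_r_given_A = sum(p * split_difference(x, y)
                   for y, p in cr_B_given_A.items())

print(cr_r_given_A)        # well above the 0.01 that (1) demands
assert cr_r_given_A > x
```

The sum works out to 0.1325, so (1) and (3) can't both hold here, exactly as the informal argument predicts.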

What should we make of this result? An obvious weak spot is (3). However, the argument I just gave (and the proof Gallow gives) obviously generalises to unequal (non-trivial) weights instead of 0.5. The argument (but not Gallow's proof) also generalises to various non-linear ways of aggregating the two expert judgements. To simplify, suppose you're sure that neither forecaster's probability lies below 0.01. Unless you're sure that A and B agree, it is then impossible to satisfy (1) and (3'):

(3') Cr(r / A=0.01 & B=y) > 0.01 whenever y > 0.01.

It doesn't even help if A's judgement completely trumps B's, so that we have

(3'') Cr(r / A=x & B=y) = x.

For that still renders the second deference principle (2) false, by the same argument as above. (We can hardly have both A trump B and B trump A.)

Gallow's result is a neat refutation of conciliationism about peer disagreement (as he notes). But it also appears to have other applications, which I find more troubling.

For example, let A be the objective chance (of rain tomorrow), and B my future credence (in rain tomorrow). In this case, I do want to hold on to both (1) and (2), at least under somewhat idealised circumstances. (1) follows from the Principal Principle, and (2) is the principle of Reflection.

Clearly, we can't be sure that our future credence equals the objective chance. So Gallow's result tells us that we can't have anything like (3): on the hypothesis that our future credence and the chances disagree, we can't split the difference. Nor can we let, say, the supposed chances generally trump the future credence.

Fortunately, I think all these options are implausible anyway.

Let's take a concrete scenario. My current credence in rain tomorrow (r) is 0.6. Tomorrow, I might have some relevant new evidence. Let's pretend (for simplicity) that there are exactly four things I might learn:

- I might see that it's raining (R).
- I might see that it's not raining (~R).
- I might have no relevant new information, because I haven't looked outside (N).
- I might be looking outside and it's so foggy that I can't tell if it's raining (F).

R would increase my credence in r to 1, ~R to 0; N would leave it at 0.6. F, let's say, would slightly increase my credence in r to 0.7.

Let's stipulate (arbitrarily) that my current credence in getting information N is 0.5, and 0.2 for F. If my credences satisfy the Reflection principle, then my present credence in r matches my expected future credence. It follows that getting R must have credence 0.16 and getting ~R 0.14.
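To check the arithmetic: Reflection requires that the expected future credence equals the current credence of 0.6, and with Cr(N)=0.5 and Cr(F)=0.2 fixed, that pins down Cr(R)=0.16 and Cr(~R)=0.14.

```python
# Checking the Reflection arithmetic: the expectation of my future
# credence in r must equal my current credence of 0.6. With
# Cr(N)=0.5 and Cr(F)=0.2 stipulated, Cr(R)=0.16 and Cr(~R)=0.14
# are the only values that work.

future_credence = {'R': 1.0, '~R': 0.0, 'N': 0.6, 'F': 0.7}
cr_evidence = {'R': 0.16, '~R': 0.14, 'N': 0.5, 'F': 0.2}

# the four evidence scenarios exhaust the possibilities
assert abs(sum(cr_evidence.values()) - 1) < 1e-9

expected = sum(cr_evidence[e] * future_credence[e]
               for e in future_credence)
assert abs(expected - 0.6) < 1e-9   # Reflection holds
```

(0.16·1 + 0.14·0 + 0.5·0.6 + 0.2·0.7 = 0.6, and the four credences sum to 1, so the numbers are forced.)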

Now comes the crucial question. What's my credence in r on the hypothesis that the chance of r is x *and* my future credence is y, where x and y are different?

Let's take x=0.8. We then have four questions, because my future credence in r can take four values: 1, 0, 0.6, and 0.7.

- What is Cr(r / A=0.8 & B=1)? Answer: 1. The hypothesis B=1 implies that tomorrow I see that it's raining, which implies that it will be raining, even if the present chance of rain is 0.8.
- What is Cr(r / A=0.8 & B=0)? Answer: 0, by the same reasoning.
- What is Cr(r / A=0.8 & B=0.6)? Answer: 0.8. The hypothesis B=0.6 implies that I won't have looked outside. This provides no relevant information about whether it's raining, so plausibly I should align my credence with the chances.
- What is Cr(r / A=0.8 & B=0.7)? The answer isn't obvious. It depends on how likely severe fog is given rain and given not-rain. It's possible that the fog provides *further evidence* for rain, over and above the chance being 0.8. In that case, Cr(r / A=0.8 & B=0.7) > 0.8. But it's also possible that the fog evidence is evidence against rain, given the chance being 0.8. Let's assume for the sake of the argument that Cr(r / A=0.8 & B=0.7) = 0.75, in line with (3).

The point to note is that there is no uniform way in which disagreements between chance and future credence are resolved: sometimes chance trumps, sometimes future credence trumps, sometimes we may split the difference, sometimes the disagreement data provides even further evidence for r than each individual hypothesis.

(This last case also happens with weather forecasters or other experts. Suppose two investigators have looked at different evidence regarding a crime and both assign probability 0.8 to Jones being the culprit. The rational response upon learning this might well be to give credence greater than 0.8 to that hypothesis.)
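A toy Bayesian version of the investigators case makes this vivid. The numbers are my own illustration: I assume a prior of 0.5 in Jones being the culprit, and that the two bodies of evidence are independent conditional on guilt and on innocence.

```python
# Toy model of the two-investigators case (illustrative numbers:
# prior 0.5, and the two bodies of evidence are conditionally
# independent). Each investigator's evidence alone takes the prior
# to 0.8; pooling both via Bayes takes it above 0.8.

prior = 0.5
# A posterior of 0.8 from a 0.5 prior corresponds to a likelihood
# ratio of 0.8/0.2 = 4 in favour of guilt.
lr = 0.8 / 0.2

# combine the two independent pieces of evidence in odds form
odds = (prior / (1 - prior)) * lr * lr
posterior = odds / (1 + odds)

assert posterior > 0.8   # pooled credence exceeds each expert's 0.8
```

The pooled posterior is 16/17, about 0.94: the agreement of two independent sources is stronger evidence than either source alone.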

One might still think that the values I've chosen above would lead to trouble, because in each case Cr(r / A=0.8 & B=y) lies in between 0.8 and y. But there need not be any trouble.

I've already chosen the above numbers so that Cr(r / B=y) = y. To verify that Cr(r / A=0.8) = 0.8, we need to know how probable the different hypotheses about future evidence are conditional on the chance of rain being 0.8.

Some ways of filling in these numbers will break the Principal Principle (1), but others won't, and many of these look reasonable to me. For example, let's say I'm 50% confident that I won't look outside tomorrow, independent of the chance of rain. So Cr(B=0.6 / A=0.8) = 0.5. Let's also say that the chance of rain being 0.8 increases the probability that I'll see rain from 0.16 to Cr(B=1 / A=0.8) = 0.25. In order to get Cr(r / A=0.8) = 0.8, it follows that my credence in seeing the fog, conditional on the chance of rain being 0.8, must still be 0.2. Probability theory further requires that Cr(B=0 / A=0.8) = 0.05. That looks sensible to me (especially since we assumed that my credence in rain conditional on the chance being 0.8 and fog is only 0.75).
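The numbers in this paragraph can be checked directly: plugging them into the law of total probability recovers Cr(r / A=0.8) = 0.8.

```python
# Verifying the chance-case numbers: with these conditional
# credences, Cr(r / A=0.8) comes out at exactly 0.8, so the
# Principal Principle (1) survives.

# Cr(B=y / A=0.8) for the four possible future credences y
cr_B_given_A = {1.0: 0.25, 0.0: 0.05, 0.6: 0.5, 0.7: 0.2}

# Cr(r / A=0.8 & B=y), as argued in the bullet points above
cr_r_given_AB = {1.0: 1.0, 0.0: 0.0, 0.6: 0.8, 0.7: 0.75}

assert abs(sum(cr_B_given_A.values()) - 1) < 1e-9

# law of total probability over the four values of B
cr_r_given_A = sum(cr_B_given_A[y] * cr_r_given_AB[y]
                   for y in cr_B_given_A)
assert abs(cr_r_given_A - 0.8) < 1e-9   # Principal Principle holds
```

(0.25·1 + 0.05·0 + 0.5·0.8 + 0.2·0.75 = 0.8, and Cr(B=0 / A=0.8) = 0.05 is forced because the four conditional credences must sum to 1.)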

What if we look at extreme values? What if x=0? Well:

- What is Cr(r / A=0 & B=1)? How likely is it that it will rain, given that the chance is zero and yet I'll be seeing rain? Hard to say. But it doesn't matter because this condition has credence zero.
- What is Cr(r / A=0 & B=0)? Answer: 0.
- What is Cr(r / A=0 & B=0.6)? Answer: 0.
- What is Cr(r / A=0 & B=0.7)? Answer: 0.

Again, both (1) and (2) are satisfied. Extreme values of A or B provide especially strong evidence, trumping non-extreme values of the other variable.

The upshot is that it *is* possible to serve two epistemic masters, if we're sufficiently flexible about how to combine conflicting judgements by the masters. In the case of objective chance and future credence, the required flexibility is independently plausible.

Hmmm. Slightly confused. Might be missing something.

A and B are excellent logic students. Today, they both do a logic problem. A obtains P; B obtains ~P. I haven't seen the logic problem myself, but I'm wondering about whether the answer is P or ~P.

Obviously I should withhold belief. If so, the deference principles, (1) and (2), are straightforwardly incorrect, no?

(As you point out, (3) is also incorrect, but it can be patched up, roughly in the way you suggest. See the paper "Updating on the Credences of Others" by Easwaran et al. for a pretty well worked out story.)