Evidential externalism as an antidote to skepticism?
A popular idea in recent (formal) epistemology is that an externalist conception of evidence is somehow useful, or even required, to block the threat of skepticism. (See, for example, Das (2019), Das (2022), and Lasonen-Aarnio (2015). The trend was started by Williamson (2000).)
I find the idea highly implausible and poorly motivated. There is a much better response to skepticism.
It's easy to see why the idea might seem appealing. Assume we have knowledge of the external world. Our envatted duplicates do not. One might think that if we know more than our envatted duplicates then we must have evidence that they lack. (This is trivial if evidence=knowledge.) But we are internally the same. So what evidence we have is partly dependent on our environment.
The way this is usually spelled out is to say that in a "good case", where things are as they appear to be, our evidence directly reveals relevant facts about our environment. When we're looking at our hands, for example, externalists say that our evidence entails that we have hands. When we're looking at an apple, our evidence entails that we are looking at an apple. And so on.
This has highly implausible consequences.
Suppose I have just rolled two dice. If they both landed 6, I have put a wax apple on my desk. If they didn't both land 6, I have put a real apple on my desk that looks just like the wax apple. Knowing all this (but not how the dice landed), you now take a look at my desk. In fact, the dice have landed 3 and 4, so you are looking at a real apple. Can you tell, just by looking, that the apple is real, and that the dice have not both landed 6?
Intuitively, you can't. Your evidence doesn't reveal that the apple is real, or that the dice have not both landed 6.
Friends of evidential externalism might admit that in this particular case, your evidence really doesn't entail that the apple is real, perhaps because there's a "nearby" or "normal" situation in which the apple is fake.
But the problem has nothing to do with whether the alternative is nearby or normal. Suppose instead of rolling two dice, I have put a wax apple on my desk iff the 3rd and 4th decimal digits of the universal gravitational constant G are both '9'. I know what these digits are, you don't. They are '4' and '3', and there is no nearby or normal situation in which they are both '9'. (The universe would be very different if the force of gravity were stronger or weaker.) If your evidence, when you look at my desk, tells you that the apple is real, then your total evidence also tells you that the 3rd and 4th digits of G aren't both '9'. But clearly your evidence tells you no such thing.
More generally, let p be any fact about the world that I happen to know and you don't. Imagine that before looking at the apple, you know that I have put a wax apple on my desk if p is false and a real apple if p is true. If your evidence tells you that the apple is real, your total evidence would entail that p is true. But it does not.
(As Salow (2018) points out, you could exploit the supposed entailment to artificially make your total evidence support arbitrary propositions that you would like to be true.)
Intuitively, it doesn't matter under what conditions I would choose the wax apple. Suppose all we know about the scenario is that before you look at my desk, you rationally give credence 0.05 to the hypothesis that there's a wax apple on the desk. Intuitively, when you look at my desk, your credence should not go to zero. Your evidence does not put you in a position to rule out the previously open wax apple possibility.
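The point can be made precise with a minimal Bayesian sketch (my illustration, not part of the original argument): since the wax apple is stipulated to look exactly like the real one, the visual experience is equally likely under both hypotheses, and an update on it leaves the 0.05 credence unchanged.

```python
def bayes_update(prior, likelihood_h, likelihood_not_h):
    """Posterior probability of H after observing evidence E,
    given P(H) and the likelihoods P(E/H) and P(E/not-H)."""
    numerator = prior * likelihood_h
    return numerator / (numerator + (1 - prior) * likelihood_not_h)

# H = "there's a wax apple on the desk", prior credence 0.05.
# The wax apple looks just like the real one, so the visual
# experience is equally likely under H and not-H.
posterior = bayes_update(0.05, 1.0, 1.0)
print(posterior)  # 0.05 -- the look alone cannot move the credence
```

Equal likelihoods mean the evidence is probabilistically neutral between the hypotheses; on a Bayesian picture, that is just what it means for the evidence not to put you in a position to rule one of them out.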
This intuition also applies to more radical skeptical scenarios. Schwitzgebel (2017) argues, and I agree, that rational agents may give a small amount of credence to radical skeptical scenarios. I give some credence to the hypothesis that I am a brain in a vat. Intuitively, my perceptual evidence does not put me in a position to rule out this possibility.
So evidential externalism is implausible. I'll now explain why it is poorly motivated – why there is a much better response to the threat of skepticism.
The better response holds that while our evidence doesn't entail the negation of skeptical scenarios, it nonetheless renders such scenarios improbable.
Let H be a skeptical hypothesis (say, that we are brains in vats). Let E be our total evidence (whatever that might be). I claim that the probability of H given E is less than the probability of ¬H given E, even though E does not entail ¬H. By probability theory, P(H/E) < P(¬H/E) iff P(H ∧ E) < P(¬H ∧ E). What I claim is therefore that even though E does not entail ¬H, the prior probability of H ∧ E is less than the prior probability of ¬H ∧ E. Intuitively, I claim that among scenarios compatible with our total evidence, skeptical scenarios have lower a priori probability than non-skeptical scenarios.
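The probabilistic equivalence invoked here is elementary, and worth displaying (assuming P(E) > 0):

```latex
\begin{align*}
P(H/E) < P(\neg H/E)
&\iff \frac{P(H \wedge E)}{P(E)} < \frac{P(\neg H \wedge E)}{P(E)}
  && \text{(definition of conditional probability)}\\
&\iff P(H \wedge E) < P(\neg H \wedge E)
  && \text{(multiply both sides by } P(E) > 0\text{).}
\end{align*}
```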
I would have thought that everyone, even evidential externalists, must assume some such bias.
Suppose you have observed 112 ravens, all of which are black. How likely is it, in light of this evidence, that the next raven is also black? The answer is surely not 1. Your evidence does not entail that the next raven is black. But the evidence also isn't entirely neutral about the colour of the next raven. All else equal, it makes it likely that the next raven is black.
Let E be your evidence about the first 112 ravens, and H the hypothesis that the next raven is black. Since P(H/E) > 1/2, it follows that P(H ∧ E) > P(¬H ∧ E): among scenarios compatible with E, scenarios in which the next raven is black have greater a priori probability than scenarios in which the next raven is not black.
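One standard way to model such a bias (my illustration, not something the argument depends on) is Laplace's rule of succession: with a uniform prior over the underlying rate of black ravens, observing 112 black ravens out of 112 makes the probability that the next one is black (112+1)/(112+2) = 113/114, comfortably above 1/2.

```python
from fractions import Fraction

def rule_of_succession(successes, trials):
    """Laplace's rule of succession: the posterior predictive
    probability of success on the next trial, assuming a uniform
    (Beta(1,1)) prior over the underlying success rate."""
    return Fraction(successes + 1, trials + 2)

p_next_black = rule_of_succession(112, 112)
print(p_next_black, float(p_next_black))  # 113/114, about 0.991
```

The uniform prior is just one choice; any prior that doesn't dogmatically rule out uniformity will deliver the same qualitative verdict that the 112 observations favour a black 113th raven.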
Without an a priori bias in favour of the "uniformity of nature", inductive inference would be impossible.
The same is true for scientific inference more generally. Our scientific evidence does not entail the standard model of quantum physics, or general relativity, or the hypothesis of anthropogenic climate change. There are scenarios in which all our evidence is true but the theories are false. These are moderately skeptical scenarios in which the scientific method leads us astray.
It won't do to say that such scenarios are impossible. A grown-up epistemology must come to terms with the fact that our epistemic methods are fallible. Good inductive and abductive reasoning can lead to false conclusions.
In a Bayesian framework, this means that scenarios in which these methods lead us astray have low a priori probability.
In a generalised form, the position Das attributes to White is part of any sane epistemology. It is certainly part of any sane version of Bayesian epistemology. Without non-trivial constraints on priors, one could say nothing at all about how probable a hypothesis is in light of some evidence, except in the limit case in which the hypothesis or its negation is logically entailed by the evidence. All non-trivial applications of the Bayesian toolbox lie outside this limit case.
Evidential externalists hold that perceptual inference is special. Unlike inductive and abductive inference, they say, perceptual inference is infallible: when we look at an apple and come to believe that it is an apple, the conclusion is entailed by our evidence. The hypothesis that the apple is made of wax can be ruled out safely and conclusively.
I find this much more counterintuitive than postulating an a priori bias against skeptical scenarios. In a broadly evidentialist or Bayesian framework, the assumption of an a priori bias is equivalent to the assumption that evidential support outstrips entailment by the evidence. Everyone should accept this assumption, to make sense of our inductive and abductive practice. Once we accept the assumption, there is no good reason to tell a very different story about perceptual knowledge.