Centred propositions and objective epistemology

Given some evidence E and some proposition P, we can ask to what extent E supports P, and thus to what extent an agent should believe P if their only relevant evidence is E. The question may not always have a precise answer, but there are both intuitive and theoretical reasons to assume that the question is meaningful – that there is a kind of (imprecise) "evidential probability" conferred by evidence on propositions. That's why it makes sense to say, for example, that one should proportion one's beliefs to one's evidence.

In broadly Bayesian approaches to confirmation and rational belief, the extent to which E supports P is determined by certain facts about ultimate priors – namely, by the (ultimate) prior probabilities of E, P, and P&E. (Complications arise if one of these is zero. We'll ignore those.) For example, if an agent with total evidence E should assign credence 0.9 to P, then the prior probability of P given E, which equals the ratio of the prior probability of P&E to the prior probability of E, should be 0.9. Observation of many black ravens makes it rational to believe that the next raven will also be black because there is a high prior probability that the as-yet-unobserved resembles the observed.
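
To spell out the ratio claim in symbols (a minimal sketch; I'll write Cr0 for the hypothetical ultimate prior – the label is mine, nothing hangs on it):

\[
Cr_0(P \mid E) \;=\; \frac{Cr_0(P \wedge E)}{Cr_0(E)} \;=\; 0.9,
\qquad \text{provided } Cr_0(E) > 0.
\]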

So non-trivial constraints on what one should believe in the light of such-and-such evidence trace back to non-trivial constraints on what one should believe (hypothetically) in the light of no evidence whatsoever. Here are a few plausible and very sketchy examples of such constraints:

(1) Scenarios in which things are radically different from how they appear to be deserve low prior probability. (Trust your senses.)

(2) Scenarios in which the so-far unobserved is utterly unlike the so-far observed deserve low prior probability. (Cautiously reason by induction.)

(3) Scenarios in which phenomena have simple explanations deserve greater prior probability than scenarios in which phenomena have very complicated explanations. (Cautiously reason by abduction.)

(4) Similar possibilities should have similar prior probabilities. (Indifference, "Maximize entropy".)

(5) The prior probability of an event X, on the supposition that X has objective chance y, should be y. (A form of the Principal Principle; see the schematic formulation below.)
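
Constraint (5) is often given a schematic formulation along the following lines (again, the notation is mine: ch(X) for the objective chance of X, Cr0 for the ultimate prior):

\[
Cr_0(X \mid ch(X) = y) \;=\; y.
\]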

Now consider the following scenario.

(W) There are two people, Alice and Bob. Alice's senses are working fine: if she perceives things to be a certain way, they generally are that way. Bob is less lucky; his sense organs have been rewired so that his experiences are systematically deceptive. If he sees something as being on the left it is actually on the right, he hears soft sounds as loud, and so on.

Bob finds himself in a skeptical scenario. By constraint (1) above, Bob's predicament should have low prior probability. If the objects of probability can be centred, that is easy to model: the centred possibility of being Bob in (W) should have low prior probability. But what if the objects of probability are uncentred possible worlds? What is the prior probability of scenario (W), and more generally of worlds in which some people's sense organs are reliable and others' are not?

There are some obvious choices. We could look at the worst case and say that worlds in which someone's sense experiences are deceptive should all have low prior probability. Or we could look at the best case, so that constraint (1) covers only worlds in which everyone's sense experiences are deceptive. Or we could average, assigning middling prior probability to worlds like (W).

All these options look really implausible to me, both in themselves and as interpretations of (1). The idea that one should trust one's senses can't be cashed out in terms of beliefs about the reliability of sense organs across the entire universe.

As far as I can see, the only sensible alternative for philosophers who reject centred propositions is to relativize epistemic norms to agents. Roughly speaking, (W) deserves low prior probability relative to Bob, but not relative to Alice.

Consider the simple Chisholmian picture on which everyone has infallible a priori access to their own haecceity, but not to anybody else's haecceity. So, if Rudolf Lingens suspects that he is Gustav Lauben, then the content of his suspicion is the uncentred proposition that Gustav Lauben is the person with Lingens's (actual) haecceity H. I haven't yet specified the haecceities in (W). Let's say Alice has haecceity H1 and Bob haecceity H2. The Chisholmian can then say that (W) is a skeptical scenario for agents whose actual haecceity is H2, but not for agents whose actual haecceity is H1. Accordingly, an actual agent with H2 should assign low prior probability to (W), while an agent with H1 should not.

(Arguably the Chisholmian picture requires some such relativization of rational prior probabilities anyway, since an agent with haecceity H1 must assign zero probability to scenarios in which H1 doesn't exist.)

Or consider what I take to be Stalnaker's account. Here people can be mistaken about their identity or haecceity, but they can never be uncertain or mistaken about who they are given an objective, uncentred description of the world. Conditional on a complete description of scenario (W), we must be certain either that we are Alice or that we are Bob. Conditional on the actual world, all of us must be certain that we are who we actually are. So let's say that Lingens and Lauben are a priori certain that if the world of (W) is actual, then they are Alice and Bob, respectively. (Don't ask why!) Then (W) is a skeptical scenario for Lauben and not for Lingens. Again, the scenario should have low prior probability for some agents and not for others.

Relativization to agents is not enough. Through time, one can move in and out of a skeptical scenario: one's sense organs and memory can be reliable at one time and unreliable at another. So we have to relativize our prior probability assignments not only to agents, but also to times.

The problem illustrated by (W) is pervasive. For one thing, worlds in which some people have more reliable sense organs than others are not remote possibilities that can be safely ignored. The actual world is almost certainly a world like that. The problem also plausibly generalizes beyond principle (1). For example, whether the unobserved resembles the observed can vary greatly from agent to agent, and from time to time. I would even argue that all the principles (1) to (5) above put constraints on essentially self-locating beliefs, and thus have to become relativized to agents and times if one doesn't allow for such beliefs.

That looks like a serious cost for opponents of centred propositions. We can no longer ask to what extent evidence E supports proposition P, or to what extent one should believe P if one's evidence is E. We always have to relativize: to what extent does E support P for A at t?

In slogan form: it looks like rejecting centred propositions means giving up on objective (unrelativized, inter-subjective) epistemology.
