An alternative model of permissivism about epistemic risk

In the previous post I argued that rational priors must favour some possibilities over others, and that this is a problem for Richard Pettigrew's model of Jamesian permissivism. It also points towards an alternative model that might be worth exploring.

I claim that, in the absence of unusual evidence, a rational agent should be confident that observed patterns continue in the unobserved part of the world, that witnesses tell the truth, that rain experiences indicate rain, and so on. In short, they should give low credence to various skeptical scenarios. How low? Arguably, our epistemic norms don't fix a unique and precise answer.

Let's assume, then, that there is a range of evidential probability measures, each of which is eligible as a rational prior credence function.

All eligible priors give low credence to skeptical scenarios, but they don't agree on how low these credences are. As a result, agents who adopt different eligible priors will appear more or less cautious in the lessons they draw from inconclusive evidence.

Suppose we've seen 17 green birds and wonder whether the next bird will be green as well. If you give more prior credence than me to the (moderately skeptical) hypothesis that our initial sample was unrepresentative, then your credence in the next bird being green might be 0.8 while mine is 0.9.

Or suppose we've heard incriminating witness statements and wonder whether the defendant is guilty. Again, if you're epistemically more cautious than me by giving greater prior probability to scenarios in which witnesses are unreliable, your credence in the defendant's guilt might be 0.8 while mine is 0.9.

Why might you prefer a more cautious prior probability – or "inductive method", as Carnap would put it? Perhaps because you care more about the risk of inaccurate beliefs. You really want to avoid a low accuracy score, even at the cost of forgoing the possibility of a high score. I'm more risk-inclined. I value high accuracy more than I disvalue low accuracy.
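For concreteness, Carnap's λ-continuum gives one way to model the difference in caution. Here is a quick sketch in Python; the λ values are hypothetical, reverse-engineered to reproduce the bird example above, and the formula is the standard λ-continuum rule, not anything from Richard's discussion.

```python
def next_green_credence(green_seen, birds_seen, lam, categories=2):
    """Carnap's lambda-continuum: credence that the next bird is green.

    Larger lam puts more weight on the a priori uniform distribution,
    i.e. encodes a more cautious inductive method.
    """
    return (green_seen + lam / categories) / (birds_seen + lam)

# After observing 17 green birds out of 17 (lambda values are hypothetical,
# chosen to match the credences in the example):
print(next_green_credence(17, 17, lam=4.25))    # 0.9  (my bolder method)
print(next_green_credence(17, 17, lam=11.33))   # ~0.8 (your more cautious method)
```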

There are different ways to model these attitudes, but a natural idea is to assume that we employ different scoring rules.

To illustrate this idea, consider two scoring rules. One is a simple absolute rule S_a, on which the inaccuracy of a credence function Cr at a world w is S_a(Cr,w) = 1-Cr(w). The other is a cubic rule S_c with S_c(Cr,w) = |1-Cr(w)|^3.

The cubic function doesn't care much about high accuracy. It regards a credence of 0.8 in the truth as very similar to a credence of 1 (because |1-0.8|^3 is very close to |1-1|^3). However, it regards a credence of 0 in the truth as much worse than a credence of 0.2 (|1-0.2|^3 is about half of |1-0|^3). By comparison, the absolute function cares less about low accuracy, and more about high accuracy.
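Here is the comparison in code (a quick sketch; the numbers are the ones just mentioned):

```python
def s_a(c):
    """Absolute inaccuracy of credence c in the actual world."""
    return 1 - c

def s_c(c):
    """Cubic inaccuracy of credence c in the actual world."""
    return abs(1 - c) ** 3

# High end: the cubic rule barely distinguishes credence 0.8 from credence 1,
# while the absolute rule registers a sizeable difference.
print(s_c(0.8), s_c(1.0))   # ~0.008 vs 0.0
print(s_a(0.8), s_a(1.0))   # ~0.2   vs 0.0

# Low end: the cubic rule treats credence 0 as roughly twice as bad as 0.2;
# for the absolute rule the difference is smaller.
print(s_c(0.0), s_c(0.2))   # 1.0 vs ~0.512
print(s_a(0.0), s_a(0.2))   # 1.0 vs 0.8
```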

This shows up when we compute the expected inaccuracy of different eligible priors from the perspective of other eligible priors.

The cubic score urges caution. If you start with a highly opinionated prior, it pushes you towards a less opinionated, more egalitarian prior.

The absolute score does the opposite. It pushes you towards a more opinionated, less egalitarian prior.
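Here is a minimal sketch of both effects, using a made-up two-world model that isn't meant to be more than an illustration:

```python
# Toy model (my construction, not from the post): two worlds, a skeptical
# world s and a normal world n. A prior is summarised by x = Cr(s), the
# credence given to the skeptical world.

def s_a(c):
    return 1 - c                 # absolute inaccuracy

def s_c(c):
    return abs(1 - c) ** 3       # cubic inaccuracy

def expected_inaccuracy(score, p, q):
    """Expected inaccuracy of the prior x=q from the perspective of x=p."""
    return p * score(q) + (1 - p) * score(1 - q)

# A bolder eligible prior (x=0.01) and a more cautious one (x=0.1),
# each assessed from both perspectives:
for score, name in [(s_a, "absolute"), (s_c, "cubic")]:
    for p in (0.01, 0.1):
        best = min((0.01, 0.1), key=lambda q: expected_inaccuracy(score, p, q))
        print(f"{name} score, from x={p}: prefers x={best}")
# absolute score, from x=0.01: prefers x=0.01
# absolute score, from x=0.1: prefers x=0.01   (pushes towards opinionation)
# cubic score, from x=0.01: prefers x=0.1
# cubic score, from x=0.1: prefers x=0.1       (pushes towards caution)
```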

The fixed point for the absolute score is a maximally opinionated probability measure that assigns probability 0 to any skeptical hypothesis. This measure minimises expected S_a-inaccuracy relative to itself. The fixed point for the cubic score is the uniform measure that takes skeptical hypotheses as seriously as non-skeptical hypotheses; it minimises expected S_c-inaccuracy relative to itself.
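Iterating the same dynamics makes the two fixed points visible (again in the made-up two-world model, with eligible priors x between 0 and 0.5):

```python
# Iterate "move to the eligible prior that minimises expected inaccuracy
# by your current lights" and see where each rule's dynamics settle.

def s_a(c):
    return 1 - c                 # absolute inaccuracy

def s_c(c):
    return abs(1 - c) ** 3       # cubic inaccuracy

def expected_inaccuracy(score, p, q):
    return p * score(q) + (1 - p) * score(1 - q)

grid = [i / 1000 for i in range(501)]    # candidate priors x = Cr(s) in [0, 0.5]

for score, name in [(s_a, "absolute"), (s_c, "cubic")]:
    x = 0.1                              # start at a moderately cautious prior
    for _ in range(20):
        x = min(grid, key=lambda q: expected_inaccuracy(score, x, q))
    print(f"{name}: dynamics settle around x = {x}")
# absolute: settles at x = 0.0 (the skeptical world is ruled out entirely)
# cubic: climbs towards x = 0.5 (skeptical and normal world on a par)
```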

Neither of these fixed points is rationally permissible, I think. Perhaps this shows that the two scoring rules I've used aren't rationally permissible either. Plausibly, for any permissible scoring rule there should be a permissible prior credence function that minimises expected inaccuracy from its own perspective. This is the prior you should choose if you employ that scoring rule. Every other eligible prior should drive you towards it.

In this model, we don't want permissible scoring rules to be proper. Proper scoring rules make every credence function ideal from its own perspective. But if you're really averse to getting things wrong then you should feel uneasy about assigning credence 0.01 to a skeptical scenario. By your own lights, you should think it would be better to move towards a more cautious state in which the skeptical scenario has higher credence.
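For contrast, here is how a proper rule behaves in the same made-up sketch. A quadratic rule with per-world inaccuracy (1-Cr(w))^2 is proper in the two-world setting: every prior is ideal by its own lights, while the absolute and cubic rules push you away from x = 0.01 in opposite directions:

```python
def s_quad(c):
    return (1 - c) ** 2          # quadratic inaccuracy: proper in two worlds

def s_a(c):
    return 1 - c                 # absolute: improper

def s_c(c):
    return abs(1 - c) ** 3       # cubic: improper

def expected_inaccuracy(score, p, q):
    return p * score(q) + (1 - p) * score(1 - q)

grid = [i / 1000 for i in range(1001)]
p = 0.01    # your credence in the skeptical scenario

for score, name in [(s_quad, "quadratic"), (s_a, "absolute"), (s_c, "cubic")]:
    best = min(grid, key=lambda q: expected_inaccuracy(score, p, q))
    print(f"{name}: by the lights of x={p}, the best prior is x={best}")
# quadratic: best is x = 0.01  -- ideal by its own lights
# absolute: best is x = 0.0    -- pushed to maximal opinionation
# cubic: best is x ~ 0.09      -- pushed towards a more cautious state
```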

(Since we're talking about scoring rules for the choice of priors, some familiar arguments for propriety don't apply.)

Unlike Richard's model, the model I have in mind only has one layer of permissivism. Once you've settled on a degree of epistemic risk-aversion, and thereby on a scoring rule, the range of eligible prior credence functions should ideally reduce to a single candidate. We could therefore allow your risk attitudes to change over time without allowing for dramatic and incomprehensible changes in belief. If you change your risk attitudes and adopt a new inductive method, applying it to your total evidence, you may become more or less cautious. But you will never switch from a high credence in H to a high credence in ¬H.
