Nair on adding up reasons

Often there are many reasons for and against a certain act or belief. How do these reasons combine into an overall reason? Nair (2021) tries to give an answer.

Nair's starting point is a little more specific. His guiding intuition is that there are cases in which two equally strong reasons combine into a reason that is twice as strong as each individual reason. In other cases, however, the combined reason is only as strong as the individual reasons, or even weaker.

To make sense of this, we need to explain (1) how strengths of reasons can be represented numerically, and (2) under what conditions the strengths of different reasons add up.

Let's start with reasons for belief.

Suppose we adopt the following Bayesian analysis of reasons to believe:

E is reason for S to believe H iff \( Pr_{S}(H/E) > Pr_{S}(H) \),

where \( Pr_{S} \) is S's subjective credence function. This answers the first challenge. We can now measure the strength of the inequality – the degree \( r_{b}(H/E) \) to which E is a reason for S to believe H – by the log likelihood ratio \( \log \frac{Pr_{S}(E/H)}{Pr_{S}(E/\neg{}H)} \):

\( r_{b}(H/E) = \log \frac{Pr_{S}(E/H)}{Pr_{S}(E/\neg{}H)} \)
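
For a toy illustration (the numbers are made up): suppose \( Pr_{S}(E/H) = 0.9 \) and \( Pr_{S}(E/\neg{}H) = 0.1 \). Then

\( r_{b}(H/E) = \log \frac{0.9}{0.1} = \log 9 \approx 2.2 \)

(with natural logarithms). The measure is positive iff E confirms H, negative iff E disconfirms H, and 0 iff E is probabilistically irrelevant to H.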

And then we can show under what conditions different reasons "add up". In particular, we can show that \( r_{b}(H/E\land{}E') = r_{b}(H/E) + r_{b}(H/E') \) whenever E and E' are probabilistically independent conditional on H and conditional on ¬H.
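
Spelled out, the proof is short: conditional independence means that \( Pr_{S}(E\land{}E'/H) = Pr_{S}(E/H)\,Pr_{S}(E'/H) \), and likewise given ¬H, so

\( r_{b}(H/E\land{}E') = \log \frac{Pr_{S}(E/H)\,Pr_{S}(E'/H)}{Pr_{S}(E/\neg{}H)\,Pr_{S}(E'/\neg{}H)} = r_{b}(H/E) + r_{b}(H/E') \)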

This is Nair's "Bayesian Simple Theory of Reasons". Nair suggests two ways in which the theory could be revised in line with different views about reasons.

First, we could replace the parameter H in the displayed formulas with some other proposition that is a function of H. For example, following Kearns and Star's (2009) idea that reasons to believe are evidence that one ought to believe, one could say that

E is reason for S to believe H iff \( Pr_{S}(\text{S ought to believe H}\;/\;E) > Pr_{S}(\text{S ought to believe H}) \).

Second, we could change the interpretation of the probability measure \( Pr_{S} \) in the displayed formulas. In particular, nonreductionists about reasons may want to treat the relevant probability as irreducible. This, of course, wouldn't really answer the first challenge – to explain how strengths of reasons can be represented numerically. Nair expresses some hope that one might be able to derive a probabilistic representation from a primitive qualitative relation.

All this looks to me like it's on the right track. I have a few comments, before I move on to reasons for action.

First, the Bayesian theory implies that whenever S is sure of E, E isn't a reason for S to believe anything. Similarly, whenever S is almost sure of E, E can't be a strong reason for S to believe anything of which S isn't also almost sure. This is highly implausible. The Bayesian Simple Theory is refuted by the "problem of old evidence". We need to find a different interpretation of \( Pr_{S} \).

I'd suggest that \( Pr_{S} \) should be an objective "evidential" probability measure, conditionalised on relevant background information. The measure is subject-relative only insofar as different background information is relevant when we talk about different subjects. The basic notion is "In light of background information B, E is reason to believe H", and this is identified with \( Pr(H/E\land{}B) > Pr(H/B) \).
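
Presumably the strength measure would be relativised to the background information in the same way – something like this (the ';B' notation is mine, not Nair's):

\( r_{b}(H/E;B) = \log \frac{Pr(E/H\land{}B)}{Pr(E/\neg{}H\land{}B)} \)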

I don't think we can informatively reduce the objective evidential probability measure Pr to anything else. So we are in nonreductionist territory, along with most "objective Bayesians".

Is there a way of deriving a numerical, probabilistic representation from qualitative conditions on evidential support? Yes. Cox's "Theorem" tries to do just that. A number of more rigorous qualitative axiomatisations of conditional probability are mentioned in section 7 of Fishburn (1986).

In all of these accounts, however, what is compared is the absolute support that H receives by E, as measured by Pr(H/E), not the incremental support measured by the log likelihood ratio. It's a good question whether one could derive a probabilistic representation from qualitative constraints on incremental support. (Can one somehow turn, say, Cox's axioms into qualitative conditions on comparing likelihood ratios, by using the fact that posterior odds are prior odds times likelihood ratios? This might be a fun exercise.)
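
For reference, the identity in question:

\( \frac{Pr(H/E)}{Pr(\neg{}H/E)} = \frac{Pr(H)}{Pr(\neg{}H)} \cdot \frac{Pr(E/H)}{Pr(E/\neg{}H)} \)

Taking logs, \( r_{b}(H/E) \) is log posterior odds minus log prior odds, which is what suggests that comparisons of likelihood ratios might be recoverable from comparisons of odds.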

Anyway, my second comment. Why is \( r_{b}(H/E) \) measured by the log likelihood ratio? There is a large literature in confirmation theory comparing different measures of (incremental and absolute) support. (See, for a start, Fitelson (2001).) As far as I can tell, most sensible people agree that there isn't one privileged measure. My view, at least, is that our pre-theoretic concept of confirmation or evidential support is vague and equivocal. It can be regimented in different ways, none of them objectively better than the others.

To illustrate this situation, suppose some evidence E entails H. One might want to say that E provides maximally strong reason to believe H. The log likelihood ratio measure doesn't handle this case well: if E entails H then \( Pr(E/\neg{}H) = 0 \), so the ratio is infinite or undefined, and the measure assigns E no well-defined (maximal) strength as a reason to believe H. This seems wrong.

It's hard to find a measure of incremental support that avoids this problem. Should we switch to a measure of absolute support – identifying \( r_{b}(H/E) \) with something like \( Pr(H/E) \)? This runs into other problems. Since \( Pr(H/E) = 1 \) whenever H is a logical truth, we would have to say that the current inflation figures are a maximally strong reason to believe that 1+1=2.

Things get worse if we take seriously the occurrence of 'believe' in 'reason to believe'. Consider the evidence E that a coin is biased 60:40 towards heads and that it is about to be tossed. On Nair's account (and on pretty much any Bayesian measure of support), E is a reason to believe that the coin will land heads. But you should not believe that the coin will land heads on the basis of E alone. And even if you have further evidence, E is almost certain to make no difference. Only in extremely strange circumstances would knowing E make it reasonable to believe that the coin will land heads.
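
To put numbers on this, assume for illustration a prior of 1/2 for the hypothesis H that the coin will land heads. E raises the probability of H to 0.6, so

\( \frac{Pr(E/H)}{Pr(E/\neg{}H)} = \frac{Pr(H/E)/Pr(H)}{Pr(\neg{}H/E)/Pr(\neg{}H)} = \frac{0.6/0.5}{0.4/0.5} = 1.5 \)

and \( r_{b}(H/E) = \log 1.5 \approx 0.4 \): a positive but modest strength, while the posterior probability of 0.6 falls well short of what belief plausibly requires.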

In some sense, then, E is clearly not a reason to believe that the coin will land heads. In another sense, however, I think it is. Nair's nonreductionist proposal at most captures the latter sense.

Nair here inadvertently puts his finger on a problem I've always had with "reasons" talk. I fear that our pre-theoretic concept of reasons is at least as vague and equivocal as our pre-theoretic concept of evidential support. If we want it to do any serious theoretical work, we first need to regiment it, explaining what exactly we have in mind. Plausibly, there is no way of regimenting the concept that will allow it to do everything we might initially have hoped it can do.

OK. Let's look at reasons for acts.

The Kearns and Star inspired model of reasons to believe easily extends to acts. We would say that E is reason for S to do A iff \( Pr_{S}(\text{S ought to do A}\;/\;E) > Pr_{S}(\text{S ought to do A}) \), and we would measure the strength of the inequality in terms of the log likelihood ratio.

This doesn't look plausible, though. For one thing, it has a strong whiff of "desire as belief", and we know, since Lewis (1988), that any such proposal faces serious technical challenges. Also, suppose there are three options. A is best, B second-best, C worst. There are reasons to do B, but stronger reasons to do A. This seems possible. But in a case like this, the reasons to do B may not be evidence that we ought to do B. In fact, it might be entirely clear, given the background facts, that we ought to do A.

Nair's nonreductionist approach to the belief case also doesn't seem to carry over. The idea would be to define a primitive relation \( r_{a}(A/E) \), for 'the degree to which E is a reason for S to do A', and to show that it is measured by some log likelihood ratio \( \log \frac{Pr_{S}(E/A)}{Pr_{S}(E/\neg{}A)} \). The problem Nair points out is that incremental evidential support is symmetric, whereas reasons for action are not: that I have promised to help is a reason for me to help, but that I help may not be a reason for me to have promised to help.
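
The symmetry point, spelled out: positive probabilistic relevance is symmetric, since

\( Pr(A/E) > Pr(A) \;\Leftrightarrow\; Pr(A\land{}E) > Pr(A)\,Pr(E) \;\Leftrightarrow\; Pr(E/A) > Pr(E) \)

So if E evidentially supports A then A evidentially supports E, whereas the reason relation for action can hold in one direction only.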

At this point, Nair merely gestures towards Sher (2019), which I haven't read, for an alternative solution.

Let's see. My first stab at formalising 'E is a reason to do A' would be to say that E increases the expected utility of A: \( EU(A/E) > EU(A) \). As before, we should probably talk about objective expected utility here, to allow that propositions that are already known can be reasons. Objective expected utility is expected utility relative to the evidential probability measure Pr. Utility, in this context, could be different things, depending on whether we talk about moral reasons, prudential reasons, etc.

What is \( EU(A/E) \)? Let's assume, for simplicity, an evidential measure of expected utility, à la Jeffrey (1983). Then \( EU(A/E) \) is simply \( EU(A\land{}E) \), and expected utility reduces to utility.
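
Spelled out: in Jeffrey's framework the expected utility (desirability) of a proposition X is \( EU(X) = \sum_{w} U(w)\,Pr(w/X) \), so computing A's expected utility with Pr conditionalised on E gives

\( EU(A/E) = \sum_{w} U(w)\,Pr(w/A\land{}E) = EU(A\land{}E) \)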

Can we quantify the extent to which E increases the expected utility of A? The "likelihood ratio" \( \frac{ EU(E/A) }{ EU(E/\neg{}A) } \) would reduce to \( \frac{ U(A\land{}E) }{ U(\neg{}A\land{}E) } \). The value of this ratio tells us nothing whatsoever about how \( EU(A/E) \) relates to \( EU(A) \). So this is not a sensible measure.
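
Here is a minimal sketch of why, with a four-world toy model (all utilities and probabilities are made-up values for illustration, not anything from Nair or Sher):

```python
# Four coarse-grained world-types: combinations of A/~A and E/~E.
# We hold the ratio U(A & E) / U(~A & E) fixed across two scenarios and
# show that E can either raise or lower the expected utility of A.

def expected_utility(worlds):
    """worlds: list of (probability, utility) pairs; probability-weighted average."""
    total_p = sum(p for p, _ in worlds)
    return sum(p * u for p, u in worlds) / total_p

for u_A_notE in (10.0, 0.0):       # utility of the A & ~E worlds varies
    A_E    = (0.25, 2.0)           # (probability, utility) of A & E
    A_notE = (0.25, u_A_notE)      # (probability, utility) of A & ~E
    notA_E = (0.25, 1.0)           # (probability, utility) of ~A & E
    ratio = A_E[1] / notA_E[1]     # U(A & E) / U(~A & E) = 2.0 in both scenarios
    eu_A_given_E = expected_utility([A_E])           # EU(A/E) = EU(A & E)
    eu_A = expected_utility([A_E, A_notE])           # EU(A)
    print(f"ratio={ratio}, EU(A/E)={eu_A_given_E}, EU(A)={eu_A}")
```

In both scenarios the ratio \( \frac{U(A\land{}E)}{U(\neg{}A\land{}E)} \) is 2, yet in the first scenario E lowers A's expected utility (from 6 to 2) and in the second E raises it (from 1 to 2).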

A better idea is to look at \( \frac{U(A\land{}E) }{ U(A\land{}\neg{}E) } \). If this ratio has value x then worlds where you do A and E is true are, on (probability-weighted) average, x times as good as worlds where you do A and E is false. This might give us a sensible measure.

As before, however, I doubt that there is a unique, privileged way of measuring the extent to which a proposition is a reason to perform an act. I'd expect the situation to be parallel to the case of belief. But it would be interesting to spell out a few candidate measures.

Here is a direct analogue of the problem I mentioned for the case of belief. Suppose that utility has a maximum value and that E says that doing A would lead to maximal utility. One might think that this provides maximally strong reason to do A. The above ratio measure wouldn't account for this. It seems to call for a measure of "absolute support", but such measures have obvious other problems.

Fishburn, Peter C. 1986. “The Axioms of Subjective Probability.” Statistical Science 1: 335–58.
Fitelson, Branden. 2001. “Studies in Bayesian Confirmation Theory.” PhD dissertation, University of Wisconsin–Madison.
Jeffrey, Richard. 1983. The Logic of Decision. 2nd ed. Chicago: University of Chicago Press.
Kearns, Stephen, and Daniel Star. 2009. “Reasons as Evidence.” Oxford Studies in Metaethics 4: 215–42.
Lewis, David. 1988. “Desire as Belief.” Mind 97: 323–32.
Nair, Shyam. 2021. “Adding Up Reasons: Lessons for Reductive and Nonreductive Approaches.” Ethics 132 (1): 38–88. doi.org/10.1086/715288.
Sher, Itai. 2019. “Comparative Value and the Weight of Reasons.” Economics & Philosophy 35 (1): 103–58. doi.org/10.1017/S0266267118000160.
