Does subjective uncertainty objectively matter?

Let's say that an act A is subjectively better than an alternative B if A is better in light of the agent's information; A is objectively better if it is better in light of all the facts. The distinction is easiest to grasp in a consequentialist setting. Here an act is objectively better if it brings about more good -- if it saves more lives, for example. A morally conscientious agent may not know which of her options would bring about more good. Her subjective ranking of the options might therefore go by the expectation of the good: by the probability-weighted average of the good each act might bring about.
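In symbols (this is just the standard decision-theoretic formulation, not anything specific to the cases discussed below): if $Cr$ is the agent's credence function over states $s$ and $V(A,s)$ is the objective value of performing $A$ in state $s$, the subjective ranking goes by

$$EV(A) = \sum_{s} Cr(s)\, V(A,s),$$

so that $A$ is subjectively better than $B$ just in case $EV(A) > EV(B)$.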

I'm not a consequentialist. Nevertheless, I find it plausible that objective and subjective betterness are related to one another by decision-theoretic principles: an act is subjectively better to the extent that it has greater expected objective value. I wrote up some reasons why I find that plausible in a paper. I tried to stay neutral on substantive moral issues there. One such issue, about which I'm actually unsure, is the extent to which subjective uncertainty affects objective moral value.

Suppose you throw an empty bottle out of the window onto the street, without looking whether it might hit anybody. Fortunately it doesn't. In normal contexts, your act would be subjectively wrong: it is not what a morally ideal agent would do given your information about the world. But is it also objectively wrong? Is it objectively worse to throw the bottle without looking than to throw it after checking that the street is empty?

The decision-theoretic framework doesn't settle the question. The state of affairs in which you throw a bottle onto the street after checking that the street is empty is different from the state in which you throw without checking. So we could easily assign different moral value to these states. Doing so would not go against the plausible idea that only what actually happens matters to moral value. After all, the two states are actually different. They don't just differ in what could have happened. (In fact, it is hard to see how any two states could differ in what's possible without differing in what's actual. "Ethical actualism" is a non-issue.)

So we could say that throwing the bottle without looking is objectively worse than throwing it after looking. But should we say that? The problem is that we non-consequentialists don't have a clear prior standard for objective value. To be sure, throwing the bottle without looking is wrong, even if nobody gets hurt. But it is not obvious that this is a judgement about objective moral status.

One reason not to make objective value sensitive to the agent's information is that this reeks of double-counting when it comes to subjective moral status. Why should you not throw the bottle without looking? Because somebody might get hurt. Arguably, the reason why you shouldn't throw the bottle is that it might bring about a bad state of affairs. You can't rule out that it does. This presupposes that throwing the bottle onto the empty street (without looking) isn't by itself objectively bad. Otherwise we ought to say that you shouldn't throw the bottle without looking because this is certain to bring about a bad state of affairs.

Relatedly, one might worry that making objective value sensitive to the agent's information trivializes the decision-theoretic connection between subjective and objective moral status. If it is objectively wrong to throw the bottle whether or not there are people on the street, we don't need to look at the expectation of objective value to figure out that the act is subjectively wrong. The act is objectively wrong no matter which state of the world is actual.
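Put schematically, with $V$ and $Cr$ as above: if $V(A,s) < V(B,s)$ in every state $s$, then

$$\sum_{s} Cr(s)\,V(A,s) < \sum_{s} Cr(s)\,V(B,s)$$

whatever the credences are, so the detour through expectations does no real work.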

But these considerations are a little too quick. Objective moral value plausibly has many dimensions. It matters whether innocent people get hurt, whether promises are broken, whether personal projects are sacrificed, and so on. Define the primary objective value of an act by how it scores in these dimensions, where we do not yet take into account the agent's uncertainty about possible outcomes. (Let's assume for simplicity that the primary value is a number that somehow aggregates the dimensions, but it could instead be a vector or some other more complicated structure.) We can then add a further dimension on which acts are evaluated in terms of the possible primary values they might realize, by the lights of the agent. For example, we could here consider the agent's subjective expectation of primary objective value. More plausibly, we could consider not the expectation but, say, the variance of primary value. That's how risk is commonly measured in finance. There wouldn't be any double-counting here because the subjective expectation doesn't already take into account the variance of primary value. And we could explain why the wrongness of throwing the bottle is ultimately grounded in the primary badness of the possibility that someone might get hurt.
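To illustrate with made-up numbers: suppose act $C$ has primary value 10 in every state, while act $D$ has primary value 110 or $-90$, each with credence 1/2. The two acts then have the same expected primary value but very different variance:

$$E(C) = 10,\quad \mathrm{Var}(C) = 0; \qquad E(D) = \tfrac12\cdot 110 + \tfrac12\cdot(-90) = 10,\quad \mathrm{Var}(D) = \tfrac12(100)^2 + \tfrac12(-100)^2 = 10000.$$

A dimension that penalizes variance therefore records something the expectation leaves out, which is why there is no double-counting.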

How can we test whether objective value is sensitive to the agent's information? A promising test case is Diamond's well-known "counterexample" to expected utility theory: Two patients A and B are in urgent need of a new liver, but only one liver is available. We can choose between giving it directly to A, giving it directly to B, and making the choice by tossing a coin. The latter is often judged to be better, on the grounds that it gives both A and B a fair chance of getting the liver. To get that result, fairness must factor into objective status: if objective status ignores fairness, tossing the coin couldn't have greater expected moral value.
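Here is the arithmetic behind that last claim, with $v_A$ and $v_B$ as placeholder values: if objective value only tracks who ends up with the liver, the expected value of tossing the coin is a 50/50 mixture,

$$\tfrac12 v_A + \tfrac12 v_B \le \max(v_A, v_B),$$

so the coin toss can at best tie the better direct allocation (when $v_A = v_B$) and can never beat it. Ranking it strictly higher requires that the fairness of the procedure itself contributes to objective value.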

It is not entirely obvious that what matters in Diamond's scenario really are the agent's beliefs about who might get the liver. Tossing the coin gives each patient an equal objective chance of getting the liver, irrespective of the agent's beliefs, and that might be why tossing the coin is better. However, instead of using the outcome of a coin toss, we could make the liver allocation depend on an arbitrary other proposition to which we assign credence 1/2, and I think that choice would still be better than directly giving the liver to A or B.

So I'm inclined to think that at least in some situations, such as Diamond's, subjective uncertainty about possible outcomes does affect objective moral status. But Diamond's scenario is rather special. What about more ordinary cases involving risk, like the one with the bottle?

Here, too, we must be careful to distinguish objective risk from subjective risk. Acts that bring about a significant objective chance of harming innocents may have lower objective moral status, but that doesn't reveal anything about whether the agent's state of information affects objective status. Similarly, we have to factor out direct effects risky acts often have on others. People who are subjected to a risk of being harmed may feel unsafe and terrified, even if the harm doesn't come about. As a consequence, the risky act may well be bad simply because the relevant feelings are themselves a kind of harm. It is then not the riskiness itself that carries the negative moral weight.

One might also argue that some of the things we objectively value are incompatible with certain kinds of reckless behaviour. Plausibly, honoring friendship involves not acting in a way that might easily cause great harm to friends, merely for one's personal enjoyment. But here again it is not obvious that the relevant value is genuinely objective. After all, we value being a morally conscientious agent, which I think roughly means maximizing expected (objective) moral value. It would clearly be a mistake to treat that as an aspect of objective value.

In this connection, let me end by quoting an insightful passage from Holly Smith's "Subjective Rightness" (2010), on the more general question of whether objective moral status is sensitive to the agent's beliefs (not only her uncertainty about outcomes).

[T]here are moral views according to which an action's objective moral status may be partly or wholly a function of the agent's beliefs. [...] For example, on many moral views, lying is wrong, where "lying" is defined (roughly) as asserting what the agent believes to be a falsehood with the intention of deceiving his audience. To perform an act of lying requires the agent to have two beliefs: the belief that his assertion is false, and the belief that his assertion will deceive his audience. Other types of acts commonly held to be wrong also involve attitudinal states that, on analysis, turn out to involve the agent's beliefs. Examples include stealing (taking possession of property one believes to belong to another) and committing murder (acting in a way that one believes and intends will result in the death of another person). [...]

It could be cogently argued (and I am sympathetic with this argument) that in the case of each of these types of performance [...], it is only the underlying non-mental activity that is objectively wrong. On this view, when we evaluate as "right" or "wrong" the more complex act (such as lying or stealing) -- an act that involves bodily motions, the surrounding circumstances, and the agent's beliefs and desires -- we are using a kind of time-saving (but misleading) shortcut that merges together considerations of objective moral status, subjective moral status, and blameworthiness. Thus, in the case of lying, it could be argued that what is genuinely objectively wrong is making an assertion that misleads the person who hears it; what is subjectively wrong is making an assertion in the belief that it is false and will mislead; and what is blameworthy is performing an act that one believes to be subjectively wrong.

Comments

# on 10 December 2014, 20:09

Hi, Wo


With regard to the "objective chance" of getting the liver in the fair coin scenario, isn't that also the chance from a certain epistemic perspective, or more precisely from the perspective of a scenario in which certain info is available to the agent, and certain info is not?
That's not the same as the agent's actual probabilistic assessment (the agent might be making a mistake, for example), but it still seems to me that it's sensitive to the information available to the agent, since in order to toss a fair coin with a 1/2 chance of favoring each patient, the agent needs to have some information about the coin and needs to lack some other information about it.
For example, if the agent had enough information to make predictions at the level of subatomic particles (as much info about that as possible), and sufficient computing power, maybe the 1/2 assessment would be incorrect in nearly all cases, regardless of whether the coin is fair - perhaps an agent like that ought to assign probability close to 1 to either tails or heads each time (even if the universe happens to be indeterministic, which might or might not be the case as far as I know).

Alternatively, instead of a coin, maybe a computer-simulated coin is clearer: if the computer program seems to work like a fair coin, it's proper to assign 1/2 probability to either heads or tails from the perspective of a normal human agent using the program (and it seems to me that using the program would be as good a method of picking the patient as tossing the actual coin), but not from the perspective of an agent who can actually run the program's algorithms and get a detailed picture of the outcome before the program produces it (and after the relevant command is given to the program, etc.). That would perhaps not be as precise as the assessment based on subatomic particles, but it would still be based on a lot more info than the normal human assessment.

So, my impression is that the distinction between the probabilistic assessment an agent actually makes based on the info available to her, and the probabilistic assessment she should make based on that info, may be the relevant one in this context (and I suppose that might be called subjective/objective probability, but that's a matter of terminology; I understand you're not using the words in that manner).

In particular, if the "toss a coin" option is objectively better (in the terminology you are using) because there is an objective 1/2 chance, regardless of what the agent tossing the coin believes, that would still imply that objective value is sensitive to the information available to the agent tossing the coin, regardless of whether it is also sensitive to the probabilistic assessment that the agent actually makes - properly or not.

I think similar considerations apply to objective risk vs. subjective risk.
