Centred propositions and agent-relative value

Plausible moral theories should be agent-relative. They should permit us to care more about close friends than about distant strangers. They can prohibit killing ten innocent people even in circumstances where eleven innocent people would otherwise be killed by somebody else. They might say that it would be right for Alice to dance with Bob, but wrong for Bob to dance with Alice.

But how should we think about agent-relative values? It may seem that the state of affairs in which Alice dances with Bob is either right or not right. How could it be right relative to Alice but wrong relative to Bob? Or consider a case where I can prevent you from killing eleven by killing ten myself. If it is wrong that you kill the eleven, then surely I have a moral reason to see to it that you don't kill the eleven, just as I have a moral reason to see to it that I don't kill the ten. Moreover, presumably it is worse if you kill eleven than if I kill ten. So shouldn't my reason to prevent you from killing the eleven outweigh my reason to not kill the ten?

The puzzlement here is caused by the implicit assumption that moral value attaches to impersonal, uncentred states of affairs: that whether Alice should dance with Bob depends on the moral status pertaining to the state of affairs in which Alice dances with Bob; that whether I should kill the ten depends on the comparative moral status of the state in which you kill the eleven and the state in which I kill the ten. To allow for agent-relative theories, we are then forced to say that the moral ranking of possible states of affairs is different for different agents: relative to Alice, it is better if Alice dances with Bob, but not relative to Bob.

This is an unattractive picture. It fails to capture the universality of credible agent-relative theories. Intuitively, a theory that prohibits killing ten to prevent the killing of eleven does not invoke moral values that are tied to, or relativized to, particular agents. It requires the very same thing of everyone.

A much better alternative is to assume that moral value attaches not to uncentred states of affairs, but to centred propositions. A centred proposition is an entity that captures differences not only between possible worlds, but also between times and people within a world. Centred propositions are often modeled as sets of triples of a possible world, a time, and a person. Alternatively, they might be identified with monadic properties. For example, the property of eating a muffin divides the class of possible people at possible times into those that eat a muffin at the time and those that do not.
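To make the triple-based model concrete, here is a small illustrative sketch (my own, with made-up worlds, times and people; nothing here is part of the philosophical claim). A centred proposition is represented as a set of (world, time, person) triples, and an uncentred proposition corresponds to the special case whose membership depends only on the world coordinate:

```python
from itertools import product

# Hypothetical toy model: two worlds, two times, two people.
worlds = ["w1", "w2"]
times = [0, 1]
people = ["Alice", "Bob"]

# Made-up fact table: which centres are muffin-eating centres.
eats_muffin_facts = {("w1", 0, "Alice"), ("w2", 1, "Bob")}

# The centred proposition "I am eating a muffin now" is the set of all
# centres (w, t, p) such that p eats a muffin at t in w.
eats_muffin = {c for c in product(worlds, times, people) if c in eats_muffin_facts}

def is_uncentred(prop):
    """True iff membership depends only on the world coordinate,
    i.e. the proposition never distinguishes centres within a world."""
    worlds_in = {w for (w, t, p) in prop}
    return all((w, t, p) in prop
               for w in worlds_in for t in times for p in people)

# The muffin property distinguishes centres within a world, so it is
# essentially centred; the trivial proposition (all centres) is not.
print(is_uncentred(eats_muffin))                          # False
print(is_uncentred(set(product(worlds, times, people))))  # True
```

This mirrors the property formulation: the set `eats_muffin` just is (the extension of) the property of eating a muffin, dividing possible people at possible times into those that instantiate it and those that do not.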

If we assume that the ultimate bearers of moral value are centred propositions, we can say, for example, that being truthful has high moral value, while killing innocents does not. On this picture, the fact that it is wrong for me to kill the ten innocents is not grounded in the moral status attaching to the impersonal state of affairs in which I kill the ten. Rather, it is grounded in the fact that killing ten innocents is seriously wrong, in a way that isn't compensated by preventing someone else from killing eleven. Similarly, if Alice has promised to dance with Bob and Bob has promised not to dance with Alice, then by dancing with Bob Alice can ensure that she instantiates the valuable property of keeping one's promise, even though doing so will go along with making someone else break their promise.

Centred propositions were originally invented to deal with puzzles involving self-locating ignorance and to explain the different effects believing one and the same uncentred proposition often seems to have on the believer's behaviour. But these arguments for postulating attitudes with centred content are controversial, since there seem to be other ways to account for the relevant phenomena. In particular, one can generally model the content of the relevant beliefs in terms of uncentred propositions involving particular haecceities to which only the relevant subjects have access.

Interestingly, this alternative is no longer available if we want to model agent-centred values. Here the haecceitist account would simply take us back to the point where we have to say that different basic moral values (involving different haecceities) are relevant for different agents.

So moral value is "essentially centred". The same is true for rational value and aesthetic value. In general, what is right or wrong, valuable or valueless, are centred propositions.

It seems to me that this provides some (though certainly not conclusive) reason for also modeling various attitudes in terms of centred propositions. After all, there are close connections between values and attitudes. On some accounts, something is valuable iff it ought to be desired; on others, something is valuable iff it would be desired (or desired to be desired) under ideal conditions. This suggests that desire and value have the same kinds of object. Moreover, it is plausible that "subjective" moral status -- what is right or wrong in the light of some probability measure -- derives from "objective" status by decision-theoretic principles, which suggests that the relevant probabilities should also pertain to centred propositions.
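As a toy sketch of the last point (my own illustration, with hypothetical acts, centres and numbers): if basic values attach to centred propositions and the agent's credence is likewise defined over centres, then the subjective value of an act can be computed as an expected value in the usual decision-theoretic way:

```python
# Hypothetical credence over which centre I occupy; self-locating
# uncertainty is just uncertainty between centres, here two of them.
credence = {("w1", 0, "me"): 0.6, ("w2", 0, "me"): 0.4}

# Made-up objective value of the centred outcome of each act,
# depending on which centre I in fact occupy.
value_if_act = {
    "keep_promise":  {("w1", 0, "me"): 10, ("w2", 0, "me"): 10},
    "break_promise": {("w1", 0, "me"): -5, ("w2", 0, "me"): 20},
}

def subjective_value(act):
    """Credence-weighted average of the act's centred outcome values."""
    return sum(credence[c] * value_if_act[act][c] for c in credence)

# 0.6*10 + 0.4*10 = 10  vs  0.6*(-5) + 0.4*20 = 5
best = max(value_if_act, key=subjective_value)
print(best)  # keep_promise
```

Nothing hinges on the particular numbers; the point is only that the expectation is taken over centres rather than uncentred worlds, so both the probabilities and the values pertain to centred propositions.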

(Just to be clear: these are not claims about language. I do not claim that certain statements of English should be analyzed in terms of centred propositions. I am talking about the nature of value, belief and desire.)

