Let's say that an act A is subjectively better than an alternative B if A is better in light of the agent's information; A is objectively better if it is better in light of all the facts. The distinction is easiest to grasp in a consequentialist setting. Here an act is objectively better if it brings about more good -- if it saves more lives, for example. A morally conscientious agent may not know which of her options would bring about more good. Her subjective ranking of the options might therefore go by the expectation of the good: by the probability-weighted average of the good each act might bring about.
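In symbols (a schematic formulation of my own; nothing here hangs on the notation): where the \(S_i\) are the possible states of the world, \(Cr\) is the agent's credence function, and \(V(A, S_i)\) is the amount of good that \(A\) would bring about in state \(S_i\), the subjective ranking goes by

\[ EV(A) = \sum_i Cr(S_i)\, V(A, S_i), \]

so that \(A\) is subjectively better than \(B\) just in case \(EV(A) > EV(B)\).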
"The Philosopher's Index" is a commercial software once widely used to search for articles in philosophy journals. These days it is generally easier and faster to search on the open internet. (Even the company behind the Philosopher's Index is not quite sure why the Index is still needed.) However, there is one thing the Index has that can't be found anywhere else: many of its entries contain abstracts of books and articles, apparently provided by the authors themselves. These abstracts are often not part of the published versions, and they can be quite useful to get an authoritative summary, or to see what the author considered to be the main point of a paper.
If you spin a wheel of fortune, the outcome -- red or black -- depends on the speed with which you spin. As you increase the speed, the outcome quickly cycles through the two possibilities red and black. As a consequence, any reasonably smooth probability distribution (or frequency distribution) over initial speed determines an approximately equal probability (frequency) for red and black. [Figure: an example of such a distribution, from Strevens.]
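To see the effect numerically, here is a quick simulation (my own toy setup, not Strevens's: the wheel geometry, the constants, and the two distributions are all made up for illustration):

    import random

    # Toy wheel of fortune: 100 alternating red/black segments.
    # The outcome flips rapidly as a function of initial spin speed,
    # so any smooth speed distribution yields red about half the time.
    SEGMENTS = 100

    def outcome(speed):
        """Map an initial speed to the color the wheel stops on."""
        # Total rotation proportional to speed (friction etc. absorbed
        # into the constant); only the final position matters.
        rotation = 25.0 * speed
        segment = int(rotation * SEGMENTS) % SEGMENTS
        return "red" if segment % 2 == 0 else "black"

    def frequency_of_red(speeds):
        return sum(1 for s in speeds if outcome(s) == "red") / len(speeds)

    # Two quite different smooth distributions over initial speed:
    gauss = [random.gauss(3.0, 0.5) for _ in range(100_000)]
    gamma = [random.gammavariate(4.0, 0.8) for _ in range(100_000)]

    print(frequency_of_red(gauss))  # ~0.5
    print(frequency_of_red(gamma))  # ~0.5

Both printed frequencies come out near 0.5, and they stay near 0.5 under fairly radical changes to the two distributions, as long as the distributions remain smooth at the scale at which the outcome alternates.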
I've been asked to review Michael Strevens's new book, Tychomancy. This motivated me to have another look at his earlier book Bigger than Chaos.
The aim of Bigger than Chaos is to explain how apparently chaotic interactions in highly complex systems often give rise to simple large-scale regularities, such as the laws of thermodynamics, the stability of predator/prey population levels, or the economic cycle. The basic explanatory strategy, which Strevens calls enion probability analysis (EPA), consists in aggregating the probabilistic dynamics for the individual components of a complex system into a probabilistic dynamics for macro-level features of the system.
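To get a feel for the strategy, here is a crude toy model (my own illustration, far simpler than anything in the book): each individual rabbit's fate is probabilistic and unpredictable, yet the population-level growth rate that results from aggregating a hundred thousand such fates is almost perfectly stable.

    import random

    # Toy enion-style aggregation (my illustration, not Strevens's
    # formalism): micro-level chance, macro-level regularity.

    def step(population, survival_prob=0.8, litter_prob=0.25):
        """One generation: each rabbit independently survives with
        probability 0.8 and produces one offspring with probability 0.25."""
        survivors = sum(1 for _ in range(population)
                        if random.random() < survival_prob)
        births = sum(1 for _ in range(population)
                     if random.random() < litter_prob)
        return survivors + births

    pop = 100_000
    for generation in range(5):
        new_pop = step(pop)
        print(generation, new_pop / pop)  # ~1.05, generation after generation
        pop = new_pop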
Plausible moral theories should be agent-relative. They should permit us to care more about close friends than about distant strangers. They can prohibit killing ten innocent people even in circumstances where eleven innocent people would otherwise be killed by somebody else. They might say that it would be right for Alice to dance with Bob, but wrong for Bob to dance with Alice.
Often the factors that determine a phenomenon don't determine it uniquely. Sometimes this changes the phenomenon itself.
Take language. Plausibly, the meanings of our words are somehow determined by patterns of use, but these patterns aren't specific enough to fix, say, a unique extension or intension for our language. There is a range of precise meaning assignments all of which fit our use equally well. One might leave it at that and say that it is indeterminate which of these precise languages we speak. But this misses something. It misses the fact that we don't speak a precise language. For example, in a precise language, "Mount Everest has sharp boundaries" would be true, but in English it is false. The logic of a precise language would (arguably) be classical, but the logic of English is not.
When we face a decision and work out what we should do, we gain information about what we will do. Taking this information into account can in turn affect what we should do. Here's an example.
Lewis, in "Causal Decision Theory" (1981, p.308):
Suppose we have a partition of propositions that distinguish worlds where the agent acts differently ... Further, he can act at will so as to make any one of these propositions hold, but he cannot act at will so as to make any proposition hold that implies but is not implied by (is properly included in) a proposition in the partition. ... Then this is the partition of the agent's alternative options.
That can't be right. Assume I "can act at will so as to make hold" the proposition P that I raise my hand. Let Q be an arbitrary fact over which I have no control, say, that Julius Caesar crossed the Rubicon. Since Q holds no matter what I do, I can also act at will so as to make P & Q true: by raising my hand I make it true; by not raising it I make it false. So, by Lewis's definition, P is not an option, since I can act at will so as to make hold a more specific proposition, P & Q (a proposition that implies but is not implied by P). By the same reasoning, any candidate option that doesn't entail Q is ruled out, so all my options must entail Q. But then they don't form a partition: they don't cover the regions of logical space where Q is false.
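The argument can be checked by brute force. Here is a sketch (a toy model with my own modelling choices, nothing from Lewis's paper): worlds settle just two facts, whether I raise my hand and whether Caesar crossed the Rubicon, and I can "make a proposition hold at will" iff some available act of mine, together with the facts outside my control, lands in that proposition.

    from itertools import chain, combinations

    # A world settles two facts: whether I raise my hand (under my
    # control) and whether Caesar crossed the Rubicon (not under my
    # control, and in fact true).
    WORLDS = [(raised, caesar) for raised in (True, False)
                               for caesar in (True, False)]
    Q_ACTUAL = True  # Caesar did cross the Rubicon

    def makeable(prop):
        """I can act at will so as to make prop hold iff some choice
        of mine, given the facts I don't control, lands in prop."""
        return any((choice, Q_ACTUAL) in prop for choice in (True, False))

    def lewis_option(prop):
        """Lewis's condition: prop is makeable, and no proposition
        properly included in it is makeable."""
        if not makeable(prop):
            return False
        proper_subsets = chain.from_iterable(
            combinations(prop, r) for r in range(1, len(prop)))
        return not any(makeable(set(s)) for s in proper_subsets)

    all_props = chain.from_iterable(
        combinations(WORLDS, r) for r in range(1, len(WORLDS) + 1))
    options = [set(p) for p in all_props if lewis_option(set(p))]

    print(options)
    # [{(True, True)}, {(False, True)}] -- both "options" entail Q,
    # so they leave out the worlds where Q is false: no partition.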
Consider a long list S1...Sn of sentences such that (a) each Si is trivially equivalent to its predecessor and successor (if any), and (b) S1 is not trivially equivalent to Sn.
For example, S1 might be a complicated mathematical or logical statement, and S1...Sn a process of slowly transforming S1 into a simpler expression. For another example, S1...Sn might be statements in different languages, where each Si qualifies as a direct translation of its neighbor(s) but S1 is not a direct translation of Sn.
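To make the structure vivid, the chain might begin with single trivial rewrites like these (a compressed illustration of my own; a genuine instance of (b) needs the chain to run on far longer):

\[ S_1:\ \neg(p \rightarrow q), \quad S_2:\ \neg(\neg p \vee q), \quad S_3:\ \neg\neg p \wedge \neg q, \quad S_4:\ p \wedge \neg q, \ \ldots \]

Each adjacent pair differs by one mechanical step, so (a) is satisfied; only once many such steps accumulate does the equivalence of the endpoints stop being trivial.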
I recently accepted a Chancellor's Fellowship at the University of Edinburgh. So it looks like the next stop, after six years in Australia, will be Scotland. Woop!