Given some evidence E and some proposition P, we can ask to what extent E supports P, and thus to what extent an agent should believe P if their only relevant evidence is E. The question may not always have a precise answer, but there are both intuitive and theoretical reasons to assume that the question is meaningful – that there is a kind of (imprecise) "evidential probability" conferred by evidence on propositions. That's why it makes sense to say, for example, that one should proportion one's beliefs to one's evidence.
In 2008, I wrote a post on Stalnaker on self-location, in which I attributed a certain position to Stalnaker and raised some objections. But the position isn't actually Stalnaker's. (It might be closer to Chisholm's.) So here is another attempt at figuring out Stalnaker's view. (I'm mostly drawing on chapter 3 of Our Knowledge of the Internal World (2008), chapter 5 of Context (2014), and a forthcoming paper called "Modeling a perspective on the world" (2015).)
In "Ramseyan Humility", Lewis argues for a thesis he calls "Humility". He never quite says what that thesis is, but its core seems to be the claim that our evidence can never rule out worlds that differ from actuality merely by swapping around fundamental properties. Lewis's argument, on pp.205-207, is perhaps the most puzzling argument he ever gave.
In The Logic of Decision, Richard Jeffrey pointed out that the desirability (or "news value") of a proposition can be usefully understood as a weighted average of the desirabilities of the different ways in which the proposition can be true, weighted by their respective probabilities. That is, if A and B are incompatible propositions (and P(A \vee B) > 0), then

\[
V(A \vee B) = \frac{P(A)\,V(A) + P(B)\,V(B)}{P(A) + P(B)},
\]

where V is desirability and P is probability.
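As a quick sanity check on the averaging rule, here is a toy calculation in Python (my own illustration; the propositions and numbers are made up, not Jeffrey's):

```python
# Jeffrey's averaging rule: the desirability of (A or B) is the
# probability-weighted average of the desirabilities of A and of B,
# for incompatible A and B. All numbers below are invented.
def desirability_of_disjunction(p_a: float, v_a: float,
                                p_b: float, v_b: float) -> float:
    assert p_a + p_b > 0, "the disjunction must have positive probability"
    return (p_a * v_a + p_b * v_b) / (p_a + p_b)

# e.g. A = raise with bonus (prob 0.2, value 10), B = raise without bonus
# (prob 0.1, value 4); the news value of "a raise" is their weighted average:
print(desirability_of_disjunction(0.2, 10.0, 0.1, 4.0))  # 8.0
```

Note that the average is weighted by the probabilities of A and B conditional on the disjunction, which is why the denominator renormalizes by P(A) + P(B).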
Superficially, modal auxiliaries such as 'must', 'may', 'might', or 'can' seem to be predicate operators. So it is tempting to interpret them as functions from properties to properties: just as 'Alice jumps' attributes to Alice the property of jumping, 'Alice can jump' attributes to her the property of being able to jump, 'Alice may jump' attributes the property of being allowed to jump, and so on.
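To see the type-lifting idea behind this temptation, here is a toy sketch (my own construction, not from the post; the worlds, names, and accessibility relation are all invented):

```python
from typing import Callable

# Toy model: a property is a function from an individual and a world to a
# truth value; a modal auxiliary is then a function from properties to
# properties.
World = str
Entity = str
Property = Callable[[Entity, World], bool]

def accessible(w: World) -> list[World]:
    """Hypothetical accessibility relation: worlds possible from w."""
    return ["w1", "w2"]

def can(p: Property) -> Property:
    """'can p' holds of x at w iff p holds of x at some accessible world."""
    return lambda x, w: any(p(x, v) for v in accessible(w))

def must(p: Property) -> Property:
    """'must p' holds of x at w iff p holds of x at every accessible world."""
    return lambda x, w: all(p(x, v) for v in accessible(w))

jumps: Property = lambda x, w: (x, w) in {("Alice", "w1")}
print(can(jumps)("Alice", "w0"))   # True: Alice jumps at some accessible world
print(must(jumps)("Alice", "w0"))  # False: she doesn't jump at every one
```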
Let's say that an act A is subjectively better than an alternative B if A is better in light of the agent's information; A is objectively better if it is better in light of all the facts. The distinction is easiest to grasp in a consequentialist setting. Here an act is objectively better if it brings about more good -- if it saves more lives, for example. A morally conscientious agent may not know which of her options would bring about more good. Her subjective ranking of the options might therefore go by the expectation of the good: by the probability-weighted average of the good each act might bring about.
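Here is a minimal sketch of that subjective ranking (my own illustration; the acts, outcomes, and numbers are invented):

```python
# Rank acts by the probability-weighted average of the good they
# might bring about.
def expected_good(outcomes: dict[str, tuple[float, float]]) -> float:
    """outcomes maps each possible outcome to (probability, amount of good)."""
    return sum(p * g for p, g in outcomes.values())

act_a = {"saves five": (0.9, 5.0), "saves none": (0.1, 0.0)}
act_b = {"saves six": (0.5, 6.0), "saves none": (0.5, 0.0)}

# A is subjectively better than B iff its expectation of the good is higher:
print(expected_good(act_a), expected_good(act_b))   # 4.5 3.0
print(expected_good(act_a) > expected_good(act_b))  # True
```

On this ranking, A comes out subjectively better even though B might well be objectively better: if B would in fact save six, B brings about more good.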
"The Philosopher's Index" is a commercial software once widely used to search for articles in philosophy journals. These days it is generally easier and faster to search on the open internet. (Even the company behind the Philosopher's Index is not quite sure why the Index is still needed.) However, there is one thing the Index has that can't be found anywhere else: many of its entries contain abstracts of books and articles, apparently provided by the authors themselves. These abstracts are often not part of the published versions, and they can be quite useful to get an authoritative summary, or to see what the author considered to be the main point of a paper.
If you spin a wheel of fortune, the outcome -- red or black -- depends on the speed with which you spin. As you increase the speed, the outcome quickly cycles through the two possibilities red and black. As a consequence, any reasonably smooth probability distribution (or frequency distribution) over initial speed determines an approximately equal probability (frequency) for red and black. Here is an example of such a distribution, taken from Strevens.
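A quick simulation conveys the point (my own sketch, not Strevens's code or his distribution; the gamma density merely stands in for "any reasonably smooth distribution"):

```python
import random

# As spin speed increases, the outcome cycles rapidly between red and
# black, so each colour occupies many narrow, alternating bands of the
# speed axis. Any density that is roughly flat across neighbouring bands
# then assigns red and black roughly equal probability.
def outcome(speed: float, cycles_per_unit: float = 50.0) -> str:
    return "red" if int(speed * cycles_per_unit) % 2 == 0 else "black"

random.seed(0)
speeds = [random.gammavariate(9.0, 1.0) for _ in range(100_000)]
reds = sum(outcome(s) == "red" for s in speeds)
print(reds / len(speeds))  # close to 0.5
```

Increasing cycles_per_unit only sharpens the result, which is the sense in which the exact shape of the speed distribution doesn't matter.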
I've been asked to review Michael Strevens's new book, Tychomancy. This motivated me to have another look at his earlier book Bigger than Chaos.
The aim of Bigger than Chaos is to explain how apparently chaotic interactions in highly complex systems often give rise to simple large-scale regularities, such as the laws of thermodynamics, the stability of predator/prey population levels, or the economic cycle. The basic explanatory strategy, which Strevens calls enion probability analysis (EPA), consists in aggregating the probabilistic dynamics for the individual components of a complex system into a probabilistic dynamics for macro-level features of the system.
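To give a rough feel for the aggregation strategy, here is a toy sketch (my construction, not code or a model from the book; the probabilities are invented): individual-level outcomes are random, but the macro variable they sum to is almost deterministic.

```python
import random

# Each individual ("enion") survives and reproduces at random, yet the
# macro-level population is nearly stable: micro-level chaos washes out
# into a simple large-scale regularity.
def step(population: int, p_survive: float = 0.5, p_birth: float = 0.5) -> int:
    survivors = sum(random.random() < p_survive for _ in range(population))
    births = sum(random.random() < p_birth for _ in range(population))
    return survivors + births

random.seed(1)
pop = 10_000
for _ in range(5):
    pop = step(pop)
    print(pop)  # stays near 10_000, with fluctuations of order sqrt(N)
```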
Plausible moral theories should be agent-relative. They should permit us to care more about close friends than about distant strangers. They can prohibit killing ten innocent people even in circumstances where eleven innocent people would otherwise be killed by somebody else. They might say that it would be right for Alice to dance with Bob, but wrong for Bob to dance with Alice.