
Validity judgments

Philosophers (and linguists) often appeal to judgments about the validity of general principles or arguments. For example, they judge that if C entails D, then 'if A then C' entails 'if A then D'; that 'it is not the case that it will be that P' is equivalent to 'it will be the case that not P'; that the principles of S5 are valid for metaphysical modality; that 'there could have been some person x such that actually x sits and actually x doesn't sit' is an unsatisfiable contradiction; and so on. In my view, such judgments are almost worthless: they carry very little evidential weight.

Reduction and coordination

The following principles have something in common.

Conditional Coordination Principle.
A rational person's credence in a conditional A->B should equal her conditional credence in B given A, i.e. the ratio of her credences in the corresponding propositions A&B and A; that is, Cr(A->B) = Cr(B/A) = Cr(A&B)/Cr(A).
Normative Coordination Principle.
On the supposition that A is what should be done, a rational agent should be motivated to do A; that is, very roughly, Des(A/Ought(A)) > 0.5.
Probability Coordination Principle.
On the supposition that the chance of A is x, a rational agent should assign credence x to A; that is, roughly, Cr(A/Ch(A)=x) = x.
Nomic Coordination Principle.
On the supposition that it is a law of nature that A, a rational agent should assign credence 1 to A; that is, Cr(A/L(A)) = 1.

All these principles claim that an agent's attitudes towards a certain kind of proposition rationally constrain their attitudes towards other propositions.
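To make the ratio in the first principle concrete, here is a minimal Python sketch; the credence values are made-up numbers of my own and play no role beyond illustration.

# Credences over the four ways A and B can turn out (toy numbers).
cr = {
    ("A", "B"): 0.3,        # A & B
    ("A", "not-B"): 0.1,    # A & not-B
    ("not-A", "B"): 0.2,    # not-A & B
    ("not-A", "not-B"): 0.4 # not-A & not-B
}

cr_A = cr[("A", "B")] + cr[("A", "not-B")]   # Cr(A) = 0.4
cr_A_and_B = cr[("A", "B")]                  # Cr(A & B) = 0.3
cr_B_given_A = cr_A_and_B / cr_A             # Cr(B/A) = 0.75

# The Conditional Coordination Principle then demands Cr(A -> B) = 0.75.
print(cr_B_given_A)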

Do laws explain regularities?

Humeans about laws of nature hold that the laws are nothing over and above the history of occurrent events in the world. Many anti-Humeans, by contrast, hold that the laws somehow "produce" or "govern" the occurrent events and thus must be metaphysically prior to those events. On this picture, the regularities we find in the world are explained by underlying facts about laws. A common argument against Humeanism is that Humeans can't account for the explanatory role of laws: if laws are just regularities, then laws can't really explain the regularities, since nothing can explain itself (or so the charge goes).

Confirmation and singular propositions

In discussions of the raven paradox, it is generally assumed that the (relevant) information gathered from an observation of a black raven can be regimented into a statement of the form Ra & Ba ('a is a raven and a is black'). This is in line with what a lot of "anti-individualist" or "externalist" philosophers say about the information we acquire through experience: when we see a black raven, they claim, what we learn is not a descriptive or general proposition to the effect that whatever object satisfies such-and-such conditions is a black raven, but rather a "singular" proposition about a particular object -- we learn that this very object is black and a raven. It seems to me that this singularist doctrine makes it hard to account for many aspects of confirmation.

Small formulas with large models

Take the usual language of first-order logic from introductory textbooks, without identity and function symbols. The vast majority of sentences in this language are satisfied in models with very few individuals. You even have to make an effort to come up with a sentence that requires three or four individuals. The task gets harder if you also want the sentence to be fairly short. So I wonder: for any given number n, what is the shortest sentence that requires n individuals?
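Here is a brute-force sketch in Python, just to make the question concrete. The example sentence Ex(Fx & Gx) & Ex(Fx & ~Gx) & Ex ~Fx is my own; its three conjuncts need pairwise distinct witnesses, so it should come out as requiring three individuals.

from itertools import product

def satisfied(domain, F, G):
    # Does (domain, F, G) satisfy Ex(Fx & Gx) & Ex(Fx & ~Gx) & Ex ~Fx ?
    return (any(F[x] and G[x] for x in domain)
            and any(F[x] and not G[x] for x in domain)
            and any(not F[x] for x in domain))

def smallest_model_size(max_n=5):
    # Search domains of size 1, 2, ... for an interpretation of F and G
    # that satisfies the sentence; return the first size at which one exists.
    for n in range(1, max_n + 1):
        domain = list(range(n))
        for f_vals in product([True, False], repeat=n):
            for g_vals in product([True, False], repeat=n):
                F = dict(zip(domain, f_vals))
                G = dict(zip(domain, g_vals))
                if satisfied(domain, F, G):
                    return n
    return None

print(smallest_model_size())   # 3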

Belief update: shifting, pushing, and pulling

It is widely agreed that conditionalization is not an adequate norm for the dynamics of self-locating beliefs. There is no agreement on what the right norms should look like. Many hold that there are no dynamic norms on self-locating beliefs at all. On that view, an agent's self-locating beliefs at any time are determined on the basis of the agent's evidence at that time, irrespective of the earlier self-locating belief. I want to talk about an alternative approach that assumes a non-trivial dynamics for self-locating beliefs. The rough idea is that as time goes by, a belief that it is Sunday should somehow turn into a belief that it is Monday.
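As a crude rendering of that rough idea (the distribution over days and the update rule are my own toy construction, not a worked-out proposal):

days = ["Sunday", "Monday", "Tuesday", "Wednesday"]
cr = {"Sunday": 0.8, "Monday": 0.2, "Tuesday": 0.0, "Wednesday": 0.0}

def shift_forward(cr, days):
    # Push each day's credence onto the following day, modelling the passage
    # of one day. Real proposals must handle uncertainty about how much time
    # has passed; this sketch ignores that and simply drops any credence in
    # the last listed day.
    shifted = {d: 0.0 for d in days}
    for i in range(len(days) - 1):
        shifted[days[i + 1]] += cr[days[i]]
    return shifted

print(shift_forward(cr, days))
# {'Sunday': 0.0, 'Monday': 0.8, 'Tuesday': 0.2, 'Wednesday': 0.0}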

Functionalism and the nature of propositions

Let's assume that propositional attitudes are not metaphysically fundamental: if someone has such-and-such beliefs and desires, that is always due to other, more basic, and ultimately non-intentional facts. In terms of supervenience: once all non-intentional facts are settled, all intentional facts are settled as well.

Then how are propositional attitudes grounded in non-intentional facts? A promising approach is to identify a characteristic "functional role" of propositional attitudes and then explain facts about propositional attitudes in terms of facts about the realization of that role. (We could also identify the attitude with the realizer, or with the higher-order property of having a realizer, but that's optional.)

Sleeping Beauty is testing a hypothesis

Let's look at the third type of case in which credences can come apart from known chances. Consider the following variation of the Sleeping Beauty problem (a.k.a. "The Absentminded Driver"):

Before Sleeping Beauty awakens on Monday, a coin is tossed. If the coin lands tails, Beauty's memories of Monday will be erased the following night, and the coin will be tossed again on Tuesday. If the Monday toss lands heads, no memory erasure or further tosses take place. Beauty is aware of all these facts.

When Beauty awakens on Monday morning and learns that today's toss has landed tails (alternatively: that the Monday toss has landed tails), how should that affect her credence in the hypothesis that the coin is fair?

Undermining and confirmation

Next, undermining. Suppose we are testing a model H according to which the probability that a certain type of coin toss results in heads is 1/2. On some accounts of physical probability, including frequency accounts and "best system" accounts, the truth of H is incompatible with the hypothesis that all tosses of the relevant type in fact result in heads. So we get a counterexample to simple formulations of the Principal Principle: on the assumption that H is true, we know that the outcomes can't be all-heads, even though H assigns positive probability to all-heads. In such a case, we say that all-heads is undermining for H.
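A toy calculation, with a run length chosen by me purely for illustration, shows how the conflict arises on such accounts:

N = 10   # number of tosses of the relevant type (illustrative)

# H assigns chance 1/2 to heads on each toss, and hence a positive chance
# to the all-heads outcome:
ch_all_heads = 0.5 ** N          # about 0.001

# On a frequency account (or a suitable best-system account), all-heads
# would make the heads-frequency 1 rather than 1/2, so all-heads is
# incompatible with the truth of H; the rational conditional credence is zero:
cr_all_heads_given_H = 0.0

# A simple Principal Principle would require these two numbers to coincide.
print(ch_all_heads, cr_all_heads_given_H)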

Inadmissible evidence in Bayesian Confirmation Theory

Suppose we are testing statistical models of some physical process -- a certain type of coin toss, say. One of the models in question holds that the probability of heads on each toss is 1/2; another holds that the probability is 1/4. We set up a long run of trials and observe about 50 percent heads. One would hope that this confirms the model according to which the probability of heads is 1/2 over the alternative.

(Subjective) Bayesian confirmation theory says that some evidence E supports some hypothesis H for some agent to the extent that the agent's rational credence C in the hypothesis is increased by the evidence, so that C(H/E) > C(H). We can now verify that the observation of about 50 percent heads strongly confirms that the coin is fair, as follows.
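The verification is a routine Bayesian calculation; here is a sketch of the kind of computation involved, where the run length of 1000 tosses and the even prior over the two models are assumptions of mine for illustration.

from math import comb

n, k = 1000, 500   # tosses in the run and observed heads (illustrative)

def likelihood(p):
    # Binomial probability of exactly k heads in n tosses with heads-chance p.
    return comb(n, k) * p**k * (1 - p)**(n - k)

prior_half = prior_quarter = 0.5          # even prior over the two models
like_half = likelihood(0.5)
like_quarter = likelihood(0.25)

posterior_half = (prior_half * like_half) / (
    prior_half * like_half + prior_quarter * like_quarter)

print(posterior_half)   # practically 1, so C(H/E) far exceeds C(H)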
