Wolfgang Schwarz

Sleeping Beauty as losing track of time

What makes the Sleeping Beauty problem non-trivial is Beauty's potential memory loss on Monday night. In my view, this means that Sleeping Beauty should be modeled as a case of potential epistemic fission: if the coin lands tails, any update Beauty makes to her beliefs in the transition from Sunday to Monday will also fix her beliefs on Tuesday, and so the Sunday state effectively has two epistemic successors, one on Monday and one on Tuesday. All accounts of epistemic fission that I'm aware of then entail halfing.
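
For orientation, the two standard verdicts can be spelled out with toy bookkeeping (a generic illustration, not the fission argument itself). Thirders treat the three possible awakenings (Heads-Monday, Tails-Monday, Tails-Tuesday) as equally likely, so that on waking Cr(Heads) = 1/3. Halfers retain the Sunday credence, so that Cr(Heads) = 1/2, with the tails half split across the two awakenings: Cr(Tails & Monday) = Cr(Tails & Tuesday) = 1/4.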

Localism in decision theory

Decision theory comes in many flavours. One of the most important but least discussed divisions concerns the individuation of outcomes. There are basically two camps. One side -- dominant in economics, psychology, and social science -- holds that in a well-defined decision problem, the outcomes are exhausted by a restricted list of features: in the most extreme version, by the amount of money the agent receives as the result of the relevant choice. In less extreme versions, we may also consider the agent's social status or her overall well-being. But we are not allowed to consider non-local features of an outcome such as the act that brought it about, the state under which it was chosen, or the alternative acts available at the time. This doctrine doesn't have a name. Let's call it localism (or utility localism).
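
To make the contrast concrete, here is a toy Python sketch (my own example, not from the post): a localist utility function looks only at the money received, while a non-localist one is also sensitive to a non-local feature, such as regret at having passed up a better alternative.

def localist_utility(money_received):
    # Localism: only the amount the agent ends up with matters.
    return money_received

def regret_sensitive_utility(money_received, best_alternative):
    # Non-localist: the outcome is also evaluated by comparison with what
    # the other available acts would have yielded (a non-local feature).
    regret = max(0, best_alternative - money_received)
    return money_received - 0.5 * regret

print(localist_utility(100))                # 100
print(regret_sensitive_utility(100, 300))   # 0.0: same money, heavy regret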

Necessitarianism, dispositionalism, and dynamical laws

Necessitarian and dispositionalist accounts of laws of nature have a well-known problem with "global" laws like the conservation of energy, for these laws don't seem to arise from the dispositions of individual objects, nor from necessary connections between fundamental properties. It is less well-known that a similar, and arguably more serious, problem arises for dynamical laws in general, including Newton's second law, the Schrödinger equation, and any other law that allows one to predict the future from the present.
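
For concreteness, the dynamical laws mentioned have the familiar forms F = ma (Newton's second law) and iħ ∂ψ/∂t = Ĥψ (the Schrödinger equation): given the present state of a system (its positions and velocities, or its present wave function), each determines how that state evolves, and hence what the state will be at later times.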

Is it ever rational to calculate expected utilities?

Decision theory says that faced with a number of options, one should choose an option that maximizes expected utility. It does not say that before making one's choice, one should calculate and compare the expected utility of each option. In fact, if calculations are costly, decision theory seems to say that one should never calculate expected utilities.
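
Here is the worry in toy numbers (my own illustration, on the simplifying assumption that calculating changes nothing except subtracting a fixed utility cost from whatever one then chooses):

eu = {"A": 10.0, "B": 7.0}   # expected utilities of the basic options
calc_cost = 1.0              # utility cost of performing the calculation

# Choosing an option directly yields that option's expected utility.
best_direct = max(eu.values())                         # 10.0

# Calculating first and then picking the best option yields the same
# expected utility minus the cost of the calculation.
calculate_then_choose = max(eu.values()) - calc_cost   # 9.0

print(best_direct > calculate_then_choose)   # True whenever calc_cost > 0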

New server

I've moved all my websites to a new server. Let me know if you notice anything that stopped working. (Philosophy blogging will resume shortly as well.)

Overlapping acts

I'm currently teaching a course on decision theory. Today we discussed chapter 2 of Jim Joyce's Foundations of Causal Decision Theory, which is excellent. But there's one part I don't really get.

Philosophical models and ordinary language

A lot of what I do in philosophy is develop models: models of rational choice, of belief update, of semantics, of communication, etc. Such models are supposed to shed light on real-world phenomena, but the connection between model and reality is not completely straightforward.

Beliefs, degrees of belief, and earthquakes

There has been a lively debate in recent years about the relationship between graded belief and ungraded belief. The debate presupposes something we should regard with suspicion: that there is such a thing as ungraded belief.

Validity judgments

Philosophers (and linguists) often appeal to judgments about the validity of general principles or arguments. For example, they judge that if C entails D, then 'if A then C' entails 'if A then D'; that 'it is not the case that it will be that P' is equivalent to 'it will be the case that not P'; that the principles of S5 are valid for metaphysical modality; that 'there could have been some person x such that actually x sits and actually x doesn't sit' is an unsatisfiable contradiction; and so on. In my view, such judgments are almost worthless: they carry very little evidential weight.
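
In symbols (my notation, with F for 'it will be the case that' and @ for 'actually'), the cited judgments read roughly: if C entails D, then A -> C entails A -> D; ¬FP is equivalent to F¬P; the characteristic S5 axiom ◇A -> □◇A holds for metaphysical modality; and ◇∃x(@Sx ∧ @¬Sx) has no model.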

Reduction and coordination

The following principles have something in common.

Conditional Coordination Principle.
A rational person's credence in a conditional A->B should equal the ratio of her credence in the corresponding propositions A&B and A; that is, Cr(A->B) = Cr(B/A) = Cr(A&B)/Cr(A).
Normative Coordination Principle.
On the supposition that A is what should be done, a rational agent should be motivated to do A; that is, very roughly, Des(A/Ought(A)) > 0.5.
Probability Coordination Principle.
On the supposition that the chance of A is x, a rational agent should assign credence x to A; that is, roughly, Cr(A/Ch(A)=x) = x.
Nomic Coordination Principle.
On the supposition that it is a law of nature that A, a rational agent should assign credence 1 to A; that is, Cr(A/L(A)) = 1.

All these principles claim that an agent's attitudes towards a certain kind of proposition rationally constrain their attitudes towards other propositions.
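
As a concrete check on the first of these, here is a small Python sketch with made-up numbers: it computes the ratio Cr(A&B)/Cr(A) from a toy credence function over four worlds, the value that the Conditional Coordination Principle says the credence in A->B should match.

# Toy credence function over the four ways A and B can turn out.
credence = {
    ("A", "B"): 0.4,
    ("A", "not-B"): 0.1,
    ("not-A", "B"): 0.2,
    ("not-A", "not-B"): 0.3,
}

cr_A  = sum(p for (a, _), p in credence.items() if a == "A")   # Cr(A) = 0.5
cr_AB = credence[("A", "B")]                                   # Cr(A&B) = 0.4

cr_B_given_A = cr_AB / cr_A   # Cr(B/A) = Cr(A&B)/Cr(A) = 0.8
print(cr_B_given_A)           # the principle: Cr(A->B) should equal this value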
