
Mechanistic evidence for probabilistic models

You observe a process that generates two kinds of outcomes, 'heads' and 'tails'. The outcomes appear in seemingly random order, with roughly the same number of heads as tails. These observations support a probabilistic model of the process, according to which the probability of heads and of tails on each trial is 1/2, independently of the other outcomes.

How observations about frequencies confirm or disconfirm probabilistic models is well understood in Bayesian epistemology. The central assumption that does most of the work is the Principal Principle, which states that if a model assigns (objective) probability x to some outcomes, then conditional on the model, the outcomes have (subjective) probability x. It follows that models that assign higher probability to the observed outcomes receive a greater boost of subjective probability than models that assign lower probability to the outcomes.
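(As a minimal illustration of this updating dynamic, not taken from the post: here is a sketch comparing two candidate chance models of the coin-like process. The candidate models, the prior, and the observed counts are stipulated for the example.)

```python
# Hedged sketch: Bayesian updating on two candidate chance models.
# The Principal Principle licenses using each model's objective probability
# of the observed sequence as the likelihood P(data | model).
# The models (p = 0.5 vs p = 0.7), prior, and data are illustrative only.

def likelihood(p_heads, num_heads, num_tails):
    """Probability the model assigns to a particular sequence of outcomes."""
    return p_heads**num_heads * (1 - p_heads)**num_tails

models = {"fair (p=1/2)": 0.5, "biased (p=0.7)": 0.7}
prior = {name: 0.5 for name in models}        # equal prior credence
num_heads, num_tails = 52, 48                 # roughly equal frequencies

unnormalized = {name: prior[name] * likelihood(p, num_heads, num_tails)
                for name, p in models.items()}
total = sum(unnormalized.values())
posterior = {name: v / total for name, v in unnormalized.items()}

print(posterior)  # the model assigning higher probability to the data
                  # (here the fair-coin model) gets the larger boost
```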

IDT again

In my recent post on Interventionist Decision Theory, I suggested that causal interventionists should follow Stern and move from a Jeffrey-type definition of expected utility to a Savage-Lewis-Skyrms type definition. In that case, I also suggested that they could avoid various problems arising from the concept of an intervention by construing the agent's actions as ordinary events. In conversation, Reuben Stern convinced me that things are not so easy.

Brute weak necessities

The two-dimensionalist account of a posteriori (metaphysical) necessity can be motivated by two observations.

First, all good examples of a posteriori necessities follow a priori from non-modal truths. For example, as Kripke pointed out, that his table could not have been made of ice follows a priori from the contingent, non-modal truth that the table is made of wood. Simply taking metaphysical modality as a primitive kind of modality would make a mystery of this fact.

Experts with self-locating beliefs

Imagine you and I are walking down a long path. You are ahead, but we can communicate on the phone. If you say, "there are strawberries here" and I trust you, I should not come to believe that there are strawberries where I am, but that there are strawberries wherever you are. If I also know that you are 2 km ahead, I should come to believe that there are strawberries 2 km down the path. But what's the general rule for deferring to somebody with self-locating beliefs?
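(A toy sketch of the special case just described, assuming the only uncertainty concerns positions along the path and the offset between us is known exactly; the function name is my own illustration, not a rule proposed in the post.)

```python
# Hedged sketch: adopting an expert's self-locating report when the offset
# between expert and hearer is known with certainty. "Here" in the report is
# relative to the expert, so the content must be shifted by the known offset
# before being located relative to me.

def adopt_report(expert_claim_offset_km, expert_position_ahead_km):
    """Where (relative to me) I should locate the reported strawberries."""
    return expert_claim_offset_km + expert_position_ahead_km

# Expert says "there are strawberries here" and is 2 km ahead of me:
print(adopt_report(0, 2))  # -> 2: strawberries 2 km down the path from me
```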

Mereological universalism

I used to agree with Lewis that classical mereology, including mereological universalism, is "perfectly understood, unproblematic, and certain". But then I fell into a dogmatic slumber in which it seemed to me that the debate over mereology is somehow non-substantive: that there is no fact of the matter. I was recently awakened from this slumber by a footnote in Ralf Busse's forthcoming article "The Adequacy of Resemblance Nominalism" (you should read the whole thing: it's terrific). So now I once again think that Lewis was right. Let me describe the slumber and the awakening.

Interventionist decision theory without interventions

Causal models are a useful tool for reasoning about causal relations. Meek and Glymour 1994 suggested that they also provide new resources to formulate causal decision theory. The suggestion has been endorsed by Pearl 2009, Hitchcock 2016, and others. I will discuss three problems with this proposal and suggest that fixing them leads us back to more or less the decision theory of Lewis 1981 and Skyrms 1982.
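(To illustrate the basic contrast at issue, here is a hedged sketch, with a made-up Newcomb-like model and made-up numbers, of computing expected utility by conditioning on the act versus by intervening on it; it is my own toy example, not the post's formulation.)

```python
# Hedged sketch: evidential (conditioning) vs interventionist (do(act))
# expected utility in a toy Newcomb-like causal model. Structure and numbers
# are illustrative assumptions.

# Common cause: the agent's disposition, which the predictor reads perfectly.
P_disp = {"one-boxer": 0.5, "two-boxer": 0.5}

# How likely each act is given the disposition (used only when conditioning).
P_act_given_disp = {
    ("one-box", "one-boxer"): 0.9, ("two-box", "one-boxer"): 0.1,
    ("one-box", "two-boxer"): 0.1, ("two-box", "two-boxer"): 0.9,
}

def utility(act, disp):
    """Opaque box is filled iff the agent was predicted (i.e. disposed) to one-box."""
    million = 1_000_000 if disp == "one-boxer" else 0
    thousand = 1_000 if act == "two-box" else 0
    return million + thousand

def eu_conditioning(act):
    """Jeffrey-style EU: condition the disposition on the act."""
    joint = {d: P_disp[d] * P_act_given_disp[(act, d)] for d in P_disp}
    total = sum(joint.values())
    return sum(joint[d] / total * utility(act, d) for d in P_disp)

def eu_intervention(act):
    """Interventionist EU: do(act) severs the disposition-act link,
    so the disposition keeps its prior probability."""
    return sum(P_disp[d] * utility(act, d) for d in P_disp)

for act in ("one-box", "two-box"):
    print(act, round(eu_conditioning(act)), round(eu_intervention(act)))
# Conditioning favours one-boxing; intervening favours two-boxing, in line
# with causal decision theories in the style of Lewis 1981 and Skyrms 1982.
```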

Sleeping Beauty as losing track of time

What makes the Sleeping Beauty problem non-trivial is Beauty's potential memory loss on Monday night. In my view, this means that Sleeping Beauty should be modeled as a case of potential epistemic fission: if the coin lands tails, any update Beauty makes to her beliefs in the transition from Sunday to Monday will also fix her beliefs on Tuesday, and so the Sunday state effectively has two epistemic successors, one on Monday and one on Tuesday. All accounts of epistemic fission that I'm aware of then entail halfing.
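(For concreteness, a small sketch, my own illustration, of how the two standard answers weight the centred possibilities; on the fission picture described above, the tails weight is divided between the two successors, which gives the halfer numbers.)

```python
# Hedged sketch: halfer vs thirder weightings over the centred worlds
# in Sleeping Beauty. The fission view splits the tails probability across
# its two epistemic successors, yielding the halfer credences.

centred_worlds = ["heads-Monday", "tails-Monday", "tails-Tuesday"]

# Halfer / fission weighting: each coin outcome keeps probability 1/2,
# with the tails weight divided between its two successors.
halfer = {"heads-Monday": 0.5, "tails-Monday": 0.25, "tails-Tuesday": 0.25}

# Thirder weighting: each awakening is treated as equally likely.
thirder = {w: 1 / 3 for w in centred_worlds}

def credence_heads(weights):
    return sum(p for w, p in weights.items() if w.startswith("heads"))

print(credence_heads(halfer))   # 0.5
print(credence_heads(thirder))  # 0.333...
```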

Localism in decision theory

Decision theory comes in many flavours. One of the most important but least discussed divisions concerns the individuation of outcomes. There are basically two camps. One side -- dominant in economics, psychology, and social science -- holds that in a well-defined decision problem, the outcomes are exhausted by a restricted list of features: in the most extreme version, by the amount of money the agent receives as the result of the relevant choice. In less extreme versions, we may also consider the agent's social status or her overall well-being. But we are not allowed to consider non-local features of an outcome such as the act that brought it about, the state under which it was chosen, or the alternative acts available at the time. This doctrine doesn't have a name. Let's call it localism (or utility localism).

Necessitarianism, dispositionalism, and dynamical laws

Necessitarian and dispositionalist accounts of laws of nature have a well-known problem with "global" laws like the conservation of energy, for these laws don't seem to arise from the dispositions of individual objects, nor from necessary connections between fundamental properties. It is less well-known that a similar, and arguably more serious, problem arises for dynamical laws in general, including Newton's second law, the Schrödinger equation, and any other law that allows one to predict the future from the present.

Is it ever rational to calculate expected utilities?

Decision theory says that faced with a number of options, one should choose an option that maximizes expected utility. It does not say that before making one's choice, one should calculate and compare the expected utility of each option. In fact, if calculations are costly, decision theory seems to say that one should never calculate expected utilities.

Informally, the argument goes as follows. Suppose an agent faces a choice between a number of straight options (going left, going right, taking an umbrella, etc.), as well as the option of calculating the expected utility of all straight options and then executing whichever straight option was found to have greatest expected utility. Now this option (whichever it is) could also be taken directly. And if calculations are costly, taking the option directly has greater expected utility than taking it as a result of the calculation.
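(A minimal sketch of that argument with stipulated numbers; the option labels, expected utilities, and calculation cost are my own illustrative assumptions.)

```python
# Hedged sketch: the "calculate" option executes whichever straight option
# the calculation finds best, so its value is that option's EU minus the
# cost of calculating. Numbers are made up for illustration.

straight_options = {"left": 4.0, "right": 6.0, "umbrella": 5.5}  # stipulated EUs
calculation_cost = 0.3

best_straight = max(straight_options, key=straight_options.get)
eu_calculate = straight_options[best_straight] - calculation_cost

print(best_straight, straight_options[best_straight])  # right 6.0
print(eu_calculate)                                     # 5.7 < 6.0
# Taking the best straight option directly has higher expected utility than
# calculating first, so on this argument calculating never maximizes EU.
```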
