Mechanistic evidence for probabilistic models

You observe a process that generates two kinds of outcomes, 'heads' and 'tails'. The outcomes appear in seemingly random order, with roughly the same number of heads as tails. These observations support a probabilistic model of the process, according to which the probability of heads and of tails on each trial is 1/2, independently of the other outcomes.

How observations about frequencies confirm or disconfirm probabilistic models is well understood in Bayesian epistemology. The central assumption that does most of the work is the Principal Principle, which states that if a model assigns (objective) probability x to some outcomes, then conditional on the model, the outcomes have (subjective) probability x. It follows that models that assign higher probability to the observed outcomes receive a greater boost of subjective probability than models that assign lower probability to the outcomes.
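
In symbols (writing Cr for subjective credence and ch_M(E) for the probability a model M assigns to outcome E; this is one standard formulation, setting aside complications about admissibility):

```latex
% Principal Principle: conditional on model M, the credence in an
% outcome E equals the probability M assigns to E.
Cr(E \mid M) = \mathrm{ch}_M(E)

% By Bayes' theorem, observing E then favours models that assign E
% a higher probability:
\frac{Cr(M_1 \mid E)}{Cr(M_2 \mid E)}
  = \frac{\mathrm{ch}_{M_1}(E)}{\mathrm{ch}_{M_2}(E)}
  \cdot \frac{Cr(M_1)}{Cr(M_2)}
```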

But evidence for probabilistic models does not only come from observed frequencies. In the sciences, arguably the most important evidence for probabilistic models consists of facts about the mechanisms that generate the relevant outcomes. And here it is much less clear how the confirmation works. (There must be literature on this. Any pointers would be welcome.)

Suppose I explain to you that the outcomes you've observed are generated by flipping a coin. I show you the coin, explain to you how it is flipped, etc. This should strongly increase your credence in the assumption that heads and tails have probability 1/2. It should do so even if you hadn't observed any outcomes at all. Intuitively, that's because you may realize that (a) the outcome is very sensitive to the initial conditions of the flip, (b) the dynamics of the process does not favour one outcome over the other, and (c) the initial conditions are unlikely to favour a particular outcome.
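
To see why you might realize (a) and (b), it helps to have a toy model on the table (a simplified ballistic model of the kind made famous by Keller's analysis of coin flips; the idealisations here are mine): the coin leaves the hand heads-up with vertical velocity v and angular velocity ω, is caught at the height from which it was launched, and doesn't bounce.

```latex
% Flight time under gravitational acceleration g:
t = \frac{2v}{g}

% Total rotation during the flight:
\theta = \omega t = \frac{2 \omega v}{g}

% The coin shows heads iff it ends up closer to heads-up than
% to tails-up:
\text{heads} \iff \cos\theta > 0
```

For realistic values of v and ω, the heads and tails regions form thin alternating bands in the (v, ω) plane: tiny changes in the initial conditions flip the outcome, which is (a), and the heads bands and the tails bands take up roughly equal room, which is (b).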

But how do these facts support the hypothesis (call it 'H') that the coin lands heads with probability 1/2?

If a hypothesis is confirmed by evidence, then the hypothesis has to raise the probability of the evidence (perhaps together with background assumptions). In the easiest cases, the hypothesis simply entails the evidence. So our question becomes: how does H increase the probability of (a) and (b) and (c)?
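
In the standard relevance sense of confirmation, these two directions are equivalent (assuming the prior probabilities are positive):

```latex
% Confirmation is symmetric probabilistic relevance
% (given Cr(H) > 0 and Cr(E) > 0):
Cr(H \mid E) > Cr(H) \iff Cr(E \mid H) > Cr(E)

% Limiting case: H (together with background K) entails E:
Cr(E \mid H \wedge K) = 1
```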

Arguably not by the Principal Principle. The Principal Principle links the objective probability a model assigns to outcomes with the subjective probability of the outcomes given the model. But (a) and (b) and (c) are not propositions about outcomes. They are not the kinds of things to which H assigns a probability.

One might argue that there's an indirect route from (a) and (b) and (c) to H, via frequencies: (a) and (b) and (c) raise the subjective probability of getting a sequence of outcomes in which heads has a relative frequency of roughly 1/2, and that is something to which H assigns a probability.
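
And H does assign such frequency propositions a definite probability. A quick check (a minimal sketch in Python; the 1000 trials and the 0.05 margin are arbitrary illustrative choices):

```python
from math import comb

def prob_freq_near_half(n: int, margin: float) -> float:
    """Probability, according to H (independent fair flips), that the
    relative frequency of heads in n flips lies within `margin` of 1/2."""
    lo = int((0.5 - margin) * n)
    hi = int((0.5 + margin) * n)
    return sum(comb(n, k) for k in range(lo, hi + 1)) / 2**n

# H makes a near-1/2 frequency over many trials very likely:
print(prob_freq_near_half(1000, 0.05))  # roughly 0.999
```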

I have two worries about this argument. The first is that it just doesn't seem to capture the way in which (a) and (b) and (c) support H. When I tell you how coin flips work, you wouldn't reason that the relative frequency of heads on many trials is likely to be around 1/2, and from that infer that the probability of heads is 1/2.

Moreover (second), what if I assure you that the coin is tossed only once? It would still be reasonable to believe that the probability of heads is 1/2, but this time it is certain that the relative frequency of heads won't be 1/2: it will be either 0 or 1.

Another possible explanation of how (a) and (b) and (c) confirm H goes as follows. You may think that when the coin is flipped, the exact initial conditions -- in particular, the coin's vertical and rotational velocity -- are to some extent a matter of chance. That is, you may assume that there's an objective probability distribution over initial conditions. If you also assume that this distribution is roughly bell-shaped and not concentrated on a very narrow range of initial conditions (in accordance with (c)), then it follows from (a) and (b) that the probability of each outcome is about 1/2.
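
This can be checked numerically. Here is a Monte Carlo sketch using the toy model from above (the particular means and spreads are arbitrary; only their rough order of magnitude matters):

```python
import math
import random

G = 9.81  # gravitational acceleration, m/s^2

def lands_heads(v: float, omega: float) -> bool:
    """Toy model: launched heads-up with vertical velocity v (m/s) and
    angular velocity omega (rad/s); flies for t = 2v/g, rotates by
    theta = omega * t, shows heads iff cos(theta) > 0."""
    theta = omega * 2 * v / G
    return math.cos(theta) > 0

def induced_chance_of_heads(mean_v: float, mean_omega: float,
                            rel_spread: float, n: int = 100_000) -> float:
    """Heads probability induced by a bell-shaped (clipped normal)
    distribution over the initial conditions."""
    heads = 0
    for _ in range(n):
        v = max(random.gauss(mean_v, rel_spread * mean_v), 0.01)
        omega = max(random.gauss(mean_omega, rel_spread * mean_omega), 0.01)
        heads += lands_heads(v, omega)
    return heads / n

# Different bell-shaped input distributions, same output: about 1/2.
for mean_v, mean_omega in [(2.5, 120.0), (3.0, 150.0), (2.0, 200.0)]:
    print(mean_v, mean_omega, induced_chance_of_heads(mean_v, mean_omega, 0.1))
```

The output hovers around 0.5 for any input distribution that is spread across many of the thin heads/tails bands, and it is insensitive to the details of that distribution, which is just what the argument requires.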

On this account, the probability measure specified by H is effectively identified with the probability measure over initial conditions. Even if you know little about the latter, the dynamics of the flipping process guarantees that it must determine a probability of roughly 1/2 for heads. Thus, (a) and (b) and (c) confirm H because they entail H.

I'm not convinced by this explanation either. For one thing, it violates the autonomy of higher-level objective probabilities. It seems highly implausible to me that the objective probabilities in population models or genetics are identical to the objective probabilities of statistical mechanics, and it seems even more implausible that the statistical mechanics probabilities are the probabilities of quantum mechanics. In fact, the mechanistic evidence for statistical mechanics, as it is usually presented, assumes a deterministic microphysics. So I don't think the probabilities in models of coin tosses are identical to lower-level probabilities over exact initial conditions.

Moreover, it seems to me that (a) and (b) and (c) would support H even on a purely subjective reading of (c), on which it says that you give approximately equal credence to initial conditions that differ very slightly from one another. In that case, your knowledge of (a) and (b) entails that you should assign credence 1/2 to heads. By the Principal Principle, it then follows that you can't believe that the objective probability of heads is anything other than 1/2. But that can hardly be the full story of how (a) and (b) and (c) support H.
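
The step from credence 1/2 in heads to the constraint on chance hypotheses can be spelled out (assuming, for simplicity, finitely many candidate chance values and again setting aside admissibility):

```latex
% By the Principal Principle, credence in heads equals expected chance:
Cr(\text{heads}) = \sum_x x \cdot Cr\big(\mathrm{ch}(\text{heads}) = x\big)

% So if (a), (b), (c) fix Cr(heads) = 1/2, then full belief that
% ch(heads) = x is coherent only when x = 1/2:
Cr\big(\mathrm{ch}(\text{heads}) = x\big) = 1
  \implies Cr(\text{heads}) = x = \tfrac{1}{2}
```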

(Compare: if you knew the exact initial conditions, or if God informed you of the outcome, you could be certain how the coin will land on my next flip; but we can't infer from the Principal Principle that the objective probability of heads is not 1/2.)

Notice that if (c) is true on the subjective reading, and you have the mechanistic information (a) and (b), then not only will your credence in heads be 1/2; your credence will also be highly resilient, in the sense of Skyrms 1980. Resilience is invariance under conditionalisation. Given (a) and (b) and (c), your credence in heads will not be swayed by further information, say, about the weather, about the time at which the next flip occurs, or about whether microphysics is deterministic.
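
Roughly, in my paraphrase of Skyrms's definition (the choice of the set Γ of test propositions, each consistent with A and with its negation, does real work and is left schematic here):

```latex
% Resilience of the credence Cr(A) = x over a set \Gamma of
% propositions:
\mathrm{Res}_\Gamma(A) = 1 - \max_{E \in \Gamma} \big|\, x - Cr(A \mid E) \,\big|
```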

Skyrms suggests that a probabilistic model is well confirmed for an agent to the extent that the agent's corresponding degrees of belief are resilient. It's certainly true that accepting a probabilistic model goes along not only with aligning one's credences with the model's probabilities, but also with making those credences highly resilient. Arguably that is why probabilistic models are almost never well confirmed by frequency data alone: mere frequency information would not make our credences resilient.
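
The contrast is easy to exhibit numerically (a minimal sketch; the grid of candidate biases and the flip counts are arbitrary):

```python
# Two agents, both with credence 1/2 in heads on the next flip.
# Agent A reached 1/2 by frequency data over an open mind about the
# coin's bias; Agent B simply accepts the fair-coin model.

biases = [i / 20 for i in range(21)]            # candidate biases 0, 0.05, ..., 1
uniform = {p: 1 / len(biases) for p in biases}  # Agent A's prior

def update(cr, heads, tails):
    """Conditionalise a credence distribution over biases on observed
    flips (binomial coefficients cancel in the normalisation)."""
    post = {p: w * p**heads * (1 - p)**tails for p, w in cr.items()}
    z = sum(post.values())
    return {p: w / z for p, w in post.items()}

def cr_next_heads(cr):
    """Credence in heads on the next flip: expected bias."""
    return sum(p * w for p, w in cr.items())

a = update(uniform, 50, 50)             # Agent A sees 50 heads in 100 flips
b = {0.5: 1.0}                          # Agent B: point credence on bias 1/2

print(cr_next_heads(a))                 # 0.5
print(cr_next_heads(b))                 # 0.5

# Resilience test: condition each on ten further heads in a row.
print(cr_next_heads(update(a, 10, 0)))  # ~0.54, and climbing with more data
print(cr_next_heads(update(b, 10, 0)))  # exactly 0.5, come what may
```

Both agents start at 1/2, but only Agent B's credence is resilient; Agent A's credence tracks whatever frequencies come in next.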

The problem is how to make sense of this effect within Bayesian confirmation theory. Skyrms rejects the whole idea of objective (physical) probability. On his view, it is wrong to speak of our credence in probabilistic models, or of how that credence changes in response to evidence. For Skyrms, to say that H is well confirmed is really to say that one's credence of 1/2 in heads is resilient.

Skyrms's solution is too radical for my taste. I'd like to think that probabilistic hypotheses are genuine hypotheses that can be tested and believed. But this makes it mysterious why confirmation of such hypotheses would go along with resilience of the corresponding degrees of belief: why is your credence in H proportional to the resilience of your credence in heads?
