Expressivism about chance

I'll begin with a strange consequence of the best system account. Imagine that the basic laws of quantum physics are stochastic: for each state of the universe, the laws assign probabilities to possible future states. What do these probability statements mean?

The best system account identifies chance with the probability function that figures in whatever fundamental physical theory best combines the virtues of simplicity, strength and fit, where fit is a matter of assigning high probability to actual events. So when we say that the chance of some radium atom decaying within the next 1600 years is 1/2, what we claim is true iff whatever fundamental theory best combines the virtues of simplicity, strength and fit assigns probability 1/2 to the mentioned outcome. As a piece of ordinary language philosophy, this is not very plausible. For one thing, people speak of chances even when it is assumed that the fundamental dynamics is deterministic. Moreover, by ordinary usage, chances are logically independent of actual frequencies, which is incompatible with the best system account. Nevertheless, the account may be plausible as a somewhat revisionary explication of one strand in the mess that is our ordinary conception of chance.
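To make "fit" a little more concrete, here is a toy sketch of my own (not part of the best system literature): fit can be measured as the log-probability a candidate theory assigns to the actual outcomes, so that the theory assigning higher probability to what in fact happened fits better. The two candidate "theories" and the observation record are invented for illustration.

```python
import math

# Two toy candidate theories, each a probability function over a binary
# outcome: whether a given radium atom decays within 1600 years.
# The numbers are illustrative, not physical predictions.
theories = {"T1": 0.5, "T2": 0.7}

# Invented observation record: True = the atom decayed within 1600 years.
observed = [True, False, True, True, False, False, True, False]

def fit(p, outcomes):
    """Log-probability the theory assigns to the actual outcomes.
    Higher (i.e. closer to 0) means better fit."""
    return sum(math.log(p if o else 1 - p) for o in outcomes)

for name, p in theories.items():
    print(name, round(fit(p, observed), 3))
```

With four decays in eight trials, the theory assigning probability 1/2 fits better than the one assigning 0.7; of course, on the best system account, fit is only one virtue to be traded off against simplicity and strength.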

But the question I asked is not what it means when we say that the chance of some outcome is so-and-so, but what it means when a physical theory makes such a claim. When a law of quantum physics (as we imagine) says that the probability of a radium atom decaying within 1600 years is 1/2, does it thereby say that whatever fundamental theory best combines the virtues of simplicity, strength and fit assigns probability 1/2 to that outcome? It is hard to believe that this is the content of the law. The basic laws of physics do not quantify over physical theories, nor do they talk about methodological standards such as strength and simplicity.

As Al Hajek pointed out to me, a similar problem arises for all theoretical terms. Suppose we analyse (or explicate) 'inertial mass' as whatever feature is responsible for the fact that some physical bodies are harder to accelerate and slow down than others -- more specifically, suppose we analyse 'inertial mass' as whatever fills the position of the question mark in 'F = ? a'. Then it looks like Newton's second law, F = ma, merely says that whatever plays the mass role plays the mass role.

But in this case, it is not so obvious that the conclusion is absurd: some philosophers have endorsed it, concluding that the laws of nature are necessary. Another, better, response is to argue that the analysis only "fixes the reference": it tells us which physical magnitude 'inertial mass' picks out, without revealing its essence. When Newton's law says that F = ma, the content of the law directly concerns this magnitude. The law is true at a world w iff whatever magnitude actually plays the mass role also plays the mass role at w.

In the case of chance, this second response looks much less plausible. On the best system account, chance is not a basic physical magnitude akin to mass or charge. It is not a building block of reality. Chance is only a probability function, one among many. What makes it special is that it is the function that figures in whatever theory best combines the virtues of simplicity, strength and fit. Of course we could rigidify the analysis and say that at every world, 'chance' picks out whatever probability function figures in the best system of the actual world, so that the probabilistic laws of quantum mechanics could be taken to say something directly about this function. But this is still implausible. Why should the fundamental laws of physics care about that particular probability function? Worse, the laws would all come out as necessarily true, since a probability function qua probability function is presumably individuated by the values it assigns to its arguments.

So this is the strange consequence of the best system account: while it may offer a plausible analysis of our use of 'chance', it does not offer any plausible analysis of probability statements as they occur (we imagine) in quantum mechanics or other parts of science. Personally, I think that is a very serious problem. I would not mind much about 'chance' itself, since I am not convinced that it is a useful concept in philosophical theorising. But science is up to its neck in probability statements, so we should really have some account of what these statements mean.

Here is an answer I find attractive: they don't mean anything. That is, when quantum mechanics says that outcome O in experiment E has probability x, this is not an assertion about the world. Rather, it is a hedged version of the statement that experiment E has outcome O. Think of theories as agents, and of theoretical probabilities as degrees of belief. When a theory makes a probability claim, it is expressing its degrees of belief, rather than making any outright statement about the world.

Setting aside these metaphors, the basic idea is that probabilistic scientific theories are not evaluable for truth or falsity (perhaps their non-probabilistic parts still are, but the probabilistic parts aren't). They are evaluable for other virtues such as simplicity, strength and fit, so we can still make sense of science trying to find the best theories of various domains.

This account has some nice features. For example, it can to some extent vindicate the intuition that a fundamental law according to which radium atoms have a half-life of 1600 years is logically compatible with all radium atoms decaying within a minute. That comes out true, because the law doesn't say anything outright about decay frequencies. Moreover, the interpretation is clearly compatible with Humeanism: it is not assumed that the law makes a claim about a primitive unHumean whatnot.
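The arithmetic behind that intuition can be made explicit. On the standard exponential-decay model (taking the half-life T = 1600 years from the example; the numbers are only illustrative), each atom gets a tiny but nonzero chance of decaying within a minute, and hence the chance of every atom doing so is positive:

```latex
p = 1 - 2^{-t/T} \approx \ln 2 \cdot \frac{t}{T} \approx 8 \times 10^{-10}
\quad (t = 1\ \text{min},\ T = 1600\ \text{yr}),
\qquad
P(\text{all } N \text{ atoms decay within a minute}) = p^{N} > 0.
```

So even on an orthodox chance reading the law is logically compatible with any decay frequency; the present interpretation adds that the law makes no outright frequency claim at all.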

An obvious problem with this proposal is that we have to adjust the standard conception of confirmation. If a theory does not make claims about the world, how is it confirmed or disconfirmed by empirical evidence? On this view, I suppose, probabilistic theories really can't be confirmed or disconfirmed in the traditional sense. They also can't be believed. But consider why it is valuable to learn that a given theory has high fit, especially if the theory is also simple and strong: this reveals a lot about general patterns in the history of the world. More specifically, if you learn that a theory T fares well in terms of simplicity, strength and fit, then rationality requires you to align your credence with the theory's probabilities (as I've argued here). None of that requires that the theory is true, or even evaluable for truth and falsehood. So instead of talking about belief (or degrees of belief) in scientific theories, we can speak of two other things which conveniently go together: (i) believing that the relevant theory fares well in terms of simplicity, strength and fit, and (ii) endorsing the theory, in the sense of making its probabilities one's own degrees of belief. Accordingly, when we test a probabilistic theory, we test (i) to what extent the theory fares well in terms of simplicity, strength and fit, and thereby (ii) to what extent it should guide our credence.
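The two attitudes (i) and (ii) can be sketched in code. In this toy model (the theories, the observation record, and the crude scoring rule are all my own invention, not part of the proposal itself), each theory is scored by its fit minus a simplicity penalty, and the winner is endorsed by adopting its probability as one's credence in the outcome.

```python
import math

# Toy theories: number of free parameters (a stand-in for complexity)
# and the probability each assigns to outcome O. Invented for illustration.
theories = {
    "T_simple":  {"params": 1, "p_O": 0.5},
    "T_complex": {"params": 5, "p_O": 0.55},
}

# Invented observation record: did outcome O occur on each trial?
observed = [True, False, True, True, False, True, False, True]

def fit(p, outcomes):
    """Log-probability the theory assigns to the actual outcomes."""
    return sum(math.log(p if o else 1 - p) for o in outcomes)

def score(theory, outcomes, penalty=1.0):
    """Fit minus a crude simplicity penalty (an AIC-style stand-in)."""
    return fit(theory["p_O"], outcomes) - penalty * theory["params"]

# (i) believe that this theory fares best by the score ...
best = max(theories, key=lambda name: score(theories[name], observed))
# (ii) ... and endorse it: adopt its probability as one's credence in O.
credence_in_O = theories[best]["p_O"]
print(best, credence_in_O)
```

Here the simpler theory wins despite a slightly worse raw fit, and endorsing it just means setting one's credence in O to 0.5; at no point is either theory assigned a truth value.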

Here, as well as in some other places, my proposal resembles epistemic interpretations of chance on which probabilistic scientific theories specify norms for degrees of belief. If a theory says that one ought to assign credence x to outcome O, then obviously testing the theory means testing whether one ought to assign credence x to outcome O. This is just what I said we do when we test the theory, but I didn't say that the content of the theory is about norms of belief. I find it incredible that the laws of quantum mechanics or statistical mechanics should be concerned with normative, psychological matters.

So that's my proposal for how to interpret probabilistic statements in scientific laws. As I said, I care much less about the current use of 'chance' in philosophy. I suspect it doesn't correspond to anything real. (All right, best system probabilities are real, but they aren't a good fit, and why do we need them? Not to make sense of science -- we're already done with that task.)

I do believe in another kind of objective probability: objective evidential probability, or degrees of confirmation. Roughly speaking, the evidential probability of a hypothesis H given evidence E is the degree of belief a rational agent ought to assign to H if all their (relevant) evidence is E. Often it is useful to consider the evidential probability of a hypothesis conditional on all the information that has been collected in a community, or even on all the information that could easily be collected. I think this is usually what we mean when we speak of objective probabilities, in weather forecasts or at the casino. On the present proposal, this kind of probability is not identical to some probability function studied in science, but the two are related. Often the information that could easily be collected would make it rational to endorse a particular probabilistic theory, in which case the evidential probability conditional on that information is (ceteris paribus) equal to the probabilities specified in that theory.
