On Gómez Sánchez on naturalness and laws

Gómez Sánchez (2023) asks an important and, in my view, unsolved question: what kinds of properties may figure in the laws of "special science" (chemistry, genetics, etc.)?

For the most part, the patterns captured in special-science laws are not entailed by the fundamental laws of physics, nor by the intrinsic powers and dispositions of the relevant objects. Some kind of best-systems account looks appealing: the Weber-Fechner law, the laws of population dynamics, the laws of folk psychology, etc., are useful summaries of pervasive and robust regularities in their respective domains. They are the "best systematisation" of the relevant facts, in terms of desiderata like simplicity and strength.

If we think of laws as sentences and measure simplicity by inverse syntactic complexity, then every system of laws can be rendered maximally simple by translating it into a language with suitably contorted predicates. Without constraints on the language, the simplicity criterion becomes vacuous. An obvious response to this problem is to say that the basic predicates that figure in candidate laws must pick out relatively natural properties. (Similarly, if we think of laws in terms of models, we would require the models to involve natural properties.)
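To see how the threat arises, here is the familiar toy construction (my illustration, not Gómez Sánchez's). Let $\phi_1, \dots, \phi_n$ be the truths we want systematised, and introduce a predicate $F$ by stipulating

$$Fx \;:\leftrightarrow\; \phi_1 \wedge \dots \wedge \phi_n.$$

Assuming a non-empty domain, the one-axiom system '$\forall x\, Fx$' entails every $\phi_i$, and it is about as syntactically simple as a system could be; all of the complexity has been smuggled into the meaning of $F$. The naturalness constraint is meant to rule out predicates like $F$.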

But what does this mean? What makes a property natural, so that it may figure in a law of special science? There are, broadly speaking, three possibilities.

First, one might say that naturalness boils down to what we humans in our current epistemic situation happen to find simple and easy to work with. We find it easy to work with regularities that involve colours like red and green. Intelligent aliens or fish with different sensory capacities may regard these regularities as unnatural and gerrymandered. "Pragmatic" versions of the best-systems account along these lines have become popular in recent years (see, for example, Jaag and Loew (2020)). Eddon and Meacham (2015) goes over some options for how the relativity of naturalness could be understood.

I'd be happy to allow for some anthropo-relativity in the criteria for special-science laws. Perhaps the laws are patterns that are especially useful to creatures like us. But I'm reluctant to go all the way, to say that the patterns of special science are objectively on a par with any old gerrymandered list of miscellaneous facts.

A second option, which I think Jonathan Schaffer has defended (although I forget where, and can't find it on the spot), is to take the relevant notion of naturalness as primitive (or to analyse it in terms of grounding, which for present purposes amounts to the same thing).

Gómez Sánchez raises some good objections to this proposal. If we look at paradigm examples of natural (vs unnatural) kinds, we can typically explain what their instances have in common. Water molecules are all composed in a similar way out of similar constituents. The green things all have similar reflectance properties. The naturalness of these classes (or kinds) does not appear to be a brute and inexplicable fact.

This leaves the third option, that the relevant notion of naturalness can be somehow analysed in terms of a basic notion of fundamentality.

This seemed to be Lewis's view. Lewis never talked much about special-science laws. But he did suggest that we have use for a notion of "imperfect" naturalness. Over the years, he sketched four ways in which this notion of imperfect naturalness could be understood. None of them looks very promising or well thought-through.

Perhaps the best of Lewis's ideas, proposed in Lewis (1983, 13f.), is to analyse imperfect naturalness in terms of something like Armstrong's account of similarity between structural universals. If this could be made to work, it would correctly predict that water is natural, and that red is more natural than red-or-blue.

Gómez Sánchez only mentions what is arguably Lewis's least promising idea: to measure relative naturalness by (inverse) length of definition in terms of fundamental properties. By this criterion, red comes out much less natural than the property of having negative unit charge or a mass of 1 kg unless there is something with positive unit charge less than 10 meters away. It's a terrible idea.

Gómez Sánchez has a new idea. It goes like this.

We first define the special-science laws without presupposing a particular concept of naturalness. Afterwards, we define the (imperfectly) natural properties as the properties that figure in a special-science law.

The definition of special-science laws is iterative. At each stage, the laws are defined as the axioms of the best system, but the criteria by which systems are ranked change slightly from stage to stage.

At each stage, there is no restriction on the predicates that may figure in a system. However, if a system contains very unnatural predicates, then the system is unlikely to come out best, because the competition involves a further criterion besides the usual criteria of syntactic simplicity and strength: semantic simplicity. This is the criterion whose precise content changes from stage to stage.

At stage 1, semantic simplicity measures inverse length of definition in terms of predicates for fundamental properties. The stage-1 laws are the axioms of the system that does best in terms of syntactic simplicity, strength, and inverse length of definition (of basic predicates) in terms of fundamental properties.

At stage 2, semantic simplicity measures inverse length of definition in terms of predicates that figure in stage-1 laws. The stage-2 laws are the axioms of the system that does best in terms of syntactic simplicity, strength, and inverse length of definition (of basic predicates) in terms of stage-1 predicates.

And so on, across infinitely many stages. Something is a special-science law if it occurs anywhere in this hierarchy. A property is (imperfectly) natural if a predicate for it occurs anywhere in the hierarchy.
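For concreteness, here is a schematic sketch of the staged competition in toy Python. This is my own gloss, not anything from the paper: the representation of systems (dicts with "axioms" and "predicates") and the scoring functions are placeholders, and a finite cutoff stands in for the infinitely many stages.

```python
# Schematic sketch of the staged best-system competition (my own gloss, not
# Gómez Sánchez's formulation). A candidate system is a dict with a list of
# axiom strings and a set of predicate names; the scoring functions are
# placeholders that a serious implementation would have to spell out.

def syntactic_simplicity(system):
    # Placeholder: shorter axioms count as simpler. Real measures would be subtler.
    return -sum(len(axiom) for axiom in system["axioms"])

def strength(system):
    # Placeholder for how informative the axioms are.
    return 0.0

def semantic_simplicity(system, base_predicates):
    # Placeholder for the stage-relative criterion: inverse length of the
    # definitions of the system's predicates in terms of base_predicates.
    return 0.0

def best_system(candidates, base_predicates):
    """The stage winner: the candidate that best balances the three criteria,
    with semantic simplicity measured relative to base_predicates."""
    return max(candidates,
               key=lambda s: syntactic_simplicity(s) + strength(s)
                             + semantic_simplicity(s, base_predicates))

def hierarchy(candidates, fundamental_predicates, stages=10):
    """Iterate the competition: at stage n+1, semantic simplicity is measured
    in terms of the predicates that figure in the stage-n winner. The real
    construction runs through infinitely many stages; 'stages' is a cutoff."""
    base, winners = set(fundamental_predicates), []
    for _ in range(stages):
        winner = best_system(candidates, base)
        winners.append(winner)
        base = set(winner["predicates"])  # the next stage defines from these
    return winners  # a law is special-scientific iff it is an axiom of some winner
```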

The more we move up the hierarchy, the more we tolerate predicates whose definition in terms of fundamental properties is long and complicated. High up in the hierarchy, the definitions can be very long and complicated indeed. Crucially, however, a predicate can only make it into a system high up in the hierarchy if it is definable in a simple way from other predicates that made it into the earlier stages of the hierarchy. Trivialising predicates that allow compressing all truths into 'everything is F' probably can't be defined in this way.

It's a nice idea. But I don't think it works.

To begin, suppose the winner at stage 1 is a system whose predicates all pick out fundamental properties. It follows that the criteria at stage 2 are exactly the same as the criteria at stage 1. At both stages, semantic simplicity measures inverse length of definition in terms of predicates for fundamental properties. (In fact, the stage-2 criteria will not allow predicates for fundamental properties that don't figure in the stage-1 laws, even though such predicates were allowed at stage 1; but since they went unused at stage 1, this doesn't affect my point.) If the criteria of the competition are the same, then the winner will be the same. The stage-2 laws will equal the stage-1 laws. And then the criteria at stage 3 will once again be the same as at stage 2 and stage 1. And so on. We won't get any special-science laws, and we won't get any natural properties that aren't fundamental!
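Put schematically: let $C_n$ be the criteria in force at stage $n$ and $S_n$ the system that wins by them. $C_{n+1}$ differs from $C_n$ only in that semantic simplicity is now measured against the predicates of $S_n$. So if the predicates of $S_1$ are all fundamental, then (setting aside the wrinkle in the parenthesis above) $C_2 = C_1$, hence $S_2 = S_1$, and by induction $S_n = S_1$ for every $n$.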

To get the hierarchy off the ground, the winner at stage 1 must use predicates for non-fundamental properties. I see no strong reason to think that it does. It is at the very least an open empirical conjecture that one can find a highly simple and powerful system whose predicates all pick out fundamental properties. But it is not an open empirical conjecture that there are special-science laws. Something has gone wrong.

Also, suppose at some stage n the best system includes the laws 'all ravens are black' and 'all cherries are red'. Let's define a chaven to be either a cherry or a raven, and let's define reck to apply to things that are either red cherries or black ravens. These are fairly simple definitions in terms of stage-n predicates. At stage n+1, a system might therefore contain 'all chavens are reck', at no great cost in semantic simplicity. Since 'all chavens are reck' entails that all ravens are black and that all cherries are red, the system has gained in syntactic simplicity, in comparison to a system that contains the two original laws. But chaven and reck are unnatural and gerrymandered properties that we don't want in our laws!
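Spelled out, with the definitions made explicit (my rendering of the example):

$$\begin{aligned}
\text{chaven}(x) &:\leftrightarrow \text{raven}(x) \vee \text{cherry}(x),\\
\text{reck}(x) &:\leftrightarrow (\text{raven}(x) \wedge \text{black}(x)) \vee (\text{cherry}(x) \wedge \text{red}(x)).
\end{aligned}$$

Given the background fact that nothing is both a raven and a cherry, $\forall x\,(\text{chaven}(x) \to \text{reck}(x))$ entails both $\forall x\,(\text{raven}(x) \to \text{black}(x))$ and $\forall x\,(\text{cherry}(x) \to \text{red}(x))$, so the single gerrymandered law does the work of the two original ones.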

Again, one might hope that the problem doesn't arise. Perhaps the standards for semantic and syntactic simplicity can be fine-tuned so that the cost in semantic simplicity incurred by the chavens/reck system outweighs the benefit in syntactic simplicity. But how can we be sure that this will always work, for all cases of this type? I am rather sure that it won't always work. The problems are too easy to create. For example, it's easy to come up with toy models in which three continuous fundamental magnitudes are related by certain equations that can be simplified by merging two of the magnitudes into a single variable, defined as (say) the product of the original magnitudes. It's easy to come up with toy models in which statistical regularities become much easier to state if certain outliers are reclassified. And so on and on.
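Here is one toy model of the first kind, just to illustrate the shape of the problem (the equations are made up, not drawn from any real theory). Suppose the fundamental magnitudes $x$, $y$, $z$ obey

$$\dot z = \alpha\, x y, \qquad \dot x\, y + x\, \dot y = \beta z.$$

Define $w := x y$. Since $\dot w = \dot x\, y + x\, \dot y$, the two laws collapse into the tidier pair

$$\dot z = \alpha w, \qquad \dot w = \beta z,$$

and $w$ is only one short definition away from the stage-$n$ predicates. But there is no guarantee that $w$ picks out anything we would want to call a natural magnitude.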

In general, it is, to me, a completely open question what the Gómez Sánchez hierarchy will look like, and whether it will contain the kinds of systems we would recognise as special-science laws. And so the hierarchy is ill-suited to define the special-science laws.

A large part of the problem, I think, is the length-of-definition idea. Gómez Sánchez isn't using Lewis's original idea of measuring relative naturalness by (inverse) length of definition in terms of fundamental properties. That's good, because the idea is terrible. But the terrible idea still plays a key role in her construction, deciding which predicates may get in at successive stages in her hierarchy.

(I have focussed on how to characterise the laws of the special sciences. Gómez Sánchez suggests that the imperfectly natural predicates that figure in the special sciences are also needed to play important roles in metasemantics and in the theory of induction. I'm not convinced. Perhaps we can get by with fundamentality in these areas.)

Eddon, M., and Christopher J. G. Meacham. 2015. “No Work For a Theory of Universals.” In A Companion to David Lewis, edited by Jonathan Schaffer and Barry Loewer, 116–37. Wiley-Blackwell.
Gómez Sánchez, Verónica. 2023. “Naturalness by Law.” Noûs. https://doi.org/10.1111/nous.12433.
Jaag, Siegfried, and Christian Loew. 2020. “Making Best Systems Best for Us.” Synthese 197 (6): 2525–50. https://doi.org/10.1007/s11229-018-1829-1.
Lewis, David. 1983. “New Work for a Theory of Universals.” Australasian Journal of Philosophy 61: 343–77.

Comments

# on 17 October 2022, 06:18

I mentioned Solomonoff induction last time; there are two approximations to that uncomputable ideal: the minimum message length principle and the minimum description length principle. Both of these have corresponding multivariate statistical methods that seem related to other, more traditional statistical approaches to generating a mathematical model that simplifies a messy world. A sketch of this stuff is in doi:10.3390/e13061076. I think the algorithmic measures of complexity penalize "unnatural predicates".

More generally, I think this stuff will slot into the "information-theoretic foundations of thermodynamics and statistical mechanics...in arbitrary physical theories" that there is a literature on, going back to Maxwell's Demon. The most natural (lawful) description is the one requiring the least amount of work to be done to predict or control the phenomena of interest, where work is actual physical work performed in requisite computations. (So this includes the work setting up one's unnatural predicates/domain-specific language). Mario Bunge said he turned to philosophy once he considered the equations predicting the trajectory of a fly or a soccer ball.

# on 17 October 2022, 08:00

Thanks David. I agree that these ideas might be useful here. The MDL approach in particular is obviously a close cousin of the best-systems approach. I wish it was better known (or known at all!) among philosophers. But I'm not sure how exactly we are meant to get to thermodynamics, let alone to genetics or systems biology, unless we already know the language in which the "phenomena of interest" should be expressed. If it's simply whatever happens to be our language, we get the first option I mentioned in the blog post, which looks unattractive to me. I wish we had more than a sketch of how all this is meant to work!

(I suspect the main observations from Michael Strevens's "Bigger than Chaos" might help to fill the gap. The fact that sensible people can entertain David Albert's hypothesis that all special sciences are a branch of statistical mechanics seems to me to point in the same direction.)

# on 18 October 2022, 04:28

"let alone to genetics or systems biology" - You have probably read some of Karl Friston's papers, such as doi:0.1016/j.plrev.2017.09.001 where the abstract says (not at all ambitiously):

"We describe a meta-theoretical ontology of life based on the free energy principle. We translate our ontology into a systematic research heuristic for life sciences...us[able] to develop mathematically tractable, substantive *explanations* of living organisms...We apply this meta-theoretical ontology and research heuristic to Homo sapiens."

Another recent example might be 10.1073/pnas.211388311

I think these are interesting, first, as a putative scaffold for natural laws, but also as a description of the sciences as a natural process trying to minimize surprisal.

# on 18 October 2022, 10:46

Haven't read these papers, no. My head always hurts when I read Friston, mainly from banging it against my desk out of frustration, so I try to avoid reading him.
