Bacon on higher-order logic

In the dark old days of early logic, there was only syntax. People introduced formal languages and laid down axioms and inference rules, but there was nothing to justify these except a claim to "self-evidence". Of course, the languages were assumed to be meaningful, but there was no systematic theory of meaning, so the axioms and rules could not be justified by the meaning of the logical constants.

All this changed with the development of model theory. Now one could give a precise semantics for logical languages. The intuitive idea of entailment as necessary truth-preservation could be formalized. One could check that some proposed system of axioms and rules was sound, and one could confirm – what had been impossible before – that it was complete, so that any further, non-redundant axiom or rule would break the system's soundness.

That's how things played out in first-order logic. Systems of second-order logic turned out to be incomplete, and we found that there is no way to make them complete: no sound, effectively axiomatized proof system captures all second-order validities (on the standard semantics). Full second-order logic, it seems, can only be grasped semantically.

Or so I used to think. Andrew Bacon clearly disagrees. In his Philosophical Introduction to Higher-order Logics (Bacon 2023), he wants to take us back to the dark old days.

In Bacon's conception of logic, there is only syntax. The language of higher-order logic is assumed to be interpreted, but there is no systematic account of what an interpretation might look like; the standard set-theoretic interpretation is rejected. Axioms and inference rules can therefore only be justified by self-evidence. (There are a few chapters on model theory, starting on page 287, but Bacon makes clear that he doesn't think of model theory as elucidating or formalizing the meaning of higher-order sentences. Model theory is introduced as a tool to derive proof-theoretic results.)

What about the unformalizability ("incompleteness") of second-order logic? This, too, is dismissed as a mirage, as it is based on the supposedly faulty assumption that second-order entailment is defined in the standard model-theoretic fashion.

I can understand why a friend of higher-order logic might be reluctant to endorse the standard set-theoretic semantics, which effectively construes second-order quantifiers as first-order quantifiers over sets. Perhaps a truly faithful semantics for second-order logic should invoke second-order quantifiers in the meta-language. This is especially pressing when we think about second-order set theory: the standard semantics for second-order ZFC yields an interpretation that is surely unintended. But this much is true even for first-order ZFC: '\(\in\)' is not supposed to denote a set of ordered pairs.

In any case, I think it is important to think of models as models. In propositional logic, a model is a truth-value assignment to the sentence letters. Nobody seriously thinks sentence meanings can be identified with truth-values. The real meaning of a sentence should at least specify its truth-conditions. We could then determine whether one sentence entails another by checking if there's any conceivable scenario in which the first is true and the second false. To determine whether one sentence logically entails another, we would accordingly have to check if there is a conceivable scenario in which the first is true and the second false, on any assignment of truth-conditions to the sentence letters. Equivalently, we would have to check if every pair of a scenario and an assignment of truth-conditions to the sentence letters that makes the first sentence true also makes the second sentence true. If you think about this for a moment, you realize that it can be simplified: any pair of a scenario and an assignment of truth-conditions to the sentence letters determines an assignment of truth-values to the sentence letters. Conversely, any such truth-value assignment is determined by some pair of a scenario and an assignment of truth-conditions. This compressed information about a pair of a scenario and an interpretation is all we need, in propositional logic, to determine whether an arbitrary sentence is true in the scenario, under the interpretation. That's why we can model a pair of a scenario and an interpretation by an assignment of truth-values to the sentence letters. A model isn't the real thing. It simplifies the real thing down to whatever we need to determine whether any given sentence is true or false.
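To see how little information a model of this kind needs, here is a minimal sketch (my own illustration, in Python, with a made-up tuple encoding of formulas) of how the truth value of an arbitrary sentence is computed from an assignment of truth values to the sentence letters and nothing else:

```python
from itertools import product

# A sentence is a nested tuple: a sentence letter like "p", or
# ("not", A), ("and", A, B), ("or", A, B), ("->", A, B).
def truth_value(sentence, assignment):
    """Truth value of a sentence, computed from nothing more than an
    assignment of truth values to the sentence letters."""
    if isinstance(sentence, str):
        return assignment[sentence]
    op, *args = sentence
    if op == "not":
        return not truth_value(args[0], assignment)
    if op == "and":
        return truth_value(args[0], assignment) and truth_value(args[1], assignment)
    if op == "or":
        return truth_value(args[0], assignment) or truth_value(args[1], assignment)
    if op == "->":
        return (not truth_value(args[0], assignment)) or truth_value(args[1], assignment)
    raise ValueError(f"unknown operator: {op}")

# 'p -> (q or p)' is true under every assignment, i.e. valid:
letters = ["p", "q"]
sentence = ("->", "p", ("or", "q", "p"))
print(all(truth_value(sentence, dict(zip(letters, values)))
          for values in product([True, False], repeat=len(letters))))  # True
```

Scenarios and truth-conditions appear nowhere in the computation; the truth-value assignment is all the information that gets used.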

Return to second-order logic. Suppose the intended domain of discourse forms a set, as is usually the case. Whatever the true meaning of a predicate is, it is tempting to assume that for any subset of the domain, a predicate could apply to just the elements of that subset. If so, we can model predicate meanings by such subsets, and we can model the meaning of quantifiers by assuming that they range over all the subsets.
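On a finite domain this modelling choice can be made completely concrete. Here is a toy sketch (mine, not the book's) in which a monadic predicate variable ranges over every subset of the domain:

```python
from itertools import chain, combinations

def subsets(domain):
    """All subsets of the domain -- the assumed range of a monadic
    second-order quantifier on this picture."""
    elements = list(domain)
    return [frozenset(combo) for combo in chain.from_iterable(
        combinations(elements, size) for size in range(len(elements) + 1))]

domain = {0, 1, 2}

# 'there is an F that applies to some but not all objects':
print(any(any(x in F for x in domain) and any(x not in F for x in domain)
          for F in subsets(domain)))  # True: the domain has more than one element

# 'co-extensive predicate meanings are identical' -- extensionality is
# built into the decision to model predicate meanings by subsets:
print(all(F == G
          for F in subsets(domain) for G in subsets(domain)
          if all((x in F) == (x in G) for x in domain)))  # True
```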

This leads to deep and fascinating questions. Once the domain is infinite, the notion of "all subsets of the domain" is far from innocent. If the domain consists of the real numbers, for example, it depends on the ambient set theory (the set theory in which the model theory is couched) whether there are any subsets of the domain whose cardinality is smaller than the cardinality of the domain, but larger than the cardinality of the naturals. Once again, this issue becomes more pressing if we think about set theory. Second-order ZFC is quasi-categorical: any two of its full models are isomorphic or one is isomorphic to an initial segment of the other, so they all agree on statements like the Continuum Hypothesis. But whether they validate the Continuum Hypothesis or its negation depends on the ambient set theory!

I don't see how these deep and fascinating questions can be avoided by refusing to engage with semantics. If the higher-order quantifiers are plenitudinous, ranging over "properties" whose extension may be any subset of the domain, we face the question of exactly how plenitudinous they are. As Bacon points out in passing, there are pure second-order statements that decide the Continuum Hypothesis, one way or another. Is there a right way? Bacon seems to think so. But how could we tell which way is right? Surely neither statement is self-evident. And anyway, the determinacy assumption seems to go against a growing consensus in set theory according to which the Continuum Hypothesis resembles the Parallel Postulate in geometry: we can study structures in which it is true and structures in which it is false; neither is objectively more real than the other.

Bacon says very little about any of this. Most of the book is devoted to an entirely different issue.

That issue comes to light if we assume that our formal language doesn't just have second-order quantifiers, which are by their nature extensional, but also some further, non-extensional third-order predicates.

Bacon's running example is higher-order identity. He assumes that we have a predicate '=' that can be put between predicates and between sentences (which are zero-ary predicates). For Bacon, the central question of higher-order logic is how this predicate should behave – syntactically, of course: what are its introduction and elimination rules?

In the standard semantics of second-order logic, the denotation of a predicate is its extension. If we interpret '=' as identity of denotation, we get the result that 'F=G' is true whenever F and G are co-extensive. Things are even worse for zero-ary predicates: 'p=q' comes out true whenever p and q have the same truth-value.
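Spelled out (writing \([\![A]\!]\) for the denotation of \(A\) in a model, which is not the book's notation): if the denotation of a zero-ary predicate is just a truth value and '=' expresses identity of denotation, then

\[
[\![\,p = q\,]\!] = \text{True} \quad\text{iff}\quad [\![p]\!] = [\![q]\!] \quad\text{iff}\quad p \text{ and } q \text{ have the same truth value},
\]

so, as far as identity can discern, there are at most two propositions: the true one and the false one.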

This consequence is Bacon's main reason for rejecting the standard semantics. It would, he says (p. 329), imply "that there are only two propositions", a view that is "widely rejected in contemporary metaphysics".

Much of Bacon's book explores candidate rules for higher-order identity. Chapters 4-8, after some introductory chapters on lambdas, present a "classical" system in which an identity 'A = B' is provable iff A and B are provably equivalent. Chapters 9-13 discuss alternative systems that allow identity to fail even for provably equivalent expressions. According to Bacon, the choice tracks a deep question about the granularity of reality. This seems to be a common assumption in higher-order metaphysics. I don't get it.

Let's briefly take a different example first. Suppose we add to the standard language of propositional logic a box operator, whose intended interpretation is, say, 'it is known that' or 'it is provable in Peano Arithmetic that'. (As a sentence operator, the box is a higher-order predicate.) We clearly have to change our model theory. Since the box isn't truth-functional, our models must contain more information about the interpretation of sentence letters than their truth-values. That's why standard models in epistemic logic and provability logic have a lot more structure.
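A minimal sketch of that extra structure (a toy Kripke model of my own devising, not any particular system from the book): a sentence letter is now interpreted by a set of worlds, and the box quantifies over the accessible worlds.

```python
# A toy Kripke model: worlds, an accessibility relation, and a valuation
# that maps each sentence letter to the set of worlds at which it is
# true -- more structure than a bare truth value.
worlds = {"w1", "w2", "w3"}
access = {"w1": {"w2", "w3"}, "w2": {"w3"}, "w3": set()}
valuation = {"p": {"w2", "w3"}, "q": {"w3"}}

def true_at(sentence, world):
    if isinstance(sentence, str):
        return world in valuation[sentence]
    op = sentence[0]
    if op == "not":
        return not true_at(sentence[1], world)
    if op == "and":
        return true_at(sentence[1], world) and true_at(sentence[2], world)
    if op == "box":
        # 'box A' is true at w iff A is true at every world accessible from w.
        return all(true_at(sentence[1], v) for v in access[world])
    raise ValueError(f"unknown operator: {op}")

print(true_at(("box", "p"), "w1"))           # True: p holds at w2 and at w3
print(true_at(("box", "q"), "w1"))           # False: q fails at w2
print(true_at(("box", ("box", "q")), "w1"))  # True: q holds at w3, and w3 sees nothing
```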

How fine-grained should the sentence meanings in the new models be? It depends. In my view, there are attractive intensional concepts that can claim to regiment the informal concept of knowledge. But there are also hyperintensional regimentations. For provability in Peano Arithmetic, we obviously need a very fine-grained, hyperintensional semantics.

Higher-order identity is special because it seems to rule out not only models that are too coarse-grained (as the box does), but also models that are too fine-grained. If we have some pre-theoretic ideas about which sentences have the same meaning and which don't, the meanings in our models must match those ideas in granularity.

But it would be entirely misguided, I think, to believe that sentences in a formal language have a fixed meaning, independent of any stipulations we might have made, so that there is an objective fact of the matter about whether or not two of them have the same meaning.

In provability logic, for example, the intended interpretation reads the sentence letters as standing for sentences of Peano Arithmetic (or their Gödel numbers); \(\neg\neg p\) denotes a different sentence from \(p\). If we add a higher-order identity predicate to provability logic, we should adopt a "non-classical" system in which \(\neg\neg p = p\) isn't provable. In standard epistemic logic, by contrast, the sentence letters express possible ways the world might be. Here, \(\neg\neg p = p\) should be provable.
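To make the provability-logic half of this vivid, here is a tiny illustration (mine): if a sentence letter stands for a sentence of Peano Arithmetic and negation is syntactic, double negation yields a different sentence.

```python
# Provability-logic reading: a sentence letter stands for a sentence of
# Peano Arithmetic (or its Gödel number), and negation is syntactic.
p = "0 = 0"

def neg(sentence):
    return "~(" + sentence + ")"  # crude syntactic negation, for illustration

print(neg(neg(p)))        # ~(~(0 = 0))
print(neg(neg(p)) == p)   # False: a different sentence, so '¬¬p = p' should not be provable
```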

Note that I'm not reasoning from the granularity of the models. (This would be question-begging.) I'm not saying that the sentence letters in epistemic logic stand for subsets of some domain \(W\). We always start with an informal conception of meaning.

But informal or not, meanings don't fall from the sky. A language doesn't interpret itself. I certainly don't see why we would incur any controversial metaphysical commitments by adopting a version of provability logic in which \(\neg\neg p = p\) isn't provable.

Let's keep logic and metaphysics apart.

Bacon, Andrew. 2023. A Philosophical Introduction to Higher-Order Logics. Routledge.
