Wolfgang Schwarz

Blog

Integrating centred information

Sensory information is centred. Right now, for example, my visual system conveys to me that there's a red wall about 1 metre ahead (among much else); it does not convey that Wolfgang Schwarz is about 1 metre away from a red wall on 22 January 2026 at 12:04 UTC.

We can quibble over what exactly is part of the sensory information. We can also quibble over what "sensory information" is even meant to be. But it should be uncontroversial that we gain information from our senses. My point is that, on any plausible way of spelling this out, the information we receive is centred: it doesn't have parameters that fix a unique location in space and time. If I were unsure about what time it is or who I am, looking at the wall in front of me wouldn't help. The underlying reason, of course, is that photoreceptors are insensitive to differences in spatiotemporal location: they don't produce different outputs depending on where or when they are activated by photons.

Kripke on empty names

I (somewhat randomly) picked up Kripke 2011 the other day. This is Kripke's first engagement with the problem of empty names. What struck me is the biased selection of examples. Most of the paper is concerned with names of fictional characters like 'Sherlock Holmes', and Kripke only seems to consider simple utterances in which they figure as the subject, like (1).

The absoluteness of consistency

A somewhat appealing (albeit, to me, also somewhat obscure) view of mathematics is the pluralist doctrine that every consistent mathematical theory is true, insofar as it accurately describes some mathematical structure. I want to comment on a potential worry for this view, mentioned in (Clarke-Doane 2020): that it has implausible consequences for logic.

Gödel, Mechanism, Paradox

A famous argument, first proposed in (Lucas 1961), supposedly shows that the human mind has capabilities that go beyond those of any Turing machine. In its basic form, the argument goes like this.

Let S be the set of mathematical sentences that I accept as true. S includes the axioms of Peano Arithmetic. Let S+ be the set of sentences entailed by S. Suppose for reductio that my mind is equivalent to a Turing machine. Then S is computably enumerable, and S+ is a computably axiomatizable extension of Peano Arithmetic. So Gödel's First Incompleteness Theorem applies: there is a true sentence G that is unprovable in S+. By going through Gödel's reasoning, I can see that G is true. So G is in S and thereby in S+. Contradiction!
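Schematically, the reductio can be laid out as a chain (my paraphrase of the steps above):

\[\begin{align*} &(1)\quad S \text{ is computably enumerable} && \text{(reductio assumption)}\\ &(2)\quad S^{+} \text{ is a computably axiomatizable extension of PA} && \text{(from 1)}\\ &(3)\quad \text{some true } G \notin S^{+} && \text{(Gödel I, from 2)}\\ &(4)\quad G \in S \subseteq S^{+} && \text{(I can see that } G \text{ is true)}\\ &(5)\quad \text{contradiction} && \text{(3, 4)} \end{align*} \]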

Teaching logic: Tarski vs Mates vs "logical constants"

I'm teaching an intermediate/advanced logic course this semester. So I had to ask myself how to introduce the semantics of quantifiers, with an eye on proving soundness and completeness. The standard approach, going back to Tarski, defines a satisfaction relation between a formula, a model, and an assignment function, and then defines truth by supervaluating over all assignments. The main alternative, often found in intro logic textbooks, is Mates' approach, where ∀xA(x) is defined as true in a model M iff A(c) is true in every c-variant of M, where c is a constant not occurring in A.
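The contrast between the two clauses can be made concrete for a finite model. Here is a minimal sketch in Python (my illustration, not from the course: the formula encoding, the model representation, and all helper names are invented, and the toy language has only unary predicates, negation, conjunction, and ∀):

```python
# Formulas: ('P', term), ('not', A), ('and', A, B), ('forall', var, A).
# A model is a dict with a domain, a predicate extension, and constant denotations.

def tarski_sat(model, formula, assignment):
    """Tarski-style: satisfaction relative to a variable assignment."""
    op = formula[0]
    if op == 'P':
        term = formula[1]
        val = assignment.get(term, model['constants'].get(term))
        return val in model['P']
    if op == 'not':
        return not tarski_sat(model, formula[1], assignment)
    if op == 'and':
        return (tarski_sat(model, formula[1], assignment)
                and tarski_sat(model, formula[2], assignment))
    if op == 'forall':
        var, body = formula[1], formula[2]
        # forall-x A is satisfied iff A is satisfied on every x-variant
        # of the assignment.
        return all(tarski_sat(model, body, {**assignment, var: d})
                   for d in model['domain'])

def substitute(formula, var, const):
    """Replace free occurrences of var by const."""
    op = formula[0]
    if op == 'P':
        return ('P', const if formula[1] == var else formula[1])
    if op == 'not':
        return ('not', substitute(formula[1], var, const))
    if op == 'and':
        return ('and', substitute(formula[1], var, const),
                substitute(formula[2], var, const))
    if op == 'forall':
        if formula[1] == var:  # var is rebound; stop substituting
            return formula
        return ('forall', formula[1], substitute(formula[2], var, const))

def mates_true(model, formula):
    """Mates-style: truth via c-variants of the model, no assignments."""
    op = formula[0]
    if op == 'P':
        return model['constants'][formula[1]] in model['P']
    if op == 'not':
        return not mates_true(model, formula[1])
    if op == 'and':
        return mates_true(model, formula[1]) and mates_true(model, formula[2])
    if op == 'forall':
        var, body = formula[1], formula[2]
        c = 'c_fresh'  # assumed not to occur in the formula
        body_c = substitute(body, var, c)
        # forall-x A is true iff A(c) is true in every c-variant of the model.
        return all(mates_true(
            {**model, 'constants': {**model['constants'], c: d}}, body_c)
            for d in model['domain'])

model = {'domain': {1, 2, 3}, 'P': {1, 2, 3}, 'constants': {}}
fa = ('forall', 'x', ('P', 'x'))
print(tarski_sat(model, fa, {}), mates_true(model, fa))  # → True True
```

On sentences of this fragment the two definitions agree; the interesting differences show up in the metatheory (e.g. in how the induction for soundness and completeness has to be set up), not in the toy verdicts.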

Bacon on higher-order logic

In the dark old days of early logic, there was only syntax. People introduced formal languages and laid down axioms and inference rules, but there was nothing to justify these except a claim to "self-evidence". Of course, the languages were assumed to be meaningful, but there was no systematic theory of meaning, so the axioms and rules could not be justified by the meaning of the logical constants.

All this changed with the development of model theory. Now one could give a precise semantics for logical languages. The intuitive idea of entailment as necessary truth-preservation could be formalized. One could check that some proposed system of axioms and rules was sound, and one could confirm – what had been impossible before – that it was complete, so that any further, non-redundant axiom or rule would break the system's soundness.

Dynamic rationality

The standard dynamic norm of Bayesianism, conditionalization, is clearly inadequate if credences are defined over self-locating propositions. How should it be adjusted?

This question was popular around 2005–2015. Chris Meacham and I came up with the same answer, which we published in (Meacham 2010), (Schwarz 2012), and (Schwarz 2015). I showed that the replacement norm we proposed has all the traditional virtues of conditionalization. For example, (under the usual idealized conditions) following the norm uniquely maximizes expected accuracy, and an agent is invulnerable to diachronic Dutch books iff they follow the norm.
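For reference, the norm at issue: conditionalization says that upon learning (exactly) evidence E, one's new credences should equal one's old credences conditional on E:

\[\begin{equation*} Cr_{new}(A) = Cr_{old}(A \mid E) = \frac{Cr_{old}(A \wedge E)}{Cr_{old}(E)}. \end{equation*} \]

With self-locating propositions this can't be right as it stands: merely moving through time should rationally change one's credence in propositions like 'it is now Monday', even if no new evidence E is learned.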

The deontic logic of Desire as Belief

Assume that for any proposition A there is a proposition \(\Box A\) saying that A ought to be the case. One can imagine an agent – call him Frederic – whose only basic desire is that whatever ought to be the case is the case. As a result, he desires any proposition A in proportion to his belief that it ought to be the case:

\[\begin{equation*} (1)\qquad V(A) = Cr(\Box A). \end{equation*} \]

Let w be a maximally specific proposition. Such a "world" settles all descriptive and all normative matters. In particular, w entails either \(\Box w\) or \(\neg \Box w\). Suppose w entails \(\Box w\). Does Frederic desire to live in such a world? Yes. On the assumption that w is actual, the entire world is as it ought to be. That's what Frederic wants. So he desires w.
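Equation (1) is easy to play with numerically. Here is a toy sketch in Python (entirely my illustration: the worlds, credences, and the single proposition 'A' are invented, with each world settling both what is the case and what ought to be the case):

```python
# Toy model of Frederic's desires via equation (1): V(A) = Cr(box A).
# Credences are chosen to be exactly representable floats.

worlds = {
    'w1': {'facts': {'A'}, 'ought': {'A'}, 'cr': 0.5},   # A holds and ought to
    'w2': {'facts': {'A'}, 'ought': set(), 'cr': 0.25},  # A holds but ought not to
    'w3': {'facts': set(), 'ought': {'A'}, 'cr': 0.25},  # A ought to hold but doesn't
}

def cr_box(prop):
    """Cr(box prop): total credence in worlds where prop ought to be the case."""
    return sum(w['cr'] for w in worlds.values() if prop in w['ought'])

def V(prop):
    """Frederic's desirability for prop, per equation (1)."""
    return cr_box(prop)

print(V('A'))  # → 0.75 (= 0.5 + 0.25, from w1 and w3)
```

Note that V('A') picks up credence from w3, where A ought to be the case but isn't: Frederic's desire for A tracks only his beliefs about what ought to be, exactly as (1) requires.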

If then else

Bare indicative conditionals are bewildering, but they become surprisingly well-behaved if we add an 'else' clause.

Intuitively, 'if A then B' doesn't make an outright claim about the world. It says that B is the case if A is the case – but what if A isn't the case?

An 'else' clause resolves this question. 'If A then B else C' makes an outright claim. It says that either B or C is the case, depending on whether A is the case. That is: the world is either an A-world, in which case it is also a B-world, or it is a ¬A-world, in which case it is a C-world. For short: (A∧B)∨(¬A∧C).
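On its material reading, this equivalence can be checked by brute force over all valuations. A quick sketch (my illustration; it also confirms the equivalent conjunctive form (A→B)∧(¬A→C)):

```python
from itertools import product

def if_then_else(a, b, c):
    """'If A then B else C' as an outright truth-functional claim."""
    return b if a else c

for a, b, c in product([True, False], repeat=3):
    disjunctive = (a and b) or (not a and c)   # (A ∧ B) ∨ (¬A ∧ C)
    conjunctive = (not a or b) and (a or c)    # (A → B) ∧ (¬A → C)
    assert if_then_else(a, b, c) == disjunctive == conjunctive

print("equivalent on all 8 valuations")
```

So, truth-functionally, adding the 'else' clause turns the conditional into an ordinary disjunctive claim about the world, with no unsettled ¬A case left over.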

A new kind of Neo-Fregeanism?

Frege argued that number concepts are, in the first place, second-order predicates. When we talk about numbers as objects, we use a logical device of "nominalization" that introduces object-level representations of higher-level properties. In Grundgesetze, he assumed that every first-order predicate can be nominalized: for every first-order predicate F, there is an associated object – the "extension" of F – such that F and G are associated with the same object iff ∀x(Fx ↔ Gx). The number N is then identified with the extension of 'having an extension with N elements'. Unfortunately, the assumption that every predicate has an extension turned out to be inconsistent, so the whole approach collapsed.
