Wolfgang Schwarz


Integrating centred information

Sensory information is centred. Right now, for example, my visual system conveys to me that there's a red wall about 1 metre ahead (among much else); it does not convey that Wolfgang Schwarz is about 1 metre away from a red wall on 22 January 2026 at 12:04 UTC.

We can quibble over what exactly is part of the sensory information. We can also quibble over what "sensory information" is even meant to be. But it should be uncontroversial that we gain information from our senses. My point is that, on any plausible way of spelling this out, the information we receive is centred: it doesn't have parameters that fix a unique location in space and time. If I were unsure about what time it is or who I am, looking at the wall in front of me wouldn't help. The underlying reason, of course, is that photoreceptors are insensitive to differences in spatiotemporal location: they don't produce different outputs depending on where or when they are activated by photons.

Lewis 1979 and others have argued that belief contents are also centred. The idea is that there is a theoretically fruitful and intuitive sense in which our representation of the world assigns a special role to ourselves and the present time, like a map with a "you are here" marker. I find this plausible.

If belief contents are centred, one can give a simple account of how centred sensory information might update an agent's beliefs: the agent can simply accept the sensory information; they might conditionalize or Jeffrey-conditionalize on it.
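The simple account is easy to make precise in a toy model. The following sketch is my own illustration (all names and the miniature doxastic space are made up): centred worlds are (world, centre) pairs, a centred proposition is the set of such pairs at which it is true, and conditionalization filters out the pairs where the evidence fails and renormalizes.

```python
# Toy model of conditionalizing on centred information.
# A "centred world" is a (world, centre) pair; a centred proposition
# is the set of centred worlds at which it is true.

def conditionalize(credence, proposition):
    """Update a credence function on a centred proposition.

    credence: dict from centred worlds to probabilities (summing to 1).
    proposition: set of centred worlds at which the evidence is true.
    """
    total = sum(p for cw, p in credence.items() if cw in proposition)
    if total == 0:
        raise ValueError("evidence has prior probability zero")
    return {cw: (p / total if cw in proposition else 0.0)
            for cw, p in credence.items()}

# Two worlds, two candidate centres each (hypothetical labels).
prior = {
    ("w1", "me@noon"): 0.25, ("w1", "me@midnight"): 0.25,
    ("w2", "me@noon"): 0.25, ("w2", "me@midnight"): 0.25,
}
# Suppose the centred evidence "red wall ahead" is true only here:
red_wall_ahead = {("w1", "me@noon"), ("w2", "me@noon")}
posterior = conditionalize(prior, red_wall_ahead)
```

After the update, all credence sits on the noon centres; the evidence has located the agent in time without distinguishing the two worlds.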

But I don't think this simple account is correct.

When I'm holding a pencil right in front of my eyes, the information coming from my visual system might be something like pencil about 5 cm ahead. If I were to conditionalize on this, I would come to believe that there's a pencil about 5 cm ahead. But what I actually come to believe is that there's a pencil about 5 cm in front of my eyes.

Imagine your eyes are removed from your head and placed somewhere else, with radio signals replacing the nerve connections. How should you update on the visual information there's a red wall ahead?

Or suppose your eyes have been put at separate locations; your left eye sends a signal of a red wall; your right eye of a snowy mountain. What do you do with that?

One can even imagine that the eyes convey information about different times. Perhaps one eye is put behind a transparent medium in which light travels very slowly, so that its photoreceptors carry information about how things were on the other side of the medium many years ago. Or perhaps the radio signals from this eye travel with a long delay.

These are far-fetched scenarios. But they dramatise a real problem encountered by our nervous system. After all, our eyes really are at different locations. And since light travels faster than sound, our auditory information is not exactly in sync with our visual information: we hear the thunder after we see the lightning.

In short, our brain needs to integrate the sensory information provided by our sense organs into a unified representation of the world, and it shouldn't do that by simply accepting and conjoining the sensory information.

An analogous problem arises for linguistic communication. If beliefs are centred, how should we understand what is communicated by ordinary assertions? Naively, one would think that in a simple case of communication, the speaker utters a sentence that expresses a "proposition" which the speaker believes and which they want the hearer to believe. But on the centred-belief account, this simple model can't be right, unless we never communicate our centred beliefs. Some have concluded that we should reject the centred account of belief, or supplement it with an uncentred account. (See, e.g., Perry 1977, Stalnaker 1981, Stalnaker 2008, Caie and Ninan 2025.) I think we should instead revise the simple model of communication, roughly along the lines suggested in Weber 2013.

In any case, the uncentred-content move looks really unappealing for the case of sensory integration. There must be a better solution.

Here is one idea. On closer inspection, our representation of the world might be multi-centred, with several "you are here" markers (compare, e.g., Spohn 1996, Torre 2010, Ninan 2013). There could be one marker for each eye, one for each ear, and so on. It's then easy to update such a representation in response to centred sensory information. If, for example, the left eye "says" red wall ahead, one can rule out all multi-centred possibilities in which there's no red wall ahead of the marker for the left eye.
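The multi-centred update rule can be stated in a few lines. Here is a sketch under invented assumptions (the worlds, locations, and organ names are all illustrative): a possibility pairs a world with a marker for each sense organ, and input from an organ eliminates every possibility in which the input's content isn't true at that possibility's marker for the organ.

```python
# Toy multi-centred update: rule out possibilities where the received
# content fails at the relevant organ's marker.

def multi_centred_update(possibilities, organ, content_true_at):
    """possibilities: list of (world, markers) pairs, where markers maps
    organ names to locations in that world.
    content_true_at: function (world, location) -> bool."""
    return [(world, markers) for world, markers in possibilities
            if content_true_at(world, markers[organ])]

# Hypothetical worlds specifying what lies ahead of each location.
worlds = {
    "w1": {"loc_a": "red wall", "loc_b": "snowy mountain"},
    "w2": {"loc_a": "snowy mountain", "loc_b": "red wall"},
}
possibilities = [
    ("w1", {"left_eye": "loc_a", "right_eye": "loc_b"}),
    ("w2", {"left_eye": "loc_a", "right_eye": "loc_b"}),
]
red_wall = lambda w, loc: worlds[w][loc] == "red wall"
remaining = multi_centred_update(possibilities, "left_eye", red_wall)
```

Only the w1 possibility survives: in w2, the left-eye marker faces a snowy mountain.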

But multi-centred contents are revisionary. They don't neatly plug into standard formulations of Bayesian epistemology and decision theory. Can we make do with a single centre?

Imagine first a creature with a single sense organ, an eye, to which it is connected by radio signals. If the eye says red wall ahead, the creature can update on red wall ahead of my eye.

More generally, we shouldn't think of the "you are here" marker as an arrow pointing at a precise spacetime point. The marker for belief contents usually singles out a temporal stage of a composite object. (In Lewisian terms: the properties that are the attitude contents are properties of stages of composite objects.) In unusual cases, the marker might point at an object that's scattered across spacetime. When sensory input arrives from the eye, all possibilities in the creature's doxastic space in which the received content isn't true at "my eye" – i.e., at the eye of the marked object – can be ruled out.

This assumes that the creature is aware that it has an eye. More generally, the update might rule out all possibilities in which the received content isn't true at some location from which "I" (i.e., the marked object) receive(s) sensory input.

Note that it doesn't matter whether the creature regards this location as part of itself: for the purposes of integrating sensory information, the belief arrow may point to an object that includes the sense organ or to one that doesn't.

Things get more complicated if there are multiple sense organs. Here, it might help to give each input channel a tag indicating from which sense organ it comes, so that one can update on red wall ahead of organ A. These tags would play a similar role to the centres in the multi-centred account.
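The tagged single-centre update can be sketched as follows (again with invented worlds, tags, and locations): a possibility pairs a world with the locations of the marked object's organs in that world, and a tagged input eliminates the possibilities where the content fails at the organ carrying that tag. Updating on several tagged inputs in sequence then integrates the channels.

```python
# Toy single-centre update with tagged input channels.

def tagged_update(possibilities, tag, content_true_at):
    """possibilities: list of (world, organs) pairs, where organs maps
    input tags to the locations of the marked object's sense organs
    in that world.
    content_true_at: function (world, location) -> bool."""
    return [(world, organs) for world, organs in possibilities
            if content_true_at(world, organs[tag])]

# Hypothetical worlds specifying what lies ahead of each location.
worlds = {
    "w1": {"loc_a": "red wall", "loc_b": "snowy mountain"},
    "w2": {"loc_a": "red wall", "loc_b": "red wall"},
}
possibilities = [(w, {"A": "loc_a", "B": "loc_b"}) for w in worlds]
sees = lambda scene: (lambda w, loc: worlds[w][loc] == scene)

# Channel A reports a red wall, channel B a snowy mountain.
step1 = tagged_update(possibilities, "A", sees("red wall"))
step2 = tagged_update(step1, "B", sees("snowy mountain"))
```

Only w1 survives both updates. Structurally, the tags do the same eliminative work as the extra markers in the multi-centred sketch; the difference is that here the possibilities carry a single marked object whose organs the tags pick out.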

To spell this out more carefully, I suspect we should be more precise about how to understand the content that is received from the senses. On the account I describe in Schwarz 2018, the "tags" could be folded into the "imaginary dimension" of the sensory content. Intuitively, each organ would get a distinctive phenomenal character by which it can be identified in the update.

Caie, Michael, and Dilip Ninan. 2025. “First-Person Propositions.” Philosophers’ Imprint 25 (0). https://doi.org/10.3998/phimp.3481.
Lewis, David. 1979. “Attitudes De Dicto and De Se.” The Philosophical Review 88 (4): 513–43. https://doi.org/10.2307/2184843.
Ninan, Dilip. 2013. “Self-Location and Other-Location.” Philosophy and Phenomenological Research 87 (2): 301–31. https://doi.org/10.1111/phpr.12051.
Perry, John. 1977. “Frege on Demonstratives.” Philosophical Review 86: 474–97.
Schwarz, Wolfgang. 2018. “Imaginary Foundations.” Ergo 29: 764–89.
Spohn, Wolfgang. 1996. “On the Objects of Belief.” In Intentional Phenomena in Context: Papers from the 14th Hamburg Colloquium on Cognitive Science, edited by C. Stein and M. Textor, 55:117–41.
Stalnaker, Robert. 1981. “Indexical Belief.” Synthese 49: 129–51.
Stalnaker, Robert. 2008. Our Knowledge of the Internal World. Oxford: Oxford University Press.
Torre, Stephan. 2010. “Centered Assertion.” Philosophical Studies 150 (1): 97–114. https://doi.org/10.1007/s11098-009-9399-1.
Weber, Clas. 2013. “Centered Communication.” Philosophical Studies 166 (S1): 205–23. https://doi.org/10.1007/s11098-012-0066-6.
