<?xml version="1.0" encoding="iso-8859-1"?>

<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom">

<channel>
  <atom:link href="http://www.umsu.de/blog/rss.xml?comments=on" rel="self" type="application/rss+xml" />
  <title>wo's weblog</title>
  <link>https://www.umsu.de/blog/</link>
  <description>Musings in Analytic Philosophy</description>

  <item>
    <title>Comment by David Duffy on 'The tyranny of the objective'</title>
    <link>https://www.umsu.de/blog/2026/828#c2464</link>
    <guid>https://www.umsu.de/blog/2026/828#c2464</guid>
    <pubDate>Mon, 23 Feb 2026 02:35:59 +0000</pubDate>
    <description><![CDATA[As to Isaacs et al., if one looks at this as a statistical problem, it is that of estimating the binomial parameter n from a single observation (e.g., Jeffreys&#039;s tramcar). If p is known, then the generalized Bayes estimator of n with improper prior pi(n) ~ 1/n is 1/p. The likelihood ratio comparing two hypothetical n&#039;s should be (1-p)^(n1-n2) * n1/n2, so if p is small and n1=1, then it increases by 1 for each increment of n2.]]></description>
  </item>
    <item>
    <title>Comment by David Duffy on 'The tyranny of the objective'</title>
    <link>https://www.umsu.de/blog/2026/828#c2463</link>
    <guid>https://www.umsu.de/blog/2026/828#c2463</guid>
    <pubDate>Sat, 21 Feb 2026 06:31:41 +0000</pubDate>
    <description><![CDATA[&quot;...the hypothesis that there is exactly one inhabited universe&quot; is different from &quot;I inhabit a universe&quot;.<br />
<br />
I&#039;m feeling a bit dull, but doesn&#039;t the former imply non-independence of the p&#039;s, i.e., that the existence of life in U1 extinguishes the probability of occurrence in the other universes?]]></description>
  </item>
    <item>
    <title>The tyranny of the objective</title>
    <link>https://www.umsu.de/blog/2026/828</link>
    <guid>https://www.umsu.de/blog/2026/828</guid>
    <pubDate>Fri, 20 Feb 2026 16:39:33 +0000</pubDate>
    <description><![CDATA[<p>A widely held view in philosophy is that ordinary information and
ordinary belief are concerned with "objective" propositions whose
truth-value doesn't vary between perspectives or locations within a
world.</p>
<p>Some hold that all genuine content is objective, and that the
appearance of counterexamples is an illusion that can somehow be
explained away. (See, e.g., "<a$m[1]href=\"" . relative2absolute($m[2]) . "\"" class="citation"
data-cites="stalnaker81indexical" title="Stalnaker, Robert. 1981. “Indexical Belief.”
Synthese 49: 129–51.
">Stalnaker 1981</a>, "<a$m[1]href=\"" . relative2absolute($m[2]) . "\""
class="citation" data-cites="magidor15myth" title="Magidor, Ofra. 2015. “The Myth of the de Se.”
Philosophical Perspectives 29: 259–83.
">Magidor 2015</a>, or
"<a$m[1]href=\"" . relative2absolute($m[2]) . "\"" class="citation" data-cites="cappelen13inessential" title="Cappelen, Herman, and Josh Dever. 2013. The Inessential
Indexical. Oxford: Oxford University Press.
">Cappelen and
Dever 2013</a>.) Even those who accept that there is genuinely
perspectival or self-locating information tend to treat it as a special
case that requires special rules for integration with ordinary,
non-perspectival information. (See, e.g., "<a$m[1]href=\"" . relative2absolute($m[2]) . "\"" class="citation"
data-cites="bostrom2002anthropic" title="Bostrom, Nick. 2002. Anthropic Bias: Observation
Selection Effects in Science and Philosophy. New York: Routledge.
">Bostrom 2002</a>, "<a$m[1]href=\"" . relative2absolute($m[2]) . "\""
class="citation" data-cites="meacham2008sleeping" title="Meacham, Christopher. 2008. “Sleeping Beauty and the
Dynamics of de Se Beliefs.”
Philosophical Studies, 245–69. https://www.jstor.org/stable/40208872.
">Meacham 2008</a>,
"<a$m[1]href=\"" . relative2absolute($m[2]) . "\"" class="citation" data-cites="moss2012updating" title="Moss, Sarah. 2012. “Updating as Communication.”
Philosophy and Phenomenological Research 85 (2): 225–48.
">Moss 2012</a>,
"<a$m[1]href=\"" . relative2absolute($m[2]) . "\"" class="citation" data-cites="titelbaum2013quitting" title="Titelbaum, Michael G. 2013. Quitting Certainties: A
Bayesian Framework Modeling Degrees of Belief. Oxford:
Oxford University Press.
">Titelbaum
2013</a>, "<a$m[1]href=\"" . relative2absolute($m[2]) . "\"" class="citation"
data-cites="builes2020timeslice" title="Builes, David. 2020. “Time-Slice Rationality and
Self-Locating Belief.” Philosophical
Studies 177 (10): 3033–49. https://doi.org/10.1007/s11098-019-01358-1.
">Builes 2020</a>, or "<a$m[1]href=\"" . relative2absolute($m[2]) . "\""
class="citation" data-cites="isaacs2022multiple" title="Isaacs, Yoaav, John Hawthorne, and Jeffrey Sanford Russell. 2022.
“Multiple Universes and Self-Locating
Evidence.” The Philosophical Review 131 (3):
241–94. https://doi.org/10.1215/00318108-9743809.
">Isaacs, Hawthorne, and
Russell 2022</a>).</p>
<p>I think of this as <em>the tyranny of the objective</em>. (Compare
"<a$m[1]href=\"" . relative2absolute($m[2]) . "\"" class="citation" data-cites="chalmers1998tyranny" title="Chalmers, David J. 1998. “The Tyranny of the
Subjunctive.”
">Chalmers
1998</a>.)</p>
<p>In my view, all ordinary belief, and all ordinary information, is
perspectival. Our senses tell us how things are <em>here</em> and
<em>now</em>, <em>around us</em>. By scientific experiments and
observations, we can find out more about <em>our solar system</em> or
about the <em>biology of organisms on our planet</em>. When we learn
such facts, we may also learn objective facts: by coming to know that
<em>it is raining</em>, I also come to know that <em>it is raining
somewhere in the history of the world</em>. But this objective belief is
unusual and derivative. Ordinary confirmation is always confirmation of
perspectival hypotheses by perspectival evidence.</p>
<p>"<a$m[1]href=\"" . relative2absolute($m[2]) . "\"" class="citation" data-cites="lewis1979attitudes" title="Lewis, David. 1979. “Attitudes De Dicto and
De Se.” The Philosophical Review
88 (4): 513–43. https://doi.org/10.2307/2184843.
">Lewis
1979</a> explained how this can be modelled formally. We simply need
to replace the uncentred worlds of traditional confirmation theory with
centred worlds.</p>
<p>The clearest sign that something is amiss with the objectivist
mainstream is that it can't account for elementary facts about reasoning
with perspectival information. As "<a$m[1]href=\"" . relative2absolute($m[2]) . "\"" class="citation"
data-cites="isaacs2022multiple" title="Isaacs, Yoaav, John Hawthorne, and Jeffrey Sanford Russell. 2022.
“Multiple Universes and Self-Locating
Evidence.” The Philosophical Review 131 (3):
241–94. https://doi.org/10.1215/00318108-9743809.
">Isaacs, Hawthorne, and Russell 2022,
252</a> put it: "All the precise theories we know of face very
serious objections." I agree.</p>
<p>Of course, dropping the objectivist starting point doesn't
automatically solve the difficult puzzles discussed in "<a$m[1]href=\"" . relative2absolute($m[2]) . "\""
class="citation" data-cites="bostrom2002anthropic" title="Bostrom, Nick. 2002. Anthropic Bias: Observation
Selection Effects in Science and Philosophy. New York: Routledge.
">Bostrom 2002</a>
or "<a$m[1]href=\"" . relative2absolute($m[2]) . "\"" class="citation" data-cites="isaacs2022multiple" title="Isaacs, Yoaav, John Hawthorne, and Jeffrey Sanford Russell. 2022.
“Multiple Universes and Self-Locating
Evidence.” The Philosophical Review 131 (3):
241–94. https://doi.org/10.1215/00318108-9743809.
">Isaacs,
Hawthorne, and Russell 2022</a>. But I think it sets the ground for
a solution, and explains where certain arguments go wrong.</p>
<p>To see what I mean, let's have a closer look at "<a$m[1]href=\"" . relative2absolute($m[2]) . "\""
class="citation" data-cites="isaacs2022multiple" title="Isaacs, Yoaav, John Hawthorne, and Jeffrey Sanford Russell. 2022.
“Multiple Universes and Self-Locating
Evidence.” The Philosophical Review 131 (3):
241–94. https://doi.org/10.1215/00318108-9743809.
">Isaacs, Hawthorne, and
Russell 2022</a>.</p>
<p>The paper explores whether our evidence supports the hypothesis that
there are many universes. In the main part of the paper, the authors
(henceforth, IHR) assume that centred credences are derived from
uncentred priors by special rules. In Appendix B, IHR consider the
possibility of starting with centred priors, but they argue that this
doesn't affect the conclusion that our evidence supports the multiverse
hypothesis.</p>
<p>Concretely, IHR prove two theorems. The theorems are complicated, but
a simple example illustrates the key moves.</p>
<p>Let H1 be the hypothesis that there is exactly one universe, and H2
the hypothesis that there are two universes. Assume that each universe
has a fixed chance p of being inhabited. Assume that p &lt; 0.5. For
simplicity, let's assume that an inhabited universe contains exactly one
centre from which it is observed. The evidence that is received at such
a centre is "local" insofar as it doesn't reveal anything about what
might be the case in other universes. But it reveals (among other
things) that <em>this</em> universe is inhabited.</p>
<p>Does such evidence support H2 over H1?</p>
<p>To answer this question, we need some assumptions about the (centred)
priors.</p>
<p>Let Pr be a rational prior credence function. Let I=1 be the
hypothesis that there is exactly one inhabited universe. Since the
chance of any universe being inhabited is p, we might expect that</p>
<div class="example"><span class="exlabel">(1)</span><span class="extext">Pr(I=1 | H1) = p and</span></div>
<div class="example"><span class="exlabel">(2)</span><span class="extext">Pr(I=1 | H2) = 2p(1-p).</span></div>
<p>For (1), the idea is that if there's just one universe, and any
universe has a fixed chance p of being inhabited, then the probability
of this one universe being inhabited is p.</p>
<p>For (2), we assume that there are two universes, U1 and U2. Each has
an independent chance p of being inhabited. There are two ways for there
to be exactly one inhabited universe: U1 is inhabited and U2 isn't, or
U2 is inhabited and U1 isn't. Each scenario has probability p(1-p). So
the total probability of I=1 is 2p(1-p).</p>
<p>Now let E be our evidence. Plausibly,</p>
<div class="example"><span class="exlabel">(3)</span><span class="extext">Pr(E | H1 ∧ I=1) = Pr(E | H2 ∧ I=1).</span></div>
<p>The idea here is that our evidence E is not made any more or less
probable by the presence of a second, uninhabited universe.</p>
<p>Since p &lt; 0.5, it follows (by a little maths) that Pr(E | H2) &gt;
Pr(E | H1). And so E supports H2 over H1.</p>
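<p>(Spelling out the little maths, on the natural assumption that E
entails that at least one universe is inhabited: Pr(E | H1) = Pr(E | H1
∧ I=1) · Pr(I=1 | H1) = Pr(E | H1 ∧ I=1) · p, by (1). And Pr(E | H2) ≥
Pr(E | H2 ∧ I=1) · Pr(I=1 | H2) = Pr(E | H1 ∧ I=1) · 2p(1-p), by (2)
and (3). Since 2p(1-p) &gt; p whenever p &lt; 0.5, it follows that
Pr(E | H2) &gt; Pr(E | H1).)</p>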
<p>IRH's "Theorem 3" generalizes this result.</p>
<p>The point I want to make is that this line of reasoning is highly
dubious if we take centred priors seriously.</p>
<p>Let's return to the priors. We have to make a choice: Should the
prior Pr assign positive probability only to inhabited points, or can it
also assign positive probability to uninhabited points?</p>
<p>This is a somewhat arcane theoretical question, since any evidence
will immediately rule out uninhabited points anyway.</p>
<p>Suppose we decide that Pr assigns positive probability only to
inhabited points. Then (1) is false. Given that there is exactly one
universe, the prior probability that this universe is inhabited must be
1, not p. (2) is also false. In general, on this approach we can't
assume that the prior probabilities align with the chances.</p>
<p>We can hold on to (1) and (2) only if we allow uninhabited points to
have positive prior probability. But if we do that, we should give up
(3).</p>
<p>To see why, let &lt;E,-&gt; be a two-universe world in which E is
true at the first universe and the second universe is uninhabited. Let
&lt;E&gt; be a one-universe world in which E is true. If Pr assigns
positive probability to both locations in &lt;E,-&gt; then Pr(E |
&lt;E,-&gt;) is less than 1, while Pr(E | &lt;E&gt;) is 1. So Pr(E | H2
∧ I=1) &lt; Pr(E | H1 ∧ I=1).</p>
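<p>(A toy illustration: if the two locations in &lt;E,-&gt; get equal
prior weight conditional on that world, then Pr(E | &lt;E,-&gt;) = 0.5,
since E is false at the location in the uninhabited universe, while
Pr(E | &lt;E&gt;) = 1.)</p>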
<p>There is no plausible view on which (1)-(3) are all true. So we don't
get "Theorem 3". Nor do we get the stronger "Theorem 4".</p>
<p>IHR assume that the prior Pr assigns positive probability only to
inhabited points, <em>except</em> in worlds that are entirely
uninhabited: here, the prior assigns positive probability to a "dummy"
centre. This is formally consistent and makes it possible to accept
(1)-(3), but it is an entirely implausible account of rational
priors.</p>
<div class="references"
data-entry-spacing="0" role="list">
<div id="ref-bostrom2002anthropic" class="csl-entry" role="listitem">
Bostrom, Nick. 2002. <em>Anthropic Bias: <span>Observation</span>
Selection Effects in Science and Philosophy</em>. New York: Routledge.
</div>
<div id="ref-builes2020timeslice" class="csl-entry" role="listitem">
Builes, David. 2020. <span>“Time-<span>Slice Rationality</span> and
<span>Self-Locating Belief</span>.”</span> <em>Philosophical
Studies</em> 177 (10): 3033–49. "<a$m[1]href=\"" . relative2absolute($m[2]) . "\"">https://doi.org/10.1007/s11098-019-01358-1</a>.
</div>
<div id="ref-cappelen13inessential" class="csl-entry" role="listitem">
Cappelen, Herman, and Josh Dever. 2013. <em>The Inessential
Indexical</em>. Oxford: Oxford University Press.
</div>
<div id="ref-chalmers1998tyranny" class="csl-entry" role="listitem">
Chalmers, David J. 1998. <span>“The <span>Tyranny</span> of the
<span>Subjunctive</span>.”</span>
</div>
<div id="ref-isaacs2022multiple" class="csl-entry" role="listitem">
Isaacs, Yoaav, John Hawthorne, and Jeffrey Sanford Russell. 2022.
<span>“Multiple <span>Universes</span> and <span>Self-Locating
Evidence</span>.”</span> <em>The Philosophical Review</em> 131 (3):
241–94. "<a$m[1]href=\"" . relative2absolute($m[2]) . "\"">https://doi.org/10.1215/00318108-9743809</a>.
</div>
<div id="ref-lewis1979attitudes" class="csl-entry" role="listitem">
Lewis, David. 1979. <span>“Attitudes <span><em>De Dicto</em></span> and
<span><em>De Se</em></span>.”</span> <em>The Philosophical Review</em>
88 (4): 513–43. "<a$m[1]href=\"" . relative2absolute($m[2]) . "\"">https://doi.org/10.2307/2184843</a>.
</div>
<div id="ref-magidor15myth" class="csl-entry" role="listitem">
Magidor, Ofra. 2015. <span>“The Myth of the de Se.”</span>
<em>Philosophical Perspectives</em> 29: 259–83.
</div>
<div id="ref-meacham2008sleeping" class="csl-entry" role="listitem">
Meacham, Christopher. 2008. <span>“Sleeping <span>Beauty</span> and the
<span>Dynamics</span> of de Se <span>Beliefs</span>.”</span>
<em>Philosophical Studies</em>, 245–69. "<a$m[1]href=\"" . relative2absolute($m[2]) . "\"">https://www.jstor.org/stable/40208872</a>.
</div>
<div id="ref-moss2012updating" class="csl-entry" role="listitem">
Moss, Sarah. 2012. <span>“Updating as Communication.”</span>
<em>Philosophy and Phenomenological Research</em> 85 (2): 225–48.
</div>
<div id="ref-stalnaker81indexical" class="csl-entry" role="listitem">
Stalnaker, Robert. 1981. <span>“Indexical Belief.”</span>
<em>Synthese</em> 49: 129–51.
</div>
<div id="ref-titelbaum2013quitting" class="csl-entry" role="listitem">
Titelbaum, Michael G. 2013. <em>Quitting Certainties: A
<span>Bayesian</span> Framework Modeling Degrees of Belief</em>. Oxford:
Oxford University Press.
</div>
</div>
]]></description>
  </item>
    <item>
    <title>Comment by Jonathan Mai on 'Teaching logic: Tarski vs Mates vs "logical constants"'</title>
    <link>https://www.umsu.de/blog/2025/822#c2462</link>
    <guid>https://www.umsu.de/blog/2025/822#c2462</guid>
    <pubDate>Mon, 16 Feb 2026 21:01:22 +0000</pubDate>
    <description><![CDATA[Yes, you&#039;re right about the associated proof systems using free variables. I forgot about that since I approach logic from a model theoretic standpoint.<br />
<br />
As an aside, using complete expansions of models, that is, structures containing a constant for every member of the model being expanded, is a standard device in elementary model theory (Robinson diagrams etc.). So in this respect the expansion-based semantics for the quantifiers matches model-theoretic perspectives rather well.]]></description>
  </item>
    <item>
    <title>Comment by wo on 'Teaching logic: Tarski vs Mates vs "logical constants"'</title>
    <link>https://www.umsu.de/blog/2025/822#c2461</link>
    <guid>https://www.umsu.de/blog/2025/822#c2461</guid>
    <pubDate>Mon, 16 Feb 2026 16:12:31 +0000</pubDate>
    <description><![CDATA[Thanks Jonathan. Right, I&#039;ve seen versions of this as well. I guess it&#039;s usually combined with proof systems that use free variables, so it&#039;s not quite what I had in mind. I&#039;m also worried that it reinforces use/mention mistakes: especially in a maths context, many students need to be constantly reminded of the distinction between objects in the domain and our names for these objects.<br />
<br />
On the flip side, in proof theory/tableau texts, one sometimes finds &quot;eigenvariables&quot; or &quot;parameters&quot; that are used specifically to instantiate universal quantifiers. But these usually don&#039;t figure in the semantics, as far as I can tell.]]></description>
  </item>
    <item>
    <title>Comment by Jonathan Mai on 'Teaching logic: Tarski vs Mates vs "logical constants"'</title>
    <link>https://www.umsu.de/blog/2025/822#c2460</link>
    <guid>https://www.umsu.de/blog/2025/822#c2460</guid>
    <pubDate>Sat, 14 Feb 2026 22:47:34 +0000</pubDate>
    <description><![CDATA[Your approach resembles the alternative to a satisfaction based semantics for quantification you find in several introductory texts on mathematical logic (van Dalen and Hedman, for instance): What we define is a truth relation between models and sentences. The truth definition for universally quantified sentences involves the truth of all of its instances in an expanded model which contains a fresh constant for every member of the model in question.<br />
<br />
]]></description>
  </item>
    <item>
    <title>Comment by David Duffy on 'Are we living in a computer simulation?'</title>
    <link>https://www.umsu.de/blog/2026/827#c2458</link>
    <guid>https://www.umsu.de/blog/2026/827#c2458</guid>
    <pubDate>Wed, 11 Feb 2026 06:20:59 +0000</pubDate>
    <description><![CDATA[I see the simulation hypothesis as per Tipler 1994 (appealing to Dyson&#039;s &quot;Time without end: Physics and biology in an open universe&quot;):<br />
<br />
1) In the current cosmology &quot;observed&quot; within our simulation, a high computing future is quite possible.<br />
2) There will be an interest by such future entities in running simulations of universes.<br />
3) The physics within the simulations performed will be based on the actual physics underlying those computers - anthropic arguments might mean that these are the only ones that run nicely.<br />
<br />
3 is where the infinite skeptical possibilities are constrained.<br />
<br />
As to Boltzmann brains, you only need 1 good one.<br />
Iammarino, Darren. &quot;God is a Boltzmann Brane: Arriving at God via Physicalism.&quot; Theology and Science 23.1 (2025): 167-182<br />
]]></description>
  </item>
    <item>
    <title>Are we living in a computer simulation?</title>
    <link>https://www.umsu.de/blog/2026/827</link>
    <guid>https://www.umsu.de/blog/2026/827</guid>
    <pubDate>Mon, 09 Feb 2026 16:10:17 +0000</pubDate>
    <description><![CDATA[<p>I'm moderately confident that I don't live in a computer simulation.
My reasoning goes like this.</p>
<ol>
<li><p>A priori, simulation scenarios are less probable than
non-simulation scenarios.</p></li>
<li><p>My evidence is more likely in non-simulation scenarios than in
simulation scenarios.</p></li>
<li><p>So: It is highly improbable, given my evidence, that I'm in a
simulation scenario.</p></li>
</ol>
<p>By a "simulation scenario", I mean a scenario in which a subject's
experiences of themselves and their environment are generated by a
computer program that simulates an ordinary (non-simulated) subject and
their environment.</p>
<p>I assume that it is a priori possible for a computer program to
generate experiences (and a "subject") by simulating an ordinary subject
with experiences. I'm not 100% sure this is true. (If not, premise 1 can
be strengthened: simulation scenarios have probability 0.) But it seems
plausible, especially if we're liberal about what qualifies as a
computer program and as a simulation.</p>
<p>(Strictly speaking, a simulation scenario isn't just a scenario in
which <em>somebody</em> lives in a computer simulation. It's a scenario
in which <em>I</em> do.)</p>
<p>Now, why do I think that simulation scenarios are a priori less
probable than non-simulation scenarios (assuming they are possible)? In
short, because they are skeptical scenarios, and skeptical scenarios
have low a priori probability.</p>
<p>Consider first a more standard type of skeptical scenario.</p>
<p>There are worlds that match our world in all the ways we have
observed, but whose unobserved parts are utterly different. For example,
there are worlds that are just like this world until today, but in which
everything turns into plum jam tomorrow. On any way of counting
possibilities, there are at least as many such "counterinductive" worlds
as "inductive" worlds. But we shouldn't take them seriously. The plum
jam extinction scenario deserves negligible credence, even though it is
compatible with all our evidence. In a Bayesian framework, this implies
that counterinductive scenarios must have much lower a priori
probability than inductive scenarios.</p>
<p>Another type of skeptical scenario involves hallucinations and
illusions. There are scenarios in which it visually seems to me as if I
am looking at a dagger even though there's no real dagger there. For a
more extreme version, there's a scenario in which I have all my actual
experiences while I am living the external life of Napoleon Bonaparte:
it seems to me as if I am quietly typing into my laptop at a cafe, but
in fact I am standing on a battlefield in 1800s France, directing my
troops. Such scenarios, too, deserve negligible credence, even though
they are compatible with my evidence.</p>
<p>Back to simulation scenarios.</p>
<p>Many simulation scenarios are skeptical scenarios of the second type.
My current experiences suggest to me that I have a physical body, that I
move around in a physical space with tables and trees etc. If I'm in a
simulation scenario, arguably none of that is true: my sensory
experiences are radically deceptive. But rationality requires giving
negligible credence to scenarios in which my experiences are deceptive.
Hence simulation scenarios have low a priori probability.</p>
<p>This is a little too quick. David Chalmers has argued that our
mundane positive beliefs about ourselves and our environment need not be
false in simulation scenarios. (See, for example, "<a$m[1]href=\"" . relative2absolute($m[2]) . "\"" class="citation"
data-cites="chalmers2018structuralism" title="Chalmers, David J. 2018. “Structuralism as a Response to
Skepticism.” The Journal of Philosophy 115
(12): 625–60. https://doi.org/10.5840/jphil20181151238.
">Chalmers 2018</a>, and
relevant parts of "<a$m[1]href=\"" . relative2absolute($m[2]) . "\"" class="citation"
data-cites="chalmers12constructing" title="Chalmers, David J. 2012. Constructing the World. Oxford: Oxford
University Press.
">Chalmers 2012</a> and "<a$m[1]href=\"" . relative2absolute($m[2]) . "\""
class="citation" data-cites="chalmers2022realityplus" title="Chalmers, David J. 2022b. Reality+: Virtual Worlds and the
Problems of Philosophy. New York: W. W.
Norton &amp; Company.
">Chalmers
2022b</a>.) Roughly, the idea is that all it takes for 'there are
trees' to be true is that there are things that play a certain
structural/causal role, linking the things to one another and to our
experiences. I'm not convinced by these arguments, but I'm not fully
convinced that the conclusion is false either.</p>
<p>So perhaps there are simulation scenarios in which my sensory
experiences are not radically deceptive. These scenarios then don't
belong in the second group of skeptical scenarios. But they still belong
in the first. Such scenarios involve two layers of reality: a
simulated layer (with genuine tables and trees, or so we assume) and an
outer layer in which the simulation is running. We only have access to
the simulated layer. How is this different from other counterinductive
hypotheses that posit intricate events in unobserved parts of the world?
There is a possible world in which this cafe is populated by angels made
of ectoplasm that don't interact with ordinary matter. I can't rule out
such a world, but I don't take it seriously. Layers of reality should
not be multiplied without necessity.</p>
<p>So much for my first premise. In sum, simulation scenarios are
scenarios in which our empirical methods don't work. Such scenarios
deserve negligible a priori credence.</p>
<p>(Isn't this begging the question against someone who holds that we
<em>are</em> living in a simulation? Of course it is. We shouldn't try
to engage with a skeptic on their own ground.)</p>
<p>I also claim that my evidence is somewhat more likely in
non-simulation scenarios than in simulation scenarios. This is my second
premise. It's based on the thought that people in simulation scenarios
generally don't have the kind of rich and coherent experiences that I
have. "Most" computer programs that generate a simulation scenario would
generate much poorer, more fragmented and chaotic experiences.</p>
<p>Objection: The people running these simulations would have an
interest in generating rich and coherent experiences.</p>
<p>Response: Why should we assume this? Who said that there are people
running these simulations anyway? Perhaps the computers that run the
simulations are created by random fluctuations of matter in the outer
universe. And even if there are people running the simulations, why are
we entitled to any a priori views about their aims and interests?</p>
<p>This concludes my argument.</p>
<p>I need to say something about "the simulation argument", which
supposedly proves that we are likely to live in a simulation. There are
many versions of this argument. I'll focus on one problem that all of
them seem to share.</p>
<p>Let's begin with a version that makes the problem especially obvious.
The argument (intended to be a simplified version of the one in "<a$m[1]href=\"" . relative2absolute($m[2]) . "\""
class="citation" data-cites="bostrom2003are" title="Bostrom, Nick. 2003. “Are We Living in a
Computer Simulation?” The Philosophical
Quarterly 53 (211): 243–55. https://doi.org/10.1111/1467-9213.00309.
">Bostrom 2003</a>) goes
like this.</p>
<ol>
<li><p>Given what we know about physics, the brain, computer technology,
etc., it is likely that we will create lots of simulated beings,
including beings with experiences just like mine.</p></li>
<li><p>Given that there are N beings in the world with experiences just
like mine, it is equally likely that I am any one of them.</p></li>
<li><p>So: It is likely that I am simulated.</p></li>
</ol>
<p>There's something odd about this line of reasoning. The central
empirical premise (premise 1) says that there will probably be simulated
beings <em>in the future</em>. From this, the argument seems to infer
that I am probably one of these beings. But I know that I'm not in the
future!</p>
<p>Let <em>Physics</em> be the (relatively) uncontroversial empirical
assumptions invoked in the argument's first premise: that the laws of
physics allow for the existence of computers to simulate a human brain,
that we don't have such computers now but might have them in the future,
and so on. To bring out the problem, let's pretend that <em>Physics</em>
makes it highly probable that there will be simulated beings in the
future and highly improbable that there are simulated beings now. (You
might deny that we have such information, but let's pretend!) The first
premise is then true. But the empirical information that it appeals to
also entails that the conclusion is false. So there must be something
wrong with this kind of argument, although it's not obvious where the
mistake lies.</p>
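<p>(In probabilistic terms, under this pretense: P(there will be
simulated beings | Physics) is high, which is all that premise 1 needs,
but P(I am simulated | Physics) is low, since I know that I exist now
and Physics makes it highly improbable that there are any simulated
beings now.)</p>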
<p>A similar problem arises for the following argument that we are
Boltzmann brains.</p>
<ol>
<li><p>Given what we know from physics, it is likely that there are many
Boltzmann brains with experiences just like mine.</p></li>
<li><p>Given that there are N beings in the world with experiences just
like mine, it is equally likely that I am any one of them.</p></li>
<li><p>So: It is likely that I am a Boltzmann brain.</p></li>
</ol>
<p>The first premise mentions "what we know from physics". But let's
list a few things we know from physics! We know, for example, that we
live on a planet orbiting a star at a distance of about 150 million km.
This is not true if we are Boltzmann brains. As before, the empirical
information that is supposed to make the Boltzmann hypothesis probable
belongs to a larger corpus of information that actually contradicts the
Boltzmann hypothesis.</p>
<p>Intuitively, these arguments are self-undermining: if we accept the
conclusion, we are no longer justified in accepting the first premise.
But this doesn't explain where the arguments go wrong.</p>
<p>Well, the conclusion undermines the first premise because it implies
that we can't trust the empirical methods on which that premise
rests.</p>
<p>As before, let <em>Physics</em> comprise what we take ourselves to
know about the physical world. Let's assume that <em>Physics</em> makes
it highly likely that there are many Boltzmann brains with experiences
just like mine. Let <em>Boltzmann</em> be the hypothesis that I am a
Boltzmann brain.</p>
<p>Since <em>Physics</em> entails that we live on a planet, we have
P(Physics | Boltzmann) = 0.</p>
<p>By the <em>self-locating indifference</em> principle in premise 2 of
the Boltzmann brain argument, P(Boltzmann | Physics) is high.</p>
<p>It follows by elementary probability theory that P(Physics) must be
low.</p>
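<p>(Spelling out the elementary step: P(Physics ∧ Boltzmann) =
P(Physics | Boltzmann) · P(Boltzmann) = 0. But P(Physics ∧ Boltzmann)
also equals P(Boltzmann | Physics) · P(Physics). So if P(Boltzmann |
Physics) is high, P(Physics) must be low.)</p>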
<p>So we have two options. We must reject either the self-locating
indifference principle or the trustworthiness of our empirical methods
(i.e., take P(Physics) to be low).</p>
<p>I think it's clear which of these options we should choose. We should
reject self-locating indifference.</p>
<p>This should be unsurprising, given what I said above. I argued that
skeptical scenarios deserve negligible a priori credence. Since
skeptical scenarios can coexist with non-skeptical scenarios in the same
world, we should not be indifferent between all scenarios in a world
that we can't rule out. Boltzmann scenarios deserve negligible a priori
credence.</p>
<p>The simulation argument fails for the same reason. Let's look at
Chalmers's version (from "<a$m[1]href=\"" . relative2absolute($m[2]) . "\"" class="citation"
data-cites="chalmers2022realityplus" title="Chalmers, David J. 2022b. Reality+: Virtual Worlds and the
Problems of Philosophy. New York: W. W.
Norton &amp; Company.
">Chalmers 2022b, ch.5</a> and
"<a$m[1]href=\"" . relative2absolute($m[2]) . "\"" class="citation"
data-cites="chalmers2022realityplusappendix" title="Chalmers, David J. 2022a. “Online Appendices for Reality+:
Virtual Worlds and the Problems of
Philosophy.”
">Chalmers 2022a</a>),
which doesn't have the flaw of suggesting that we somehow live in the
future.</p>
<p>The conclusion of Chalmers's official argument states that 'if there
are no sim blockers, we are probably sims'. But he also indicates that
he regards the antecedent as somewhat probable. So we can rephrase the
argument as follows.</p>
<ol>
<li><p>It is somewhat probable that most beings with experiences like
mine are sims.</p></li>
<li><p>If most beings with experiences like mine are sims, I am probably
a sim.</p></li>
<li><p>So: it is somewhat probable that I am a sim.</p></li>
</ol>
<p>Premise 1 is supposed to be supported by empirical information about
computer technology, the number of neurons in the human brain, etc. This
assumes that our empirical methods are reliable. But the conclusion
implies that our empirical methods are probably unreliable. Like the
previous two arguments, this argument is self-undermining. And as in the
previous two cases, the flaw lies in the second premise.</p>
<p>(I have no strong views about the first premise. This is an empirical
matter.)</p>
<p>Again, this line of thought is a little too quick, if we consider
Chalmers's view that our ordinary, positive thoughts about tables and
trees are true in some simulation scenarios. Suppose for the sake of the
argument that this is correct. In the relevant simulation scenarios, our
empirical methods for arriving at <em>positive</em> conclusions may then
be fairly reliable. It's only our methods for arriving at negative
conclusions that are unreliable. (A "negative" conclusion is a
conclusion about the non-existence of extra things or extra structure or
extra layers of reality.) If the empirical information adduced in the
first premise is entirely positive, the argument isn't
self-undermining.</p>
<p>Of course, I would still say that premise 2 is false: rationality
requires giving low credence to scenarios in which our empirical methods
for arriving at negative conclusions are unreliable.</p>
<p>Curiously, Chalmers might argue that the simulation scenarios in
question are ones in which our methods for arriving at negative
conclusions are reliable as well. The argument would go as follows.</p>
<blockquote>
<p>Our positive information suggests that there are more simulated
beings with our present experience than non-simulated beings.</p>
<p>Our empirical methods include highly restricted indifference
principles. In particular, they license indifference between sim and
non-sim scenarios in which we have our present experiences.</p>
<p>So, <em>our empirical methods</em> imply that it is highly likely
that we are simulated, and therefore that there is an extra layer of
reality.</p>
</blockquote>
<p>I guess this shows that there may be nothing inherently unstable or
incoherent about believing in the simulation hypothesis on empirical
grounds. But it doesn't persuade me that the hypothesis is
plausible.</p>
<div class="references"
data-entry-spacing="0" role="list">
<div id="ref-bostrom2003are" class="csl-entry" role="listitem">
Bostrom, Nick. 2003. <span>“Are <span>We Living</span> in a
<span>Computer Simulation</span>?”</span> <em>The Philosophical
Quarterly</em> 53 (211): 243–55. "<a$m[1]href=\"" . relative2absolute($m[2]) . "\"">https://doi.org/10.1111/1467-9213.00309</a>.
</div>
<div id="ref-chalmers12constructing" class="csl-entry" role="listitem">
Chalmers, David J. 2012. <em>Constructing the World</em>. Oxford: Oxford
University Press.
</div>
<div id="ref-chalmers2018structuralism" class="csl-entry"
role="listitem">
Chalmers, David J. 2018. <span>“Structuralism as a <span>Response</span> to
<span>Skepticism</span>.”</span> <em>The Journal of Philosophy</em> 115
(12): 625–60. "<a$m[1]href=\"" . relative2absolute($m[2]) . "\"">https://doi.org/10.5840/jphil20181151238</a>.
</div>
<div id="ref-chalmers2022realityplusappendix" class="csl-entry"
role="listitem">
Chalmers, David J. 2022a. <span>“Online Appendices for <span>Reality</span>+:
<span>Virtual Worlds</span> and the <span>Problems</span> of
<span>Philosophy</span>.”</span>
</div>
<div id="ref-chalmers2022realityplus" class="csl-entry" role="listitem">
Chalmers, David J. 2022b. <em>Reality+: <span>Virtual Worlds</span> and the
<span>Problems</span> of <span>Philosophy</span></em>. New York: W. W.
Norton &amp; Company.
</div>
</div>
]]></description>
  </item>
    <item>
    <title>Comment by wo on 'Integrating centred information'</title>
    <link>https://www.umsu.de/blog/2026/826#c2457</link>
    <guid>https://www.umsu.de/blog/2026/826#c2457</guid>
    <pubDate>Fri, 23 Jan 2026 12:20:49 +0000</pubDate>
    <description><![CDATA[Thanks Stephan! I didn&#039;t know that paper. I still haven&#039;t read it, but from what you say, I probably don&#039;t fully agree with how Ismael &amp; Pollock describe the situation.<br />
<br />
With respect to my beliefs, I don&#039;t think I genuinely locate myself inside my head. I self-attribute properties like being 180 cm tall, and I don&#039;t think anything inside my head is 180 cm tall. So the &quot;I am here&quot; arrow in my belief worlds doesn&#039;t seem to be pointing at something inside my head. So I don&#039;t &quot;take myself to be located&quot; at the focal point of my visual field, as I&amp;P claim, although I admit that there is some temptation to do so, in certain contexts, because my visual input is centred on this location.<br />
<br />
You might respond that I don&#039;t self-attribute properties like being 180 cm tall; rather, I self-attribute being part of a human body that is 180 cm tall. More precisely: we could explain my thoughts and assertions about my height (etc) even if we assumed that the &quot;I am here&quot; arrow in my belief space points at my brain, or at my pineal gland. That seems right, and it&#039;s interesting how little hangs on the choice. I still prefer the view that the arrow points at my entire body, because it&#039;s more straightforward. But this might ultimately just be a modelling choice. Not sure!<br />
<br />
(I also don&#039;t fully agree that it even &quot;appears to me visually&quot; that I&#039;m located inside my brain. I don&#039;t think my visual input represents me at all.)<br />
<br />
The practical reasoning connection seems right, but I&#039;m not sure how much it helps to determine what the &quot;I am here&quot; arrow singles out. Somehow, I must realize that my choices affect the movement of this mouth and these arms etc., but this doesn&#039;t settle whether the arrow&#039;s destination is a body that includes the mouth and arms, or just the brain, or the pineal gland, or a point in space 1m to the left of the pineal gland.<br />
]]></description>
  </item>
    <item>
    <title>Comment by Stephan on 'Integrating centred information'</title>
    <link>https://www.umsu.de/blog/2026/826#c2456</link>
    <guid>https://www.umsu.de/blog/2026/826#c2456</guid>
    <pubDate>Fri, 23 Jan 2026 10:35:13 +0000</pubDate>
    <description><![CDATA[Thanks Wo for yet another rich and thought-provoking post. Am I right that you think that the doxastic content will always be centred on the composite (possibly scattered) object or one of its parts?  I wonder if there are cases where the &quot;I am here&quot; arrow might point to a location where no part of the creature&#039;s body is located. Jenann Ismael &amp; John Pollock have a paper (https://johnpollock.us/ftp/PAPERS/Nolipsism.pdf) that discusses some issues relevant to the ones you raise in your post. They write:<br />
<br />
&quot;Human beings have their eyes embedded in the fronts of their heads, and accordingly<br />
they locate themselves somewhere inside their heads. That is where it appears to them<br />
visually that they are. But imagine a somewhat different kind of creature whose eyes<br />
were mounted on the ends of willowy stalks extending outwards some distance from<br />
the head. The focal point of the visual field of such an agent might be three feet in<br />
front of its head, and it would be natural to construct the cognitive architecture of<br />
such an agent so that it took itself to be located at that focal point. The interesting<br />
thing about this example is that there need be nothing physical that is at that location&quot; (p.24).<br />
<br />
That seems right to me. If such a creature locates herself at the center of her visual field, where no part of her body exists, it isn&#039;t clear to me that her self-locating beliefs would be false. I also like what Ismael and Pollock say about the relevance of de se goals. I&#039;m attracted to a view that the &quot;I am here&quot; arrow points to a location that allows the agent to achieve its de se goals: the de se content figures into practical reasoning in the right way to allow the creature to get what it wants (&quot;Open mouth now to catch shrimp&quot;), and in cases where sensory organs are scattered, the &quot;I am here&quot; arrow may point to a location where none of its parts are located. ]]></description>
  </item>
    <item>
    <title>Integrating centred information</title>
    <link>https://www.umsu.de/blog/2026/826</link>
    <guid>https://www.umsu.de/blog/2026/826</guid>
    <pubDate>Thu, 22 Jan 2026 14:30:37 +0000</pubDate>
    <description><![CDATA[<p>Sensory information is centred. Right now, for example, my visual
system conveys to me that <em>there's a red wall about 1 metre
ahead</em> (among much else); it does not convey that <em>Wolfgang
Schwarz is about 1 metre away from a red wall on 22 January 2026 at
12:04 UTC</em>.</p>
<p>We can quibble over what exactly is part of the sensory information.
We can also quibble over what "sensory information" is even meant to be.
But it should be uncontroversial that we gain information from our
senses. My point is that, on any plausible way of spelling this out, the
information we receive is centred: it doesn't have parameters that fix a
unique location in space and time. If I were unsure about what time it
is or who I am, looking at the wall in front of me wouldn't help. The
underlying reason, of course, is that photoreceptors are insensitive to
differences in spatiotemporal location: they don't produce different
outputs depending on where or when they are activated by photons.</p>
<p>Lewis "<a$m[1]href=\"" . relative2absolute($m[2]) . "\"" class="citation"
data-cites="lewis1979attitudes" title="Lewis, David. 1979. “Attitudes De Dicto and
De Se.” The Philosophical Review
88 (4): 513–43. https://doi.org/10.2307/2184843.
">1979</a> and others have argued
that belief contents are also centred. The idea is that there is a
theoretically fruitful and intuitive sense in which our representation
of the world assigns a special role to ourselves and the present time,
like a map with a "you are here" marker. I find this plausible.</p>
<p>If belief contents are centred, one can give a simple account of how
centred sensory information might update an agent's beliefs: the agent
can simply accept the sensory information; they might conditionalize or
Jeffrey-conditionalize on it.</p>
<p>But I don't think this simple account is correct.</p>
<p>When I'm holding a pencil right in front of my eyes, the information
coming from my visual system might be something like <em>pencil about 5
cm ahead</em>. If I were to conditionalize on this, I would come to
believe that there's a pencil about 5 cm ahead. But what I actually come
to believe is that there's a pencil about 5 cm in front <em>of my
eyes</em>.</p>
<p>Imagine your eyes are removed from your head and placed somewhere
else, with radio signals replacing the nerve connections. How should you
update on the visual information <em>there's a red wall ahead</em>?</p>
<p>Or suppose your eyes have been put at separate locations; your left
eye sends a signal of a red wall; your right eye of a snowy mountain.
What do you do with that?</p>
<p>One can even imagine that the eyes convey information about different
times. Perhaps one eye is put behind a transparent medium in which light
travels very slowly, so that its photoreceptors carry information about
how things were on the other side of the medium many years ago. Or
perhaps the radio signals from this eye travel with a long delay.</p>
<p>These are far-fetched scenarios. But they dramatise a real problem
encountered by our nervous system. After all, our eyes really are at
different locations. And since light travels faster than sound, our
auditory information is not exactly in sync with our visual information:
we hear the thunder after we see the lightning.</p>
<p>In short, our brain needs to <em>integrate</em> the sensory
information provided by our sense organs into a unified representation
of the world, and it shouldn't do that by simply accepting and
conjoining the sensory information.</p>
<p>An analogous problem arises for linguistic communication. If beliefs
are centred, how should we understand what is communicated by ordinary
assertions? Naively, one would think that in a simple case of
communication, the speaker utters a sentence that expresses a
"proposition" which the speaker believes and which they want the hearer
to believe. But on the centred-belief account, this simple model can't
be right, unless we never communicate our centred beliefs. Some have
concluded that we should reject the centred account of belief, or
supplement it with an uncentred account. (See, e.g., Perry 1977,
Stalnaker 1981, Stalnaker 2008, or Caie and Ninan 2025.) I think we
should instead revise the simple model of communication, roughly along
the lines suggested in Weber 2013.</p>
<p>In any case, the uncentred-content move looks really unappealing for
the case of sensory integration. There must be a better solution.</p>
<p>Here is one idea. On closer inspection, our representation of the
world might be <em>multi-centred</em>, with several "you are here"
markers (compare, e.g., "<a$m[1]href=\"" . relative2absolute($m[2]) . "\"" class="citation"
data-cites="spohn1996objects" title="Spohn, Wolfgang. 1996. “On the Objects of Belief.” In
Intentional Phenomena in Context: Papers from the 14th
Hamburg Colloquium on Cognitive Science, edited by C. Stein and M.
Textor, 55:117–41.
">Spohn 1996</a>, "<a$m[1]href=\"" . relative2absolute($m[2]) . "\""
class="citation" data-cites="torre2010centered" title="Torre, Stephan. 2010. “Centered Assertion.”
Philosophical Studies 150 (1): 97–114. https://doi.org/10.1007/s11098-009-9399-1.
">Torre 2010</a>,
"<a$m[1]href=\"" . relative2absolute($m[2]) . "\"" class="citation" data-cites="ninan2013selflocation" title="Ninan, Dilip. 2013. “Self-Location and
Other-Location.” Philosophy and Phenomenological
Research 87 (2): 301–31. https://doi.org/10.1111/phpr.12051.
">Ninan
2013</a>). There could be one marker for each eye, one for each ear,
and so on. It's then easy to update such a representation in response to
centred sensory information. If, for example, the left eye "says"
<em>red wall ahead</em>, one can rule out all multi-centred
possibilities in which there's no red wall ahead of the marker for the
left eye.</p>
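<p>(As a rough sketch in conditionalization terms: if a multi-centred
possibility is a world together with an assignment of locations to the
markers, the update takes the prior credence Cr to Cr(· | E*), where E*
is the set of multi-centred possibilities in which there is a red wall
ahead of the location assigned to the left-eye marker.)</p>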
<p>But multi-centred contents are revisionary. They don't neatly plug
into standard formulations of Bayesian epistemology and decision theory.
Can we make do with a single centre?</p>
<p>Imagine first a creature with a single sense organ, an eye, to which
it is connected by radio signals. If the eye says <em>red wall
ahead</em>, the creature can update on <em>red wall ahead of my
eye</em>.</p>
<p>More generally, we shouldn't think of the "you are here" marker as an
arrow pointing at a precise spacetime point. The marker for belief
contents usually singles out a temporal stage of a composite object. (In
Lewisian terms: the properties that are the attitude contents are
properties of stages of composite objects.) In unusual cases, the marker
might point at an object that's scattered across spacetime. When sensory
input arrives from the eye, all possibilities in the creature's doxastic
space in which the received content isn't true at "my eye" – i.e., at
the eye of the marked object – can be ruled out.</p>
<p>This assumes that the creature is aware that it has an eye. More
generally, the update might rule out all possibilities in which the
received content isn't true at some location from which "I" (i.e., the
marked object) receive(s) sensory input.</p>
<p>Note that it doesn't matter whether the creature considers this
location to be part of itself: for the purposes of integrating sensory
information, it doesn't matter whether the belief arrow points to an
object that includes the sense organ.</p>
<p>Things get more complicated if there are multiple sense organs. Here,
it might help to give each input channel a tag indicating from which
sense organ it comes, so that one can update on <em>red wall ahead of
organ A</em>. These tags would play a similar role to the centres in the
multi-centred account.</p>
<p>To spell this out more carefully, I suspect we should be more precise
about how to understand the content that is received from the senses. On
the account I describe in "<a$m[1]href=\"" . relative2absolute($m[2]) . "\"" class="citation"
data-cites="schwarz2018imaginary" title="Schwarz, Wolfgang. 2018. “Imaginary Foundations.”
Ergo 29: 764–89.
">Schwarz 2018</a>, the "tags"
could be folded into the "imaginary dimension" of the sensory content.
Intuitively, each organ would get a distinctive phenomenal character by
which it can be identified in the update.</p>
<div class="references"
data-entry-spacing="0" role="list">
<div id="ref-caie2025firstperson" class="csl-entry" role="listitem">
Caie, Michael, and Dilip Ninan. 2025. <span>“First-<span>Person
Propositions</span>.”</span> <em>Philosophers’ Imprint</em> 25 (0). <a href="https://doi.org/10.3998/phimp.3481">https://doi.org/10.3998/phimp.3481</a>.
</div>
<div id="ref-lewis1979attitudes" class="csl-entry" role="listitem">
Lewis, David. 1979. <span>“Attitudes <span><em>De Dicto</em></span> and
<span><em>De Se</em></span>.”</span> <em>The Philosophical Review</em>
88 (4): 513–43. "<a$m[1]href=\"" . relative2absolute($m[2]) . "\"">https://doi.org/10.2307/2184843</a>.
</div>
<div id="ref-ninan2013selflocation" class="csl-entry" role="listitem">
Ninan, Dilip. 2013. <span>“Self-<span>Location</span> and
<span>Other-Location</span>.”</span> <em>Philosophy and Phenomenological
Research</em> 87 (2): 301–31. "<a$m[1]href=\"" . relative2absolute($m[2]) . "\"">https://doi.org/10.1111/phpr.12051</a>.
</div>
<div id="ref-perry77frege" class="csl-entry" role="listitem">
Perry, John. 1977. <span>“Frege on Demonstratives.”</span>
<em>Philosophical Review</em> 86: 474–97.
</div>
<div id="ref-schwarz2018imaginary" class="csl-entry" role="listitem">
Schwarz, Wolfgang. 2018. <span>“Imaginary Foundations.”</span>
<em>Ergo</em> 29: 764–89.
</div>
<div id="ref-spohn1996objects" class="csl-entry" role="listitem">
Spohn, Wolfgang. 1996. <span>“On the Objects of Belief.”</span> In
<em>Intentional Phenomena in Context: <span>Papers</span> from the 14th
Hamburg Colloquium on Cognitive Science</em>, edited by C. Stein and M.
Textor, 55:117–41.
</div>
<div id="ref-stalnaker81indexical" class="csl-entry" role="listitem">
Stalnaker, Robert. 1981. <span>“Indexical Belief.”</span>
<em>Synthese</em> 49: 129–51.
</div>
<div id="ref-stalnaker08knowledge" class="csl-entry" role="listitem">
Stalnaker, Robert. 2008. <em>Our Knowledge of the Internal World</em>. Oxford: Oxford
University Press.
</div>
<div id="ref-torre2010centered" class="csl-entry" role="listitem">
Torre, Stephan. 2010. <span>“Centered Assertion.”</span>
<em>Philosophical Studies</em> 150 (1): 97–114. "<a$m[1]href=\"" . relative2absolute($m[2]) . "\"">https://doi.org/10.1007/s11098-009-9399-1</a>.
</div>
<div id="ref-weber2013centereda" class="csl-entry" role="listitem">
Weber, Clas. 2013. <span>“Centered Communication.”</span>
<em>Philosophical Studies</em> 166 (S1): 205–23. "<a$m[1]href=\"" . relative2absolute($m[2]) . "\"">https://doi.org/10.1007/s11098-012-0066-6</a>.
</div>
</div>
]]></description>
  </item>
    <item>
    <title>Kripke on empty names</title>
    <link>https://www.umsu.de/blog/2026/825</link>
    <guid>https://www.umsu.de/blog/2026/825</guid>
    <pubDate>Mon, 12 Jan 2026 10:27:25 +0000</pubDate>
    <description><![CDATA[<p>I (somewhat randomly) picked up Kripke 2011 the other day. This
is Kripke's first engagement with the problem of empty names. What
struck me is the biased selection of examples. Most of the paper is
concerned with names of fictional characters like 'Sherlock Holmes', and
Kripke only seems to consider simple utterances in which they figure as
the subject, like (1).</p>
<div class="example"><span class="exlabel">(1)</span><span class="extext">Sherlock Holmes is a detective.</span></div>
<p>He argues, plausibly enough, that an apparent assertion of (1) should
be understood as a pretend assertion, which only requires that it has a
content in the context of the pretense.</p>
<p>Kripke also points out that it's hard to evaluate (1) at
counterfactual scenarios: what would have to be the case for (1) to be
literally true? It's not enough, he says, that the descriptive claims of
the Sherlock Holmes stories are true at the scenario. If we identify the
"proposition expressed" by an utterance with its pattern of truth-values
at counterfactual scenarios (what "<a$m[1]href=\"" . relative2absolute($m[2]) . "\"" class="citation"
data-cites="jackson2004why" title="Jackson, Frank. 2004. “Why We Need A-intensions.” Philosophical
Studies 118 (1-2): 257–77.
">Jackson 2004</a> calls the
"C-intension" of the utterance), our inability to evaluate (1) at
counterfactual scenarios supports the idea that it expresses no
proposition.</p>
<p>I assume Kripke would make similar points about (2).</p>
<div class="example"><span class="exlabel">(2)</span><span class="extext">Vulcan is smaller than Mercury.</span></div>
<p>Since "<a$m[1]href=\"" . relative2absolute($m[2]) . "\"">Vulcan
(the hypothetical planet)</a> doesn't exist, it is not obvious what
would have to be the case for it to be smaller than Mercury. Moreover,
an utterance of (2) is intuitively defective: the speaker presupposes
that 'Vulcan' refers, and this presupposition fails. (We understand what
the speaker is <em>trying</em> to say, Kripke might suggest, by
acquiescing to her error, but she isn't actually saying anything.)</p>
<p>But let's look at some other examples.</p>
<div class="example"><span class="exlabel">(3)</span><span class="extext">I've had breakfast on Vulcan today.</span></div>
<p>Isn't this simply false, since I've had breakfast on Earth?</p>
<p>More seriously, we can use well-known devices to block the projection
of presuppositions:</p>
<div class="example"><span class="exlabel">(4)</span><span class="extext">Either Vulcan doesn't exist or it is smaller than
Mercury.</span></div>
<div class="example"><span class="exlabel">(5)</span><span class="extext">If Le Verrier's explanation of Mercury's perihelion is
correct, we will eventually observe Vulcan.</span></div>
<p>These are odd in a context in which the speaker knows that Vulcan
doesn't exist. But suppose she doesn't. She clearly doesn't presuppose
that 'Vulcan' refers. As a result, there's nothing defective about (4)
and (5). The speaker of (4) or (5) does not commit an error. We don't
understand them by acquiescing to the error, evaluating them on the
pretense that 'Vulcan' refers.</p>
<p>I also think Kripke was too quick to conclude that we can't evaluate
(1) or (2) at counterfactual scenarios. Granted, (1) isn't true at a
scenario even if all the descriptive claims of the Sherlock Holmes
stories are true at the scenario. But this is compatible with the
hypothesis that (1) is false at every scenario. On this account, (2) may
also be false at every scenario, but (4) is true at every scenario, and
(5) is true at some scenarios and false at others. Isn't this a more
accurate picture?</p>
<p>(Of course, these C-intensions are not plausible candidates for what
is said, but that's a general fact about C-intensions.)</p>
<div class="references"
data-entry-spacing="0" role="list">
<div id="ref-jackson2004why" class="csl-entry" role="listitem">
Jackson, Frank. 2004. <span>“Why We Need <span
class="nocase">A-intensions</span>.”</span> <em>Philosophical
Studies</em> 118 (1-2): 257–77.
</div>
<div id="ref-kripke2011vacuous" class="csl-entry" role="listitem">
Kripke, Saul A. 2011. <span>“Vacuous Names and Fictional
Entities.”</span> In <em>Philosophical Troubles: <span>Collected</span>
Papers, Volume 1</em>, 52–74. New York: Oxford University Press.
</div>
</div>
]]></description>
  </item>
    <item>
    <title>The absoluteness of consistency</title>
    <link>https://www.umsu.de/blog/2025/824</link>
    <guid>https://www.umsu.de/blog/2025/824</guid>
    <pubDate>Fri, 19 Dec 2025 12:34:46 +0000</pubDate>
    <description><![CDATA[<p>A somewhat appealing (albeit, to me, also somewhat obscure) view of
mathematics is the pluralist doctrine that every consistent mathematical
theory is true, insofar as it accurately describes some mathematical
structure. I want to comment on a potential worry for this view,
mentioned in "<a$m[1]href=\"" . relative2absolute($m[2]) . "\"" class="citation"
data-cites="clarke-doane2020morality" title="Clarke-Doane, Justin. 2020. Morality and Mathematics. Oxford
University Press.">(Clarke-Doane 2020)</a>: that
it has implausible consequences for logic.</p>
<p>Let's assume that first-order Peano Arithmetic (PA) is consistent.
Let Con be the arithmetized statement that PA is consistent. By Gödel's
Second Incompleteness Theorem, PA doesn't prove Con, so PA + ¬Con is
consistent. But this theory is false, as we've assumed that PA is
consistent. So not every consistent theory is true.</p>
<p>More generally, the worry is that pluralism seems to imply that there
is no fact of the matter about which theories are consistent, given that
consistency statements are mathematical statements – just like, say, the
Continuum Hypothesis. Doesn't pluralism imply that there's a structure
in which PA is consistent and one in which PA is inconsistent, and that
nothing favours one of these over the other?</p>
<p>Let's begin with the formal, arithmetized version of the worry.</p>
<p>We know what kind of structure is described by PA + ¬Con. This theory
describes non-standard models of arithmetic with extra "numbers" besides
the standard natural numbers. The arithmetical statement ¬Con is short
for ¬∃xPr(x,'⊥'), where Pr(x,y) is a formula that "expresses the proof
relation of PA" in the sense that it is satisfied by two standard
numbers iff the first number codes a proof in PA of the sentence coded
by the second number. ¬Con is true in the structures described by PA +
¬Con because the formula Pr here holds between some non-standard
"number" n and the code of '⊥'. But this non-standard "number" n doesn't
really code any proof. So <em>if we think of PA + ¬Con as talking about
the structure in which it is true</em>, we shouldn't read 'Con' as
saying that PA is consistent.</p>
<p>I want to say essentially the same about the non-arithmetized version
of the worry. Yes, the words 'PA is inconsistent' or 'there is a proof
in PA of ⊥' can be interpreted as saying something true. That's trivial!
But words don't become true just because they can be interpreted as
saying something true.</p>
<p>The point is that 'consistent' has a fixed meaning in (technical)
English. Given this meaning, 'PA is inconsistent' is false. It doesn't
become true by being included in some consistent theory.</p>
<p>The same is true for most mathematical expressions. If someone, due
to a calculation mistake, claims that 13 times 154 is 2004, they haven't
said something true – even though '13 x 154 = 2004' is part of some
consistent theory.</p>
<p>'Set' also has a fixed meaning, but its meaning might be less
determinate. Some structures are definitely not candidates for the
structure of sets, but arguably there isn't a unique structure for which
the word is reserved. We can study ZFC + CH and ZFC + ¬CH, and the
conventions of (technical) English don't dictate that only one of these
describes things that are properly called 'sets'.</p>
<p>We should separate the somewhat appealing metaphysical doctrine of
pluralism from the unappealing semantic doctrine that mathematical
expressions have no fixed meaning.</p>
<div class="references"
data-entry-spacing="0" role="list">
<div id="ref-clarke-doane2020morality" class="csl-entry"
role="listitem">
Clarke-Doane, Justin. 2020. <em>Morality and Mathematics</em>. Oxford
University Press.
</div>
</div>
]]></description>
  </item>
  </channel>
</rss>
