
Fiction and History

Not much blogging these days because for some reason my wrist hurts, and I think it's better to let it rest for a while. So here are just a couple of brief remarks, typed with my left hand, about some parallels between fictional and historical characters.

We might distinguish two modes of speaking about historical characters:

1. Past: Immanuel Kant is a philosopher; he lives at Königsberg; etc.

2. Present: Immanuel Kant does not exist; he does not live at Königsberg; etc.

Operator Overload

I wanted to blog something on how to treat discourse about fiction in the framework of a general multi-dimensional semantics, but this turned out to work so well that the entry is growing rather long, and I won't finish it today. In the meantime, here is a nice example of multiple context-shifting operators, from this BBC story (via Asa Dotzler):

In this place, for a few hours each day, just after noon in the summer, there could be liquid water on the surface of Mars.

Detecting Satisfiability with Free-Variable Tableaux

Recently I suggested the following restriction on free-variable tableaux:

The gamma rule must not be applied if the result of its previous application has not yet been replaced by the closure rule.

I think I've now found a proof that the restriction preserves completeness:

Let (GAMMA) be a gamma-node that has been expanded with a variable y even though the free variable x introduced by the previous expansion is still on the tree. I'll show that before the elimination of x, every branch that can be closed by some unifier U can also be closed by a unifier U' that does not contain y in any way (that is, y is neither in the domain nor in the range of the unifier, nor does it occur as an argument of anything in the range of the unifier). Hence the expansion with y is completely useless before the elimination of x.

Before the y-expansion, no branch at any stage contains y. After the y-expansion, every open subbranch of (GAMMA) contains the formula created by the y-expansion, let's call it F(y). Among these branches select the one that first gets closed by some unifier U containing y in any way. Now I'll show that, whenever at some stage a formula G(y) containing y occurs on this branch, we can extend the branch by adding the same formula with every occurrence of y replaced by x.

At the stage immediately after the y-expansion, the only formula containing y is F(y). And because (GAMMA) has previously also been expanded with x, and x has not yet been eliminated, the branch also contains F(x). Next, assume that G(y) occurs at some later stage of the branch. Then it has been introduced either by application of an ordinary alpha-delta rule or by the closure rule. If it has been introduced by an alpha-delta rule, then we can just as well derive G(x) from the corresponding ancestor with x instead of y (which exists by the induction hypothesis). Now for the closure rule. Assume first that this application of closure does not close the branch (but rather some other branch). Then by assumption, the applied unifier does not contain y in any way. Moreover, it does not replace x by anything else. So in particular, it will not introduce any new occurrences of y in any formula, and it will not replace any occurrences of x or y. Hence if G(y) is the result of applying this unifier to some formula G'(y), then G(x) is the result of applying the unifier to the corresponding formula G'(x).

Assume finally that the branch is now closed by application of some unifier U that contains y in any way. Let C1, ¬C2 be the unified complementary pair. (At least one of C1, C2 contains y, otherwise U wouldn't be minimal.) Then, as we've just shown, the branch also contains the pair C1(y/x), ¬C2(y/x). Let U' be like U except that every occurrence of y (in its domain or range or in an argument of anything in its range) is replaced by x. Clearly, if U unifies C1 and C2, then U' unifies C1(y/x), C2(y/x).
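The renaming step in this last paragraph can be made concrete with a small sketch. This is my own toy code, not the author's system: terms are nested tuples, variables and constants are plain strings, and substitutions are dicts.

```python
def rename(term, old, new):
    # Replace every occurrence of the variable `old` by `new`.
    # Terms are strings (variables/constants) or (functor, arg, ...) tuples.
    if isinstance(term, tuple):
        return tuple(rename(t, old, new) for t in term)
    return new if term == old else term

def apply_subst(term, subst):
    # Apply a substitution (dict mapping variables to terms) to a term.
    if isinstance(term, tuple):
        return tuple(apply_subst(t, subst) for t in term)
    return apply_subst(subst[term], subst) if term in subst else term

def rename_subst(subst, old, new):
    # Build U' from U: replace `old` by `new` in both domain and range.
    return {rename(v, old, new): rename(t, old, new)
            for v, t in subst.items()}

# If U = {y -> a} unifies the pair Fy, Fa, then the y-free unifier
# U' = {x -> a} unifies the renamed pair Fx, Fa.
C1, C2 = ('F', 'y'), ('F', 'a')
U = {'y': 'a'}
U_prime = rename_subst(U, 'y', 'x')
```

Here U' contains no trace of y, which is all the argument needs.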

In the other posting I also mentioned that this restriction can't detect the satisfiability of ∀x((Fx ∧ ∃y ¬Fy) ∨ Gx), which the Herbrand restriction on standard tableaux can. (A simpler example is ∀x((Fx ∧ ¬Fa) ∨ Ga).) These cases can be dealt with by simply incorporating the Herbrand restriction into the free-variable system:
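That the first formula is indeed satisfiable can be checked by brute force over a small domain. A sketch in my own representation (predicates as sets of domain elements; nothing here is from the post):

```python
from itertools import chain, combinations, product

def powerset(xs):
    return [set(c) for c in chain.from_iterable(
        combinations(xs, r) for r in range(len(xs) + 1))]

def find_model(domain):
    # Search all interpretations of F and G over the domain for one
    # satisfying  forall x ((Fx and exists y not-Fy) or Gx).
    for F, G in product(powerset(domain), powerset(domain)):
        if all((x in F and any(y not in F for y in domain)) or x in G
               for x in domain):
            return F, G
    return None

model = find_model([0, 1])   # a model exists, e.g. F empty, G the whole domain
```

A tableau prover that keeps applying the gamma rule will never terminate on this satisfiable input, which is why a restriction that forces termination matters.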

Objects of Fiction

Here comes a positive theory of fictional characters. Disclaimer: Only read when you are very bored. I've started thinking and reading about this topic just a week ago, so probably the following 1) doesn't make much sense, 2) fails for all kinds of well-known reasons, and 3) is not original at all. The main thesis certainly isn't original: it is simply that fictional characters are possibilia. Anyway, I begin with an account of truth in fiction, which largely derives from what Lewis says in "Truth in Fiction".

Do We Need Fictional Truth?

J from Blogosophy proposes that we use "in a manner of speaking" instead of "according to the fiction" as a prefix for fictional statements. This, J says, would also work for the problematic cases like "Sherlock Holmes consumed drugs that are illegal nowadays". I'm afraid I don't quite understand this operator. What are the truth conditions of "in a manner of speaking, p"?

Counterfactuals and Counterexamples

It is controversial whether indicative conditionals with false antecedents are generally true. As far as I know, which really is not very far at all, it is equally controversial whether counterfactual conditionals with necessarily false antecedents are generally true. What's interesting is the different kinds of counterexamples that are brought forward against these views. For indicatives, the counterexamples are indicative conditionals with false antecedents that nevertheless appear to be false, e.g. "if I put diesel in my coffee, the coffee tastes fine." For counterfactuals however, the alleged counterexamples (brought forward e.g. by Field in §7.2 of Realism, Mathematics & Modality, Katz in §5 of "What mathematical knowledge could be", and Rosen in §1 of "Modal fictionalism fixed") are counterfactual conditionals with necessarily false antecedents that appear to be true, e.g. "if the axiom of choice were false, the cardinals wouldn't be linearly ordered". Isn't this quite puzzling? How can the fact that some instances are true be a problem for a theory that claims that all instances are true?

Parsimony and Ontological Dependence

This is part 2 of my comments on Fiction and Metaphysics.

Amie Thomasson argues that fictional objects are not as strange and special as one might have thought because they belong to the same basic ontological category as works of art, governments, chairs and other objects of everyday life. Doing without fictional entities, she says, would merely be "false parsimony" unless one can also do without other entities of this category.

I have three complaints.

Amie Thomasson's Fiction and Metaphysics

Brian has made so many puzzling remarks about fictional characters being real but abstract that I've decided to read Amie Thomasson's Fiction and Metaphysics. Here is my little review.

Thomasson's theory, in a nutshell, is that the Sherlock Holmes stories are not really about the adventures of a detective who lives at 221B Baker Street, but rather about the adventures of a ghostly, invisible character who lives at no place in particular and never does anything at all. We don't find this written in the Sherlock Holmes stories because, according to Thomasson's theory, Arthur Conan Doyle simply doesn't tell the truth about Holmes. In fact the only thing he gets right is his name: That ghostly character he is telling wildly false stories about is really called "Sherlock Holmes".

More About Analyticity

Here comes the promised reply to Sam's reply to my previous posting. In that posting, I first suggested that some sentence S (in a given language) is analytic iff you can't understand it unless you believe it. Then I said that, "put slightly differently", S is analytic iff it is impossible to believe that not-S.

As Sam notes, the first definition implies that even very complicated analytic truths have to be believed in order to be understood, which might be somewhat unintuitive. I'm not sure how bad this is, for lack of a clear example. Sam uses "the sum of the digits of the first prime number greater than 1 million is even", but this is not analytic, so here I can perfectly well admit that you may understand it without either believing or disbelieving it. He also mentions infinitely long sentences, but I don't believe there are any of those in ordinary languages.

Universalia in rebus and universalia ante res?

Here at Humboldt University, there's a reading group about analytic philosophy (Sam already mentioned it). The flyer advertising this group describes analytic philosophy as a sort of new and fascinating kind of philosophy characterised by its perspicuity and ignorance of philosophical tradition. The funny thing is that the organisers of the reading group decided that we'll be discussing David Wiggins' Sameness and Substance Renewed. I don't want to know how much Hegel one has to read to find Wiggins perspicuous (and ignorant of philosophical tradition).

Explicating Analyticity

Some expressions can't be properly understood unless one believes certain things: in some sense you don't understand "irrational number" unless you believe that no natural number is irrational; you don't understand "grandmother" unless you believe that grandmothers are female; maybe you don't understand "cat" unless you believe that cats are animals.

This is all quite vague because "understanding" and "believing" are vague. I now want to suggest that a sentence is analytic iff you can't understand it unless you believe it. Analyticity is also vague, so the vagueness of the explicans is fine for this purpose.

Logic Programming Slides

I've made some slides about logic programming (PS) for my presentation next week in the logic seminar.

Restrict the Gamma Rule?

The following restriction might be a way out of the problems I mentioned in my last posting:

The gamma rule must not be applied if the result of its previous application has not yet been replaced by the Closure rule.

(The gamma rule deals with ∀ and ¬∃ formulae; the Closure rule is the rule that allows one to replace dummy constants by real constants iff that leads to the closure of at least one branch.)

Counter-Models and Free-Variable Tableau Systems

I'm half-way into programming a more efficient tree prover, based on free-variable tableaux. But now I'm not sure any more if this is really what I want.

The basic idea in free-variable tableaux is that you use dummy constants to instantiate universally quantified formulas, and only replace these dummy constants by real constants if this allows you to close a branch. In automated tableaux, this dramatically decreases the number of steps required for certain proofs. For example, my old tree prover internally creates an 860-node tree to prove ∀x∀y(Fxy → ∀zFzz) ∧ ∃x∃y Fxy → ∀x Fxx, whereas a free-variable system only needs 12 nodes.
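To illustrate the closing step, here is a textbook Robinson-style unification in miniature. This is my own toy code, not the prover discussed in the post; variables are strings starting with '?'.

```python
def is_var(t):
    return isinstance(t, str) and t.startswith('?')

def walk(t, subst):
    # Follow variable bindings to their current value.
    while is_var(t) and t in subst:
        t = subst[t]
    return t

def occurs(v, t, subst):
    # Occurs check: does variable v occur in term t under subst?
    t = walk(t, subst)
    if v == t:
        return True
    if isinstance(t, tuple):
        return any(occurs(v, a, subst) for a in t)
    return False

def unify(s, t, subst=None):
    # Return a most general unifier of s and t as a dict, or None.
    if subst is None:
        subst = {}
    s, t = walk(s, subst), walk(t, subst)
    if s == t:
        return subst
    if is_var(s):
        return None if occurs(s, t, subst) else {**subst, s: t}
    if is_var(t):
        return None if occurs(t, s, subst) else {**subst, t: s}
    if isinstance(s, tuple) and isinstance(t, tuple) and len(s) == len(t):
        for a, b in zip(s, t):
            subst = unify(a, b, subst)
            if subst is None:
                return None
        return subst
    return None

# Instantiating ∀x∀y Fxy with dummy variables gives F(?X, ?Y); a branch
# also containing ¬F(a, a) closes under the unifier {?X: a, ?Y: a}.
mgu = unify(('F', '?X', '?Y'), ('F', 'a', 'a'))
```

The dummy variables stay uncommitted until a single unification step closes the branch, which is where the saving over blind instantiation comes from.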

Oh Dear

I just noticed that my tree prover fails on this simple formula! This is a good opportunity to rewrite the part of the script that does the proving and implement some shortcuts, and maybe some "loop detection".

Keyboard Commands in Postbote

I've added keyboard commands to Postbote: If the focus is on the frame with the mail listing, press "R" to refresh, "A" to select all mails, and "G" to quickly change the listing offset (this is for Hermann, who has 1600 mails in his mailbox...).

How To Define Theoretical Predicates: The Problem

Suppose some theory T(F) implicitly defines the predicate F. If we want to apply the Ramsey-Carnap-Lewis account of theoretical expressions, we first of all have to replace F by an individual constant f, and accordingly change every occurrence of "Fx" in T to "x has f" etc. The empirical content of the resulting theory T'(f) can then be captured by something like its Ramsey sentence ∃f T'(f), and the definition of f by the stipulation that 'f' denote the only x such that T'(x), or nothing if there is no such (unique) x.
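The shape of this account can be shown in a toy version over a finite domain. This is my own sketch; the theory T and the domain elements are invented, and T is just treated as a predicate over candidate referents.

```python
def ramsey_sentence(T, domain):
    # ∃f T'(f): something in the domain realizes the theory.
    return any(T(x) for x in domain)

def denotation(T, domain):
    # 'f' denotes the only x such that T'(x), or nothing (None)
    # if there is no such unique x.
    realizers = [x for x in domain if T(x)]
    return realizers[0] if len(realizers) == 1 else None

domain = ['a', 'b', 'c']
T = lambda x: x == 'b'        # a theory with exactly one realizer
```

With zero realizers or more than one, denotation returns None, matching the "nothing if there is no such (unique) x" clause.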

Implicit Definitions, Part 4: Summing Up (And a Partial Defence of Implicit Definition)

In the previous three entries, I've tried to argue that there are no genuinely implicit definitions: Whenever a new expression is introduced via an alleged implicit definition, either there is no question of definition at all, as in the case of new expressions used as bound variables in mathematics, or there is an explicit definition nearby.

This latter fact, that sometimes explicit definitions are only nearby, provides a partial vindication of implicit definitions. For example, let's assume that folk psychology implicitly defines "pain". But folk psychology itself is not equivalent to the nearby explicit definition. To get an explicit definition, we have to turn folk psychology into something like its Carnap sentence. So the theory itself could be called a genuinely implicit definition.

Implicit Definitions, Part 3: Contextual Definition

I've said that an explicit definition introduces a new expression by stipulating that it be semantically equivalent to an old expression. If there are no non-explicit definitions, this means that you can only define expressions that are in principle redundant. Aren't there counterexamples to this claim?

Consider the definition of the propositional connectives. We can explicitly define some of them with the help of others, but what if we want to define all of them from scratch? The common strategy here is to recursively provide necessary and sufficient conditions for the truth of a sentence governed by the connective: A ∧ B is true iff A is true and B is true.
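These recursive truth conditions translate directly into a small evaluator (my own sketch, not anything from the post):

```python
def true(formula, val):
    # Formulas are atoms (strings, looked up in the valuation `val`) or
    # nested tuples ('not', A), ('and', A, B), ('or', A, B), ('if', A, B).
    if isinstance(formula, str):
        return val[formula]
    op = formula[0]
    if op == 'not':
        return not true(formula[1], val)
    if op == 'and':
        # "A ∧ B is true iff A is true and B is true"
        return true(formula[1], val) and true(formula[2], val)
    if op == 'or':
        return true(formula[1], val) or true(formula[2], val)
    if op == 'if':
        return (not true(formula[1], val)) or true(formula[2], val)
    raise ValueError(op)
```

Each clause states a necessary and sufficient condition for the truth of the whole in terms of the truth of the parts, which is exactly the recursive strategy described above.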

Implicit Definitions, Part 2: Theoretical Terms

Scientific theories are often said to implicitly define their theoretical terms: phlogiston theory implicitly defines "phlogiston", quantum mechanics implicitly defines "spin". This is easily extended to non-scientific theories: ectoplasm theory implicitly defines "ectoplasm", folk psychology implicitly defines "pain".

The first problem from the mathematical case applies here too: Since all these theories make substantial claims about reality, their truth is not a matter of stipulation. For example, no stipulation can make phlogiston theory true. That's why, according to the standard Ramsey-Carnap-Lewis account, what defines a term (or several terms) t occurring in a theory T(t) is not really the stipulation of T(t) itself, but rather the stipulation of something like its 'Carnap sentence' ∃x T(x) → T(t). All substantial claims in T(t) are here cancelled out by the antecedent.
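The logical point can be sketched in a toy model. This is my own code; "phlogiston theory" here is just a predicate that nothing in the domain satisfies.

```python
def ramsey(T, domain):
    # The Ramsey sentence ∃x T(x).
    return any(T(x) for x in domain)

def carnap(T, domain, t):
    # The Carnap sentence ∃x T(x) → T(t): vacuously true whenever
    # nothing realizes the theory.
    return (not ramsey(T, domain)) or T(t)

domain = ['oxygen', 'caloric']
phlogiston_theory = lambda x: False   # no realizer in the domain
```

Stipulating the Carnap sentence costs nothing even when the theory itself is false, which is why it, rather than T(t) itself, can plausibly be a matter of stipulation.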

Implicit Definitions, part 1: Mathematics

I vaguely believe that there are no implicit definitions. So I've decided to write a couple of entries to defend this belief. The defence may well lead me to give it up, though. Anyway, here is part 1.

Explicit definitions introduce a new expression by stipulating that it be in some sense synonymous or semantically equivalent to an old expression. For ordinary purposes this can be done without the use of semantic vocabulary by stipulations of the form

Luxury Flat

This weekend, I've moved into my new flat, which has both a bathroom and a fridge, and also lots of funny records from the 1970s.

How Many Sciences is Semantics?

I often wonder to what extent different theories and approaches in philosophy of language are conflicting theories about the same matter, or rather different theories about different matters. For example, some theories try to describe the cognitive processes involved in human speaking and understanding; others try to find systematic rules for how semantic properties (like truth value or truth conditions) of complex expressions are determined by semantic properties (like reference or intension) of their components; others try to spell out what mental and behavioural conditions somebody must meet in order to understand an expression (or a language); others try to find physical relations that hold between expression tokens and other things iff these other things are in some intuitive sense the semantic values of the expression tokens; others try to discover social rules that govern linguistic behaviour; and so on. How are all these projects related to each other?
