
Infinite probability spaces

Does anyone know a good resource on probability theory with infinite probability spaces (if there is such a thing)? For example, I would like to know if the probability that an arbitrary real number lies between 0 and 1 is defined, and if so, how the obvious awkwardness of any answer can be explained away.
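One way the awkwardness shows up, sketched under the standard Kolmogorov axioms with countable additivity: a 'uniform' probability measure over all reals would have to assign one and the same value $c$ to every unit interval $[n, n+1)$, so that

$1 = P(\mathbb{R}) = \sum_{n \in \mathbb{Z}} P([n, n+1)) = \sum_{n \in \mathbb{Z}} c,$

which is $0$ if $c = 0$ and infinite otherwise. So on the textbook approach the question only gets an answer relative to some chosen distribution over the reals, not for 'an arbitrary real number' simpliciter.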

Email Virus

If you recently received an email from somebody called 'Wolfgang Schwarz' mentioning wolfgang@umsu.de and containing a strange attachment, please don't open it. It is the worm W32.Bugbear@mm. If you have opened the attachment, this page tells you how to remove it. Also, please don't reply to the sender, who is not me and is completely innocent, since the mess really spread from my old Windows machine. I'm very sorry about this.

Apriority vs. Analyticity

It is often said, correctly I think, that there are contingent but a priori sentences, e.g. "water is the dominant liquid on earth". Are these sentences analytic or synthetic? That is, what puts you in a position to know these sentences? Does understanding suffice, or do you have to invoke some other a priori means, like Gödelian insight? To me this seems wildly and unnecessarily mysterious. Of course understanding suffices, at least in ordinary cases. So there are contingent but analytic sentences. I wonder why this is hardly ever said. Does anyone really believe that those statements are synthetic a priori?

Sharing narrow content

Since narrow content is not determined by external factors, it depends much more on one's other propositional states than wide content does. For example, if you believe that Aristotle was human whereas I believe he was a poached egg, the narrow content of all our beliefs about Aristotle will differ. When I believe that Aristotle was Alexander's teacher, you can't have a belief with exactly the same narrow content unless you also come to believe that Aristotle was a poached egg. Likewise for imaginings: when we both imagine Aristotle teaching Alexander, our imaginings cannot have the same narrow content.

Similarly, I think, if Ted believes that for any atoms there is a fusion, whereas Cian disbelieves this, they cannot share any imagining about atoms.

Restricted deducibility and deferential understanding

Dave Chalmers kindly explained his views on deducibility to me. He thinks that anything one could reasonably call non-deferential understanding of the fundamental truths would suffice for being able in principle to deduce macrophysical facts, provided that these fundamental truths, unlike my P, contain phenomenal facts and laws of nature. He also notes that I shouldn't have called these restrictions (to non-deferential understanding and the rich content of fundamental truths) assumptions, since they are really just restrictions. I'm still not sure if any kind of non-deferential understanding would suffice, but with the restrictions in place it's not as easy to come up with counterexamples as I thought.

A priority, deducibility and understanding

Back to the question of deducibility.

According to the deducibility thesis, the fundamental truths (plus indexicals, plus a 'that's all' statement) a priori entail every truth. More precisely, when P is a complete description of the fundamental truths and M any other truth, then, according to the deducibility thesis, the material conditional 'P → M' is a priori.
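In symbols, as a rough schema (my formalization, writing 'apriori(...)' for 'is knowable a priori'):

$\forall M\,\bigl(M \text{ is true} \;\rightarrow\; \mathrm{apriori}(P \rightarrow M)\bigr),$

where $P$ abbreviates the conjunction of the fundamental truths, the relevant indexical truths, and the 'that's all' statement.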

Infinite analyses?

Dave Chalmers agrees that any concept can be explicitly analyzed by an infinite conjunction of application-conditionals. But he wants to restrict 'explicit analysis' to finite analyses. That certainly makes sense, but I doubt that there are any concepts for which the application-conditionals cannot be determined by finite means. For example, I think it will usually suffice to partition the epistemic possibilities into, say, 50 zillion cases and specify the extension in each of these cases. Admittedly, I can't prove that, but the fact that concepts can be learned and that our cognitive capacities are limited seems suggestive.

On the very idea of non-explicit analysis

Dave Chalmers told me to read some of his papers. I have, and I'll probably say more on the deducibility problem soon. Here is just a little thought on conceptual analysis.

Chalmers suggests that we don't need explicit necessary and sufficient conditions to analyze a concept. Rather, we can analyze it just by considering its extension in hypothetical scenarios. What is it to consider a hypothetical scenario? The result seems to depend on how the scenario is presented. For example, 'the actual scenario' denotes the same scenario as 'the closest scenario to the actual one in which water is H2O'. But the difference in description could make a difference to judgements about extensions. Chalmers avoids such problems by explaining (§3.2, §3.5) that to consider a scenario is to pretend that a certain canonical description is true. Hence to analyze a concept, we evaluate material conditionals of the form 'if D then the extension of C is E', where D is a canonical description. (Are there only denumerably many epistemic possibilities, or can D be infinite?) Now fix on a particular concept C and let K be the (possibly infinite) conjunction of all those 'application conditionals' (§3) that get evaluated as true. Replace every occurrence of 'C' in K by a variable x. Then 'something x is C iff K' is an explicit analysis giving necessary and sufficient conditions for being C.
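Schematically (my notation, not Chalmers's): let $D_1, D_2, \ldots$ be the canonical descriptions of the epistemic possibilities and $E_1, E_2, \ldots$ the extensions assigned to C under each. Then the resulting explicit analysis has the shape

$x \text{ is } C \;\leftrightarrow\; \bigwedge_i \bigl(D_i \rightarrow \text{the extension of } x \text{ is } E_i\bigr).$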

There may not always be a simple, obvious, or finite explicit analysis, but at least there always is some explicit analysis. If, moreover, satisficing is allowed, it is very likely that we can settle for something much less than infinite.

A priori deducibility and fundamental facts

When I tried to spell out the 'modus tollens' I mentioned on Monday, I came across something that may be interesting.

Frank Jackson argues that facts about water are a priori deducible from facts about H2O:

1. H2O covers most of the earth.
2. H2O is the watery stuff.
3. The watery stuff (if it exists) is water.
C. Therefore, water covers most of the earth.

1 and 2 are a posteriori physical truths; 3 is an a priori conceptual truth.
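Spelled out a bit more formally (my rendering, writing $w$ for 'the watery stuff' and $\mathrm{Covers}$ for 'covers most of the earth'):

$\mathrm{Covers}(\mathrm{H_2O}),\quad \mathrm{H_2O} = w,\quad w = \mathrm{water} \;\vdash\; \mathrm{Covers}(\mathrm{water}).$

The deduction itself is just substitution of identicals; the only a priori ingredient is premise 3.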

More on privacy, apriority, and two-dimensionalism

Here are, very quickly, some more thoughts on the matters I talked about here and there, inspired by another discussion with Christian.

You don't have to know much about plutonium to be a competent member of our linguistic community. One thing you have to know is that plutonium is the stuff called 'plutonium' in our community. Maybe that alone suffices. Of course, if no one knew more about plutonium than this, the meaning of 'plutonium' would be quite undetermined. To fix the meaning, it would suffice if a few persons, the 'plutonium experts', knew in addition that this element (where each of the experts points at some heap of plutonium) is plutonium.

New hope for linguistic ersatzism?

Are all truths a priori entailed by the fundamental truths upon which everything else supervenes? If 'entailed' means 'strictly implied', this is trivially true. The more interesting question is: Are all truths deducible from the fundamental truths (deducible, say, in first-order logic) with the help of a priori principles?

If yes, then it seems that Lewis' 'primitive modality' argument against linguistic ersatzism (On the Plurality of Worlds, pp.150-157) fails. Recall: Lewis argues that if you take a very impoverished worldmaking language, then even though it will be feasible to specify (syntactically) what it is for a set of sentences to be maximally consistent, it will be infeasible to specify exactly when such a set represents that, e.g., there are talking donkeys. Now if all truths are a priori deducible from fundamental truths, and -- as seems plausible -- fundamental truths are specifiable in a very impoverished language, then we can simply say that a maximal set of such sentences represents that p iff p is a priori deducible from it.
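As a definition schema (a sketch, with S ranging over maximal consistent sets of sentences of the impoverished worldmaking language):

$S \text{ represents that } p \;\;\text{iff}\;\; p \text{ is a priori deducible from } S.$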

Unfortunately, I find the 'primitive modality' argument quite compelling. So, by modus tollens, I have to conclude that not all truths can be a priori deducible from fundamental truths. Does anyone know whether Lewis himself believes the deducibility claim he attributes to Jackson in 'Tharp's Third Theorem' (Analysis 62/2, 2002)?

Moved

After two weeks of homelessness I've moved into my new flat today.

Everything but the beetles cancels out

This is a continuation of my last post and also partly a reply to concerns raised by my tutor Brian Weatherson.

Imagine a small community consisting of three elm experts A, B, and C.

First case: Each of A, B, and C knows enough to determine the reference of 'elm', but their reference-fixing knowledge differs. However, they believe that their different notions of 'elm' necessarily corefer. This is the case Lewis discusses in 'Naming the Colours'.

Semi-public A-intensions

Some days ago, Christian and I had an interesting discussion about two-dimensionalism. While I don't agree with many of his criticisms (forthcoming in Synthese), I do agree that two-dimensionalism works best if both dimensions belong to an expression's public meaning. I think that Christian thinks that this holds only for context-dependent expressions. I think it holds almost universally. But this may be a matter of terminology: for me it is part of the meaning of 'the liquid that actually flows in rivers' that this would not denote H2O if it turned out that XYZ flows in rivers, whereas for Christian this is a metasemantic fact. Anyway, problems for two-dimensionalism arise when the first dimension doesn't belong to public meaning.

Relative rigidities

Don't miss Brian Weatherson's very insightful answer to my posting on rigidity (from which I've just stripped some irrelevant formalities). I happily agree with everything he says, so I'll just add a footnote here.

Many advantages of counterpart theory derive from its denial of the equivalence between 'a=b', 'possibly a=b', and 'necessarily a=b'. For example, this allows a statue to be identical to a lump of gold even though it might not have been. Since, as Weatherson argues, the rejected equivalence is built into the customary ('strong') concept of rigidity, that concept must be weakened to be useful for counterpart theorists.
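The rejected equivalence, in symbols (a sketch, presupposing the standard necessity-of-identity and necessity-of-distinctness principles of quantified modal logic):

$a = b \;\leftrightarrow\; \Diamond(a = b) \;\leftrightarrow\; \Box(a = b).$

Counterpart theory can say that the statue s and the lump l satisfy $s = l$ while $\Box(s = l)$ fails, because the one thing can have different counterparts under the statue-counterpart relation and the lump-counterpart relation.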

Locating the paradox

Brian Weatherson correctly argues that, since premise 2 of argument Z is analytically true, it can be simplified to

Argument Z':
1. If the conclusion of argument Z' is true, then argument Z' isn't sound.
Therefore: Argument Z' isn't sound.

The paradox then arises in two different ways. First, for premise 1 to be false, its antecedent must be true and its consequent false: the conclusion of argument Z' must be true (i.e. argument Z' isn't sound) while at the same time argument Z' is sound.

Second, and more interestingly, the falsity of premise 1 analytically implies that argument Z' is sound, which in turn analytically implies that all premises of argument Z' are true, which implies that premise 1 is true.

This second paradox can be further simplified to:

Argument Z'':
1. Argument Z'' isn't sound.
Therefore: Snow is white or snow isn't white.
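A sketch of where the contradiction comes from (with soundness as validity plus true premises, and noting that Z'' is trivially valid because its conclusion is a logical truth):

$\text{premise true} \;\Rightarrow\; \text{Z'' is valid with a true premise} \;\Rightarrow\; \text{Z'' is sound} \;\Rightarrow\; \text{premise false};$
$\text{premise false} \;\Rightarrow\; \text{Z'' has a false premise} \;\Rightarrow\; \text{Z'' isn't sound} \;\Rightarrow\; \text{premise true}.$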

Rigidity without trans-world-identity?

I wonder how rigidity can be characterized without begging the question against a lot of good semantic theories.

Usually, a rigid expression is defined as an expression which has the same extension in all possible worlds (that is, as an expression with a constant intension, or C-intension). This characterization presupposes literal trans-world identity between extensions, which is bad: it carries a commitment to precise essences of individuals on the one hand and to (presumably abundant) universals as extensions of predicates on the other, thereby ruling out counterpart theories as well as accounts on which tropes or classes are the extensions of predicates.
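In symbols (a sketch): an expression e is rigid iff for all worlds $w, w'$, $\mathrm{ext}_w(e) = \mathrm{ext}_{w'}(e)$, i.e. iff its C-intension is a constant function. The problematic commitment sits in the '=' here, which compares extensions across worlds.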

A paradoxical argument

An argument is called sound if it is deductively valid and its premises are true. Now consider the following argument, which I'll dub 'argument Z':

1. If the conclusion of argument Z is true, then argument Z isn't sound.
2. If the conclusion of argument Z is not true, then argument Z isn't sound.
Therefore: Argument Z isn't sound.

Is argument Z sound? (If not, which premise is false?)

What's wrong with Canberra-planning causation?

If you're asked to explain how your preferred theory of everything -- that is, your brand of physicalism -- can accommodate some entity X, the first thing to try is the Canberra Plan. It goes as follows: First, collect features that could be said to characterise X. If you're lazy, simply collect everything the folk says about X. Next, say that since these features comprise the essence of X, whatever physical entity has (more or less exactly) those features is X. Finally, explain that of course there is such a physical entity, since otherwise statements about X wouldn't be true.
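In the style of Lewis's 'How to Define Theoretical Terms' (a rough sketch of the same recipe, not a quotation): collect the features into a theory $T(x)$; the Plan then identifies

$X = \iota x\, T(x)$ (or: X = whatever physical thing comes close enough to satisfying T),

and the final step amounts to arguing that this description is, near enough, satisfied.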

Hello World!

Within the last 24 hours, this page has been literally flooded by tens of people, most of them following a friendly link at Brian Weatherson's weblog. What's more, I'm now the world's leading authority on higher-order mereological contradictions! Seid umschlungen, Millionen ('Be embraced, you millions').

A remark on Field's evaluative notion of Apriority

There are many ways to update a belief system. For example, 1) believe every proposition that comes to your mind; 2) believe everything that makes you feel good; 3) believe everything Reverend Moon says. In "Apriority as an Evaluative Notion", Hartry Field argues that there is no fact of the matter as to which way is best.

In one sense, this is trivial. Of course the normative question which way you should choose does not have a purely factual answer. Which way you should choose depends on what you want from your belief system.

Convention T and the Redundancy Theory of Truth

A sentence is context-dependent if different utterances of it in different contexts have different truth values. A common kind of context-dependence is contingency. For instance, 'there are unicorns' is true when uttered in a world that contains unicorns, and false otherwise. Now look at Convention T:

'p' is true iff p.

When 'p' is context-dependent, it doesn't really make sense just to call it true. However, Convention T certainly isn't meant to apply only to non-contingent (and otherwise non-context-dependent) sentences. So what shall we make of it? Two possibilities come to mind:

1) 'p', uttered in the present context, is true iff p.

A true contradiction?

Let S be the sentence "S contains a quantifier that does not range over everything".

S (and every utterance of S) is contradictory. Interestingly, it is so even if the quantifier in S really does not range over everything. From which it follows that either there are true contradictions, or "S contains a quantifier that does not range over everything" is not true iff S contains a quantifier that does not range over everything.

Three questions about fundamental particles

First: Are fundamental particles mereological atoms?

Fundamental particles are 'the ultimate constituents of the world', those upon whose properties and relations everything else supervenes. Many of us believe that the intrinsic properties of complex things supervene upon the properties and relations of their constituents. Then maybe the fundamental particles can be identified with the ultimate constituents of the world, if there are any. In fact, when we find that some things are composed out of smaller things, we will usually not call the complex things 'fundamental particles'. I think it is in this sense that fundamental particles are supposed to be indivisible -- not because we lack the means to break them into parts, nor because it is impossible 'in principle' to break them, but simply because they lack (proper) parts.

Idle remarks on Russell's paradox and higher-order entities

Okay, as promised here comes the third and last part of my little series on Rieger's paradox. I will first describe a general version of Russell's paradox, of which Rieger's is a special case. Then I'll discuss whether Frege already falls prey to the paradox through his admission of too many concepts. Whether he does will depend on whether it makes sense to say that there are entities which are not first-order entities. I'm sorry that there is probably nothing new in all this.

First, the general version of Russell's paradox.
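A familiar general schema of this shape, given here only as a sketch: for any binary relation R, first-order logic already proves

$\neg \exists x\, \forall y\, \bigl(R(y,x) \leftrightarrow \neg R(y,y)\bigr);$

Russell's paradox is the instance where R is class membership, and the diagonal reasoning is the same in the other instances.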
