Austin and Chalmers on two tubes cases

If we want to model rational degrees of belief as probabilities, the objects of belief should form a Boolean algebra. Let's call the elements of this algebra propositions and its atoms (or ultrafilters) worlds. Every proposition can be represented as a set of worlds. But what are these worlds? For many applications, they can't be qualitative possibilities about the universe as a whole, since this would not allow us to model de se beliefs. A popular response is to identify the worlds with triples of a possible universe, a time and an individual. I prefer to say that they are maximally specific properties, or ways a thing might be. David Chalmers (in discussion, and in various papers, e.g. here and there) objects that these accounts are not fine-grained enough, as revealed by David Austin's "two tubes" scenario. Let's see.
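
In schematic form (a sketch in my notation, assuming for simplicity that the algebra is atomic):

```latex
% Schematic background picture (notation mine, nothing controversial intended).
% \mathcal{B} is the Boolean algebra of propositions, \Omega the set of its
% atoms ("worlds"); each proposition A is represented by the set of worlds
% below it, and credence is a finitely additive probability measure on \mathcal{B}:
\[
  |A| \;=\; \{\omega \in \Omega : \omega \leq A\},
  \qquad
  P : \mathcal{B} \to [0,1],
\]
\[
  P(\top) = 1,
  \qquad
  P(A \lor B) = P(A) + P(B) \ \text{ whenever } A \land B = \bot .
\]
```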

First some more background. The two tubes case belongs in the category of "Frege cases", where a subject attributes one property to an object x and a possibly different property to an object y, without knowing that x is in fact the same object as y. Such cases spell trouble for the view that all there is to the relevant beliefs is captured by saying which properties the subject attributes to which objects. The trouble is especially pressing for the explanation of behaviour. Why does Pierre try to buy a ticket to London at a (French) travel agency in London? Why does Ralph reveal confidential information to Ortcutt, if he thinks that Ortcutt is a spy? But there is also trouble within epistemology, so to speak. If Pierre is such a good logician, why doesn't he realise that he holds contradictory opinions about London? How can we understand the fact that the ancient Babylonians received evidence confirming that Hesperus is Phosphorus, but not that Hesperus is Hesperus?

In most Frege cases, the answer is that we must somehow take into account that the subject attributes the relevant properties to the relevant objects only under a particular "guise" or "mode of presentation". Ralph attributes spyhood to Ortcutt qua man in the brown hat but not qua man on the beach. In Bayesian epistemology and decision theory, subjective probabilities should therefore be sensitive to guises. Perhaps they attach to pairs of a "singular proposition" and a guise. A simpler view, which I prefer, is to drop the singular propositions and have the probabilities attach to guises only: by the lights of Ralph's doxastic state, it is probable that the man in the brown hat is a spy, and not that the man on the beach is a spy, and that's all we need to know about the case to predict his behaviour and to understand the inferences he does and doesn't draw. (Challenge: find something Ralph does that he wouldn't do if he only had such qualitative beliefs.)

In some Frege cases, however, this response doesn't work. When Perry doesn't know that he himself is the messy shopper, or when one of Lewis's two Gods doesn't know that he is the God on the tallest mountain, their ignorance cannot be modelled as ignorance of a qualitative fact about the world. One might know all qualitative facts about the world and still not know where one is in the world. This is why we need centred qualitative propositions (or centred qualitative propositions paired with singular propositions, if we don't want to go guises-only) as objects of subjective probability. Roughly speaking, each possible universe divides into many centred worlds, one for each possible answer to `who am I?' and `when is now?'. Since a property is, roughly, something that assigns to each world and time a set of individuals, a set of world-time-individual triples can also be understood, roughly, as a property. According to Chalmers, Austin's two tubes case shows that we have to expand centred worlds to also fix the answers to other questions.
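
To spell out the rough identification (a sketch in my notation, not anything Austin or Chalmers commit to):

```latex
% Centred worlds as world-time-individual triples (schematic; notation mine).
% W: possible universes, T: times, I: possible individuals.
\[
  C \;=\; \{\, \langle w, t, i \rangle : w \in W,\ t \in T,\ i \text{ exists at } t \text{ in } w \,\}
\]
% A property assigns to each world and time a set of individuals,
\[
  F : W \times T \to \mathcal{P}(I),
\]
% so it can equally be coded as the set of centred worlds at whose centre it is instantiated:
\[
  F^{*} \;=\; \{\, \langle w, t, i \rangle \in C : i \in F(w,t) \,\}.
\]
```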

So here is the case, as presented in chapter 2 of Austin's What's the Meaning of `This'?.

Smith is the subject of a psychological experiment. He faces an opaque screen with two small eye holes, each of which leads to a tube. Smith has learned to focus his eyes independently of each other. When he looks through the tubes, he sees a red dot (which he dubs `this' or `Harold') with his left eye and an indistinguishable red dot (which he dubs `that' or `Mauve') with his right eye. He doesn't know that he sees the same dot with both eyes, because he can't tell exactly how the tubes and his eyes are oriented.

Now the topic of Austin's book is not the modelling of subjective probability in Bayesian epistemology and decision theory. Rather, his starting point is the idea that there is a relation BELIEF which holds between a subject S and a proposition P whenever one can truly utter `S believes that Q' in a context where Q linguistically expresses the proposition P. Since in a suitable context, one can truly say `Ralph believes that Ortcutt is a spy', and the proposition linguistically expressed by `Ortcutt is a spy' is arguably not a qualitative proposition (because its truth value at counterfactual situations depends only on whether Ortcutt himself is a spy in those situations), it follows that the objects of the BELIEF relation are not always qualitative. Nothing I said above contradicts this claim, since I haven't been talking about BELIEF. So we have to be a bit careful when looking at Austin's arguments against various ways of handling the two tubes case. In particular, his main objection against the idea (attributed to Schiffer, Chisholm and Lewis) that the objects of BELIEF are qualitative properties is that this conflicts with intuitions about the non-qualitative content linguistically expressed by sentences like `that man is a spy'. Obviously, this is not an argument against the proposal that subjective probabilities are defined on an algebra of qualitative properties. This proposal is not a claim about attitude reports or about what is linguistically expressed by this or that sentence.

Our question is whether we can account for Smith's doxastic situation, for the purposes of (say) a broadly Bayesian epistemology and decision theory, in a framework where his subjective probabilities are defined over qualitative properties, or over possible individuals at possible times in (qualitative) possible worlds.

Well, can't we run the standard response to Frege cases? Smith is acquainted in two different ways with the same object, so that when he wonders, `is this = that?', he assigns middling probability to the hypothesis that the object he is acquainted with in the first way is identical to the object he is acquainted with in the second way.
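
As a toy illustration of this standard treatment (a minimal sketch; the hypothesis labels and the number 0.5 are my stipulations, not Austin's), Smith's credences can be modelled as a distribution over qualitative centred hypotheses:

```python
# Toy model of Smith's doxastic state (illustrative only).
# The "worlds" are summarised by qualitative centred hypotheses about how
# Smith is related to the dot(s) he sees; the numbers are stipulated.

credence = {
    "the dot seen with the left eye = the dot seen with the right eye": 0.5,
    "the dot seen with the left eye is distinct from the dot seen with the right eye": 0.5,
}

def prob(proposition, credence):
    """Probability of a proposition, understood as a set of hypotheses."""
    return sum(p for h, p in credence.items() if h in proposition)

# Smith's question `is this = that?' comes out as middling, as the
# standard treatment predicts:
same = {"the dot seen with the left eye = the dot seen with the right eye"}
print(prob(same, credence))  # 0.5
```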

Austin considers the related proposal that for Smith, `this' and `that' are synonymous to different qualitative definite descriptions in Smith's language. Obvious candidates are `the dot I see with my left eye' and `the dot I see with my right eye'. In a footnote (n.15 on p.45), Austin tries to block this move by stipulating that Smith does not identify his two visual fields as left or right, because he falsely believes (Austin should have said: suspects) that he suffers from allesthesia, a condition in which sensations from one body side appear to come from the other.

But this complication is easily circumvented, for example by moving to `the dot I see with my left eye if I don't have allesthesia, otherwise the dot I see with my right eye'. Alternatively, we might try `the dot I see with the eye that seems to me to be my left eye'. Or `the dot I baptized "this"'.

Of course, we don't really have to find a description in Smith's language, and if we do, it doesn't matter (for our context) whether Smith would regard `this' as synonymous with the description. The question is whether we can adequately model Smith's doxastic situation by saying which possible individuals at which worlds and times he can rule out, and which of the remaining candidates he deems more likely than others. Austin himself appears to accept that this is possible when he discusses Stalnaker's view in chapter 5. There he accepts that there are different "diagonal propositions" for `this is red' and `that is red'. He raises several objections to Stalnaker, but he never suggests that these diagonal propositions don't exist.

So Austin's example seems to lend itself well to the standard treatment of Frege cases. It hardly shows that we need to add new coordinates to centred worlds.

But perhaps I've been sticking too closely to Austin's own discussion. Here is Dave Chalmers's rendition of the two tubes case, from p.625 of his Frege's Puzzle paper (2011):

Fred is looking down two tubes, one attached to each eye, and has a symmetrical experience as of two red balls. Fred is objectively omniscient and knows that in fact [the] tubes are connected to one red ball and one orange ball. He also knows his own location in the world and the current time, and so knows which centred world he inhabits. When he entertains the hypothesis That is red and that is orange (using two simultaneous perceptual demonstratives), he is not in a position to determine that it is true and has rational credence 0.5 in it, but there is no verifying centred world.

Lots of changes here. Smith has been replaced by Fred. The tubes have been reoriented to point at two different objects (balls, in fact), one of which is red, the other orange. For some reason, both balls appear red to Fred. Instead of `this' and `that', Fred is "using two simultaneous perceptual demonstratives", both of which are written as `that'. Finally, Fred is omniscient about the world as a whole as well as his own present location. Nevertheless, there is supposed to be some hypothesis to which he assigns probability 0.5. If this were correct, we would indeed have an argument for more fine-grained objects of probability. But is the scenario really coherent?

Fred's stipulated omniscience immediately entails that he does not identify the two balls by different (qualitative) relations or descriptions F1 and F2, in such a way that the object of his uncertainty could be represented as the hypothesis that the F1 is red and the F2 is orange; if he did, his omniscience would settle that hypothesis.

Suppose for concreteness that the ball Fred sees with his left eye is the orange one. Since Fred is omniscient, he knows this. That is, he knows that the ball he sees with his left eye is orange and that the other one is red, although they both appear red. Fred also knows that he does not suffer from allesthesia. So he knows that the ball he sees in what appears to him to be his left visual field really is the left ball, which is orange. Similarly, he knows that the ball he sees in what appears to him to be his right visual field is the red ball. But then what is he uncertain about?

Let's put ourselves into Fred's shoes. The visual experience isn't too hard to imagine. It involves two distinct sensations of what appears to be a red ball at the end of a tube, one for each eye. Hold fixed this visual image, and assume you know that the ball at the end of the left tube is orange and the other one red. Now ask yourself, `is that red and that orange?'. Don't you know the answer?

Perhaps the idea is that for some reason, Fred can't tell his two sensations apart. He can't tell which one seems to be coming from his left eye. In general, he is not aware of any feature F that distinguishes the two sensations -- otherwise he would automatically know which ball is responsible for the F sensation and which for the non-F sensation. (These are straightforward centred facts.)

But then I don't see how his two uses of `that' could determinately attach to particular ones of his sensations, and thereby to particular balls. Suppose the first `that' in Fred's question `is that identical to that?' denotes the orange ball and the second the red one. Surely this is not a primitive fact. What makes it true? Normally, when you wonder `is that identical to that?', your attention shifts from one guise (or sensation) to the other between uttering or thinking the two occurrences of `that'. Not so for Fred, otherwise he would know that the sensation first attended to comes from the orange ball and the other one from the red ball.

If the two `that's are indeterminate in reference, then there is no hypothesis expressed by `that is red and that is orange'. There are several hypotheses, all of which arguably have either probability 1 or 0.

So Austin's two tubes scenario looks like a harmless Frege case. Chalmers's scenario is stipulated to block this response, but it is not at all clear that the scenario is coherent.

One more remark on the form of these arguments. If Fred's probability function is only defined for qualitative centred worlds (say), then it is easy to find all sorts of entities for which it is undefined. One could stipulate that Fred is omniscient about qualitative matters, but nevertheless assigns probability 0.5 to the singular proposition that Aristotle was fond of dogs. It would follow that we must include singular propositions as objects of probability. Similarly, one might stipulate that Fred assigns probability 0.5 to the moon, wherefore we must include the moon as an object of probability. For these proposals to pose any serious threat to the standard centred-worlds conception, it has to be shown that the extended probabilities are needed in order for subjective probabilities to do their job. Recall the case of attitudes de se. The main reason for going beyond uncentred qualitative propositions is that an agent's rational behaviour typically can't be explained if probabilities are only assigned to such uncentred propositions. I don't see any analogous motivation from the two tubes cases.

Comments

# on 25 March 2013, 08:27

i think you're a bit too sanguine about the use of 'left' and 'right'. those are ultimately grounded in demonstratives for two different orientations that aren't descriptively distinguished. try doing it with a symmetrical subject in a symmetrical universe (only asymmetry: the two balls and the perception thereof) to make the issues purer. if fred is objectively omniscient he'll know that the ball he sees with one eye is red and the ball he sees with the other eye is orange, but he won't be able to tie these to 'left' and 'right' unless left/right facts are built in to the world-description in addition (which is just in effect to expand the center). one can also run the case with a subject with two separate visual fields, to avoid left/right issues entirely.

# on 25 March 2013, 12:11

I wasn't only relying on 'left' and 'right'. Any difference between the two sensations would do. I can see a version of the case where Fred has no way at all to distinguish the sensations -- not as left/right, not as attended to first/attended to second, etc. But then I don't understand what he is uncertain about when he wonders whether 'that is orange and that is red'. What is the difference between an agent who believes 'that is orange and that is red' and one who, in the same situation, believes 'that is red and that is orange'? Does the difference show up in their actions or their response to information?

# on 26 March 2013, 09:09

symmetrical brain, two distinct (but symmetrical) experiences of red, one caused by a red thing, one caused by an orange thing. the subject is omniscient about the objective state of world and here/now locating information. the subject entertains the thought <that is red and that is red>. the subject has credence 0 in the conjunctive thought, but is uncertain about each of the conjuncts.

# on 27 March 2013, 01:52

One approach for the Fregean is to insist that there's a sense of 'being the center of a subject's attention' that cannot apply to two things at the same time. Then one mental occurrence of 'that' is evaluated relative to one world/subject/time, and the other to another world/subject/time. (Because the time differs.) Austin's case is pretty odd already (with two non-integrated, undifferentiable fields of vision), so this reply doesn't seem to be too ad hoc.

# on 27 March 2013, 05:26

@brandt: yes, I was thinking along similar lines; more specifically, /if/ you have a mental occurrence of 'that' that determinately picks out one of two things, then your attention cannot equally apply to both of these things at all times.

@dave: are you thinking of a case where each 'that' uniquely picks out one of the ball experiences and thereby one of the balls? In this case, why doesn't her knowledge of the world and herself reveal which 'that' goes with which ball? I can also imagine a case where the subject is aware of two things which she cannot distinguish at all and stipulatively calls them 'a' and 'b', or 'that' and 'that', but without succeeding in attaching each of these labels to a unique one of the objects. Compare: all we know is that there are two people in a room; we might choose to call them 'a' and 'b'; when one of them is leaving the room, it would make no sense to wonder whether it is a or b: each name is ambiguous between both persons (and the ambiguity is correlated so that the joint ambiguity only has two resolutions). Similarly, in this version of the two tubes case, I think it would make no sense to wonder whether 'that is red' is true: since 'that' is ambiguous, it's true on one resolution and false on another.

# on 28 March 2013, 15:32

descriptive knowledge of world and self tells her that she has two symmetrical and simultaneous 'that'-thoughts referring to each of the two balls. it doesn't tell her how to align her introspectively presented thoughts with those two thoughts in order to assess their truth-values. likewise it won't tell her how to align perceptually presented balls with those specified in the world-description. she has no problem numerically distinguishing the perceptually presented balls and she refers to each of them unambiguously in thought with a distinct demonstrative. she makes two distinct judgments, one is true and the other is false; it's reasonable for her to wonder about the truth of each, but she's not in a position to be certain.

# on 29 March 2013, 04:23

Hmm, I see. The subject has a conscious experience that somehow involves two representations of the two balls, each determinately representing a unique ball. The subject can somehow use these representations as building-blocks in two distinct "thoughts" which attribute redness to the relevant ball. One of these more complex representations is true, the other false, depending on which of the balls is red and which orange. Without further information, the subject couldn't rationally have probability 1 attached to one of the representations and 0 to the other. Similarly, when she consciously entertains the two thoughts, she shouldn't rationally feel certain about which is true. When she now receives further information about herself and the world, in the form of further sentence-like complex representations, she won't be able to settle her state of uncertainty unless these representations also involve the original ball representations (directly or indirectly, perhaps by involving representations of the different thoughts involving the representations of the balls).

The story assumes that one can (in principle) form token thoughts involving perceptual representations non-descriptively, without identifying the relevant part of the perceptual content by any feature, such as the feature of being the present focus of attention. To the extent that I understand the assumption, I'm a bit skeptical that this is true for humans, but perhaps this is just an empirical fact. I can imagine creatures that encode perceptual information in a sense-data language that also figures in the encoding of their beliefs or thoughts.

Another preliminary question is why de se or de dicto information doesn't help to settle the subject's ignorance. We know that the information received by the subject must be encoded in a way that involves the original ball representations, but why can't ordinary information be encoded in that way? Let Q and R be the two ball representations. The subject attaches middling probability to the complex representation [Q,RED]. When she learns all truths about the world, why doesn't she learn [Q,RED]? I suppose the background assumption here is that the relevant truths are all qualitative. The subject doesn't learn [MARS,RED], nor [Q,RED], because these involve direct representations of particular objects. But qualitatively, Q and R are indistinguishable. -- This means that the sense-data language must individuate different sensations by brute tags. That seems a bit odd.

Anyway, I think I'm willing to grant that in this framework, subjective probabilities can't always be limited to representations of qualitative de se propositions. But all this is a bit alien to the way I usually think of subjective probabilities. I understand an agent's probability and utility measure mainly as a representation of her behavioural dispositions: to have probability function P and utility function U is (roughly, and mainly) to be disposed to act in a way that maximises the P-expectation of U. It doesn't matter what numbers the agent's cognitive architecture assigns to mental encodings, or whether the agent has a sense of uncertainty when entertaining a certain conscious thought. That's why I asked whether the new dimension of uncertainty brought out by two tubes cases can ever matter for the explanation of (ideally) rational behaviour. Is there anything the agent would do differently if she were not uncertain about <Q,RED> and <R,RED>?
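
(Schematically, and only as a rough gloss in my own notation:)

```latex
% Rough dispositional gloss (schematic): the agent is disposed to choose
% an act a that maximises the P-expectation of U over the states s to which
% she assigns probability -- on my preferred view, qualitative (centred) properties:
\[
  \mathrm{EU}(a) \;=\; \sum_{s} P(s)\, U(a, s),
  \qquad
  \text{choose an } a \text{ that maximises } \mathrm{EU}(a).
\]
```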

There is another important disanalogy between the case of perceptual demonstratives and cases of self-location: when we hold fixed an agent's qualitative beliefs about the world as a whole, there are many possible ways of filling in her self-locating beliefs. By contrast, in the two tubes case, it seems to me that if the agent is omniscient about the qualitative world, there is only one possible distribution of probabilities over [Q,RED] and [R,RED]. If she assigns, say, higher probability to [Q,RED] than [R,RED], then the symmetry is broken: she can then consult her knowledge of the world to find out whether the representation with the higher probability is true. So if the agent is uncertain about [Q,RED] and [R,RED], she must assign them exactly equal probability. So in a sense there isn't really a new dimension of uncertainty or ignorance here: once we've filled in the probabilities for ordinary de se propositions (or representations), we already know the probabilities for [Q,RED] and [R,RED], at least if the agent is sufficiently reflective.
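
(In schematic form, as I'm thinking of it:)

```latex
% Schematic version of the symmetry point (my formulation). The agent knows
% that exactly one of the two representations is true, so
% P([Q,RED]) + P([R,RED]) = 1. If she assigned them unequal probabilities,
% the symmetry would be broken: she could consult her qualitative knowledge
% to identify the more probable representation, and her credence in it would
% go to 0 or 1. So genuine uncertainty forces
\[
  P([Q,\mathrm{RED}]) \;=\; P([R,\mathrm{RED}]) \;=\; 0.5 .
\]
```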

# on 10 April 2013, 20:42

"Is there anything the agent would do differently if she were not uncertain about <Q,RED> and <R,RED>?"

sure. assuming she wants a red ball, she'd confidently reach out with her left hand or her right hand to get it.

"So in a sense there isn't really a new dimension of uncertainty or ignorance here: once we've filled in the probabilities for ordinary de se propositions (or representations), we already know the probabilities for [Q,RED] and [R,RED], at least if the agent is sufficiently reflective."

i don't think the first half follows from the second half. we know what the agent's probabilities must be (0.5 for each), but those probabilities themselves involve a new dimension of uncertainty, since the agent has credence 0 or 1 in all ordinary de se propositions.

# on 11 April 2013, 16:34

A lot rides on this, I think: "'left' and 'right'…are ultimately grounded in demonstratives for two different orientations that aren't descriptively distinguished."

I'm not sure. Those orientations--the demonstratives in which 'left' and 'right' are "ultimately grounded"--do seem to me descriptively distinguished (*as*, in effect, *there* and *there*, if you will!). No visual presentation can, it seems, abstract from that descriptive content. But how to settle the question? What turns on the "descriptivity" of the difference?

Cf. also K. Fine's case about Bruce in *Semantic Relationism* (pp. 36, 42, and 70-1).

# on 12 April 2013, 15:08

"descriptive" could be replaced by "non-demonstrative". if you have to make primitive appeal to demonstratives (over and above "i" and "now"), that makes the point that we need more than the usual resources of the de se to articulate the relevant contents.

# on 14 April 2013, 04:14

The crucial question for me is what we would lose if we model Fred's doxastic state by a probability distribution over qualitative properties. Clearly any model on this level of abstraction fails to capture many aspects of the agent's cognitive system: how the beliefs are stored, how these representations change through reasoning and perception, etc. I don't take this to be a problem. There's always a trade-off in modelling between detail and generality. What would be a problem is if Fred's rational choices could not be explained within the property-valued model, or if the standard tools of Bayesian epistemology and statistics would not explain the dynamics of the property-valued probabilities. (Both of these problems would arise if we used uncentred qualitative propositions as objects of probability.)

So what would we miss about Fred in a model that uses properties as objects of probability rather than more complex values? Perhaps we would no longer capture Fred's feeling of uncertainty directed at representations composed of visual demonstratives. But I don't even want to capture that. I want my model to be neutral on how agents store information, whether they have individual "thoughts", and whether they have phenomenal consciousness. Perhaps we would no longer be able to predict whether Fred will reach out with his left or with his right hand. That I would find more troubling. However, the difference has to be a genuine difference, and in a highly symmetrical universe this becomes doubtful. Most seriously, if the property-valued model allows us to fill in the complex values (because Fred's credence in [Q,RED] can only be 0.5), then there is a sense in which it isn't missing anything at all compared to the alternative model. If in every situation we can reconstruct the P1 values from the P2 values and vice versa -- where P1 is the property-valued credence function and P2 its more fine-grained counterpart -- then it doesn't matter which of P1 or P2 we take as basic; the two models are essentially the same.

# on 15 April 2013, 18:05

Even if the appeal to demonstratives here has to be primitive, they seem to have ineliminable descriptive content--there's a distinctive "quality" that things to the left are presented as having.
