Philosophical models and ordinary language

A lot of what I do in philosophy is develop models: models of rational choice, of belief update, of semantics, of communication, etc. Such models are supposed to shed light on real-world phenomena, but the connection between model and reality is not completely straightforward.

For example, consider decision theory as a descriptive model of real people's choices. It may seem straightforward what this model predicts and therefore how it can be tested: it predicts that people always maximize expected utility. But what are the probabilities and utilities that define expected utility? It is no part of standard decision theory that an agent's probabilities and utilities conform in a certain way to their publicly stated goals and opinions. Assuming such a link is one way of connecting the decision-theoretic model with real agents and their choices, but it is not the only (and in my view not the most fruitful) way. A similar question arises for the agent's options. Decision theory simply assumes that a range of "acts" are available to the agent. But what should count as an act in a real-world situation: a type of overt behaviour, or a type of intention? And what makes an act available? Decision theory doesn't answer these questions.
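
To fix ideas, here is the formal core in one standard formulation (the labels Cr for the agent's credences, U for their utilities, and S for the set of states are mine; the theory itself says nothing about what fixes them in a real agent): the expected utility of an act A is

    EU(A) = \sum_{s \in S} Cr(s) \cdot U(A, s),

and the model says that the agent chooses an available act with maximal expected utility. Everything the questions above ask about (what determines Cr, U, and the set of available acts) lies outside this core.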

In general, there are often many ways of linking a model to reality. One can to some extent develop a model without caring about those links, but at some point they need to be clarified.

The links from model to reality are important for what counts as a counterexample to a model. Take another example: possible-worlds models of belief and knowledge. Such models are widely regarded as (descriptively) inadequate because real agents are not "logically omniscient". But what exactly are the data here, and how do they contradict the models?

People tend to assume a simple connection between whatever is reported by ordinary-language attitude reports and possible-worlds models. In particular, they tend to assume something like this:

(*) A model on which an agent A believes a possible-world proposition P is accurate only if "A believes that S" is true (in ordinary English) for every sentence S that is true at precisely the worlds in P.

For example, people assume that because "Hesperus is Phosphorus" is supposedly true at every possible world, any possible-worlds model on which agents trivially believe the set of all worlds falsely predicts, via (*), that everyone believes that Hesperus is Phosphorus.
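
To make the division of labour vivid, here is a toy sketch of a possible-worlds belief model (in Python; the representation and the names are my own, not part of any standard presentation):

    # Worlds are bare labels; a proposition is a set of worlds.
    worlds = {"w1", "w2", "w3"}

    def believes(doxastic_worlds, proposition):
        # In the model, the agent believes P iff P is true at every world
        # compatible with what they believe, i.e. iff their doxastic
        # worlds are a subset of P.
        return doxastic_worlds <= proposition

    doxastic = {"w1", "w2"}  # the agent's doxastic alternatives

    # Within the model, belief is closed under entailment: if the agent
    # believes P and P is a subset of Q, they believe Q. In particular,
    # they believe the necessary proposition, the set of all worlds.
    assert believes(doxastic, worlds)

Nothing in this sketch mentions English sentences. To get from "the agent believes the set of all worlds" to the English report "the agent believes that Hesperus is Phosphorus", one needs a bridge principle like (*), which pairs each sentence with the set of worlds at which it is true.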

But why should we accept (*)? It is certainly not part of standard possible-worlds models. Rather, the principle proposes a connection between such models and ordinary attitude reports and thus an indirect link between models and reality. But is that the best way to apply or interpret the models? Arguably not.

(My point is not that useful models don't have to be 100% accurate and hence that possible-worlds models can be useful even if they falsely predict logical omniscience. That may be true as well. My point is rather that we first need to discuss whether possible-worlds models really make false predictions about logical omniscience. I'm inclined to think that they don't.)

One main reason why principles like (*) are often taken for granted is that descriptions of formal models often contain ordinary-language expressions — in this case, 'belief' and 'knowledge'. From this, people mistakenly infer that the model is supposed to do double duty as a semantics of the ordinary-language expressions.

That mistaken tendency seems to be almost unique to philosophers. Compare physics. 'Force', 'velocity', 'momentum', and 'mass' had no clear meaning before Newton introduced them into his model, and whatever meaning they did have certainly didn't coincide with their meaning in Newton's model. If contemporary philosophers had been around, they would have concluded that Newton's model makes false predictions about force and velocity. But that's clearly silly. Newton's physics was not supposed to give an analysis of those ordinary-language expressions. Rather, it gave people a grip on quantities they couldn't clearly see before.

One could not have come up with a good physics in the terminology of 16th-century English. Why should the terminology of 21st-century English be perfectly suited to a good epistemology or semantics? I can see no good reason to think it is. If we want to make progress in epistemology or semantics, we should abandon our obsession with the ordinary conceptions of "knowledge", "belief", "meaning", etc.
