My paper "Imaginary Foundations" has been accepted at Ergo (after rejections from Phil Review, Mind, Phil Studies, PPR, Nous, AJP, and Phil Imprint). The paper has been in the making since 2005, and I'm quite fond of it.
The question I address is simple: how should we model the impact of perceptual experience on rational belief? That is, consider a particular type of experience – individuated either by its phenomenology (what it's like to have the experience) or by its physical features (excitation of receptor cells, or whatever). How should an agent's beliefs change in response to this type of experience?
Why care? A few reasons:
First, the question is closely related to several traditional issues in epistemology. Intuitively, many of our beliefs are justified because they are suitably connected to relevant perceptual experiences. But what is that connection? How does a "nonconceptual" experience support a "conceptual" belief? How do we accommodate the holism of confirmation? How can an experience justify a belief about an external world if one could have the same experience even if there were no external world? A good model of how perceptual experiences should affect belief would help to make progress on these issues.
Second, the question is important for the "interpretationist" (formerly known as functionalist) approach to belief. On this approach, what makes a physical state a belief state with such-and-such content is that it plays a certain causal role – the role characteristic of the relevant beliefs. Part of this role links the beliefs (and desires) to choice behaviour. Another part of the role links them to perceptual experience: it specifies how beliefs tend to change through perceptual experience. In the paper, I'm effectively trying to spell out that part of the role.
Third, the question plays an important role in cognitive science. People in artificial intelligence have models of how incoming perceptual stimuli should affect a belief system. Similar models have proved fruitful in the neuroscience of perception. The model I propose looks a lot like these models from neuroscience and artificial intelligence. But the models have strange features that call for philosophical comment. In particular, they seem to imply a form of sense datum theory: perceptual experiences are supposed to provide infallible information about a special realm of sense data. How should we understand these sense data? Aren't there decisive philosophical arguments against sense datum theories?
Fourth, the question might provide the key to the hard problem of consciousness. In the paper I suggest that the appearance of irreducibly non-physical properties in perceptual experience is a predictable artefact of the way our brain processes sensory information.
Fifth, the question is interesting because it's really hard to answer. A common idea in the philosophy of perception seems to be that (1) perceptual experiences represent the world as being a certain way, and that (2) in the absence of defeaters, having the experience makes it rational to believe that the world is that way. But that's hardly a full answer. For one, how does an experience – individuated, say, by its physical features – come to represent the world as being a certain way? Moreover, how should the rest of an agent's belief system change if there is no defeater? How should the agent's beliefs change if there is a defeater? (Surely they should still change in some way.)
In the paper, I assume a Bayesian framework. So the question becomes: how should a given type of experience affect an agent's subjective probabilities? The classical Bayesian answer assumes that perceptual experiences make the agent certain of a particular proposition, so that her probabilities can be updated by conditionalization. But that doesn't seem right. Richard Jeffrey proposed an alternative which allows experiences to convey less-than-certain information. But the relevant less-than-certain information in Jeffrey's model is not just a function of the experience; it also depends on the agent's prior probabilities. So how do the prior probabilities together with an experience determine the input to a Jeffrey update? No-one knows. In fact, I argue that it is impossible to know, because the effect an experience should have on a belief system is not fixed by the experience and the (prior) belief system at all. It depends on a further aspect of the agent's cognitive state.
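To make the contrast concrete, here is a minimal sketch (in Python, using an invented urn example – the particular worlds and weights are illustrative, not from the paper) of the two update rules just mentioned: classical conditionalization, which makes the agent certain of a proposition, and a Jeffrey update, which merely shifts probability over a partition to new, less-than-certain weights.

```python
# Toy possibility space: a ball is "red" or "blue", "large" or "small".
prior = {
    ("red", "large"): 0.3,
    ("red", "small"): 0.3,
    ("blue", "large"): 0.2,
    ("blue", "small"): 0.2,
}

def conditionalize(p, pred):
    """Classical Bayesian update: become certain that pred holds.
    Probability of worlds where pred fails goes to zero; the rest renormalize."""
    mass = sum(q for w, q in p.items() if pred(w))
    return {w: (q / mass if pred(w) else 0.0) for w, q in p.items()}

def jeffrey_update(p, partition_weights):
    """Jeffrey update: reassign total probability over a partition.
    partition_weights is a list of (predicate, new_mass) pairs covering
    the whole space; within each cell, ratios between worlds are preserved."""
    new = {}
    for pred, new_mass in partition_weights:
        old_mass = sum(q for w, q in p.items() if pred(w))
        for w, q in p.items():
            if pred(w):
                new[w] = q * new_mass / old_mass
    return new

is_red = lambda w: w[0] == "red"
is_blue = lambda w: w[0] == "blue"

# Classical: the experience makes the agent certain the ball is red.
post_classical = conditionalize(prior, is_red)

# Jeffrey: a glimpse in dim light makes "red" merely 0.8 probable.
post_jeffrey = jeffrey_update(prior, [(is_red, 0.8), (is_blue, 0.2)])
```

The sketch also displays the problem raised above: the input to `jeffrey_update` is the pair of new weights on the partition (here 0.8/0.2), and nothing in the model says how a given type of experience, together with the prior, determines those weights.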