A few thoughts on Brown (2014), Brown (2020), and the composition of value.
Some propositions (or properties, but let's run with propositions) have value. They are reasons to act one way rather than another. We may ask how this kind of value distributes over the space of propositions.
Since logically equivalent propositions plausibly have the same value, we can picture the propositions as regions in logical space – sets of possible worlds. Now how is the value of a region related to the value of other regions – to its subregions, for example? This is the question Campbell Brown raises in Brown (2014) and Brown (2020).
When something is good, or desirable, or a reason, then this is usually because it has some good (desirable, etc.) features. The thing may also have bad features, but if the thing is good then the good features outweigh the bad features. How does this weighing work? I'd like to say that the total goodness of a thing is always the sum of the goodness of its features. This "additive" view seems to be unpopular in both ethics and economics. I'll try to defend it.
I first need to state the view more precisely.
To begin, I assume that there are ultimate bearers of value. If we're talking about personal desire, this means that there are some things an agent desires "intrinsically" or "non-derivatively". Being free from pain is a common example. If you desire to be free from pain, this is typically not because you really desire something else and regard freedom from pain as a means to that other thing, or as evidence for it. You simply desire being free from pain, and that's the end of the story.
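The additive view can be illustrated with a toy model. All feature names and weights below are invented for illustration; the point is just the shape of the view: list the intrinsically valued (and disvalued) features, and let the total value of a thing be the sum of the values of the intrinsic features it has.

```python
# A toy model of the additive view. The overall value of a thing is
# the sum of the values of its intrinsically good or bad features.
# Feature names and weights are hypothetical.

INTRINSIC_VALUES = {
    "free_of_pain": 10,
    "in_pain": -10,
    "tasty_meal": 3,
}

def total_value(features):
    """Sum the intrinsic values of a thing's features.

    Features not on the list contribute nothing: at best they are
    derivatively valuable, via the intrinsic features they bring about.
    """
    return sum(INTRINSIC_VALUES.get(f, 0) for f in features)

# A thing can have both good and bad features; on the additive view,
# "weighing" is literally addition: the good features outweigh the bad
# ones just in case the sum comes out positive.
print(total_value({"tasty_meal", "in_pain"}))       # 3 + (-10) = -7
print(total_value({"tasty_meal", "free_of_pain"}))  # 3 + 10 = 13
```

Note that on this sketch a thing's value is fully determined by its intrinsically relevant features, which is exactly what makes the view contentious: it rules out "organic unities" whose value is more (or less) than the sum of their parts.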
Gómez Sánchez (2023) asks an important and, in my view, unsolved question: what kinds of properties may figure in the laws of "special science" (chemistry, genetics, etc.)?
For the most part, the patterns captured in special science laws are not entailed by the fundamental laws of physics, nor by the intrinsic powers and dispositions of the relevant objects. Some kind of best-systems account looks appealing: the Weber-Fechner law, the laws of population dynamics, the laws of folk psychology, etc. are useful summaries of pervasive and robust regularities in their respective domains. They are the "best systematisation" of the relevant facts, in terms of desiderata like simplicity and strength.
I've decided to write somewhat regular short pieces on interesting papers I've recently read. This one is about Smithies, Lennon, and Samuels (2022).
Smithies, Lennon, and Samuels (henceforth, SLS) criticise the view that there are a priori connections between having a belief with a certain content and other states that would be rational given this belief. A simple example of the target view says that believing P is being disposed to act in a way that would bring one closer to satisfying one's desires if P were true. A more complicated example of the target view, on which SLS focus, is Lewis's. According to Lewis, for a mental state to be a belief state with such-and-such content, the state must, under normal conditions, be connected in a certain way to behaviour, perceptual experiences, and other propositional attitudes. SLS deny this.
Some people – important people, like Richard Jeffrey or Brian Skyrms – seem to believe that Laplace and de Finetti have solved the problem of induction, assuming nothing more than probabilism. I don't think that's true.
I'll try to explain what the alleged solution is, and why I'm not convinced. I'll pick Skyrms as my adversary, mainly because I've just read Diaconis and Skyrms's Ten Great Ideas about Chance, in which Skyrms presents the alleged solution in a somewhat accessible form.
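A central piece of the alleged solution is Laplace's rule of succession: starting from a uniform prior over an unknown chance of success, the posterior probability of a success on the next trial, after observing k successes in n trials, is (k+1)/(n+2). So a pure probabilist seems to learn from experience. A quick sketch of the rule (the uniform prior is, of course, just one choice of prior, which is part of what's at issue):

```python
from fractions import Fraction

def rule_of_succession(k, n):
    """Posterior predictive probability of a success on trial n+1,
    given k successes in n trials, starting from a uniform prior over
    the unknown chance of success (Laplace's rule of succession)."""
    return Fraction(k + 1, n + 2)

# After 9 successes in 9 trials, the uniform-prior probabilist expects
# the next trial to succeed with probability 10/11:
print(rule_of_succession(9, 9))  # 10/11

# With no observations, the prediction is just the prior mean, 1/2:
print(rule_of_succession(0, 0))  # 1/2
```

De Finetti's contribution generalises this: any exchangeable credence function behaves as if it were a mixture of chance hypotheses updated in this Bayesian way. Whether that amounts to a solution to the problem of induction, rather than a redescription of it, is the question.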
There's a striking tension in Lewis's philosophy. His epistemology and philosophy of mind, on the one hand, leave no room for (non-trivial) a priori knowledge or a priori inquiry. Yet for most of his career, Lewis was engaged in just this kind of inquiry, wondering about the nature of causation, the ontology of sets, the extent of logical space, the existence of universals, and other non-contingent matters. My paper "The problem of metaphysical omniscience" explores some options for resolving the tension. The paper has just come out in a volume, Perspectives on the Philosophy of David K. Lewis, edited by Helen Beebee and A.R.J. Fisher.
A popular idea in recent (formal) epistemology is that an externalist conception of evidence is somehow useful, or even required, to block the threat of skepticism. (See, for example, Das (2019), Das (2022), and Lasonen-Aarnio (2015). The trend was started by Williamson (2000).)
Here's an idea that might explain a number of puzzling linguistic phenomena, including neg-raising, the homogeneity presupposition triggered by plural definites, the difficulty of understanding nested negations, and the data often assumed to support conditional excluded middle.
An utterance of
(1a) We will not have PIZZA tonight
conveys two things. Unsurprisingly, it conveys that we will not have pizza tonight. But it also conveys, due to the focus on 'PIZZA', that we will have something else. By comparison,
Greaves (2013) describes a case in which adopting a single false belief would (supposedly) be rewarded by many true beliefs.
Emily is taking a walk through the Garden of Epistemic Imps. A child plays on the grass in front of her. In a nearby summerhouse are n further children, each of whom may or may not come out to play in a minute. They are able to read Emily's mind, and their algorithm for deciding whether to play outdoors is as follows. If she forms degree of belief 0 that there is now a child before her, they will come out to play. If she forms degree of belief 1 that there is a child before her, they will roll a fair die, and come out to play iff the outcome is an even number. […]
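The trade-off can be made vivid with a back-of-the-envelope accuracy calculation. The scoring rule (a Brier-style squared-error measure) and the assumption that Emily adopts the best credence she can in each 'child i comes out' proposition are my choices for illustration, not part of the quoted case:

```python
def expected_brier_inaccuracy(credence_in_child, n):
    """Expected total squared-error (Brier) inaccuracy of Emily's
    credences, for the two extreme options in Greaves's imp case.

    Assumed for illustration: inaccuracy of credence c in a proposition
    with truth value v (1 or 0) is (c - v)**2, and Emily adopts the
    best available credence in each 'child i comes out' proposition.
    """
    if credence_in_child == 0:
        # Her credence 0 in 'a child is before me' is maximally wrong
        # (inaccuracy 1), but all n imps come out, so she can be
        # certain of each 'child i comes out' at no accuracy cost.
        return 1.0 + n * 0.0
    elif credence_in_child == 1:
        # Her credence 1 is perfectly accurate, but each imp now comes
        # out with chance 1/2, so her best credence in each is 1/2:
        # expected inaccuracy 0.5*(0.5-1)**2 + 0.5*(0.5-0)**2 = 0.25.
        return 0.0 + n * 0.25
    else:
        raise ValueError("only the two extreme options are modelled")

# With, say, n = 10 imps in the summerhouse, the false belief wins:
print(expected_brier_inaccuracy(0, 10))  # 1.0
print(expected_brier_inaccuracy(1, 10))  # 2.5
```

On these assumptions the single false belief minimises expected inaccuracy whenever n > 4, which is what makes the case a prima facie problem for the idea that rational belief maximises expected accuracy.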
Neg-raising occurs when asserting ¬Fp (or denying Fp) tends to communicate F¬p. For example, 'John doesn't believe that he will win' tends to communicate that John believes that he won't win.
There appears to be no consensus on why this happens. Some think ¬Fp really does entail F¬p. Others think the effect is an implicature. Still others think it's caused by a presupposition of opinionatedness or "settledness": when we discuss whether Fp holds, we presuppose that F holds either of p or of an alternative to p; denying Fp therefore commits us to F¬p.
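The logic behind the presuppositional story is easy to check mechanically. Treating Fp and F¬p as independent atoms (a simplifying assumption of mine), the settledness presupposition Fp ∨ F¬p leaves only one option once Fp is denied:

```python
from itertools import product

# Model 'Fp' and 'F not-p' as two independent atoms and enumerate all
# truth-value assignments. 'settled' is the opinionatedness
# presupposition: F holds either of p or of not-p.
for Fp, Fnp in product([False, True], repeat=2):
    settled = Fp or Fnp
    if settled and not Fp:
        # Every settled assignment on which Fp fails makes F not-p
        # true, so denying Fp (under the presupposition) commits the
        # speaker to F not-p.
        assert Fnp
```

This only shows that the entailment goes through *given* settledness; the substantive question is why (and when) the settledness presupposition is in force.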