
Writing with GitHub Copilot

I've been using GitHub Copilot for a while now to write philosophy and logic texts. It's definitely useful for more technical writing. Here you can see how it fills in a clause in a proof by induction:

Champollion, Ciardelli, and Zhang on de Morgan's law

Champollion, Ciardelli, and Zhang (2016) argue that truth-conditionally equivalent sentences can make different contributions to the truth-conditions of larger sentences in which they embed. This seems obviously true. 'There are infinitely many primes' and Fermat's Last Theorem are truth-conditionally equivalent, but 'I can prove that there are infinitely many primes' is true, while 'I can prove that there are no integers a, b, c, and n > 2 for which a^n + b^n = c^n' is false. Champollion, Ciardelli, and Zhang (henceforth, CCZ) have a more interesting case in mind. They argue that substituting logically equivalent sentences in the antecedent of a subjunctive conditional can make a difference to the conditional's truth-value.

Christensen on ideal rationality

I want to say something about a passage in Christensen (2023) that echoes a longer discussion in Christensen (2007).

Here's a familiar kind of scenario from the debate about higher-order evidence.

Wilhelm and Lando on centred credence and chance

Wilhelm (2021) and Lando (2022) argue that the Sleeping Beauty problem reveals a flaw in standard accounts of credence and chance. The alleged flaw is that these accounts can't explain how attitudes towards centred propositions are constrained by information about chance.

I assume you remember the Sleeping Beauty problem. (If not, look it up: it's fun.) Wilhelm makes the following assumptions about Beauty's beliefs on Monday morning.

First, Beauty can't be sure that it is Monday:
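In symbols, writing Cr for Beauty's credence function on Monday morning (the notation is mine; Wilhelm's own formulation may differ):

Cr(Monday) < 1.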

Gustafsson on decision-making under ignorance

Decision theory textbooks often distinguish between decision-making under risk and decision-making under uncertainty or ignorance. The former is supposed to arise in situations where the agent can assign probabilities to the relevant states, the latter in situations where they can't.

I've always found this puzzling. Why would a decision maker be unable to assign probabilities (even vague or indeterminate ones) to the states? I don't think there are any such situations.

I haven't looked at the history of this distinction, but I suspect it comes from von Neumann, who probably had no concept of subjective probability. If the only relevant probabilities are objective, then of course it may happen that an agent can't make their choice depend on the probability of the states because these probabilities may not be known.
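To make the textbook contrast concrete, here is a toy sketch (my own example with made-up payoffs, not anything from the post or from Gustafsson): under "risk" the agent maximizes expected utility relative to a probability assignment, while the rules proposed for "ignorance", such as maximin, make no use of probabilities at all.

    # Toy decision problem: two acts, two states, made-up utilities.
    utilities = {
        "umbrella":    {"rain": 5, "sun": 3},
        "no_umbrella": {"rain": 0, "sun": 10},
    }

    def expected_utility_choice(probabilities):
        """Decision under risk: pick the act with the highest expected utility."""
        return max(utilities, key=lambda act: sum(
            probabilities[state] * u for state, u in utilities[act].items()))

    def maximin_choice():
        """Decision under 'ignorance': pick the act whose worst outcome is best."""
        return max(utilities, key=lambda act: min(utilities[act].values()))

    print(expected_utility_choice({"rain": 0.3, "sun": 0.7}))  # -> no_umbrella
    print(maximin_choice())                                    # -> umbrella

The second rule is meant for agents who can't run the first; my worry above is that there are no such agents.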

DiPaolo on second best epistemology

Covid finally caught me, so I fell behind with everything. Let's try to get back to the blogging schedule. This time, I want to recommend DiPaolo (2019). It's a great paper that emphasizes the difference between ideal ("primary") and non-ideal ("secondary") norms in epistemology.

The central idea is that epistemically fallible agents are subject to different norms than infallible agents. An ideal rational agent would, for example, never make a mistake when dividing a restaurant bill. For them, double-checking the result is a waste of time. They shouldn't do it. We non-ideal folk, by contrast, should sometimes double-check the result. As the example illustrates, the "secondary" norms for non-ideal agents aren't just softer versions of the "primary" norms for ideal agents. They can be entirely different.

Cariani on the modal future

I've been reading Fabrizio Cariani's The Modal Future (Cariani (2021)). It's great. I have a few comments.

This book is about the function of expressions like 'will' or 'gonna' that are typically used to talk about the future, as in (1).

(1) I will write the report.

Intuitively, (1) states that a certain kind of writing event takes place – but not right here and now. 'Will' is a displacement operator, shifting the point of evaluation. Where exactly does the writing event have to take place in order for (1) to be true?

Here's a natural first idea. (1) is true as long as a relevant writing event takes place at some point in the future. This yields the standard analysis of 'will' in tense logic:
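Roughly, in my own formulation (it's the familiar Priorean clause; Cariani's notation may differ):

'WILL p' is true at a time t iff p is true at some time t' later than t.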

Dietrich and List on reasons

Let's return to my recent explorations into the formal structure of reasons. One important approach that I haven't talked about yet is that of Dietrich and List, described in Dietrich and List (2013a), Dietrich and List (2013b), and Dietrich and List (2016).

Gallow on causal counterfactuals without miracles and backtracking

Gallow (2023) spells out an interventionist theory of counterfactuals that promises to preserve two apparently incompatible intuitions.

Suppose the laws of nature are deterministic. What would have happened if you had chosen some act that you didn't actually choose? The two apparently incompatible intuitions are:

(A1) Had you chosen differently, no law of nature would have been violated.

(A2) Had you chosen differently, the initial conditions of the universe would not have been changed.

Rejecting one of these intuitions is widely thought to spell trouble for Causal Decision Theory. Gallow argues that they can both be respected. I'll explain how. Then I'll explain why I'm not convinced.
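For orientation, here is a bare-bones sketch of what an intervention in a structural-equations model does; the model and code are my own toy illustration, not Gallow's construction. The counterfactual act is evaluated by replacing the structural equation for the act variable, while the remaining equations and the exogenous initial conditions are left untouched.

    # Toy structural-equations model: initial conditions (exogenous), then an
    # act and an outcome, each determined by a structural equation.
    equations = {
        "act":     lambda v: "actual act" if v["initial"] == "actual past" else "other act",
        "outcome": lambda v: "actual outcome" if v["act"] == "actual act" else "other outcome",
    }

    def solve(equations, initial):
        """Compute all variable values from the exogenous initial conditions."""
        values = {"initial": initial}
        values["act"] = equations["act"](values)
        values["outcome"] = equations["outcome"](values)
        return values

    def intervene(equations, variable, value):
        """Replace the equation for `variable` with the constant `value`,
        leaving the other equations and the initial conditions untouched."""
        new_equations = dict(equations)
        new_equations[variable] = lambda v: value
        return new_equations

    print(solve(equations, "actual past"))
    # {'initial': 'actual past', 'act': 'actual act', 'outcome': 'actual outcome'}

    print(solve(intervene(equations, "act", "other act"), "actual past"))
    # {'initial': 'actual past', 'act': 'other act', 'outcome': 'other outcome'}

Whether this kind of surgery on the act equation really respects (A1) and (A2) is exactly what's at issue in the post.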

Kocurek on chance and would

A lot of rather technical papers on conditionals have come out in recent years. Let's have a look at one of them: Kocurek (2022).

The paper investigates Al Hájek's argument (e.g. in Hájek (2021)) that "chance undermines would". It begins with a neat observation.
