I wanted to call attention to a relevant and underappreciated paper by John Leslie: "Ensuring Two Bird Deaths with One Throw" (Mind, 1991) <jstor.org/stable/2254984>. If you have a perfect clone... then by killing a bird with a stone, you ensure that your clone does likewise.

Leslie calls this phenomenon "quasi-causation" and applies it to Newcomb's Problem, among other issues.

I have Garson's book (at home), but didn't remember that it mentions this issue.

I'll look up the Greco vs Carter debate. The same point arguably arises for Lewis's account of knowledge in "Elusive Knowledge". It is sometimes claimed (e.g. by Williamson) that the logic of Elusive Knowledge is S5. That would be (almost) correct on an absolutist reading of Lewis's rules, on which the line between ignored and non-ignored worlds depends only on the context of utterance. ("Almost", because the logic would actually be KD45.) But some of Lewis's rules are relativist. For example, the rule of actuality says that the /subject/'s world is never properly ignored. This ensures that the accessibility relation is reflexive, and the relativist rules also break symmetry, among other things. For example, if subject 1 is looking at a zebra and subject 2 has the same experiences but is looking at a disguised mule, then we can properly ignore subject 2's world when we talk about subject 1, but not when we talk about subject 2. By contrast, when we talk about subject 2, we can never properly ignore subject 1's world. The subject's beliefs and stakes also matter in Lewis's account.
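
A toy check of that asymmetry, under my own assumed encoding (not anything from Lewis's text): let R(a, b) mean "world b may not be properly ignored when we talk about the subject of world a", with w1 for subject 1's world (real zebra) and w2 for subject 2's world (disguised mule). The sketch only verifies that such a relation can be reflexive without being symmetric or euclidean, which is what blocks S5:

```python
# Toy model of the zebra/mule example. The encoding of Lewis's rules as this
# particular relation is an illustrative assumption: w1 is ignorable from w2's
# context is denied, w2 is ignorable from w1's context is allowed, and each
# subject's own world is never ignorable (rule of actuality).
worlds = ["w1", "w2"]
R = {("w1", "w1"), ("w2", "w2"), ("w2", "w1")}

reflexive = all((w, w) in R for w in worlds)
symmetric = all((b, a) in R for (a, b) in R)
euclidean = all(
    (b, c) in R
    for a in worlds for b in worlds for c in worlds
    if (a, b) in R and (a, c) in R
)

print(reflexive, symmetric, euclidean)  # True False False
```

Reflexivity holds (actuality), but symmetry and euclideanness fail, so neither the S5 nor the KD45 axioms are validated by this kind of relativist relation.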


http://fitelson.org/piksi/deontic_logic_problems.pdf

There is a bit of discussion of the condition itself in Garson's "Modal Logic for Philosophers" (p.109). (If you are interested, I can e-mail you a PDF screenshot.)

I wish I had a discussion like yours to reference!

I see that on a population-level statistical average, purely selfish FDT agents often do better than purely selfish CDT agents. I said as much in the post, so I don't think we disagree here. Except that I don't think average population-level success among selfish agents is an adequate test for the right decision theory. A somewhat more adequate test, I think, is to look at which theory gives better results across a wide range of decision problems, no matter how these problems came about. On that measure, selfish CDT agents generally do better than selfish FDT agents. But of course I can't prove to you that my test is more adequate.


Or maybe we’re using the versions of the problems where the blackmailer is not entirely predictable and might still blackmail the functional decision theorist (but be more likely to blackmail the causal decision theorist), or where the Newcomb predictor is not a perfect predictor but only very likely to predict correctly, or where the other prisoner twin might be hit by a cosmic ray with low probability and not make the same decision as you. If so, situations where CDT does better than FDT are less likely than situations where FDT does better, so FDT still comes out ahead.
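
The arithmetic behind that last claim can be sketched for the noisy Newcomb case; the 0.99 accuracy figure is an assumed illustration, not a number from the post:

```python
# Expected payoffs in the noisy Newcomb problem. The predictor's accuracy
# p = 0.99 is an illustrative assumption; payoffs are the standard $1,000
# (transparent box) and $1,000,000 (opaque box).
p = 0.99
ev_one_box = p * 1_000_000                # box is full iff you were predicted to one-box
ev_two_box = (1 - p) * 1_000_000 + 1_000  # box is full only if the predictor erred
print(ev_one_box > ev_two_box)  # True: one-boxing comes out far ahead
```

The situation where two-boxing wins (a mistaken prediction) is simply much rarer than the situation where one-boxing wins.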

Let’s assume that we’re using the deterministic version of each of these problems, rather than the probabilistic version: the blackmailer is guaranteed to know what decision theory you use and to act accordingly, the Newcomb predictor is guaranteed to predict correctly, your twin is guaranteed to make the same decision as you, and your father is guaranteed to procreate if and only if you do.

Now let’s consider the blackmail problem. The post says, “If you face the choice between submitting to blackmail and refusing to submit (in the kind of case we’ve discussed), you fare dramatically better if you follow CDT than if you follow FDT.” This is true. The problem is that, if you are being blackmailed, this means that you are not going to follow FDT. If you were going to follow FDT, the blackmailer would not have blackmailed you. The fact that you have been blackmailed means you can be 100% certain that you will not follow FDT. In itself, being 100% certain that you will not follow FDT does not prevent you from following FDT. But it does make the situation where you follow FDT and come off worse impossible, which is relevant to our determination of which decision theory is better.
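
A minimal sketch of this point, with made-up payoff numbers: conditional on being blackmailed, submitting beats refusing, but the refuser's bad branch is never reached, because the perfect blackmailer only targets submitters.

```python
# Illustrative deterministic blackmail model; the payoff numbers are my own
# assumptions, not from the post. The blackmailer only targets agents who
# would submit; an FDT agent's refusal is foreseen, so they are never
# blackmailed at all.
PAY_UP, SCANDAL, NO_BLACKMAIL = -100, -1000, 0

def outcome(policy):
    blackmailed = (policy == "submit")  # perfect predictor targets submitters only
    if not blackmailed:
        return NO_BLACKMAIL
    # The branch "blackmailed and refusing" is unreachable: that is exactly
    # the impossible situation where you follow FDT and come off worse.
    return PAY_UP if policy == "submit" else SCANDAL

print(outcome("submit"), outcome("refuse"))  # -100 0
```

The CDT-style submitter ends up at -100; the FDT-style refuser ends up at 0, because the only world in which refusing costs anything never occurs.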

Let’s consider the Newcomb problem. If the Newcomb predictor is guaranteed to predict your choice correctly, it is impossible for an agent using CDT to see a million in the right-hand box.

It never does any good to dismiss a logical inconsistency and to consider what happens anyway.

What happens if we ignore this and suppose that the CDT agent does see a thousand in the left-hand box and a million in the right-hand box? Then using this supposition we can prove that they will get both amounts if they two-box. But since they are a CDT agent, we know that they will two-box, therefore there is nothing in the right-hand box, so we can prove that they will only get a thousand if they two-box. But suppose that they one-box instead. Since they are a CDT agent, we know that they will two-box, so we know that there is nothing in the right-hand box, so we can prove that if they one-box they will get nothing. However, we know that they see a million in the right-hand box, so we can prove that if they one-box, they will get a million. So we can prove that they should one-box, and we can prove that they should two-box. At this point we can conclude that a million and nothing are the same thing, and that a thousand is equal to a million plus a thousand. With enough “if”s, we could put Paris in a bottle.
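
The deterministic case can be sketched in a few lines (standard payoffs assumed). The inconsistent supposition above, a two-boxer who sees a full opaque box, corresponds to a state this model cannot even construct, since the box's contents are a function of the agent's policy:

```python
# Deterministic Newcomb problem with a perfect predictor; payoffs are the
# standard $1,000 (transparent box) and $1,000,000 (opaque box).
def newcomb(policy):
    opaque = 1_000_000 if policy == "one-box" else 0  # perfect prediction
    transparent = 1_000
    return opaque if policy == "one-box" else opaque + transparent

print(newcomb("two-box"), newcomb("one-box"))  # 1000 1000000
```

There is no input for which a two-boxer walks away with the million: "opaque box full" and "agent two-boxes" cannot hold at once.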

The procreation example is harder to prove inconsistent because it relies on an infinite regress.

Here’s a first way to resolve it. Should I procreate? If I do, my life will be miserable. But my father followed the same decision theory I do, so if I choose not to procreate, that means my father will have chosen not to procreate. So I will not exist. So I can prove that, if I end up choosing not to procreate, that means I do not exist. However, I do exist. That’s a contradiction. I guess that means I will not choose not to procreate. Knowing that I will not make that choice does not in itself prevent me from making the choice though. Should I choose not to procreate anyway? Well, I can prove that if I do not procreate, then I will not exist, and that if I do, then my life will be miserable. A miserable life is better than not existing, so I should procreate. However, I know that I exist, and that is the consequent of the implication “if I do not procreate, then I [will] exist”, so the implication is true, whereas if I choose to procreate I still exist but my life is miserable. A miserable life is worse than a non-miserable life, so I should not procreate. Oops, I can prove that I should procreate and that I should not procreate? That’s a contradiction, and this one doesn’t rely on the supposition that I made any particular choice. The world I am living in must be inconsistent.

We can also solve it by directly addressing the infinite regress.

Should I procreate? If I do, my life will be miserable. But my father followed the same thought process I do and would have made the same decisions, so if I choose not to procreate, that means my father will have chosen not to procreate. Then I would not exist, and a miserable life is better than not existing, so I should procreate.

Why did my father procreate, though, if that made his life miserable?

Oh, right. My grandfather followed the same thought process that my father did, so if he had chosen not to procreate, that would mean his father had chosen not to procreate, and so he would not exist either. Since he too considered a miserable life better than not existing, he chose to procreate.

Why did my grandfather procreate, though, if that made his life miserable? What about my great-grandfather? What about—

The recursive buck stops *here*.

My {The Recursive Buck Stops Here}-great-…-great-grandfather did not choose to procreate because that would have made his life miserable. Therefore I do not exist. That’s a contradiction. The assumption that each generation of ancestry uses FDT and only exists if the previous one chose to procreate is inconsistent with the assumption that any of them exist. No FDT agent can ever face this problem, and no designer can ever have to pick a decision theory for an agent that could have to face this problem. And if we only assume that it is unlikely that the father made a different decision from you, and not that it is certain that he did not, then FDT makes it less likely that you will not exist, and so it again comes out ahead of CDT.
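
The chain can be sketched as a recursion, under the assumption stated above that every generation runs the same decision procedure and exists only if the previous generation procreated. The finite base case (a founder who exists unconditionally) and the function names are my own illustrative choices:

```python
# Sketch of the procreation chain. Every generation shares one decision
# (the FDT-style subjunctive link), and generation n exists only if
# generation n-1 existed and chose to procreate.
def exists(generation, decision):
    """Does generation n exist, given the decision every generation makes?"""
    if generation == 0:
        return True  # the recursive buck stops here: a founder exists
    return exists(generation - 1, decision) and decision == "procreate"

print(exists(5, "don't"))      # False: nobody downstream exists,
                               # contradicting the premise that I exist
print(exists(5, "procreate"))  # True: everyone exists, miserably
```

If the shared decision were "don't procreate", no later generation would exist, which contradicts the stipulation that the agent facing the problem exists; so the problem's premises can only be satisfied by a chain of procreators.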

There is one category of situations (the one exception I mentioned) where FDT can leave you worse off than CDT, and that is what happens when “someone is set to punish agents who use FDT, giving them choices between bad and worse options, while CDTers are given great options”. FDT can change your decisions to make them optimal, but it can’t change the initial decision theory you used to make the decisions. It can only pick decisions identical to those of another decision theory. That doesn’t prevent an environment from knowing what your initial decision theory was and punishing you on that basis. This is unsolvable by any decision theory. Therefore it can hardly be taken as a point against FDT.

I said that it never does any good to dismiss a logical inconsistency. I want to clarify that this is not the same as saying that we should dismiss thought experiments because their premises are unlikely. “Extremism In Thought Experiment Is No Vice”. Appealing to our intuitions about extreme cases is informative. But logical impossibility is informative too, and is what we care about when comparing decision theories. Nate Soares has claimed “that *all* decision-making power comes from the ability to induce contradictions: the whole reason to write an algorithm that loops over actions, constructs models of outcomes that would follow from those actions, and outputs the action corresponding to the highest-ranked outcome is so that it is contradictory for the algorithm to output a suboptimal action.”