On 1. I still think (E3) is satisfied: there's no non-constant function f for

which H = f(B) throughout the selected worlds. Perhaps you're suggesting

that there might be a non-constant and non-deterministic function f for which

H=f(B) throughout the selected worlds, so that a plausible strengthening of (E3)

is violated. In that case, what if the bet affects which hand I use to flip the

coin, but each hand comes with the same probability over upwards and angular

velocity? My intuition is that the Morgenbesser conditional is still false.

On 2. Yes, we could let other things vary in the closest worlds. (I mean

really other things, not silly things like your V.) Would be good to have an

independent motivation for this, though.

On 3 and 4. I'm afraid I don't really understand your response. Why would E

taking any value presuppose that R=1? If no cars enter (E=0) then surely there

are no cars on the road afterwards (R=0). Also, isn't the problem precisely that

we *want* the ramp variable E to wiggle with R?

On 6-8. I think we agree that the variables in a causal model should satisfy

some substantive condition of naturalness/non-disjunctiveness/intrinsicality.

You don't mention any such condition in the paper, and I worry that the

condition would rule out M, given that M=1 says that the actual laws are

violated at some time somewhere. It helps to assume that M=1 describes a more

local divergence miracle. But I thought B is a bet on some violation or other,

not on a particular miracle.

On 9. I see that your extra requirement would help. But isn't it a strange

requirement? It's natural to think that a miraculous world has a different

causal structure than our world. If that's true, then your requirement says that

for a causal model to be correct here at our world it must also be correct at

counterfactual worlds with a different causal structure.

On 10. I'm not convinced that representing the world's entire causal structure

would require modelling overlapping variables. Also, aren't you betting that the

"worthwhile research project" won't succeed? Suppose we found a way to model the

entire structure between M, C, B, and W, with a formula for intervening on B.

Given that B=1 is logically incompatible with C=1 & M=0, the result of a B=1

intervention will either have C=0, or M=1, or it will be a logically impossible

configuration of variables. All three possibilities are intuitively problematic,

and the three views of conditionals and CDT that result from them have been

defended by other people. I thought it was important to your proposal that it doesn't fall into any of these camps.


What I said above was wrong, because on a 'miraculous' understanding of the selection function, s(B=1,@) will be a world at which L=0 (even though it's not the case that L(0), which, I think, was the source of my confusion), and so the equation W:=B*C*L will be true.

I think what's going on with the example is that one of the simplifications I made for the 2023 paper has led to trouble. The issue is that, in the worlds in s(L=B=C=1,@), the equation W=L*B*C is true (because the laws are L(0), the initial conditions are C, the holocaust didn't happen, and therefore I win the bet). Nonetheless, the structural equation W:=L*B*C is not correct.

I discuss this kind of issue in sections 3.1 and 4.1 of my 2016 paper "A Theory of Structural Determination". There, I require (very roughly) that the conditions (E1), (E2), and (E3) not only hold for the 'base' world, but also for every world you can reach by making counterfactual suppositions about the variables on the right-hand side. With this requirement in place, the equation W:=L*B*C won't be correct, since at any world w in s(L=B=C=1,@), L(0) and C are true, which means that the holocaust didn't happen. So, on a 'miraculous' understanding of the selection function, s(B=1,w) includes worlds at which W=1 even though L=0. (Alternatively, on a 'backtracking' understanding, s(B=1, w) includes worlds at which W=1 even though C=0.) So the equation W:=L*B*C is not correct at w.
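The shape of that strengthened requirement can be sketched abstractly. Everything below is a toy stand-in I've built by hand, not the models or selection function from the example: correctness at a world demands that the equation hold throughout the worlds selected under every supposition about the right-hand-side variables, and the strengthened condition additionally demands that the equation be correct at each of those selected worlds.

```python
from itertools import product

# Worlds are frozensets of (variable, value) pairs; all are hypothetical.
def world(**vals):
    return frozenset(vals.items())

def val(w, var):
    return dict(w)[var]

AT = world(B=0, W=0)   # the 'base' world
W1 = world(B=1, W=1)
W2 = world(B=0, W=1)   # a world that violates W = B

def s(supp, w):
    # A hand-built toy selection function, defined only where needed.
    table = {(AT, 0): [AT], (AT, 1): [W1],
             (W1, 0): [W2], (W1, 1): [W1],
             (W2, 0): [W2], (W2, 1): [W1]}
    return table[(w, supp["B"])]

def plain(lhs, rhs, f, w):
    # Base-world correctness: the equation holds throughout the worlds
    # selected under every supposition about the rhs variables.
    return all(val(w2, lhs) == f(*(val(w2, v) for v in rhs))
               for a in product([0, 1], repeat=len(rhs))
               for w2 in s(dict(zip(rhs, a)), w))

def correct(lhs, rhs, f, w, seen=None):
    # Strengthened correctness: plain correctness at w, plus correctness
    # at every world reachable by counterfactual suppositions on the rhs.
    seen = set() if seen is None else seen
    if w in seen:
        return True
    seen.add(w)
    for a in product([0, 1], repeat=len(rhs)):
        for w2 in s(dict(zip(rhs, a)), w):
            if val(w2, lhs) != f(*(val(w2, v) for v in rhs)):
                return False
            if not correct(lhs, rhs, f, w2, seen):
                return False
    return True

print(plain("W", ["B"], lambda b: b, AT))    # True: passes at the base world
print(correct("W", ["B"], lambda b: b, AT))  # False: fails at a reachable world
```

The toy equation W := B passes the base-world check but fails the strengthened one, because a world reachable from the base world (W1, and from there W2) falsifies it — the same pattern as W:=L*B*C above.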

(In the original draft of the paper (https://philarchive.org/archive/GALCCWv3), I warned about this issue in footnote 32 on page 17, but I seem to have deleted the warning before publication.)

1. How the coin lands (heads or tails) is influenced by the upwards and angular velocity with which it is flipped. In this version of the case, the upwards and angular velocities of the coin are influenced by which hand it's flipped with, which is in turn influenced by whether you take the bet. So there's influence leading from whether you take the bet to how the coin lands. This influence won't be deterministic, since the influence of the hand on initial velocities won't be deterministic. So we'll need a more general treatment of indeterministic relations of influence.

However, the existence of this larger model, with the path of influence leading from whether you take the bet to how the coin lands, is enough to show that the equation W := B * H is not correct. That equation says that there's no influence from B to H, which is not true. So the equation violates condition (E3) of "Causal Influence".
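The structure of that larger model can be simulated. This is only a sketch under invented assumptions — the hand-dependent velocity distributions and the landing rule are toy stand-ins, not anything from the paper. The point is purely structural: once the two hands impart different (stochastic) velocity distributions, interventions on B make a difference to the distribution over H, which is exactly what an (E3)-style "no influence" claim would rule out.

```python
import random

random.seed(0)

# Toy simulation of the path of influence B -> hand -> velocities -> H.
# Every number below is hypothetical.
def hand(B):
    # Which hand you flip with depends on whether you take the bet.
    return "right" if B == 1 else "left"

def velocities(h):
    # The hand's influence on the initial velocities is indeterministic.
    mu = 2.0 if h == "right" else 1.0
    return random.gauss(mu, 0.5), random.gauss(mu, 0.5)

def lands_heads(up, ang):
    # A crude stand-in for the coin's dynamics.
    return up * ang > 2.0

def p_heads(B, n=20_000):
    return sum(lands_heads(*velocities(hand(B))) for _ in range(n)) / n

# The two distributions differ, so intervening on B makes a difference to H:
print(p_heads(0), p_heads(1))
```

In the correspondent's variant, where both hands come with the same velocity distribution, `velocities` would ignore its argument and the two printed frequencies would coincide — which is why that variant is the harder case.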

2. Insofar as taking the bet in precisely the way you actually do isn't an option, I don't think this worry will affect the decision theory. But the English conditional "if you hadn't taken the bet in precisely that way, there would have been a miracle" still seems false. However, I don't see why there can't be multiple not-too-different ways for you to choose the bet in precisely the way you actually did. Those other ways will involve slight changes to other states of the world.

You might want to open the door to all kinds of variables, including one---call it "V"---which only takes on the value v at the actual world. This makes it all the more important to use a selection function which isn't strongly centred, since, if we were to use a strongly centred selection function, we'd say that literally every variable causally determines the value of V, since whenever U is actually u, s(U=u, @) = { @ } implies that V=v, and s(U=u', @) implies V=/=v.

3. I see the need for a temporal 'ramp' in some conditionals as a species of a more general problem. Suppose there's a switch connected to a lever connected to a duct. Consider a variable which describes the position of the switch, S=0 if the switch is down, S=1 if the switch is up. In fact, the position of the switch causally influences whether the duct is open. When the switch is down, that pushes the lever up, which pulls the duct open. When the switch is up, that pushes the lever down, which closes the duct. But now there are two different kinds of worlds we could include in s(S=1,@). We could consider worlds where the switch is moved but the lever is kept in place and is no longer connected to the switch. If we do that, then wiggling the switch won't wiggle the duct. Alternatively, we could consider worlds in which the switch is moved and the lever remains connected to the switch. In that case, wiggling the switch will wiggle the duct.

This strikes me as the same problem, though it doesn't involve time at all. My solution is to say that variables carry presuppositions. They only take on values when those presuppositions are met. There's a variable for the switch's position which takes on a value even when the switch isn't connected to the lever. And there's another which takes on a value only if the switch is connected to the lever. The second one is the one which influences whether the duct is open or not. The presuppositions we are usually inclined to make about a system influence which kinds of variables we're inclined to talk about when describing that system.

4. In the road example: one of the presuppositions of the variable E (at least, the one we're normally inclined to talk about) is that R=1. So the value of R won't wiggle when E wiggles.
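The presupposition idea in 3 and 4 can be put as a sketch of 'partial' variables (all names and state fields below are invented): a variable takes on a value only at states where its presupposition holds, so it can never be wiggled in a way that falsifies its own presupposition.

```python
class Variable:
    """A variable that only takes on a value when its presupposition is met."""
    def __init__(self, name, presupposition, value_fn):
        self.name = name
        self.presupposition = presupposition
        self.value_fn = value_fn

    def value(self, state):
        if not self.presupposition(state):
            return None          # presupposition failure: no value at all
        return self.value_fn(state)

# The switch example: S_pos takes a value whether or not the switch is
# connected to the lever; S_conn takes a value only while it's connected.
S_pos  = Variable("S_pos",  lambda st: True,            lambda st: st["switch_up"])
S_conn = Variable("S_conn", lambda st: st["connected"], lambda st: st["switch_up"])

# The road example: E (cars entering via the ramp) presupposes R=1.
E = Variable("E", lambda st: st["R"] == 1, lambda st: st["cars_entering"])

state = {"switch_up": 1, "connected": False, "R": 0, "cars_entering": 3}
print(S_pos.value(state))   # 1
print(S_conn.value(state))  # None: switch disconnected, so no value
print(E.value(state))       # None: E presupposes R=1, so it can't wiggle R
```

On this picture, S_conn (not S_pos) is the variable that influences the duct, and E simply has no value at R=0 worlds, which is why wiggling E never wiggles R.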

I didn't say that variables can't be too disjunctive, but I take it that it's compatible with everything I say in the paper that there is a naturalness constraint on which variables can enter into relations of causal influence. (Related to my discussion below, if we want to use anything like the Lewisian mereology for variable values, we'll need a naturalness constraint.)

5. It's not obvious to me that we need the conditional "If you had played poker, you would have played cards" in order to apply causal decision theory. Let C be a variable whose value depends upon whether I play cards and which card game I play. If I assign basic value to playing cards, then my value function can be determined by the value of the variable C. I don't need an additional, more coarse-grained variable for whether I play cards.
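The decision-theoretic point can be made concrete with a toy value function (the games and the number 10 are invented): the fine-grained variable C already settles whether cards are played, so basic value for card-playing can be read off C directly.

```python
# Hypothetical values of the fine-grained variable C.
C_VALUES = ["none", "poker", "bridge", "whist"]

def utility(c):
    # Basic value (10, an invented number) attaches to playing cards at
    # all; whether cards are played is recoverable from C's value, so no
    # coarse-grained "plays cards" variable is needed.
    return 10 if c != "none" else 0

print([utility(c) for c in C_VALUES])  # [0, 10, 10, 10]
```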

I agree that this is a true English language counterfactual. But I don't think there's causal influence between whether you play poker and whether you play cards. I'd be interested to see a broader theory which allowed us to handle counterfactuals between overlapping variables. Woodward has some suggestions, but I don't have anything constructive to add myself. Developing a theory like that strikes me as a worthwhile research program.

6. Lewis recognises two different kinds of mereology for events: since events are classes, they have the mereology from his "Parts of Classes". With this mereology, M and B will not be distinct, since there are members of M=1 which are also members of B=1. But this is not the mereology that matters for Lewis's theory of causation when he demands that causes be distinct from their effects. It is instead the rather complicated "spatiotemporal" mereology. We can import this mereology over to the case of variables straightforwardly.

Following Lewis, say that V=v *in* a region R iff R is a member of V=v. Say that V=v *within* R iff V=v in some subregion of R. Say that V=v *implies* U=u iff necessarily, if V=v in R, then U=u in R, too. Say that V=v is *essentially a part of* U=u iff necessarily, if V=v in R, then U=u within R. Say that an actual variable value V=v is part of an actual variable value U=u iff there is an actual variable value, I=i, such that I=i implies V=v and there is a variable value J=j such that I=i is essentially a part of J=j and J=j implies U=u. Then, V=v and U=u overlap iff they have a part in common. They are distinct iff they have no part in common.
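If it helps, those definitions can be transcribed directly into code. The modelling choices here are mine and only a sketch: a variable value is a set of (world, region) pairs, a region is a set of points, subregion is subset, and "necessarily" is cashed out as quantification over all the pairs a value has.

```python
# Sketch: a variable value V=v is a frozenset of (world, region) pairs,
# where V=v occurs in region R at world w iff (w, R) is a member.
# Regions are frozensets of points; subregion = subset.

def within(V, w, R):
    # V=v within R: V=v occurs in some subregion of R.
    return any(w2 == w and R2 <= R for (w2, R2) in V)

def implies(V, U):
    # Necessarily, if V=v in R, then U=u in R, too.
    return all(pair in U for pair in V)

def essential_part(V, U):
    # Necessarily, if V=v in R, then U=u within R.
    return all(within(U, w, R) for (w, R) in V)

def is_actual(V, actual="@"):
    return any(w == actual for (w, _) in V)

def part_of(V, U, VALUES, actual="@"):
    # V=v is part of U=u iff some actual I=i implies V=v, and I=i is
    # essentially a part of some J=j which implies U=u.
    return any(is_actual(I, actual) and implies(I, V)
               and any(essential_part(I, J) and implies(J, U) for J in VALUES)
               for I in VALUES)

def distinct(V, U, VALUES, actual="@"):
    # Distinct iff no common part.
    return not any(part_of(P, V, VALUES, actual) and
                   part_of(P, U, VALUES, actual) for P in VALUES)

# A toy stock of countenanced values at the actual world "@":
r1, r2, r3 = frozenset({1, 2}), frozenset({1}), frozenset({9})
V = frozenset({("@", r1)})
U = frozenset({("@", r2)})
X = frozenset({("@", r3)})
VALUES = [V, U, X]

print(part_of(V, U, VALUES))   # True
print(distinct(V, U, VALUES))  # False: V itself is a common part
print(distinct(U, X, VALUES))  # True: no common part
```

As the paragraph below notes, everything here depends on which variable values go into `VALUES`: with gerrymandered values, unwanted parthood relations come out true.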

Neither M=1 nor M=0 implies B=0 or B=1, nor does B=0 or B=1 imply M=0 or M=1, since you could take the bet and not take the bet in a world with a miracle and in a world without a miracle. So neither M=1 nor M=0 is essentially a part of B=0 or B=1. And neither B=0 nor B=1 is essentially a part of M=0 or M=1, for the same reason. As with Lewis's theory of events, much will depend upon which variable values we countenance. With gerrymandered variables, we could get variable values I=i and J=j so that both I=i and J=j contain a single region at the actual world, R and R', respectively, so that B=1 in R and M=0 in R'. Then, I=i will imply B=1, I=i is essentially a part of J=j, and J=j implies M=0. So we'll have that B=1 is a part of M=0. But that's just the same problem Lewis faces with gerrymandered events. If we want to use a mereology like Lewis's, it's important that we restrict the kinds of variable values we countenance.

7. Relatedly, variables should depend upon the intrinsic properties of the regions in which they occur. So there is no variable value corresponding to 'Xanthippe's widowing'. This rules out a variable like X, but does not rule out variables like B and M, since whether there's a violation of the laws in a region of spacetime R doesn't depend upon what happens in regions outside of R, and whether you take the bet in a region R doesn't depend upon what happens outside of R.

8. Relatedly, as I was thinking about things, "M" was a variable describing whether the goings-on in some region of spacetime just before you decide whether to take the bet satisfy the laws of nature or not. So it will be local, at least.

9. I don't see why the equation W:= C * L * B will be true in this situation. In order for the equation to be correct, it must be that s(B=1,@) implies that you win the bet, W=1. (Since, actually, C=L=1.) Take a 'miraculous' understanding of the selection function. We've assumed that the holocaust actually happened. Changing whether I take the bet with a localised miracle won't change whether the holocaust happened in the past. Since the bet pays out only if the holocaust didn't happen, s(B=1,@) won't imply W=1. So the equation won't be correct.

10. Mereologically overlapping variables influence and are influenced. So if we want to represent the entirety of the world's causal structure in a single model, we will need a model which allows us to include these overlapping variables. Doing this in a way which allows us to model interventions on variables' values will require more bells and whistles than you can find in the models people are currently playing around with. So, no, those models cannot always be combined into a larger correct model. (Though you could always just represent the entirety of the world's causal structure with a set of all correct models.) Developing models like this strikes me as a worthwhile research project.

11. This is a nice argument for agglomeration (the principle that, if X is not under your control and Y is not under your control, then X&Y is not under your control). To be clear: I don't take a stand on agglomeration. And while I like the argument, I'm still on the fence. I don't think that, in general, a bet on X&Y is rational whenever a bet on X would be rational and a bet on Y would be rational. Compare: A is a $1 bet on whether the coin doesn't land heads, which costs 50 cents. You take the bet by flipping the coin. B is a $1 bet on whether the coin doesn't land tails. You take the bet by flipping the coin. C is a $1 bet on whether the coin doesn't land heads and doesn't land tails. Again, you take the bet by flipping the coin. Taking bet A is rational and taking bet B is rational, but taking bet C is not.
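The arithmetic behind the three bets can be spelled out. Two assumptions added here, since the text only gives the cost of bet A: that bets B and C also cost 50 cents, and that the coin is fair and lands either heads or tails, with "rational" read as non-negative expected value.

```python
from fractions import Fraction

# Fair coin; each bet pays $1 and (by assumption) costs 50 cents.
HALF = Fraction(1, 2)

def ev(p_win, payout=1, cost=HALF):
    return p_win * payout - cost

ev_A = ev(HALF)             # wins iff the coin doesn't land heads
ev_B = ev(HALF)             # wins iff the coin doesn't land tails
ev_C = ev(Fraction(0))      # wins iff it lands neither heads nor tails

print(ev_A, ev_B, ev_C)     # 0 0 -1/2
```

Bets A and B break even, while bet C is a guaranteed loss of its cost, which is why rationality fails to agglomerate across the conjunction.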

I've outlined a possible explanation of acquaintance above: Suppose my model is correct for a certain agent, and suppose the agent conceptualises perceptual episodes as "conscious experiences" and individuates their type by the phenomenal/imaginary information that is conveyed -- same information, same type of "experience"; different information, different type of "experience". One would then expect that the agent has the impression that normal perceptual episodes directly, immediately, and conclusively reveal to her that she has a certain type of experience.

If a sense of acquaintance is an "impression that experiences are presented to us in some sort of peculiarly direct, concrete, immediate and revelatory way" (Kammerer), then this would arguably explain the agent's sense of acquaintance.

It might be that your model has the resources to answer this request, but it is not yet obvious to me that it does.

(a) Certainty plays a role in explaining acquaintance.

(b) Certainty explains acquaintance.

I'm saying (a), you're denying (b).