Ability, control, and chance
In my paper "Ability and Possibility", I argued that ability statements should be analysed as simple possibility modals: 'S can phi' is true iff S phis at some world compatible with relevant circumstances.
This view is widely considered inadequate because it seems to violate two (related) intuitions about ability.
One is that ability requires a kind of robustness: if you have the ability to phi, then you reliably phi whenever the need arises, under a variety of circumstances.
The other intuition is that ability requires a kind of control: if you have the ability to phi, then you can guarantee that you phi, without relying on favourable external circumstances.
In response, I've argued that (1) our concept of ability is ambiguous between a weaker and a stronger reading, that (2) only the strong reading requires robustness/control, and that (3) the kind of control required by the strong reading is epistemic.
In the recent literature, some appear to agree with (1) but disagree with (2) and (3). In particular, it is widely assumed (e.g. in Mandelkern et al. 2017, Fusco 2021, or Boylan 2021) that even the weak sense of ability requires (a non-epistemic kind of) control. I haven't found many arguments in favour of this assumption, so I'll look at some hypothetical arguments.
First I need to say a little more on the two readings. The ambiguity is best illustrated with examples. Imagine Cyril doesn't know the first ten digits of pi. Then there's a sense in which he can't recite the first ten digits of pi. But there's also a sense in which he can: all he needs to do is utter the numerals 'three', 'one', 'four', etc., and he can do that.
In my paper, I called the reading of 'can' on which Cyril can't recite the first ten digits of pi "transparent"; the other one is "effective". Roughly speaking, you have an effective ability to phi if there's something you can do that would amount to phiing. For a transparent ability you must, in addition, know what you'd have to do in order to phi.
The knowledge requirement in the stronger, transparent reading implies a kind of robustness/control. Suppose you know what you'd have to do in order to phi. Then it's up to you whether you phi, and you can phi in each of your epistemically accessible worlds. In my paper, I suggested that this is all the robustness and control we need.
Now let's set this reading aside and focus on the weaker, effective reading (following the authors cited above). Here my account does not require any kind of robustness or control.
One might argue that this yields false predictions for so-called "general" ability statements like "I can play the piano" or "I can beat Magnus Carlsen at chess". These seem to convey that I reliably have the relevant ability, across a wide range of circumstances. If we consider different (realistic) circumstances under which I play chess against Carlsen, I almost always lose. But there will be a few circumstances under which I win. On my analysis, this is enough for it to be true that I can beat Carlsen at chess. Surely this prediction is false!
No it isn't. Remember that we're focusing on the effective reading. On this reading, I surely can beat Carlsen at chess: all I need to do is to move the chess pieces in a certain way.
So examples like these don't work.
A better challenge was raised to me by Matthias Böhm. On my account, "the elevator can carry 5000 kg" is true provided there is at least one relevantly possible world at which the elevator carries 5000 kg. Intuitively, however, the statement conveys that the elevator can reliably carry this load under arbitrary realistic circumstances.
Since an elevator is not an epistemic subject, my account predicts that only the effective reading of 'can' is available here, so I can't claim that the intuition is driven by the transparent reading.
Instead, I'm inclined to say that the felt implication is a matter of pragmatics. Suppose the elevator can carry 5000 kg only if the temperature is between 0 and 1 degrees Celsius. Now is it true that the elevator can carry 5000 kg? I'd say it is. The elevator can carry 5000 kg, but you have to cool down the air to 0-1 degrees. (In some contexts, we're not considering the possibility of operating in low temperatures. In such a context, my account correctly predicts that "the elevator can carry 5000 kg" is false.)
So I'm also not convinced by this kind of example.
In fact, I'd like to turn the tables and point out that some effective ability statements are clearly true despite an almost complete lack of robustness and control. An example I use in the paper is "Usain Bolt can run 100 meters in 9.58 seconds". This was true until recently, even though Bolt only managed that time under very specific (internal and external) conditions, and even though he had little control over running at exactly that speed.
Or imagine a machine with a button that launches a dart in a seemingly random direction. The direction is determined by a pseudo-random number generator whose output is sensitive to certain details of the machine's present state. Imagine we know that the present state is such that if the button were pressed, the dart would hit the bullseye. Finally, imagine that Bob has the option of pressing the button. Can he hit the bullseye, by pressing the button? The answer, I think, is yes, even though he fails in almost all nearby possible worlds.
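The setup can be sketched as a toy model. This is only an illustration under stated assumptions: the seeded generator, the launch-angle encoding, and the bullseye angle are all hypothetical details, not part of the example itself.

```python
import random

class DartMachine:
    """Toy model of the dart machine: the direction is pseudo-random,
    so it is fully determined by the machine's present internal state."""

    def __init__(self, state: int):
        self.state = state  # hypothetical stand-in for the machine's present state

    def press_button(self) -> float:
        # Seeding the generator with the internal state means the same
        # state always yields the same direction: the outcome is settled,
        # even though it looks random from the outside.
        rng = random.Random(self.state)
        return rng.uniform(0.0, 360.0)  # launch angle in degrees

BULLSEYE_ANGLE = 90.0  # hypothetical angle at which the dart hits the bullseye

# If we know the present state, we can know in advance whether pressing
# the button would hit the bullseye -- even though across slightly
# different states (nearby worlds) the dart almost always lands elsewhere.
machine = DartMachine(state=12345)
angle = machine.press_button()
hits = abs(angle - BULLSEYE_ANGLE) < 0.5  # settled by the state, not by chance
```

The point of the sketch is just that "seemingly random" here is compatible with the outcome being fixed by the present state, which is what licenses the judgement that Bob can hit the bullseye by pressing the button.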
Most alternative accounts of abilities threaten to get these cases wrong.
But here's a different type of case that worries me more.
Suppose the dart-throwing machine is genuinely indeterministic. And suppose Bob doesn't actually push the button. Does he have the ability to hit the bullseye, by pressing the button? My account says yes. But I worry that the answer might be no.
However, as before, we mustn't slip into the stronger, transparent sense of ability. My account only predicts that Bob has the effective ability. But one might argue that he does not, since there is nothing he can do that would amount to hitting the bullseye: he could press the button, and that might amount to hitting the bullseye, but more likely it would not.
As it stands, this objection relies on my heuristic paraphrase of the effective reading in terms of counterfactuals, and on the falsity of conditional excluded middle. We should really ask directly whether Bob is able to hit the bullseye (by pushing the button), in the same sense in which Cyril is able to recite the digits of pi.
To block the distracting strong reading, it might help to ask what the machine itself can do. Imagine that the machine fires the darts automatically, once every minute. We want to know whether it has the ability to hit the bullseye on the next throw. (Let's say in fact the next throw misses the bullseye.)
I'm somewhat tempted to say that the machine does have the ability. Perhaps this judgement is triggered by conceiving of the randomised choice of direction as a kind of decision-making. Let's change the example so that the direction is deterministic, but whether the dart hits the bullseye depends on genuinely indeterministic wind patterns between the machine and the dartboard. Then does the machine have the ability to hit the bullseye?
I'm not sure.
Another example: I have a computer program that takes a text as input and outputs a genuinely random string of symbols. Can my program translate War and Peace from Russian into English?
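The program in question can be sketched in a few lines. The alphabet and the choice to match the input's length are hypothetical details of my own, and Python's generator is of course only pseudo-random, so this is a stand-in for a genuinely random process:

```python
import random

# Hypothetical symbol alphabet for the random output.
ALPHABET = "abcdefghijklmnopqrstuvwxyz ,.!? "

def random_translator(text: str) -> str:
    """Ignore the input entirely and output a random string of symbols,
    one symbol per input character."""
    return "".join(random.choice(ALPHABET) for _ in range(len(text)))

# There is a minuscule but non-zero chance that the output happens to be
# a correct English translation of the input.
output = random_translator("Война и мир")
```

On my account, that minuscule chance is enough for the relevant possibility claim to come out true, which is what makes the translation question pressing.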
If we give an affirmative response (as my account suggests we should), then we should arguably also say that we have the (effective) ability to jump 30 meters into the air (as Alex Pruss argues here, before taking it back in the comments). According to quantum mechanics, if we tried to jump into the air, there would be a non-zero chance that we jump 30 meters into the air.
Oddly, the probabilities seem to matter. There is also a non-zero chance that we wouldn't jump at all, but we don't want to conclude that we aren't able to jump 10 cm into the air.
As I said, my intuitions about these cases are unclear.
I could block some of the putative counterexamples by saying that worlds with "quasi-miracles" are generally not considered circumstantially possible. A quasi-miracle is a remarkable event with low objective probability. (The term comes from Lewis's postscript to "Counterfactual dependence and time's arrow".) A randomly thrown dart hitting the bullseye is a quasi-miracle. So is a random string generator producing a translation of War and Peace.
If quasi-miracle worlds are normally inaccessible, then my account predicts that the dart machine can't hit the bullseye, although it can hit a specific unremarkable spot on the wall. Similarly, it predicts that the computer program can produce a specific string of gibberish, but not a translation of War and Peace.
I'm not sure that's better. If the machine can hit that spot, why not this one? If the program can generate these strings, why not those?