Gödel, Mechanism, Paradox
A famous argument, first proposed in (Lucas 1961), supposedly shows that the human mind has capabilities that go beyond those of any Turing machine. In its basic form, the argument goes like this.
Let S be the set of mathematical sentences that I accept as true. S includes the axioms of Peano Arithmetic. Let S+ be the set of sentences entailed by S. Suppose for reductio that my mind is equivalent to a Turing machine. Then S is computably enumerable, and S+ is a computably axiomatizable extension of Peano Arithmetic; since S contains only truths, S+ is also consistent. So Gödel's First Incompleteness Theorem applies: there is a true sentence G that is not in S+. By going through Gödel's reasoning, I can see that G is true. So G is in S and thereby in S+. Contradiction!
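If it helps, here is the argument laid out step by step, as a small LaTeX list (just a rendering of the reasoning above; it needs no special packages):

```latex
% The Lucas-style argument, step by step (a rendering of the reasoning above)
\begin{enumerate}
  \item Suppose, for reductio, that my mind is equivalent to a Turing machine.
  \item Then $S$, the set of mathematical sentences I accept as true, is computably enumerable.
  \item So $S^{+}$, the set of sentences entailed by $S$, is a consistent, computably
        axiomatizable extension of Peano Arithmetic.
  \item By G\"odel's First Incompleteness Theorem, there is a true sentence $G$ with $G \notin S^{+}$.
  \item By going through G\"odel's reasoning, I can see that $G$ is true, so $G \in S$ and hence $G \in S^{+}$.
  \item Contradiction. So my mind is not equivalent to any Turing machine.
\end{enumerate}
```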
I'm not really familiar with the literature on this argument, but I've read a few discussions and summaries. I want to raise an objection that I haven't encountered: the argument naively assumes that we can coherently reason about our mathematical attitudes. There are strong reasons to think that this is false.
Consider, for example, the formalized Knower Paradox. Let's add to the language of arithmetic a one-place predicate K, so that K(⌜A⌝) is meant to express 'I know A'. Since knowledge is factive, all instances of K(⌜A⌝) → A should be true. Let PA+ be Peano Arithmetic extended by this schema. PA+ is obviously not a complete theory of mathematical knowledge, but it is a sound starting point. By Gödel's diagonal lemma (a.k.a. fixed point lemma), there is a sentence G such that PA+ can prove G ↔︎ ¬K(⌜G⌝). By factivity, PA+ can also prove K(⌜G⌝) → G. So it proves K(⌜G⌝) → ¬K(⌜G⌝), and thereby ¬K(⌜G⌝) and thereby G.
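Here is the derivation written out, as a LaTeX fragment (a rendering of the steps just given; it assumes the amsmath and amssymb packages for the align environment and the corner quotes):

```latex
% The Knower Paradox derivation: G is provable in PA+ (= PA plus the factivity schema)
\begin{align*}
  &(1)\quad G \leftrightarrow \neg K(\ulcorner G \urcorner)
      && \text{diagonal lemma, provable in PA}\\
  &(2)\quad K(\ulcorner G \urcorner) \rightarrow G
      && \text{instance of the factivity schema}\\
  &(3)\quad K(\ulcorner G \urcorner) \rightarrow \neg K(\ulcorner G \urcorner)
      && \text{from (1) and (2)}\\
  &(4)\quad \neg K(\ulcorner G \urcorner)
      && \text{from (3)}\\
  &(5)\quad G
      && \text{from (1) and (4)}
\end{align*}
```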
So the sentence G is derivable from the axioms of PA and the factivity schema. One would think that I can know the relevant premises: it certainly seems to me that I know the axioms of PA and the one instance of factivity that figures in the derivation. So I should be able to go through the proof of G, thereby coming to know G. But we've just seen that this is mathematically impossible: PA and the factivity of knowledge imply ¬K(⌜G⌝)!
This is one of the toughest paradoxes I know of. Note that we haven't assumed that my mathematical knowledge is computably enumerable. Nor have we assumed that I know which sentences I know. All we've assumed is that knowledge is factive, and that I can come to know an arithmetical sentence by competently deducing it from known premises. Given that factivity is built into the concept of knowledge, the easiest way to block the paradox is to deny the second assumption: somehow, I can't come to know G by going through its proof. But if that's the right response to the formalized Knower Paradox, we should be wary of the Gödelian argument against mechanism, which relies on essentially the same assumption.
The formalized Knower Paradox is part of a broader family of paradoxes, discussed, for example, in (Montague 1963), (Thomason 1980), and (Koons 1992). Here is another example that I find particularly striking. (I don't know if it's original; I don't think I've seen it in the literature.)
This time, we don't assume factivity. So let's use a different predicate B. This might express mathematical knowledge, or belief, or acceptance of a mathematical sentence. For concreteness, I'll paraphrase it as belief: B(⌜A⌝) is true iff I believe A. The language of arithmetic extended with B now gives us a formula BC(x) such that BC(⌜A⌝) is true iff A is deducible in PA from premises that I believe.
(How so? BC(x) says that there is a, possibly empty, sequence of sentences A1, …, An such that B(⌜Ai⌝) for each 1 ≤ i ≤ n, and there is a proof of A1 ∧ … ∧ An → A in PA; for the empty sequence, this just requires a PA proof of A. By the compactness of first-order logic, any PA-consequence of my beliefs is a PA-consequence of finitely many of my beliefs. "There is a proof of A1 ∧ … ∧ An → A in PA" can be expressed in the language of arithmetic by standard Gödelian techniques, which also have to be used to formalize the quantification over sequences of sentences.)
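Spelled out in symbols, the definition might look as follows (a sketch, same LaTeX packages as before; Seq, lh, (s)_i, Prov_PA and imp are the standard coding devices, whose details don't matter here):

```latex
% A sketch of the formula BC(x): "x is PA-deducible from premises that I believe".
% Seq(s), lh(s), (s)_i, Prov_PA and imp(s, x) -- which codes the implication from the
% conjunction of the sentences in s to x (just x if s is empty) -- are standard
% Goedelian coding devices.
\[
  BC(x) \;:=\; \exists s \, \Bigl( \mathrm{Seq}(s)
      \;\wedge\; \forall i < \mathrm{lh}(s)\; B\bigl((s)_i\bigr)
      \;\wedge\; \mathrm{Prov}_{\mathrm{PA}}\bigl(\mathrm{imp}(s, x)\bigr) \Bigr)
\]
```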
Now assume that the set of sentences that I believe is computably enumerable and consistent with PA. (This amounts to assuming that I'm a "mechanical mind". Whatever the merits of Lucas's argument, such a mind is surely possible.) Let B* be the set of sentences that are deducible in PA from premises that I believe. Then B* is a consistent, computably axiomatizable extension of PA. It follows that the predicate BC(x) satisfies the Hilbert-Bernays-Löb provability conditions. So Löb's Theorem applies: whenever B* contains BC(⌜A⌝) → A, it also contains A. From this, we immediately get Gödel's second incompleteness theorem for B*: B* cannot contain ¬BC(⌜0=1⌝), unless it is inconsistent. And so B can't contain ¬BC(⌜0=1⌝) either.
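To see why a consistent B* cannot contain ¬BC(⌜0=1⌝), it may help to unpack the final step (again a LaTeX sketch, same packages):

```latex
% Why a consistent B* cannot contain the negation of BC('0=1') (sketch)
\begin{align*}
  &(1)\quad \neg BC(\ulcorner 0=1 \urcorner) \in B^{*}
      && \text{assumption for reductio}\\
  &(2)\quad \bigl( BC(\ulcorner 0=1 \urcorner) \rightarrow 0=1 \bigr) \in B^{*}
      && \text{from (1); } B^{*} \text{ is closed under PA-consequence}\\
  &(3)\quad 0=1 \in B^{*}
      && \text{from (2) by L\"ob's Theorem}\\
  &(4)\quad B^{*} \text{ is inconsistent}
      && \text{from (3)}
\end{align*}
```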
These are highly counterintuitive results. Among other things, they seem to prove that I can't consistently believe that my mathematical beliefs are consistent.
I've assumed that my mathematical beliefs are computably enumerable. The assumption can be weakened. Let's check what we need for BC(x) to satisfy the Hilbert-Bernays-Löb conditions with respect to PA.
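Stated formally, the conditions are these (same LaTeX setup as before; I write C for the second sentence in the second condition, to avoid a clash with the belief predicate B):

```latex
% The Hilbert-Bernays-Loeb conditions for BC(x), taken with respect to PA
\begin{align*}
  &\text{(HBL1)}\quad \text{if } \mathrm{PA} \vdash A \text{, then } \mathrm{PA} \vdash BC(\ulcorner A \urcorner)\\
  &\text{(HBL2)}\quad \mathrm{PA} \vdash BC(\ulcorner A \rightarrow C \urcorner)
      \rightarrow \bigl( BC(\ulcorner A \urcorner) \rightarrow BC(\ulcorner C \urcorner) \bigr)\\
  &\text{(HBL3)}\quad \mathrm{PA} \vdash BC(\ulcorner A \urcorner)
      \rightarrow BC(\ulcorner BC(\ulcorner A \urcorner) \urcorner)
\end{align*}
```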
The first condition requires that if A is provable in PA then so is BC(⌜A⌝). This holds without any assumptions about B: if PA proves A, it also proves that there is a PA proof of A, and hence proves BC(⌜A⌝), with the empty premise sequence as witness.
The second condition requires that PA proves BC(⌜A → C⌝) → (BC(⌜A⌝) → BC(⌜C⌝)). This also holds: if A → C is PA-deducible from believed premises and so is A, then C is PA-deducible from the union of the two finite premise sets, and this reasoning can be carried out within PA.
The third condition is the tricky one. It requires that PA proves BC(⌜A⌝) → BC(⌜BC(⌜A⌝)⌝). If B is computably enumerable, BC(x) is a Σ1 formula, which is the key assumption in the standard proof of this condition. (See e.g. (Boolos 1993, ch.2).) If B isn't computably enumerable, BC(x) may not be Σ1. So we need some further assumption.
I think one sufficient condition is positive introspection for B: the schema B(⌜A⌝) → B(⌜B(⌜A⌝)⌝). I suspect weaker assumptions would do the trick as well.
Proof sketch. Let PA+ now be PA plus the positive introspection schema (a different theory from the PA+ of the Knower Paradox). We reason inside PA+.
Assume BC(⌜A⌝). That is, there are sentences A1, …, An such that (1) B(⌜Ai⌝) for each 1 ≤ i ≤ n, and (2) there is a PA proof Σ of A1 ∧ … ∧ An → A. We need to derive BC(⌜BC(⌜A⌝)⌝): there are sentences A1', …, Am' such that (3) B(⌜Ai'⌝) for each 1 ≤ i ≤ m, and (4) there is a PA proof Σ' of A1' ∧ … ∧ Am' → BC(⌜A⌝).
We use B(⌜A1⌝), …, B(⌜An⌝) as A1', …, Am'.
(3) requires B(⌜B(⌜Ai⌝)⌝) for each 1 ≤ i ≤ n. We get that by positive introspection from (1).
To get (4), we need to show that PA can derive BC(⌜A⌝) from B(⌜A1⌝), …, B(⌜An⌝). (Positive introspection isn't needed for this part; (4) asks for a PA proof.) From (2), we know that there is a PA proof Σ of A1 ∧ … ∧ An → A. PA can verify that there is such a proof, so PA can prove that A is PA-derivable from A1, …, An. From the hypotheses B(⌜A1⌝), …, B(⌜An⌝), PA can therefore derive (i) B(⌜A1⌝) ∧ … ∧ B(⌜An⌝) (trivially), and (ii) that there is a PA proof of A1 ∧ … ∧ An → A. Together, (i) and (ii) are what BC(⌜A⌝) says. So PA can derive BC(⌜A⌝) from B(⌜A1⌝), …, B(⌜An⌝), and (4) holds.
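In outline, then, the reasoning inside PA+ runs as follows (a LaTeX sketch of the proof just given, same setup; A1, …, An are the premises provided by the assumption BC(⌜A⌝)):

```latex
% Sketch: PA+ (= PA plus positive introspection) proves BC('A') -> BC('BC(A)').
% Reasoning inside PA+ under the assumption BC('A'):
\begin{align*}
  &(1)\quad B(\ulcorner A_1 \urcorner), \dots, B(\ulcorner A_n \urcorner)
      && \text{first half of the assumption } BC(\ulcorner A \urcorner)\\
  &(2)\quad \mathrm{Prov}_{\mathrm{PA}}(\ulcorner A_1 \wedge \dots \wedge A_n \rightarrow A \urcorner)
      && \text{second half of that assumption}\\
  &(3)\quad B(\ulcorner B(\ulcorner A_i \urcorner) \urcorner) \text{ for each } i
      && \text{from (1) by positive introspection}\\
  &(4)\quad \mathrm{Prov}_{\mathrm{PA}}\bigl(\ulcorner B(\ulcorner A_1 \urcorner) \wedge \dots
      \wedge B(\ulcorner A_n \urcorner) \rightarrow BC(\ulcorner A \urcorner) \urcorner\bigr)
      && \text{from (2), as argued above}\\
  &(5)\quad BC(\ulcorner BC(\ulcorner A \urcorner) \urcorner)
      && \text{from (3) and (4), by the definition of } BC
\end{align*}
```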
So even if my mind is not mechanical, I cannot consistently believe that my beliefs are consistent, as long as I'm aware of what I believe. As before, this seems to undermine an assumption that is at least closely related to one on which the anti-mechanist argument relies.
In my view, the paradoxes go deeper. The "assumptions" they require us to give up are too central to our concepts of mathematical knowledge and belief. They (or some of them) are built into our concepts, just as the T-schema is built into our concept of sentential truth. The paradoxes reveal that our concept of mathematical knowledge or belief (like our concept of truth) is subtly inconsistent. We can get away with this in most contexts, but we can't rest any serious philosophical arguments on these concepts.