In a previous post I discussed the status of Church’s Law in type theory, showing that it fails to hold *internally* to extensional type theory, even though one may see *externally* that the definable numeric functions in ETT are λ-definable, and hence Turing computable. The distinction between internal and external is quite important in logic, mainly because a logical formalism may be unable to express precisely an externally meaningful concept. The classical example is the Löwenheim-Skolem Theorem of first-order logic, which says that any theory with an infinite model has a countable model. In particular the theory of sets has a countable model, which would seem to imply that the set of real numbers, for example, is countable. But internally one can prove that the reals are uncountable (Cantor’s proof is readily expressed in the theory), which seems to be a paradox of some kind. But no, all it says is that the function witnessing the countability of the term model cannot be expressed internally, and hence there is no contradiction at all.

A similar situation obtains with Church’s Law. One may observe empirically, so to say, that Church’s Law holds *externally* of ETT, but this fact cannot be internalized. There is a function given by Church’s Law that “decompiles” any (extensional) function of type *N→N* by providing the index for a Turing machine that computes it. But this function cannot be defined internally to extensional type theory, because it could be used to obtain a decision procedure for halting of Turing machines, which is internally refutable by formalizing the standard undecidability proof. In both of these examples it is the *undefinability* of a function that is important to the expressive power of a formalism, contrary to naïve analyses that would suggest that, when it comes to definability of functions, the more the merrier. This is a general phenomenon in type theory. The power of type theory arises from its *strictures*, not its *affordances*, in direct opposition to the ever-popular language design principle “first-class *x*” for all imaginable values of *x*.
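The internal refutability of a halting decider rests on the standard diagonal argument, which can be sketched concretely. In this illustrative Python fragment (all names and encodings are invented for the sketch, with programs modeled as zero-argument callables), `diagonalize` turns any claimed total halting decider into a program on which the decider must be wrong:

```python
def diagonalize(halts):
    """From any purported total halting decider halts(p, x) -> bool,
    build a program that refutes it: the diagonal program does the
    opposite of whatever the decider predicts about it."""
    def diagonal():
        if halts(diagonal, diagonal):
            while True:          # predicted to halt, so diverge
                pass
        return "halted"          # predicted to diverge, so halt
    return diagonal

# A decider claiming that nothing halts is refuted by running the
# diagonal program: it halts, contrary to the prediction.
claims_nothing_halts = lambda p, x: False
d = diagonalize(claims_nothing_halts)
assert d() == "halted"
```

Symmetrically, a decider that predicts halting yields a diagonal program that diverges, so no total decider survives the construction.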

Another perspective on the same issue is provided by Martin-Löf’s *meaning explanation* of type theory, which is closely related to the theory of *realizability* for constructive logic. The high-level idea is that a justification for type theory may be obtained by *starting* with an untyped concept of computability (*i.e.,* a programming language given by an operational semantics for closed terms), and then giving the meaning of the judgments of type theory in terms of such computations. So, for example, the judgment *A type*, where *A* is a closed expression, means that *A* evaluates to a *canonical* type, where the canonical types include, say, *Nat,* and all terms of the form *A’→A”*, where *A’* and *A”* are types. Similarly, if *A* is a type, the judgment *a : A* means that *A* evaluates to a canonical type *A’* and that *a* evaluates to a canonical term *a’* such that *a’* is a canonical element of *A’*, where, say, any numeral for a natural number is a canonical member of *Nat.* To give the canonical members of the function type *A’→A”* requires the further notion of *equality* of elements of a type, *a=b:A*, which all functions are required to respect. A meaning explanation of this sort was suggested by Martin-Löf in his landmark paper *Constructive Mathematics and Computer Programming*, and is used as the basis for the NuPRL type theory, which extends that account in a number of interesting directions, including inductive and coinductive types, subset and quotient types, and partial types.
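The evaluation-first reading of the judgments can be made concrete in a small sketch. Here Python stands in for the untyped programming language, thunks (zero-argument callables) model pending computation, and the tuple encodings of canonical types are invented purely for the illustration:

```python
def eval_expr(e):
    """Evaluate a closed expression to canonical form: a thunk
    models a computation step, so we run thunks until a value appears."""
    while callable(e):
        e = e()
    return e

def is_type(A):
    """The judgment 'A type': A evaluates to a canonical type,
    here either Nat or an arrow type between types."""
    A = eval_expr(A)
    if A == ("Nat",):
        return True
    if isinstance(A, tuple) and A and A[0] == "Arrow":
        return is_type(A[1]) and is_type(A[2])
    return False

def has_type(a, A):
    """The judgment 'a : A', for Nat only in this sketch: a evaluates
    to a numeral, which is a canonical member of Nat."""
    A = eval_expr(A)
    if A == ("Nat",):
        return isinstance(eval_expr(a), int)
    return False  # function types need the equality relation, omitted here

# A computation evaluating to Nat is a type; a thunk computing 3 inhabits it.
assert is_type(lambda: ("Nat",))
assert has_type(lambda: 1 + 2, ("Nat",))
```

Note that membership in a function type is deliberately left out: as the text says, it requires the equality relation *a=b:A*, which this sketch does not model.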

The relation to realizability emerges from applying the meaning explanation of types to the semantics of propositions given by the propositions-as-types principle (which, as I’ve previously argued, should *not* be called “the Curry-Howard isomorphism”). According to this view a proposition *P* is identified with a type, the type of its proofs, and we say that *P true* iff *P* evaluates to a canonical proposition that has a canonical member. In particular, for implication we say that *P→Q true* if and only if *P true* implies *Q true* (and, in addition, the proof respects equality, a condition that I will suppress here for the sake of simplicity). More explicitly, the implication is true exactly when the truth of the antecedent implies the truth of the consequent, which is to say that there is a constructive transformation of proofs of *P* into proofs of *Q*.
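The reading of implication as proof transformation can be illustrated with a toy example (the encodings are invented for the sketch): a proof that *n* is even is a witness *k* with *n = 2k*, and a proof of the implication “*n* even → *n*+2 even” is a function carrying witnesses to witnesses:

```python
def is_even_proof(n, k):
    """A proof that n is even is a witness k with n == 2*k."""
    return n == 2 * k

def succ2_even(k):
    """A constructive transformation of proofs: from a witness that
    n is even, produce a witness that n + 2 is even."""
    return k + 1

n, k = 10, 5
assert is_even_proof(n, k)                  # proof of the antecedent
assert is_even_proof(n + 2, succ2_even(k))  # transformed proof of the consequent
```

The implication is true because the transformation exists; no appeal to truth values is needed, only to proofs and functions between them.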

In *recursive realizability* one accepts Church’s Law and demands that the constructive transformation be given by the index of a Turing machine (*i.e.*, by a program written in a fixed programming language). This means, in particular, that if *P* expresses, say, the decidability of the halting problem, for which there is *no recursive realizer*, then the implication *P→Q* is vacuously true! By taking *Q* to be falsehood, we obtain a realizer for the statement that the halting problem is undecidable. More generally, any statement that is not realized is automatically *false* in the recursive realizability interpretation, precisely because the realizers are identified with Turing machine indices. Pressing a bit further, there are statements, such as the statement that every Turing machine either halts or diverges on its own input, that are *true* in classical logic, yet have no recursive realizer, and hence are *false* in the realizability interpretation.
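The vacuous truth of implications with unrealizable antecedents is easy to see in a toy model (invented encoding: a proposition is represented by its set of realizers, and a realizer of an implication is a function carrying realizers to realizers):

```python
def realizes_impl(r, P_realizers, Q_realizers):
    """r realizes P -> Q iff r maps every realizer of P
    to some realizer of Q."""
    return all(r(p) in Q_realizers for p in P_realizers)

# If P has no realizer at all, *every* function vacuously realizes
# P -> Q, even when Q is falsehood (the empty set of realizers).
P = set()        # an unrealizable proposition
falsehood = set()
assert realizes_impl(lambda p: p, P, falsehood)
```

This is exactly the mechanism by which an unrealized statement becomes refutable: its negation (implication into falsehood) is realized for free.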

In contrast, in the meaning explanation for NuPRL, *Church’s Law is not assumed*. Although one may show that there is no *Turing machine* to decide halting for Turing machines, it is impossible to show that there is no *constructive transformation* that may do so. For example, an oracle machine would be able to make the required decision. This is entirely compatible with intuitionistic principles, because although intuitionism does not *affirm* LEM, neither does it *deny* it. This point is missed in some accounts, leading to endless confusion. Intuitionistic logic, properly conceived, is *compatible* with classical logic in that classical logic may be seen as an idealization of intuitionistic logic in which we heuristically postulate that all propositions are decidable (all instances of LEM hold).

The crucial point distinguishing the meaning explanation from recursive realizability is precisely the *refusal* to accept Church’s Law, a kind of comprehension principle for functions, as discussed earlier. This refusal is often called *computational open-endedness* because it amounts to avoiding a commitment to the blasphemy of limiting God’s programming language to Turing machines (using an apt metaphor of Andrej Bauer’s). Rather, we piously accept that richer notions of computation are possible, and avoid commitment to a “final theory” of computation in which Church’s Law is postulated outright. By avoiding the witnessing function provided by Church’s Law we *gain* expressive power, rather than losing it, resulting in an elegant theory of constructive mathematics that enriches, rather than diminishes, classical mathematics. In short, contrary to “common sense” (*i.e.,* uninformed supposition), *more is not always better*.

*Update*: corrected minor technical error and some typographical errors.

*Update*: clarified point about incompatibility of recursive realizability with classical logic.