There seems to be a popular misunderstanding about the propositions-as-types principle that has led some to believe that intensional type theory (ITT) is somehow preferable to or more sensible than extensional type theory (ETT). Now, as a practical matter, few would dispute that ETT is much easier to use than ITT for mechanizing everyday mathematics. Some justification for this will be given below, but I am mainly concerned with matters of principle. Specifically, I wish to dispute the claim that ETT is somehow “wrong” compared to ITT. The root of the problem appears to be a misunderstanding of the fundamental ideas of intuitionism, which are expressed by the propositions-as-types principle.

The most popular conception appears to be the trivial one, namely that certain inductively defined formal systems of logic correspond syntactically to certain inductively defined formal systems of typing. Such correspondences are not terribly interesting, because they can easily be made to hold by construction: all you need to do is to introduce proof terms that summarize a derivation, and then note that the proofs of a proposition correspond to the terms of the associated type. In this form the propositions-as-types principle is often dubbed, rather grandly, the *Curry-Howard Isomorphism*. It’s a truism that most things in mathematics are named after anyone but their discoverers, and that goes double in this case. Neither Curry nor Howard discovered the principle (Howard himself disclaims credit for it), though they both did make contributions to it. Moreover, this unfortunate name denies credit to those who did the real work in inventing the concept, including Brouwer, Heyting, Kolmogorov, de Bruijn, and Martin-Löf. (Indeed, it is sometimes called the *BHK Correspondence*, which is both more accurate and less grandiose.) Worse, there is an “isomorphism” only in the most trivial sense of an identity by definition, hardly worth emphasizing.
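To see the trivial correspondence concretely, here is a sketch in Lean syntax (the names `andComm` and `swap` are mine, purely illustrative): a proof term for a proposition is literally a program of the associated type, and the same construction can be read at the level of propositions or of ordinary types.

```lean
-- A proof of A ∧ B → B ∧ A is a program: a function that takes a
-- pair of proofs and swaps its components. The proof term records
-- (summarizes) the derivation.
def andComm (A B : Prop) : A ∧ B → B ∧ A :=
  fun ⟨a, b⟩ => ⟨b, a⟩

-- The very same construction on ordinary types rather than propositions:
def swap (α β : Type) : α × β → β × α :=
  fun ⟨a, b⟩ => ⟨b, a⟩
```

The two definitions are textually identical apart from their types, which is exactly the sense in which the correspondence holds "by construction".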

The interesting conception of the propositions-as-types principle is what I call *Brouwer’s Dictum*, which states that all of mathematics, including the concept of a proof, is to be derived from the concept of a *construction*, a computation classified by a type. In intuitionistic mathematics proofs are themselves “first-class” mathematical objects that inhabit types that may as well be identified with the propositions that they prove. Proving a proposition is no different than constructing a program of the corresponding type. In this sense logic is a branch of mathematics, the branch concerned with those constructions that are proofs. And mathematics is itself a branch of computer science, since according to Brouwer’s Dictum all of mathematics is to be based on the concept of computation. But notice as well that there are many more constructions than those that correspond to proofs. Numbers, for example, are perhaps the most basic ones, as would be any inductive or coinductive types, or even more exotic objects such as Brouwer’s own choice sequences. From this point of view the judgement *M ∈ A*, stating that *M* is a construction of type *A*, is of fundamental importance, since it encompasses not only the formation of “ordinary” mathematical constructions, but also those that are distinctively intuitionistic, namely mathematical proofs.

An often misunderstood point that must be clarified before we continue is that the concept of proof in intuitionism is *not* to be identified with the concept of a *formal proof* in a fixed formal system. What constitutes a proof of a proposition is a *judgement*, and there is no reason to suppose *a priori* that this judgement ought to be decidable. It should be possible to recognize a proof when we see one, but it is not required that we be able to *rule out*, in all cases, what is not a proof. In contrast, formal proofs are inductively defined and hence fully circumscribed, and we expect it to be decidable whether or not a purported formal proof is in fact a formal proof, that is, whether it is well-formed according to the given inductively defined rules. But the upshot of Gödel’s Theorem is that as soon as we fix the concept of formal proof, it is immediate that it is not an adequate conception of proof *simpliciter*, because there are propositions that are true, which is to say have a proof, but have no *formal proof* according to the given rules. The concept of truth, even in the intuitionistic setting, eludes formalization, and it will ever be thus. Putting all this another way, according to the intuitionistic viewpoint (and the mathematical practices that it codifies), there is no truth other than that given by proof. Yet the rules of proof cannot be given in decidable form without missing the point.

It is for this reason that the first sense of the propositions-as-types principle discussed above is uninteresting, for it only ever codifies a decidable, and hence incomplete, conception of proof. Moreover, the emphasis on an *isomorphism* between propositions and types also misses the point, because it fails to account for the many forms of type that do not correspond to propositions. The formal correspondence is useful in some circumstances, namely those in which the object of study is a formal system. So, for example, in LF the goal is to encode formal systems, and hence it is essential in the LF methodology that type checking be decidable. But when one is talking about a general theory of computation, which is to say a general theory of mathematical constructions, there is no reason to expect either an isomorphism or decidability. (So please stop referring to propositions-as-types as “the Curry-Howard Isomorphism”!)

We are now in a position to discuss the relationship between ITT and ETT, and to correct the misconception that ETT is somehow “wrong” because the typing judgement is not decidable. The best way to understand the proper relationship between the two is to place them into the broader context of *homotopy type theory*, or *HTT*. From the point of view of homotopy type theory ITT and ETT represent extremal points along a spectrum of type theories, which is to say a spectrum of conceptions of mathematical construction in Brouwer’s sense. Extensional type theory is the theory of *homotopy sets*, or *hSets* for short, which are spaces that are homotopically discrete, meaning that the only path (evidence for equivalence) between two elements is in fact the trivial self-loop between an element and itself. Therefore if we have a path between *x* and *y* in *A*, which is to say a proof that they are equivalent, then *x* and *y* are *equal*, and hence interchangeable in all contexts. The bulk of everyday mathematics takes place within the universe of hSets, and hence is most appropriately expressed within ETT, and experience has borne this out. But it is also interesting to step outside of this framework and consider richer conceptions of type.
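The defining property of an hSet can be written down directly. The following is a sketch in Lean syntax (the name `isSet` is mine; this is the standard uniqueness-of-identity-proofs formulation):

```lean
-- A type is an hSet when any two paths (equality proofs) between the
-- same pair of elements are themselves equal: all evidence for
-- equivalence collapses, so equivalence is just equality.
def isSet (A : Type) : Prop :=
  ∀ (x y : A) (p q : x = y), p = q
```

In ETT every type satisfies this property, which is why ETT is exactly the theory of homotopically discrete types.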

For example, as soon as we introduce universes, one is immediately confronted with the need to admit types that are not hSets. A universe of hSets naturally includes non-trivial paths between elements witnessing their isomorphism as hSets, and hence their interchangeability in all contexts. Taking a single universe of hSets as the sole source of such additional structure leads to *(univalent) two-dimensional type theory*. In this terminology ETT is then to be considered as *one-dimensional* type theory. Universes are not the only source of higher dimensionality. For example, the interval *I* has two elements, *0* and *1*, connected by a path, the segment between them, which may be seen as evidence for their interchangeability (we can slide them along the segment one to the other). Similarly, the circle *S¹* is a two-dimensional inductively defined type with one element, a base point, and one path, a non-reflexive self-loop from the base point to itself. It is now obvious that one may consider *three-dimensional type theory*, featuring types such as *S²*, the sphere, and so forth. Continuing this through all finite dimensions, we obtain *infinite-dimensional type theory*, which is just ITT (type theory with no discreteness at any dimension).
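The interval and the circle are examples of *higher inductive types*: types freely generated by both element constructors and path constructors. Lean has no native higher inductive types, and its proof-irrelevant equality would collapse these paths, so the following is only a signature sketch of their specifications, not a faithful implementation (all names are illustrative):

```lean
-- The interval: two points and a segment connecting them.
axiom I : Type
axiom i0 : I
axiom i1 : I
axiom seg : i0 = i1  -- the path making the endpoints interchangeable

-- The circle: one base point and one self-loop. The loop is intended
-- to be non-reflexive, which plain Lean equality cannot express.
axiom S1 : Type
axiom base : S1
axiom loop : base = base
```

In a genuine higher-dimensional theory these path constructors carry real content: `loop` is *not* identified with reflexivity, which is exactly what makes `S1` fail to be an hSet.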

From this perspective one can see more clearly why it has proved so awkward to formalize everyday mathematics in ITT. Most such work takes place in the universe of hSets, and makes no use of higher-dimensional structure. The natural setting for such things is therefore ETT, the theory of types as homotopically discrete sets. By formalizing such mathematics within ITT one is paying the full cost of higher-dimensionality without enjoying any of its benefits. This neatly confirms experience with using NuPRL as compared to using Coq for formulating the mathematics of homotopy sets, and why even die-hard ITT partisans find themselves wanting to switch to ETT for doing real work (certain ideological commitments notwithstanding). On the other hand, as higher-dimensional structure becomes more important to the work we are doing, something other than ETT is required. One candidate is a formulation of type theory with explicit *levels*, representing the dimensionality restriction appropriate to the problem domain. So work with discrete sets would take place within level 1, which is just extensional type theory. Level 2 is two-dimensional type theory, and so forth, and the union of all finite levels is something like ITT. To make this work requires that there be a theory of *cumulativity* of levels, a theory of *resizing* that allows us to move work at a higher level to a lower level at which it still makes sense, and a theory of *truncation* that allows suppression of higher-dimensional structure (generalizing proof irrelevance and “squash” types).

However this may turn out, it is clear that the resulting type theory will be far richer than merely the codification of the formal proofs of some logical system. Types such as the geometric spaces mentioned above do not arise as the types of proofs of propositions, but rather are among the most basic of mathematical constructions, in complete accordance with Brouwer’s dictum.

As far as I understand, Brouwer disliked formalism, but his ideas became known to many people through Heyting’s (or Kolmogorov’s?) formal systems. In addition, few CS people pay attention to the underlying philosophies (e.g., intuitionism), but most CS people are good at manipulating certain formal systems (e.g., some programming languages). Perhaps that’s the source of the confusion. As you pointed out, the CH correspondence is about forms.

As a side note, I think the B in BHK is more of an honorific, reflecting his position in the philosophy of mathematics.

A minor point: *S²* has structures in even higher dimensions, which might not fit into three-dimensional type theory.

Oops, I meant

*non-trivial* structures in even higher dimensions.

Yes, I was just referring to the thing itself, not, for example, its homotopy groups.

I just realized how ambiguous my side note is. :-( My concern is that while the BHK interpretation was definitely inspired by Brouwer’s ideas, he himself might not have fully approved of this interpretation (or of any kind of formalization).

Hi Bob,

It’s worth noting that Brouwer was motivated to invent intuitionism out of his distaste for the idea that the continuum is a set (i.e., the natural numbers come out of the “first act” of intuitionism, and the continuum comes from the “second act”). In HTT this has a very cute formalization: the interval is a higher inductive type with two points and a declared path between them, and so is trivially not an h-set!

But it’s unclear to me whether this is sufficient to capture his thinking.

1. What is the status of the fan theorem? I don’t know enough about HTT to guess what properties like compactness look like in this setting.

2. He also posited free choice sequences as part of intuitionistic analysis, and they are *weird*. They look like streams, but they aren’t, since (a) there is no rule to produce them, but (b) streams necessarily have such a rule (since they are all constructed from an unfold).

The Fan Theorem is a theorem of extensional type theory. Constable and Bickford use it to prove a new completeness theorem for intuitionistic first-order logic. They point out that the Fan Theorem is inconsistent with Church’s Law in the presence of extensionality, so this is another (pragmatic) argument against Church’s Law.
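The point that every stream comes with a generating rule can be sketched as follows (in Lean, using a simple function model of streams; the names `Stream'`, `iterate`, and `unfold` are mine):

```lean
-- Model a stream of α by its sequence of observations.
def Stream' (α : Type) := Nat → α

-- Apply a step function n times to an initial state.
def iterate {σ : Type} (step : σ → σ) : σ → Nat → σ
  | s, 0     => s
  | s, n + 1 => iterate step (step s) n

-- Every stream arises from an unfold: a hidden state, a step function,
-- and an observation of the current state. For example,
-- `unfold id Nat.succ 0` is the stream of natural numbers 0, 1, 2, ….
-- A free choice sequence, by contrast, comes with no generating rule.
def unfold {σ α : Type} (head : σ → α) (step : σ → σ) (s : σ) : Stream' α :=
  fun n => head (iterate step s n)
```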

I’m no expert on this topic, but I like to think of free choice sequences in terms of processes. A choice sequence is a stream whose generator is inaccessible to you, coming from an independent process. You’re required to reason in such a way that you impose no assumptions about how that process generates its outputs. I find it fascinating that Brouwer used this as his conception of the continuum. This contrasts with “recursive analysis”, which represents reals as Cauchy sequences given by a Turing machine index. It’s not very intuitionistic in that it is self-consciously computable, whereas the beauty of intuitionism is that you just “do math” in such a way that a computable interpretation is always available.

Update: I misinterpreted the Constable and Bickford paper. Their result is conditional on the Fan “Theorem”, and Th. Coquand pointed out to me that FT is not provable in ETT, contrary to what I said above. Apologies to all. (Perhaps it should be called the Fan Principle or somesuch when used in contexts in which it is not a theorem.)

Hear hear! The misguided emphasis on “isomorphism” between propositions and types confused me for a long time.