What’s the big deal with HoTT?

June 22, 2013

Now that the Homotopy Type Theory book is out, a lot of people are asking “What’s the big deal?”.  The full answer lies within the book itself (or, at any rate, the fullest answer to date), but I am sure that many of us who were involved in its creation will be fielding this question in our own ways to help explain why we are so excited by it.  In fact, what I think is really fascinating about HoTT is precisely that there are so many different ways to think about it, according to one’s interests and background.  For example, one might say it’s a nice way to phrase arguments in homotopy theory that avoids some of the technicalities in the classical proofs by treating spaces and paths synthetically, rather than analytically.  Or one might say that it’s a good language for the mechanization of mathematics that provides for the concise formulation of proofs in a form that can be verified by a computer.  Or one might say that it points the way towards a vast extension of the concept of computation that enables us to compute with abstract geometric objects such as spheres or tori.  Or one might say that it’s a new foundation for mathematics that subsumes set theory by generalizing types from mere sets to arbitrary \infty-groupoids, sets being but particularly simple types (those with no non-trivial higher-dimensional structure).

But what is it about HoTT that makes all these interpretations and applications possible?  What is the key idea that separates HoTT from other approaches that seek to achieve similar ends?  What makes HoTT so special?

In a word, the answer is constructivity.  The distinctive feature of HoTT is that it is based on Per Martin-Löf’s Intuitionistic Theory of Types, which was formulated as a foundation for intuitionistic mathematics as originally put forth by Brouwer in the early decades of the twentieth century, and further developed by Bishop, Gentzen, Heyting, Kolmogorov, Kleene, Lawvere, and Scott, among many others.  Briefly put, the idea of type theory is to codify and systematize the concept of a mathematical construction by characterizing the abstract properties, rather than the concrete realizations, of the objects used in everyday mathematics.  Brouwer’s key insight, which lies at the heart of HoTT, is that proofs are a form of construction no different in kind or character from numbers, geometric figures, spaces, mappings, groups, algebras, or any other mathematical structure.  Brouwer’s dictum, which distinguished his approach from competing alternatives, is that logic is a part of mathematics, rather than mathematics being an application of logic: for him the concept of a construction, including the concept of a proof, is prior to any other form of mathematical activity, including the study of proofs themselves (i.e., logic).

So, under Martin-Löf’s influence, HoTT starts with the notion of type as a classification of constructions, and builds upwards from that foundation.  Unlike in competing approaches to foundations, proofs are mathematical objects that play a central role in the theory.  This conception is central to the homotopy-theoretic interpretation of type theory, which enriches types to encompass spaces with higher-dimensional structure.  Specifically, the type \textsf{Id}_A(M,N) is the type of identifications of M and N within the space A.  Identifications may be thought of as proofs that M and N are equal as elements of A, or, equivalently, as paths in the space A between the points M and N.  The fundamental principles of abstraction at the heart of type theory ensure that all constructs of the theory respect these identifications, so that we may treat them as proofs of the equality of two elements.  There are three main sources of identifications in HoTT:

  1. Reflexivity, stating that everything is equal to itself.
  2. Higher inductive types, defining a type by giving its points, paths, paths between paths, and so on to any dimension.
  3. Univalence, which states that an equivalence between types determines a path between them.

I will not attempt here to explain each of these in any detail; everything you need to know is in the HoTT book.  But I will say a few things about their consequences, just to give a flavor of what these new principles give us.
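To give just a taste of the notation, the first of these sources, and the sense in which every construct of the theory respects identifications, can be written schematically as follows.  This is only a sketch, in notation close to the book’s; \textsf{ap} and \textsf{transport} are the book’s names for the action of functions and of type families on paths.

    \textsf{refl}_M : \textsf{Id}_A(M,M)   — reflexivity: every element is identified with itself

    \textsf{ap}_f : \textsf{Id}_A(M,N) \to \textsf{Id}_B(f(M),f(N))   — functions carry identifications to identifications

    \textsf{transport}^P : \textsf{Id}_A(M,N) \to (P(M) \to P(N))   — families of types respect identifications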

Perhaps the most important conceptual point is that mathematics in HoTT emphasizes the structure of proofs rather than their mere existence.  Rather than settle for a mere logical equivalence between two types (mappings back and forth stating that each implies the other), one instead tends to examine the entire space of proofs of a proposition and how it relates to others.  For example, the univalence axiom itself does not merely state that every equivalence between types gives rise to a path between them, but rather that there is an equivalence between the type of equivalences between two types and the type of paths between them.  Familiar patterns such as “A iff B” tend to become “A\simeq B”, stating that the proofs of A and the proofs of B are equivalent.  Of course one may choose to neglect this additional information, stating only weaker forms of it using, say, truncation to suppress higher-dimensional information in a type, but the tendency is to embrace the structure and characterize the space of proofs as fully as possible.
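Schematically, writing \mathcal{U} for a universe and \textsf{idtoeqv} for the canonical map sending a path between types to an equivalence between them, univalence may be stated as follows (in notation close to the book’s):

    \textsf{idtoeqv} : \textsf{Id}_{\mathcal{U}}(A,B) \to (A \simeq B)

    \textsf{ua} : \textsf{isequiv}(\textsf{idtoeqv}),  so that  \textsf{Id}_{\mathcal{U}}(A,B) \simeq (A \simeq B).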

A close second in importance is the axiomatic freedom afforded by constructive foundations.  This point has been made many times by many authors in many different settings, but has particular bite in HoTT.  The theory does not commit to the infamous Law of the Excluded Middle: the type A+(A\to \textbf{0}) need not always be inhabited.  (Indeed, asserted for arbitrary types the Law is refutable in the presence of univalence; restricted to mere propositions it may be consistently assumed, though it is not part of the theory.)  This property of HoTT is absolutely essential to its expressive power.  Not only does it admit a wider range of interpretations than are possible with the Law included, but it also allows for the selective imposition of the Law where it is needed to recover a classical argument, or where it is important to distinguish the implications of decidability in a given situation.  (Here again I defer to the book itself for full details.)  Similar considerations arise in connection with the many forms of Choice that can be expressed in HoTT, some of which are outright provable, others of which are independent, just as they are in axiomatic set theory.

Thus, what makes HoTT so special is that it is a constructive theory of mathematics.  Historically, this has meant that it has a computational interpretation, expressed most vividly by the propositions as types principle.  And yet, for all of its promise, what HoTT currently lacks is a computational interpretation!  What, exactly, does it mean to compute with higher-dimensional objects?  At the moment it is difficult to say for sure, though there seem to be clear intuitions in at least some cases of how to “implement” such a rich type theory.  Alternatively, one may ask whether the term “constructive”, when construed in such a general setting, must inevitably involve a notion of computation.  While it seems obvious on computational grounds that the Law of the Excluded Middle should not be considered universally valid, it becomes less clear why it is so important to omit this Law (and, essentially, no other) in order to obtain the richness of HoTT when no computational interpretation is extant.  From my point of view understanding the computational meaning of higher-dimensional type theory is of paramount importance, because, for me, type theory is and always has been a theory of computation on which the entire edifice of mathematics ought to be built.


The Homotopy Type Theory Book is out!

June 20, 2013

By now many of you have heard of the development of Homotopy Type Theory (HoTT), an extension of intuitionistic type theory that provides a natural foundation for doing synthetic homotopy theory.  Last year the Institute for Advanced Study at Princeton sponsored a program on the Univalent Foundations of Mathematics, which was concerned with developing these ideas.  One important outcome of the year-long program is a full-scale book presenting the main ideas of Homotopy Type Theory itself and showing how to apply them to various branches of mathematics, including homotopy theory, category theory, set theory, and constructive analysis.  The book is the product of a joint effort by dozens of participants in the program, and is intended to document the state of the art as it is known today, and to encourage its further development by the participation of others interested in the topic (i.e., you!).  Among the many directions in which one may take these ideas, the most important (to me) is to develop a constructive (computational) interpretation of HoTT.  Some partial results in this direction have already been obtained, including fascinating work by Thierry Coquand on developing a constructive version of Kan complexes in ITT, by Mike Shulman on proving homotopy canonicity for the natural numbers in a two-dimensional version of HoTT, and by Dan Licata and me on a weak definitional canonicity theorem for a similar two-dimensional theory.  Much work remains to be done to arrive at a fully satisfactory constructive interpretation, which is essential for application of these ideas to computer science.  Meanwhile, though, great progress has been made on using HoTT to formulate and formalize significant pieces of mathematics in a new, and strikingly beautiful, style; this work is well documented in the book.

The book is freely available on the web in various formats, including a PDF version with active references and an ebook version suitable for your reading device, and it may be purchased in hard- or soft-cover from Lulu.  The book itself is open source, and is available from the HoTT Book repository on GitHub.  The book is distributed under the Creative Commons CC BY-SA license, and will be freely available in perpetuity.

Readers may also be interested in the posts on Homotopy Type Theory, the n-Category Cafe, and Mathematics and Computation which describe more about the book and the process of its creation.


More Is Not Always Better

January 28, 2013

In a previous post I discussed the status of Church’s Law in type theory, showing that it fails to hold internally to extensional type theory, even though one may see externally that the definable numeric functions in ETT are λ-definable, and hence Turing computable.  The distinction between internal and external is quite important in logic, mainly because a logical formalism may be unable to express precisely an externally meaningful concept.  The classical example is the Löwenheim-Skolem Theorem of first-order logic, which says that any theory with an infinite model has a countable model.  In particular the theory of sets has a countable model, which would seem to imply that the set of real numbers, for example, is countable.  But internally one can prove that the reals are uncountable (Cantor’s proof is readily expressed in the theory), which seems to be a paradox of some kind.  But no, all it says is that the function witnessing the countability of the term model cannot be expressed internally, and hence there is no contradiction at all.

A similar situation obtains with Church’s Law.  One may observe empirically, so to say, that Church’s Law holds externally of ETT, but this fact cannot be internalized.  There is a function given by Church’s Law that “decompiles” any (extensional) function of type N→N by providing the index for a Turing machine that computes it.  But this function cannot be defined internally to extensional type theory, because it may be used to obtain a decision procedure for halting of Turing machines, which is internally refutable by formalizing the standard undecidability proof.  In both of these examples it is the undefinability of a function that is important to the expressive power of a formalism, contrary to naïve analyses that would suggest that, when it comes to definability of functions, the more the merrier.  This is a general phenomenon in type theory.  The power of type theory arises from its strictures, not its affordances, in direct opposition to the ever-popular language design principle “first-class x” for all imaginable values of x.
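Recorded as a type, Church’s Law (often written CT, for “Church’s Thesis”) takes roughly the following form, where \{e\}(n) stands for the result of running the e-th Turing machine on input n; the precise way this application is expressed varies from one presentation to another, so this is only a schematic rendering:

    \textsf{CT} : \prod_{f : \mathbb{N}\to\mathbb{N}} \sum_{e : \mathbb{N}} \prod_{n : \mathbb{N}} \textsf{Id}_{\mathbb{N}}(\{e\}(n), f(n))

An inhabitant of this type is precisely the “decompiler” mentioned above.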

Another perspective on the same issue is provided by Martin-Löf’s meaning explanation of type theory, which is closely related to the theory of realizability for constructive logic.  The high-level idea is that a justification for type theory may be obtained by starting with an untyped concept of computability (i.e., a programming language given by an operational semantics for closed terms), and then giving the meaning of the judgments of type theory in terms of such computations.  So, for example, the judgment A type, where A is a closed expression, means that A evaluates to a canonical type, where the canonical types include, say, Nat, and all terms of the form A’→A”, where A’ and A” are types.  Similarly, if A is a type, the judgment a:A means that A evaluates to a canonical type A’ and that a evaluates to a canonical term a’ such that a’ is a canonical element of A’, where, say, any numeral for a natural number is a canonical member of Nat.  To give the canonical members of the function type A’→A” requires the further notion of equality of elements of a type, a=b:A, which all functions are required to respect.  A meaning explanation of this sort was suggested by Martin-Löf in his landmark paper Constructive Mathematics and Computer Programming, and is used as the basis for the NuPRL type theory, which extends that account in a number of interesting directions, including inductive and coinductive types, subset and quotient types, and partial types.
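As a very rough illustration of the shape of such an explanation (my own sketch, not NuPRL and not Martin-Löf’s own formulation), one can render its skeleton as a little program: expressions are untyped, evaluation produces canonical forms, and the judgments are explained in terms of evaluation.  The names and representation choices below are illustrative only.

    (* A rough sketch of the meaning explanation, in OCaml.  Expressions are
       untyped; the judgments "A type" and "a : A" are explained by evaluation
       to canonical form. *)

    type exp =
      | Nat                      (* canonical type of natural numbers *)
      | Arrow of exp * exp       (* canonical type A' -> A'' *)
      | Num of int               (* canonical member of Nat *)
      | Lam of (exp -> exp)      (* canonical member of an arrow type *)
      | App of exp * exp         (* non-canonical: must be evaluated *)

    (* Evaluate a closed expression to canonical form (call-by-name). *)
    let rec eval (e : exp) : exp =
      match e with
      | App (e1, e2) ->
          (match eval e1 with
           | Lam f -> eval (f e2)
           | _ -> failwith "stuck")
      | _ -> e

    (* The judgment "A type": A evaluates to a canonical type, i.e. Nat or an
       arrow whose domain and codomain are themselves types. *)
    let rec is_type (a : exp) : bool =
      match eval a with
      | Nat -> true
      | Arrow (a1, a2) -> is_type a1 && is_type a2
      | _ -> false

    (* The judgment "a : A": A evaluates to a canonical type and a evaluates to
       a canonical member of it.  For arrow types the real explanation
       quantifies over all members of the domain (and requires respect for the
       equality judgment a = b : A), which no checker can decide; the sketch
       merely records the shape of the explanation. *)
    let has_type (m : exp) (a : exp) : bool =
      match eval a, eval m with
      | Nat, Num _ -> true
      | Arrow (_, _), Lam _ -> true   (* approximation; see comment above *)
      | _ -> false

The point of the sketch is only the direction of explanation: typing is explained in terms of an untyped notion of evaluation, not the other way around.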

The relation to realizability emerges from applying the meaning explanation of types to the semantics of propositions given by the propositions-as-types principle (which, as I’ve previously argued, should not be called “the Curry-Howard isomorphism”).  According to this view a proposition P is identified with a type, the type of its proofs, and we say that P true iff P evaluates to a canonical proposition that has a canonical member.  In particular, for implication we say that P→Q true if and only if P true implies Q true (and, in addition, the proof respects equality, a condition that I will suppress here for the sake of simplicity).  More explicitly, the implication is true exactly when the truth of the antecedent implies the truth of the consequent, which is to say that there is a constructive transformation of proofs of P into proofs of Q.

In recursive realizability one accepts Church’s Law and demands that the constructive transformation be given by the index of a Turing machine (i.e., by a program written in a fixed programming language).  This means, in particular, that if P expresses, say, the decidability of the halting problem, for which there is no recursive realizer, then the implication P→Q is vacuously true!  By taking Q to be falsehood, we obtain a realizer for the statement that the halting problem is undecidable.  More generally, any statement that is not realized is automatically false  in the recursive realizability interpretation, precisely because the realizers are identified with Turing machine indices.  Pressing a bit further, there are statements, such as the statement that every Turing machine either halts or diverges on its own input, that are true in classical logic, yet have no recursive realizer, and hence are false in the realizability interpretation.

In contrast, in the meaning explanation for NuPRL, Church’s Law is not assumed.  Although one may show that there is no Turing machine to decide halting for Turing machines, it is impossible to show that there is no constructive transformation that may do so.  For example, an oracle machine would be able to make the required decision.  This is entirely compatible with intuitionistic principles, because although intuitionism does not affirm LEM, neither does it deny it.  This point is often missed in some accounts, leading to endless confusions.  Intuitionistic logic, properly conceived, is compatible with classical logic in that classical logic may be seen as an idealization of intuitionistic logic in which we heuristically postulate that all propositions are decidable (all instances of LEM hold).

The crucial point distinguishing the meaning explanation from recursive realizability is precisely the refusal to accept Church’s Law, a kind of comprehension principle for functions as discussed earlier.  This refusal is often called computational open-endedness because it amounts to avoiding a commitment to the blasphemy of limiting God’s programming language to Turing machines (using an apt metaphor of Andrej Bauer’s).  Rather, we piously accept that richer notions of computation are possible, and avoid commitment to a “final theory” of computation in which Church’s Law is postulated outright.  By avoiding the witnessing function provided by Church’s Law we gain expressive power, rather than losing it, resulting in an elegant theory of constructive mathematics that enriches, rather than diminishes, classical mathematics.    In short, contrary to “common sense” (i.e., uninformed supposition), more is not always better.

Update: corrected minor technical error and some typographical errors.

Update: clarified point about incompatibility of recursive realizability with classical logic.


Exceptions are shared secrets

December 3, 2012

It’s quite obvious to me that the treatment of exceptions in Haskell is wrong. Setting aside the example I gave before of an outright unsoundness, exceptions in Haskell are nevertheless done improperly, even if they happen to be sound. One reason is that the current formulation is not stable under seemingly mild extensions to Haskell that one might well want to consider, notably any form of parameterized module or any form of shadowing of exception declarations. For me this is enough to declare the whole thing wrong, but as it happens Haskell is too feeble to allow full counterexamples to be formulated, so one may still claim that what is there now is ok … for now.

But I wish to argue that Haskell exceptions are nevertheless poorly designed, because they miss the crucial point that exception values are shared secrets. Let us distinguish two parties, the raiser of the exception and the handler of it. The fundamental idea of exceptions is to transfer a value from the raiser to the handler without the possibility of interception by another party. While the language of secrecy seems appropriately evocative, I hasten to add that I am not here concerned with “attackers” or suchlike, but merely with the difficulties of ensuring modular composition of programs from components. In such a setting the “attacker” is yourself: not malicious, but fallible.

By raising an exception the raiser is “contacting” a handler with a message. The raiser wishes to limit which components of a program may intercept that message. More precisely, the raiser wishes to ensure that only certain previously agreed-upon components may handle that exception, perhaps only one. This property should remain stable under extension to the program or composition with any other component. It should not be possible for an innocent third party to accidentally intercept a message that was not intended for it.

Achieving this requires a secrecy mechanism that allows the raiser and the handler(s) to agree upon their cooperation. This is accomplished by dynamic classification, exactly as it is done properly in Standard ML (but not O’Caml). The idea is that the raiser has access to a dynamically generated constructor for exception values, and any handler has access to the corresponding dynamically generated matcher for exception values. This means that the handler, and only the handler, can decode the message sent by the raiser; no other party can do anything with it other than pass it along unexamined. It is “perfectly encrypted” and cannot be deciphered by any unintended component.
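To make the mechanism concrete, here is a small sketch of dynamic classification as an abstract interface: creating a class yields a constructor for building classified values together with a matcher that only holders of that class can use to decode them. The sketch is mine, written in OCaml for convenience; the names new_class, seal, and unseal are illustrative and not part of any library, and representing classified values by the extensible type of exception values is just one standard way to model the idea. It illustrates the classification mechanism itself, not a claim about any particular language’s exception handling.

    (* Classified ("perfectly encrypted") values, modeled by OCaml's extensible
       exn type; each class is a freshly generated constructor/matcher pair. *)
    type classified = exn

    (* new_class () generates a fresh class.  The local exception declaration is
       generative, so each call yields a constructor and a matcher that recognize
       only each other's values. *)
    let new_class (type a) () : (a -> classified) * (classified -> a option) =
      let module M = struct exception Tag of a end in
      ((fun x -> M.Tag x),
       (function M.Tag x -> Some x | _ -> None))

    (* The raiser and the handler share a class; any third party holding the
       classified value can only pass it along unexamined. *)
    let () =
      let (seal, unseal) = new_class () in
      let msg : classified = seal "for the handler only" in
      let (_other_seal, other_unseal) = new_class () in
      assert (other_unseal msg = (None : string option));  (* wrong class: cannot decode *)
      match unseal msg with
      | Some s -> print_endline s
      | None -> assert false

In Standard ML the exn type itself plays the role of the type of classified values, with exception declarations serving as the class generator.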

The usual exception mechanisms, as distinct from exception values, allow for “wild-card handlers”, which means that an exception can be intercepted by a third party. This means that the raiser cannot ensure that the handler actually receives the message, but it can ensure, using dynamic classification, that only a legitimate handler may decipher it. Decades of experience with Standard ML show that this is a very useful thing indeed, and it has application far beyond just the simple example considered here. See my forthcoming book for a full discussion of dynamic classification and its role in ensuring integrity and confidentiality in a program. Dynamic classification is not just for “security”; it is a good tool for everyday programming.

So why does Haskell not do it this way? Well, I’m not the one to answer that question, but my guess is that doing so conflicts with the monadic separation of effects. To do exceptions properly requires dynamic allocation, and this would force code that is otherwise functional into the IO monad. Alternatively, one would have to use unsafePerformIO—as in ezyang’s implementation—to “hide” the effects of exception allocation. But this would then be further evidence that the strict monadic separation of effects is untenable.

Update: Reworked last paragraph to clarify the point I am making; the previous formulation appears to have invited misinterpretation.

Update: This account of exceptions also makes clear why the perennial suggestion to put exception-raising information into types makes no sense to me. I will write more about this in a future post, but meanwhile contemplate that a computation may raise an exception that is not even in principle nameable in the type. That is, it is not conservativity that’s at issue, it’s the very idea.


PFPL is out!

December 3, 2012

Practical Foundations for Programming Languages, published by Cambridge University Press, is now available in print! It can be ordered from the usual sources, and maybe some unusual ones as well. If you order directly from Cambridge using this link, you will get a 20% discount on the cover price (pass it on).

Since going to press I have, inevitably, been informed of some (so far minor) errors that are corrected in the online edition. These corrections will make their way into the second printing. If you see something fishy-looking, compare it with the online edition first to see whether I may have already corrected the mistake. Otherwise, send your comments to me.

By the way, the cover artwork is by Scott Draves, a former student in my group, who is now a professional artist as well as a researcher at Google in NYC. Thanks, Scott!

Update: The very first author’s copy hit my desk today!


Univalent Foundations at IAS

December 3, 2012

As many of you may know, the Institute for Advanced Study is sponsoring a year-long program, called “Univalent Foundations for Mathematics” (UF), which is developing the theory and applications of Homotopy Type Theory (HTT).  The UF program is organized by Steve Awodey (CMU), Thierry Coquand (Chalmers), and Vladimir Voevodsky (IAS).  About two dozen people are in residence at the Institute to participate in the program, including Peter Aczel, Andrej Bauer, Peter Dybjer, Dan Licata, Per Martin-Löf, Peter Lumsdaine, Mike Shulman, and many others.  I have been shuttling back and forth between the Institute and Carnegie Mellon, and will continue to do so next semester.

The excitement surrounding the program is palpable.  We all have the sense that we are doing something important that will change the world.  A typical day consists of one or two lectures of one or two hours, with the rest of the day typically spent working at the blackboard, in small groups or individually.  There are many strands of work going on simultaneously, including fundamental type theory, developing proof assistants, and formulating a body of informal type theory.  As visitors come and go we have lectures on many topics related to HTT and UF, and there is constant discussion over lunch, tea, and dinner each day.  While there I work each day to the point of exhaustion, eager to pursue the many ideas that are floating around.

So, why is homotopy type theory so exciting?  For me, and I think for many of us, it is the most exciting development in type theory since its inception.  It brings together two seemingly disparate topics, algebraic topology and type theory, and provides a gorgeous framework in which to develop both mathematics and computer science.  Many people have asked me why it’s so important.  My best answer is that it’s too beautiful to be ignored, and such a beautiful concept must be good for something!  We’ll be at this for years, but it’s too soon to say where the best applications of HTT will arise.  But I am sure in my bones that it’s as important as type theory itself.

Homotopy type theory is based on two closely related concepts:

  1. Constructivity.  Proofs of propositions are mathematical objects classified by their types.
  2. Homotopy.  Paths between objects of a type are proofs of their interchangeability in all contexts.  Paths in a type form a type whose paths are homotopies (deformations of paths).

Homotopy type theory is organized so that maps and families respect homotopy, which, under the identification of paths with equality proofs, means that they respect equality.  The force of this organization arises from the axioms that specify what the paths within a type are.  There are two major sources of non-trivial paths within a type: the univalence axiom and higher inductive types.

The univalence axiom specifies that there is an equivalence between equivalences and equalities of the objects of a universe.  Unravelling a bit, this means that for any two types inhabiting a universe, evidence for their equivalence (a pair of maps that are inverse up to higher homotopy, called weak equivalence) is evidence for their equality.  Put another way, weak equivalences are paths in the universe.  So, for example, a bijection between two elements of the universe \textsf{Set} of sets constitutes a proof of the equality (universal interchangeability) of the two sets.

Higher inductive types allow one to define types by specifying their elements, any paths between their elements, any paths between those paths, and so on to any level, or dimension.  For example, the interval, I, has as elements the endpoints 0, 1 : I, and a path \textsf{seg} between 0 and 1 within I.  The circle, S^1, has an element \textsf{base} and a path \textsf{loop} from \textsf{base} to itself within S^1.

Respect for homotopy means that, for example, a family F of types indexed by the type \textsf{Set} must be such that if A and B are isomorphic sets, then there is an equivalence between the types F(A) and F(B) allowing us to transport objects from one “fiber” to the other.  And any function with domain \textsf{Set} must respect bijection—it could be the cardinality function, for example, but it cannot be a function that would distinguish \{\,0,1\,\} from \{\,\textsf{true},\textsf{false}\,\}.

Univalence allows us to formalize the informal convention of identifying things “up to isomorphism”.  In the presence of univalence, equivalent types (spaces) are, in fact, equal.  So rather than rely on convention, we have a formal account of such identifications.

Higher inductives generalize ordinary inductive definitions to higher dimensions.  This means that we can now define maps (computable functions!) between, say, the 4-dimensional sphere and the 3-dimensional sphere, or between the interval and the torus.  HTT makes absolutely clear what this even means, thanks to higher inductive types.  For example, a map out of S^1 is given by two pieces of data (the corresponding recursion principles for the circle and the interval are written out schematically after these lists):

  1. What to do with the base point.  It must be mapped to a point in the target space.
  2. What to do with the loop.  It must be mapped to a loop in the target space based at the target point.

A map out of I is given similarly by specifying

  1. What to do with the endpoints.  These must be specified points in the target space.
  2. What to do with the segment.  It must be a path between the specified points in the target space.
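In notation close to what has since become standard in HoTT (a sketch, eliding the distinction between definitional and propositional computation rules), these two mapping-out principles are the recursion principles for the circle and the interval:

    \textsf{rec}_{S^1} : \prod_{B:\mathcal{U}} \prod_{b:B} (b =_B b) \to S^1 \to B

    \textsf{rec}_{I} : \prod_{B:\mathcal{U}} \prod_{b_0:B} \prod_{b_1:B} (b_0 =_B b_1) \to I \to B

The computation rules send \textsf{base} to b and \textsf{loop} to the given loop at b, and send 0, 1, and \textsf{seg} to b_0, b_1, and the given path, respectively.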

It’s all just good old functional programming!  Or, rather, it would be, if we were to have a good computational semantics for HTT, a topic of intense interest at the IAS this year.  It’s all sort-of-obvious, yet also sort-of-non-obvious, for reasons that are difficult to explain briefly.  (If I could, they would probably be considered obvious, and not in need of much explanation!)

A game-changing aspect of all of this is that HTT provides a very nice foundation for mathematics in which types (\infty-groupoids) play a fundamental role in classifying all mathematical objects, including proofs of propositions, which are just types.  Types may be classified according to the structure of their paths—and hence propositions may be classified according to the structure of their proofs.  For example, any two proofs of equivalence of two natural numbers are themselves equivalent; there’s only one way to say that 2+2=4.  Of course there is no path between 2+2 and 5.  And these two situations exhaust the possibilities: any two paths between natural numbers are equal (but there may not even be one).  This unicity-of-paths property lifts to function spaces by extensionality, paths between functions being just paths between the range elements for each choice of domain element.  But the universe of Sets is not like this: there are many paths between sets (one for each bijection), and these are by no means equivalent.  However, there is at most one way to show that two bijections between sets are equivalent, so the structure “peters out” after dimension 2.
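In now-standard HoTT terminology, a type with this unicity-of-paths property is a set, or 0-type; schematically,

    \textsf{isSet}(A) :\equiv \prod_{x,y:A} \prod_{p,q:\textsf{Id}_A(x,y)} \textsf{Id}(p,q).

The observations above then say that the natural numbers form a set, while the universe of sets does not: its paths correspond to bijections, of which there may be many, though any two identifications of bijections agree, making it a 1-type.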

The idea to apply this kind of analysis to proofs of propositions is a distinctive feature of HTT, arising from the combination of constructivity, which gives proofs status as mathematical objects, and homotopy, which provides a powerful theory of the equivalence of proofs.  Conventional mathematics ignores proofs as objects of study, and is thus able to express certain ideas only indirectly.  HTT brings out the latent structure of proofs, and provides an elegant framework for expressing these concepts directly.

Update: edited clumsy prose and added concluding paragraph.


Introductory FP Course Materials

September 15, 2012

The course materials for our first-semester introductory programming course (which I’ve discussed elsewhere on this blog) are now available here.

The course materials for our second-semester data structures and algorithms course are available here.

Thanks to Dan Licata and Guy Blelloch for helping make these available.  Comments are most welcome.

Update: I will be giving a talk about this at Google Pittsburgh on Thursday the 20th of September.

