Transformations as strict groupoids

May 30, 2011

The distinguishing feature of higher-dimensional type theory is the concept of equivalence of the members of a type, which must be respected by all families of types.  To be sufficiently general it is essential to regard equivalence as a structure, rather than a property.  This is expressed by the judgement

\displaystyle \Gamma\vdash \alpha::M\simeq N:A

which states that M and N are equivalent members of type A, as evidenced by the transformation \alpha.  Respect for equivalence is ensured by the rule

\displaystyle{{\Gamma,x:A\vdash B\,\textsf{type}\quad  \Gamma\vdash \alpha :: M\simeq N:A \quad  \Gamma\vdash P:B[M/x]}\over  {\Gamma\vdash \textit{map}\{x:A.B\}[\alpha](P):B[N/x]}},

which states that equivalent members determine equivalent instances of a family of types.  The equivalence between instances is mediated by the operation \textit{map}\{x:A.B\}[\alpha](-), which sends members of B[M/x] to members of B[N/x].  We call this mapping the action of the family x:A.B on the transformation \alpha.

For reasons that will only become apparent as we go along, it is important that “equivalence” really be an equivalence: it must be, in an appropriate sense, reflexive, symmetric, and transitive.  The “appropriate sense” is precisely that we require the existence of transformations

\displaystyle{\Gamma\vdash \textit{id}::M\simeq M:A}

\displaystyle{{\Gamma\vdash\alpha::M\simeq N:A}\over{\Gamma\vdash\alpha^{-1}::N\simeq M:A}}

\displaystyle{{\Gamma\vdash \beta::N\simeq P:A\quad  \Gamma\vdash \alpha::M\simeq N:A}\over{\Gamma\vdash\beta\circ\alpha::M\simeq P:A}}

Moreover, these transformations must be respected by the action of any family, in a sense that we shall make clear momentarily.  Before doing so, let us observe that these transformations constitute the operations of a groupoid, which we may think of either as an equivalence relation equipped with evidence or as a category in which every map is invertible (a generalized group).  While the former interpretation may not suggest it, the latter formulation implies that we should impose some requirements on how these transformations interact, namely the axioms of a groupoid:

  1. Composition (multiplication) is associative: given \alpha::M\simeq N:A, \beta::N\simeq P:A, and \gamma::P\simeq Q:A, we have \gamma\circ(\beta\circ\alpha)\equiv (\gamma\circ\beta)\circ\alpha::M\simeq Q:A.
  2. Identity is the unit of composition: \textit{id}\circ\alpha\equiv\alpha::M\simeq N:A and \alpha\circ\textit{id}\equiv\alpha::M\simeq N:A.
  3. Inverses cancel: \alpha^{-1}\circ\alpha\equiv\textit{id}::M\simeq M:A and \alpha\circ\alpha^{-1}\equiv\textit{id}::N\simeq N:A.
These conditions, which impose equalities on transformations, demand that the two-dimensional structure of a type form a strict groupoid.  I will come back to an important weakening of these requirements later.

We further require that the action of a type family preserve the groupoid structure.  For this it is enough to require that it preserve identities and composition:

\displaystyle{\textit{map}\{x:A.B\}[\textit{id}](-) \equiv \textit{id}(-):B[M/x]}

and

\displaystyle{\begin{array}{c}\textit{map}\{x:A.B\}[\beta\circ\alpha](-)\\\equiv\\\textit{map}\{x:A.B\}[\beta](\textit{map}\{x:A.B\}[\alpha](-))\end{array}}.

Thinking of a groupoid as a category, these conditions state that the action of a type family be (strictly) functorial.  (Here again we are imposing strong requirements in order to facilitate the exposition; eventually we will consider a relaxation of these conditions that will admit a richer range of applications.)
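
A small worked consequence makes the force of these laws concrete: functoriality together with the cancellation law ensures that the action of a family on a transformation is itself invertible, with the action on the inverse as its inverse:

\displaystyle{\textit{map}\{x:A.B\}[\alpha^{-1}](\textit{map}\{x:A.B\}[\alpha](P)) \equiv \textit{map}\{x:A.B\}[\alpha^{-1}\circ\alpha](P) \equiv \textit{map}\{x:A.B\}[\textit{id}](P) \equiv P : B[M/x]}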

(The alert reader will note that I have not formally introduced the concept of a transformation between types, nor the equality of these, into the theory.  There are different ways to skin this cat; for now, I will be a bit loose about the axiomatics in order to focus attention on the main ideas.  Rest assured that everything can be made precise!)

By demanding that the groupoid axioms hold strictly (as equalities) and that the action of families be strictly functorial, we have simplified the theory considerably by restricting it to dimension 2.  To relax these restrictions requires higher dimensions.  For example, we may demand only that the groupoid conditions hold up to a transformation of transformations, but hold strictly from then on; this is the 3-dimensional case.  Or we can relax all such conditions to hold only up to a higher transformation, resulting in finite-dimensional type theory.  Similar considerations will apply to other conditions that we shall impose on the action of families, in particular those specifying the action of type constructors on transformations, which I will discuss next time.  The presentation of finite-dimensional type theory will be aided by the introduction of identity types (also called path types).  Identity types avoid the need for an ever-expanding nesting of transformations between transformations between ….  More on that later!

Update (August 2012): Egbert Rijke has written lucidly on the topic of Yoneda’s Lemma and its relation to homotopy type theory in his Master’s Thesis, which I encourage readers to consult for a nice summary of higher-dimensional type theory.


Higher-Dimensional Type Theory

May 30, 2011

Ideas have their time, and it’s not for us to choose when they arrive.  But when they do, they almost always occur to many people at more or less the same time, often in a slightly disguised form whose underlying unity becomes apparent only later.  This is perhaps not too surprising, the same seeds taking root in many a fertile mind.  A bit harder to explain, though, is the moment in time when an idea comes to fruition.  Often all of the ingredients are available, and yet no one thinks to put two and two together and draw what seems, in retrospect, to be the obvious inference.  Until, suddenly, everyone does.  Why didn’t we think of that ages ago?  Nothing was stopping us; we just didn’t notice the opportunity!

The recent development of higher-dimensional structure in type theory seems to be a good example.  All of the ingredients have been present since the 1970s, yet as far as I know no one, until quite recently, quite put together all the pieces to expose the beautiful structure that has been sitting there all along.  As with many good ideas, one can see clearly that they were foreshadowed by many earlier developments whose implications are only now becoming understood.  My plan is to explain higher type theory (HTT) to the well-informed non-expert, building on ideas developed by various researchers, including Thorsten Altenkirch, Steve Awodey, Richard Garner, Martin Hofmann, Dan Licata, Peter Lumsdaine, Per Martin-Löf, Mike Shulman, Thomas Streicher, Vladimir Voevodsky, and Michael Warren.  It will be useful in the sequel to be familiar with The Holy Trinity, at least superficially, and preferably well enough to be able to move back and forth between the three manifestations that I’ve previously outlined.

One-dimensional dependent type theory is defined by derivation rules for these four fundamental forms of judgement (and, usually, some others that we suppress here for the sake of concision):

\displaystyle \Gamma\vdash A\,\mathsf{type}

\displaystyle \Gamma\vdash M : A

\displaystyle \Gamma\vdash M \equiv N : A

\displaystyle \Gamma\vdash A\equiv B

A context, \Gamma, consists of a sequence of declarations of variables of the form x_1:A_1,\dots,x_n:A_n, where it is presupposed, for each 1\leq i\leq n, that x_1:A_1,\dots,x_{i-1}:A_{i-1}\vdash A_i\,\mathsf{type} is derivable.

The key notion of dependent type theory is that of a family of types indexed by (zero or more) variables ranging over a type.  The judgement \Gamma\vdash A\,\mathsf{type} states that A is a family of types indexed by the variables given by \Gamma.  For example, we may have \vdash\textit{Nat}\,\textsf{type}, specifying that \textit{Nat} is a closed type (a degenerate family of types), and x{:}\textit{Nat}\vdash\textit{Seq}(x)\,\textsf{type}, specifying that \textit{Seq}(n) is a type (say, of sequences of naturals of length n) for each \vdash n:\textit{Nat}.  The rules of type theory ensure, either directly or indirectly, that the structural properties of the hypothetical/general judgement are valid.  In particular, families of types respect equality of indices:

\displaystyle{{\Gamma,x:A\vdash B\,\textsf{type}\quad \Gamma\vdash M\equiv N:A \quad \Gamma\vdash P:B[M/x]}\over  {\Gamma\vdash P:B[N/x]}}.

In words, if B is a family of types indexed by A, and if M and N are equal members of type A, then every member of B[M/x] is also a member of B[N/x].

The generalization to two- (and higher-) dimensional type theory can be motivated in several ways.  One natural source of higher-dimensional structure is a universe, a type whose elements correspond to types.  For example, we may have a universe of sets given as follows:

\displaystyle \vdash \textit{Set}\,\textsf{type}

\displaystyle x:\textit{Set}\vdash \textit{Elt}(x)\,\textsf{type}

\displaystyle \vdash \textit{nat}:\textit{Set}

\displaystyle \vdash \textit{Elt}(\textit{nat})\equiv\textit{Nat}

\displaystyle a:\textit{Set},b:\textit{Set}\vdash a\times b : \textit{Set}

\displaystyle a:\textit{Set},b:\textit{Set}\vdash \textit{Elt}(a\times b)\equiv \textit{Elt}(a)\times\textit{Elt}(b)

and so forth, ensuring that \textit{Set} is closed under typical set-forming operations whose interpretations are given by \textit{Elt} in terms of standard type-theoretic concepts.
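
To support the example below, we might also posit closure under function spaces (hypothetical rules, in the same pattern; a code for the two-element set \textit{2} would be analogous):

\displaystyle a:\textit{Set},b:\textit{Set}\vdash a\to b : \textit{Set}

\displaystyle a:\textit{Set},b:\textit{Set}\vdash \textit{Elt}(a\to b)\equiv \textit{Elt}(a)\to\textit{Elt}(b)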

In many situations, including much of informal (yet entirely rigorous) mathematics, it is convenient to identify sets that are isomorphic, so that, for example, the sets \textit{nat}\times\textit{nat} and \textit{2}\to\textit{nat} would be interchangeable.  In particular, these sets should have the “same” (type of) elements.  But obviously these two sets do not have the same elements (one consists of pairs, the other of functions, under the natural interpretation of the sets as types), so we cannot hope to treat \textit{Elt}(\textit{nat}\times\textit{nat}) and \textit{Elt}(\textit{2}\to\textit{nat}) as equal, though we may wish to regard them as equivalent in some sense.  Moreover, since two sets can be isomorphic in different ways, isomorphism must be considered a structure on sets, rather than a property of sets.  For example, \textit{2} is isomorphic to itself in two different ways, by the identity and by negation (swapping).  Thus, equivalence of the elements of two isomorphic sets must take account of the isomorphism itself, and hence must have computational significance.

It is precisely the desire to accommodate equivalences such as this that gives rise to higher dimensions in type theory.  Specifically, we introduce two-dimensional structure by adding a new judgement to type theory stating that two members of a type are related by a specified transformation:

\displaystyle \Gamma\vdash \alpha :: M\simeq N : A

Crucially, families of types must respect transformation:

\displaystyle{{\Gamma,x:A\vdash B\,\textsf{type}\quad \Gamma\vdash \alpha :: M\simeq N:A \quad \Gamma\vdash P:B[M/x]}\over  {\Gamma\vdash \textit{map}\{x:A.B\}[\alpha](P):B[N/x]}}.

A transformation should be thought of as evidence of interchangeability of the members of a type; the map operation puts the evidence to work.

Returning to our example of the universe of sets, let us specify that a transformation from one set to another is a pair of functions constituting a bijection between the elements of the two sets:

\displaystyle{  {\begin{array}{c}  \Gamma,x:\textit{Elt}(a)\vdash f(x):\textit{Elt}(b) \\  \Gamma,x:\textit{Elt}(b)\vdash g(x):\textit{Elt}(a) \\  \Gamma,x:\textit{Elt}(a)\vdash g(f(x))\equiv x:\textit{Elt}(a) \\  \Gamma,x:\textit{Elt}(b)\vdash f(g(x))\equiv x:\textit{Elt}(b)  \end{array}}  \over  {\Gamma\vdash\textit{iso}(f,g)::a\simeq b:\textit{Set}}}

(The equational conditions here are rather strong; I will return to this point in a future post.  For now, let us just take this as the defining criterion of isomorphism between two sets.)

Evidence for the isomorphism of two sets induces a transformation on types given by the following equation:

\displaystyle{  {\Gamma\vdash M:\textit{Elt}(a)}\over  {\Gamma\vdash \textit{map}\{\textit{Elt}\}[\textit{iso}(f,g)](M)\equiv f(M) : \textit{Elt}(b)}}

(suppressing the obvious typing premises for f and g).  In words, an isomorphism between sets a and b induces a transformation between their elements given by the isomorphism itself.
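
For instance, writing \textit{bool} for a hypothetical code with \textit{Elt}(\textit{bool})\equiv\textit{2}, and \textit{not} for negation (which is its own inverse), the second self-isomorphism of \textit{2} mentioned earlier is \textit{iso}(\textit{not},\textit{not})::\textit{bool}\simeq\textit{bool}:\textit{Set}, and its action negates elements:

\displaystyle{\textit{map}\{\textit{Elt}\}[\textit{iso}(\textit{not},\textit{not})](\textit{tt})\equiv \textit{not}(\textit{tt})\equiv\textit{ff}:\textit{Elt}(\textit{bool})}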

This, then, is the basic structure of two-dimensional type theory, but there is much more to say!  In future posts I intend to develop the ideas further, including a discussion of these topics:

  1. The definition of \textit{map}\{x:A.B\} is given by induction over the structure of x:A.B.  The above equation covers only one case; there are more, corresponding to each way of forming a family of types x:A.B.  The extension to function types will expose the role of the inverse of the isomorphism between sets (see the sketch after this list).
  2. The judgement \alpha::M\simeq N:A may be internalized as a type, which will turn out to correspond to the identity type in Martin-Löf’s type theory, albeit with a different interpretation given by Altenkirch.  The identity type plays an important role in the extension to all higher dimensions.
  3. To ensure coherence and to allow for greater expressiveness we must also discuss equality and equivalence of transformations and how these influence the induced transformation of families of types.  In particular transformations admit a groupoid structure which expresses reflexivity, symmetry, and transitivity of transformation; these conditions can be considered to hold strongly or weakly, giving rise to different applications and interpretations.
  4. Higher-dimensional type theory admits a fascinating interpretation in terms of homotopy theory, in which types are interpreted as spaces, members as points in those spaces, and transformations as paths, or homotopies.  This, together with a generalization of the treatment of universes outlined above, is the basis for Voevodsky’s work on univalent foundations of mathematics.
  5. One may consider relaxing the groupoid structure on transformations to a “monoidoid” (that is, category) structure by not requiring symmetry (inverses).  The structure of type theory changes significantly in the absence of symmetry, posing significant open problems, but admitting a wider range of applications of higher-dimensional structure in both CS and mathematics.
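
To anticipate item 1, here is a heuristic sketch (details to come in a later post) of the action of a family of function types, which shows where the inverse comes in: the argument is transported backwards along \alpha before the function is applied, and the result forwards afterwards:

\displaystyle{\textit{map}\{x:A.B\to C\}[\alpha](F)\equiv\lambda y.\,\textit{map}\{x:A.C\}[\alpha](F(\textit{map}\{x:A.B\}[\alpha^{-1}](y)))}
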
To keep up to date with the latest developments in this area, please visit the Homotopy Type Theory blog!

The semester’s over

May 4, 2011

One reason that I started this blog was to record my experiences with a new course on functional programming for freshmen at Carnegie Mellon.  Classes have ended, the final exam is graded, and we are in the process of taking stock of what we accomplished, and what we could do better.  This was the first instance of the course, taught to 83 first-year computer science students who volunteered to be part of the new curriculum.  (This represents about 2/3 of the freshman class.)  All had taken a new course on imperative programming taught by Frank Pfenning the previous semester, and all were simultaneously taking 251, the legendary theory foundation course developed here over the last dozen or so years.

Starting this fall the course will be part of the standard curriculum, and will be taught to a broader range of students, including the electrical engineers and the full class of CS students.  We’re still working out whether the standard order will be imperative, then functional, or functional, then imperative.  Much depends on the mathematical maturity of the incoming students.  Those with weak mathematical skills (the vast majority), but some programming experience, will likely start with imperative programming, which they will take simultaneously with a standard discrete math course taught in the mathematics department.  Those with strong mathematical skills, or high ambitions, will start with functional programming.  A small number will park for one semester, taking discrete math and a programming course primarily intended for non-majors, to bring themselves up to speed.

Our new data structures and algorithms course, being developed by Guy Blelloch, will be offered in the fall for students who have already taken the IP and FP classes.  Guy plans to make heavy use of functional programming, and will be stressing parallel algorithms on persistent data structures as the common case, leaving the traditional sequential, ephemeral case to the imperative programming class.  Guy’s course is essentially a continuation (ahem) of the FP class that emphasizes more sophisticated data structures and algorithms, and more complex programming problems.

For those who might be interested, I’m attaching the final exam for the course.  The students did exceptionally well (average score of 88%), even though we feared beforehand that it was too long and difficult.  The spread was very narrow, perhaps because the exam was too easy, or because, this being a class of volunteers, their skills were unusually strong and uniform.  We’ll know more after the fall once we’ve taught the class to a broader range of students.  I do think the exam is representative of what we expected them to be able to do at the end, and they showed us that they can do it.  A clear success!

15-150 Final Exam (Spring 2011)


Of Course ML Has Monads!

May 1, 2011

A popular meme in the world of PLs is that “Haskell has monads”, with the implication that this is a distinctive feature of the language, separate from all others.  While it is true that Haskell has popularized the use of monads as a program structuring device, the idea of a monad is not so much an issue of language design (apart from the ad hoc syntactic support provided by Haskell) as one of library design.  After all, a monad is just one of a zillion signatures (type classes) with which to structure programs, and there is no particular reason why this one cannot be used in any language that supports even a modicum of modularity.  There is a particular reason why monads had to arise in Haskell, though, which is to defeat the scourge of laziness.  But even in the presence of monads, one still needs things like seq to sequentialize evaluation, because the lazy cost model is so disadvantageous.  In an ironic twist, the emphasis on monads in Haskell means that programming in Haskell is rather like programming in an updated dialect of Algol, with a richer type structure than the original but the same overall structure.

Examined from the point of view of ML, monads are but a particular use of modules.  The signature of monads is given by the definition

signature MONAD = sig
  type 'a monad
  val ret : 'a -> 'a monad
  val bnd : 'a monad -> ('a -> 'b monad) -> 'b monad
end

There are many, many, many structures that satisfy this signature; I needn’t (and, in any case, can’t) rehearse them all here.  One particularly simple example should suffice to give the general idea:

structure Option : MONAD = struct
  type 'a monad = 'a option
  fun ret x = SOME x
  fun bnd (SOME x) k = k x
    | bnd NONE k = NONE
end

This is of course the option monad, which is sometimes used to model the data flow aspects of exceptions, perhaps with some elaboration of the NONE case to associate an exceptional value with a non-result.  (The control flow aspects are not properly modeled this way, however.  For that one needs, in addition, access to some sort of jump mechanism.)
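
To see the sequencing codified by bnd at work, here is a small illustrative sketch chaining two partial operations; hd (shadowing the basis function of the same name) and recip are hypothetical helpers of my own devising:

fun hd nil = NONE
  | hd (x :: _) = SOME x

fun recip 0 = NONE
  | recip n = SOME (1 div n)

(* SOME (1 div x) when xs has head x <> 0; NONE otherwise *)
fun recipHead xs =
  Option.bnd (hd xs) (fn x =>
  Option.bnd (recip x) (fn y =>
  Option.ret y))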

Examples like this one proliferate.  A monad is represented by a structure.  Any structure that provides the facilities specified by the MONAD signature gives rise to the characteristic sequentialization mechanisms codified by it.  Monad transformers are functors that transform one monad into another, with no fuss or bother, and no ad hoc mechanisms required.  Standard modular programming techniques suffice to represent monads; moreover, the techniques involved are fully general, and are equally applicable to other signatures of interest (arrows, or quivers, or bows, or what have you).  Moreover, it is shown in my paper with Chakravarty and Dreyer how to integrate modules into the type inference mechanism of ML so that one can get automatic functor instantiation in those limited cases where it is self-evident what is intended.  This has been implemented by Karl Crary in a prototype compiler for an extension of Standard ML, and it would be good to see this supported in more broadly available compilers for the language.
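
To make the remark about monad transformers concrete, here is a minimal sketch of an option transformer as an ordinary ML functor; the name OptionT is mine, for illustration only:

functor OptionT (M : MONAD) : MONAD = struct
  type 'a monad = 'a option M.monad
  fun ret x = M.ret (SOME x)
  (* run the underlying computation, then proceed only on success *)
  fun bnd m k =
    M.bnd m (fn SOME x => k x | NONE => M.ret NONE)
end

Applying OptionT to any monad yields a monad whose computations may additionally fail, with no ad hoc mechanism in sight.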

The bulk of the mania about monads is therefore accounted for by modules.  I have no doubt, however, that you are wondering about the infamous IO monad in Haskell (and its associated work-around, unsafePerformIO).  Isn’t that a fundamental feature of the language that cannot be replicated in ML?  Hardly!  It’s entirely a matter of designing the signatures of the standard basis library modules, and nothing more.  The default basis library does not attempt to segregate effects into a monad, but it is perfectly straightforward to do this yourself, by providing your own layer over the standard basis, or to reorganize the standard basis to enforce the separation.  For example, the signature of reference cells might look like this:

signature REF = sig
  type 'a ref
  val ref : 'a -> 'a ref IO.monad
  val ! : 'a ref -> 'a IO.monad
  val := : 'a ref -> 'a -> unit IO.monad
end

Here we are presuming that we have a fixed declaration

structure IO : MONAD = ...

that packages up the basic IO primitives that are already implemented in the run-time system of ML, more or less like in Haskell.  The other signatures, such as those for mutable arrays or for performing input and output, would be modified in a similar manner to push effects into the IO monad.  Et voilà, you have monadic effects, just like in Haskell.
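
Assuming a structure Ref : REF matching the signature above, together with the hypothetical IO : MONAD (a sketch of a reorganized basis, not the actual one), a simple imperative program is then an ordinary value of monadic type:

(* a description of the effect of incrementing a cell; nothing
   happens until the top level runs the resulting computation.
   Qualified names such as Ref.! and Ref.:= are nonfix, so
   ordinary application syntax applies. *)
fun incr (r : int Ref.ref) : unit IO.monad =
  IO.bnd (Ref.! r) (fn n => Ref.:= r (n + 1))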

There’s really nothing to it.  In fact, the whole exercise was carried out by a Carnegie Mellon student, Phillippe Ajoux, a couple of years ago.  He also wrote a number of programs in this style just to see how it all goes: swimmingly.  He also devised syntactic extensions to the Moscow ML compiler that provide a nicer notation for programming with monads, much as in Haskell, but better aligned with ML’s conventions.  (Ideally it should be possible to provide syntactic support for any signature, not just monads, but I’m not aware of a worked-out design for the general case, involving as it would an intermixing of parsing and elaboration.)

My point is that the ML module system can be deployed by you to impose the sorts of effect segregation imposed on you by default in Haskell.  There is nothing special about Haskell that makes this possible, and nothing special about ML that inhibits it.  It’s all a mode of use of modules.

So why don’t we do this by default?  Because it’s not such a great idea.  Yes, I know it sounds wonderful at first, but then you realize that it’s pretty horrible.  Once you’re in the IO monad, you’re stuck there forever, and are reduced to Algol-style imperative programming.  You cannot easily convert between functional and monadic style without a radical restructuring of code.  And you inevitably need unsafePerformIO to get anything serious done.  In practical terms, you are deprived of the useful concept of a benign effect, and that just stinks!

The moral of the story is that of course ML “has monads”, just like Haskell.  Whether you want to use them is up to you; they are just as useful, and just as annoying, in ML as they are in Haskell.  But they are not forced on you by the language designers!

Update: This post should’ve been called “ML Has Monads, Why Not?”, or “Of Course ML Has Comonads!”, but then no one was wondering about that.

Update: I now think that the last sentence is excessive.  My main point is simply that it’s very simple to go one way or the other with effects, if you have modules to structure things; it’s all a matter of library design.  A variant of ML that enforced the separation of effects is very easily constructed; the question is whether it is useful or not.  I’ve suggested that the monadic separation is beguiling, but not clearly a great idea.  Alternatively, one can say that we’re not that far away from eliminating laziness from Haskell, at least in this respect: just re-do the standard basis library in ML, and you’re a good ways there.  Plus you have modules, and we understand how to integrate type classes with modules, so the gap is rather small.

