My activity on this blog has been reduced to nil recently because I have been spending my time preparing a series of lectures on homotopy type theory, starting from basic principles and ending with the application of univalence and higher inductive types to algebraic topology. The course web page contains links to the video-taped lectures and to the course notes prepared by the students this semester. These will be available indefinitely and are accessible to anyone interested in the course. My hope is that these will provide a useful introduction to the HoTT Book, which is available for free on the web and may be printed on demand. My intention is to write an introduction to dependent type theory for a CS audience, which will serve as a reference for much ongoing work in the area and as a prolegomenon to the HoTT Book itself.
Back in the 1980s it was very fashionable to talk about “declarative” programming languages. But to my mind there was never a clear definition of a “declarative language”, and hence no way to tell what is declarative and what is not. Lacking any clear meaning, the term came to refer to the arbitrary conflation of functional with logic programming to such an extent that “functional-and-logic-programming” almost became a Germanic thing-in-itself (ding an sich). Later, as the logic programming wave subsided, the term “declarative”, like “object-oriented”, came to be an expression of approval, and then, mercifully, died out.
Or so I had thought. Earlier this week I attended a thriller of an NSF-sponsored workshop on high-level programming models for parallelism, where I was surprised by the declarative zombie once again coming to eat our brains. This got me to thinking, again, about whether the term has any useful meaning. For what it’s worth, and perhaps to generate useful debate, here’re some things that I think people mean, and why I don’t think they mean very much.
- “Declarative” means “high-level”. This just seems to replace one vague term by another.
- “Declarative” means “not imperative”. But this flies in the face of reality. Functional languages embrace and encompass imperative programming as a special case, and even Prolog has features, such as assert and retract, whose meaning is imperative.
- “Declarative” means “functional”. OK, but then we don’t really need another word.
- “Declarative” means “what, not how”. But every language has an operational semantics that defines how to execute its programs, and you must be aware of that semantics to understand programs written in it. Haskell has a definite evaluation order, just as ML has a (different) one, and even Prolog execution is defined by a clear operational semantics that is sensitive to clause order and that can be influenced by “cut”.
- “Declarative” means “equational”. This does not distinguish anything, because there is a well-defined notion of equivalence for any programming language, namely observational equivalence. Different languages induce different equivalences, of course, but how does one say that one equivalence is “better” than another? At any rate, I know of no stress on equational properties of logic programs, so either logic programs are not “declarative” or “equational reasoning” is not their defining characteristic.
- “Declarative” means “referentially transparent”. The misappropriation of Quine’s terminology only confuses matters. All I’ve been able to make of this is that “referentially transparent” means that beta-equivalence is valid. But beta-equivalence is not a property of an arbitrary programming language, nor is it clear why this particular equivalence should be first among equals. And in any case, why would you decide a priori on what equivalences you want before you even know what it means to run a program?
- “Declarative” means “has a denotation”. This gets closer to the point, I think, because we might well say that a declarative semantics is one that gives meaning to programs as some kind of mapping between some sort of spaces. In other words, it would be a synonym for “denotational semantics”. But every language has a denotational semantics (perhaps by interpretation into a Kripke structure to impose sequencing), so having one does not seem to distinguish a useful class of languages. Moreover, even in the case of purely functional programs, the usual denotational semantics (as continuous maps) is not fully abstract, and the fully abstract semantics (as games) is highly operational. Perhaps a language is declarative in proportion to being able to give it semantics in some “familiar” mathematical setting?
- “Declarative” means “implicitly parallelizable”. This was certainly the sense intended at the NSF meeting, but no two “declarative” languages seemed to have much in common. Charlie Garrod proposes just “implicit”, which is pretty much synonymous with “high level”, and may be the most precise sense there is to the term.
No doubt this list is not exhaustive, but I think it covers many of the most common interpretations. It seems to me that none of them have a clear meaning or distinguish a well-defined class of languages. Which leads me to ask, is there any such thing as a declarative programming language?
[Thanks to the late Steve Gould for inspiring the title of this post.]
[Update: add a remark about semantics, add another candidate meaning.]
[Update: added link to previous post.]
It’s quite obvious to me that the treatment of exceptions in Haskell is wrong. Setting aside the example I gave before of an outright unsoundness, exceptions in Haskell are nevertheless done improperly, even if they happen to be sound. One reason is that the current formulation is not stable under seemingly mild extensions to Haskell that one might well want to consider, notably any form of parameterized module or any form of shadowing of exception declarations. For me this is enough to declare the whole thing wrong, but as it happens Haskell is too feeble to allow full counterexamples to be formulated, so one may still claim that what is there now is ok … for now.
But I wish to argue that Haskell exceptions are nevertheless poorly designed, because they miss the crucial point that exception values are shared secrets. Let us distinguish two parties, the raiser of the exception and the handler of it. The fundamental idea of exceptions is to transfer a value from the raiser to the handler without the possibility of interception by another party. While the language of secrecy seems appropriately evocative, I hasten to add that I am not here concerned with “attackers” or suchlike, but merely with the difficulties of ensuring modular composition of programs from components. In such a setting the “attacker” is yourself, who is not malicious, but who is fallible.
By raising an exception the raiser is “contacting” a handler with a message. The raiser wishes to limit which components of a program may intercept that message. More precisely, the raiser wishes to ensure that only certain previously agreed-upon components may handle that exception, perhaps only one. This property should remain stable under extension to the program or composition with any other component. It should not be possible for an innocent third party to accidentally intercept a message that was not intended for it.
Achieving this requires a secrecy mechanism that allows the raiser and the handler(s) to agree upon their cooperation. This is accomplished by dynamic classification, exactly as it is done properly in Standard ML (but not O’Caml). The idea is that the raiser has access to a dynamically generated constructor for exception values, and any handler has access to the corresponding dynamically generated matcher for exception values. This means that the handler, and only the handler, can decode the message sent by the raiser; no other party can do anything with it other than pass it along unexamined. It is “perfectly encrypted” and cannot be deciphered by any unintended component.
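To make this concrete, here is a minimal Standard ML sketch, with names of my own choosing, of a locally declared exception acting as a shared secret between raiser and handler:

```sml
(* Each call to search generates a fresh exception class Found; only this
   function can construct or analyze values of that class, so no third
   party can examine the payload, only propagate it. *)
fun search (p : int -> bool) (ts : int list list) : int option =
  let
    exception Found of int
    fun find [] = ()
      | find (x :: xs) = if p x then raise Found x else find xs
  in
    (List.app find ts; NONE) handle Found x => SOME x
  end

(* For example, search (fn x => x mod 2 = 0) [[1, 3], [5, 4, 7]] yields SOME 4. *)
```

A wild-card handler interposed between the raise and this handle could observe that some exception passed by, but it could do nothing with the value other than re-raise it, which is exactly the secrecy property at issue.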
The usual exception mechanisms, as distinct from exception values, allow for “wild-card handlers”, which means that an exception can be intercepted by a third party. This means that the raiser cannot ensure that the handler actually receives the message, but it can ensure, using dynamic classification, that only a legitimate handler may decipher it. Decades of experience with Standard ML show that this is a very useful thing indeed, with application far beyond just the simple example considered here. For full details, see my forthcoming book, which gives a full discussion of dynamic classification and its role in ensuring integrity and confidentiality in a program. Dynamic classification is not just for “security”, but is rather a good tool for everyday programming.
So why does Haskell not do it this way? Well, I’m not the one to answer that question, but my guess is that doing so conflicts with the monadic separation of effects. To do exceptions properly requires dynamic allocation, and this would force code that is otherwise functional into the IO monad. Alternatively, one would have to use unsafePerformIO—as in ezyang’s implementation—to “hide” the effects of exception allocation. But this would then be further evidence that the strict monadic separation of effects is untenable.
Update: Reworked last paragraph to clarify the point I am making; the previous formulation appears to have invited misinterpretation.
Update: This account of exceptions also makes clear why the perennial suggestion to put exception-raising information into types makes no sense to me. I will write more about this in a future post, but meanwhile contemplate that a computation may raise an exception that is not even in principle nameable in the type. That is, it is not conservativity that’s at issue, it’s the very idea.
Practical Foundations for Programming Languages, published by Cambridge University Press, is now available in print! It can be ordered from the usual sources, and maybe some unusual ones as well. If you order directly from Cambridge using this link, you will get a 20% discount on the cover price (pass it on).
Since going to press I have, inevitably, been informed of some (so far minor) errors that are corrected in the online edition. These corrections will make their way into the second printing. If you see something fishy-looking, compare it with the online edition first to see whether I may have already corrected the mistake. Otherwise, send your comments to me.
By the way, the cover artwork is by Scott Draves, a former student in my group, who is now a professional artist as well as a researcher at Google in NYC. Thanks, Scott!
Update: The very first author’s copy hit my desk today!
The course materials for our second-semester data structures and algorithms course are available here.
Thanks to Dan Licata and Guy Blelloch for helping make these available. Comments are most welcome.
Update: I will be giving a talk about this at Google Pittsburgh on Thursday the 20th of September.
In previous posts I have talked about the new introductory CS curriculum under development at Carnegie Mellon. After a year or so of planning, we began to roll out the new curriculum in the Spring of 2011, and have by now completed the transition. As mentioned previously, the main purpose is to bring the introductory sequence up to date, with particular emphasis on introducing parallelism and verification. A secondary purpose was to restore the focus on computing fundamentals, and correct the drift towards complex application frameworks that offer the students little sense of what is really going on. (The poster child was a star student who admitted that, although she had built a web crawler the previous semester, she in fact had no idea how to build one.) A particular problem was that what should have been a grounding in the fundamentals of algorithms and data structures had turned into an exercise in object-oriented programming, swamping the core content with piles of methodology of dubious value to beginning students. (There is a new, separate, upper-division course on oo methodology for students interested in this topic.) A third purpose was to level the playing field, so that students who had learned about programming on the street would be challenged at least as much as, if not more than, students without much or any such experience. One consequence would be to reduce the concomitant bias against women entering CS, many fewer of whom have prior computing experience than the men do.
The solution was a complete do-over, jettisoning the traditional course completely, and starting from scratch. The most important decision was to emphasize functional programming right from the start, and to build on this foundation for teaching data structures and algorithms. Not only does FP provide a much more natural starting point for teaching programming, it is infinitely more amenable to rigorous verification, and provides a natural model for parallel computation. Every student comes to university knowing some algebra, and they are therefore familiar with the idea of computing by calculation (after all, the word algebra derives from the Arabic al-jabr). Functional programming is a generalization of algebra, with a richer variety of data structures and a richer set of primitives, so we can build on that foundation. It is critically important that variables in FP are, in fact, mathematical variables, and not some distorted facsimile thereof, so all of their mathematical intuitions are directly applicable. So we can immediately begin discussing verification as a natural part of programming, using principles such as mathematical induction and equational reasoning to guide their thinking. Moreover, there are natural concepts of sequential time complexity, given by the number of steps required to calculate an answer, and parallel time complexity, given by the data dependencies in a computation (often made manifest by the occurrences of variables). These central concepts are introduced in the first week, and amplified throughout the semester.
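As a small illustration (my own example, not drawn from the course materials), here is how these themes show up in Standard ML code, with the specification and cost observations recorded as comments:

```sml
(* Specification: sum xs equals the sum of the elements of xs.  The proof is
   by structural induction on xs, using equational reasoning on the two
   clauses.  The work is O(n); since each recursive call depends on the
   previous one, the span is also O(n). *)
fun sum []        = 0
  | sum (x :: xs) = x + sum xs

(* A variant whose two recursive calls do not depend on one another, and so
   may be evaluated in parallel; with an appropriate data structure (a
   balanced tree or sequence rather than a list) the span becomes
   logarithmic.  The split helper is naive and for illustration only. *)
fun split xs =
  let val k = length xs div 2
  in (List.take (xs, k), List.drop (xs, k)) end

fun psum []  = 0
  | psum [x] = x
  | psum xs  = let val (l, r) = split xs in psum l + psum r end
```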
Two major homework exercises embody the high points of the first-semester course, one to do with co-development of code with proof, the other to do with parallelism. Co-development of program and proof is illustrated by an online regular expression matcher. The problem is a gem for several reasons. One is that it is essentially impossible for anyone to solve by debugging a blank screen. This sets us up nicely for explaining the importance of specification and proof as part of the development process. Another is that it is very easy, almost inevitable, for students to make mistakes that are not easily caught or diagnosed by testing. We require the students to carry out a proof of the correctness of the matcher, and in the process discover a point where the proof breaks down, which then leads to a proper solution. (See my JFP paper “Proof-Directed Debugging” for a detailed development of the main ideas.) The matcher also provides a very nice lesson in the use of higher-order functions to capture patterns of control, resulting in an extremely clean and simple matcher whose correctness proof is highly non-trivial.
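For a concrete picture, here is a rough Standard ML sketch of the naive higher-order matcher in question; the datatype and names are mine, and the Star clause contains precisely the kind of flaw that the correctness proof exposes:

```sml
datatype regexp
  = Zero                      (* matches nothing *)
  | One                       (* matches the empty string *)
  | Char of char
  | Plus of regexp * regexp   (* alternation *)
  | Times of regexp * regexp  (* concatenation *)
  | Star of regexp

(* match r cs k: does some prefix of cs match r, with the continuation k
   accepting the remaining suffix? *)
fun match (r : regexp) (cs : char list) (k : char list -> bool) : bool =
  case r of
      Zero => false
    | One => k cs
    | Char c => (case cs of
                     c' :: cs' => c = c' andalso k cs'
                   | [] => false)
    | Plus (r1, r2) => match r1 cs k orelse match r2 cs k
    | Times (r1, r2) => match r1 cs (fn cs' => match r2 cs' k)
    | Star r1 =>
        (* The attempted proof breaks down here: if r1 accepts the empty
           string, this recursion need not terminate. *)
        k cs orelse match r1 cs (fn cs' => match (Star r1) cs' k)

fun accepts r s = match r (String.explode s) List.null

(* accepts (Star (Plus (Char #"a", Char #"b"))) "abba" evaluates to true,
   but accepts (Star One) "a" fails to terminate, exhibiting the flaw. *)
```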
The main parallelism example is the Barnes-Hut algorithm for solving the n-body problem in physics. Barnes-Hut is an example of a “tree-based” method, a class of algorithms pioneered by Andrew Appel, for solving this well-known problem. At a high level the main idea is to decompose space into octants (or quadrants if you’re working in the plane), recursively solving the problem for each octant and then combining the solutions to make an overall solution. The key idea is to use an approximation for bodies that are far enough away: a distant constellation can be regarded as an atomic body for the purposes of calculating the effects of its stars on the sun, say. The problem is naturally parallelizable, because of the recursive decomposition. Moreover, it provides a very nice link to their high school physics. Since FP is just an extension of mathematics, the equations specifying the force law and Newton’s Law carry over directly to the code. This is an important sense in which FP builds on and amplifies their prior mathematical experience, and shows how one may connect computer science with other sciences in a rather direct manner.
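To give a sense of how directly the physics transcribes into code, here is a small Standard ML sketch of the pairwise gravitational acceleration, written in two dimensions for brevity; the record layout and names are hypothetical, not the course's actual code:

```sml
type vec  = real * real
type body = { mass : real, pos : vec }

val G = 6.674e~11   (* gravitational constant, SI units *)

fun sub ((x1, y1) : vec, (x2, y2) : vec) : vec = (x1 - x2, y1 - y2)
fun scale (c : real, (x, y) : vec) : vec = (c * x, c * y)
fun norm ((x, y) : vec) : real = Math.sqrt (x * x + y * y)

(* Acceleration induced on body b by body b', transcribed from
   a = G * m' * (p' - p) / |p' - p|^3. *)
fun accelOn (b : body, b' : body) : vec =
  let
    val d = sub (#pos b', #pos b)
    val r = norm d
  in
    scale (G * #mass b' / (r * r * r), d)
  end
```

In Barnes-Hut the same function is applied either to an individual body or to the center of mass of a distant region of space, which is where the approximation enters.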
The introductory FP course establishes the basis for the new data structures and algorithms course that most students take in either the third or fourth semester. This course represents a radical departure from tradition in several ways. First, it is a highly rigorous course in algorithms that rivals the upper-division course taught at most universities (including our own) for the depth and breadth of ideas it develops. Second, it takes the stance that all algorithms are parallel algorithms, with sequential being but a special case of parallel. Of course some algorithms have a better parallelizability ratio (a precise technical characterization of the ability to make use of parallelism), and this can be greatly affected by data structure selection, among other considerations. Third, the emphasis is on persistent abstract types, which are indispensable for parallel computing. No more linked lists, no more null pointer nonsense, no more mutation. Instead we consider abstract types of graphs, trees, heaps, and so forth, all with a persistent semantics (operations create “new” ones, rather than modify “old” ones). Fourth, we build on the reasoning methods introduced in the first semester course to prove the correctness and the efficiency of algorithms. Functional programming makes all of this possible. Programs are concise and amenable to proof, they are naturally parallel, and they enjoy powerful typing properties that enforce abstraction in a mathematically rigorous manner. Fifth, there is a strong emphasis on problems of practical interest. For example, homework 1 is the shotgun method for genome sequencing, a parallel algorithm of considerable practical importance and renown.
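To give a flavor of the persistent abstract types mentioned above, an interface in that style might look like the following Standard ML signature; it is illustrative only, not the course's actual library:

```sml
signature SET =
sig
  type elt
  type set
  val empty  : set
  val insert : elt * set -> set   (* returns a new set; the argument is unchanged *)
  val member : elt * set -> bool
  val union  : set * set -> set   (* amenable to divide-and-conquer parallelism *)
end
```

Because no operation mutates its argument, two parallel tasks may freely share a set, and an implementation by balanced trees gives the usual logarithmic costs.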
There is a third introductory course in imperative programming, taken in either the first or second semester (alternating with the functional programming course at the student’s discretion), that teaches the “old ways” of doing things using a “safe” subset of C. Personally, I think this style of programming is obsolete, but there are many good reasons to continue to teach it, the biggest probably being the legacy problem. The emphasis is on verification, using simple assertions that are checked at run-time and which may be verified statically in some situations. It is here that students learn how to do things “the hard way” using low-level representations. This course is very much in the style of the 1970s-era data structures course, the main difference being that the current incarnation of Pascal has curly braces, rather than begin-end.
For the sake of those readers who may not be up to date on such topics, it seems important to emphasize that functional programming subsumes imperative programming. Every functional language is capable of expressing the old-fashioned sequential pointer-hacking implementation of data structures. You can even reproduce Tony Hoare’s self-described “billion dollar mistake” of the cursed “null pointer” if you’d like! But the whole point is that it is rarely useful, and almost never elegant, to work that way. (Curiously, the “monad mania” in the Haskell community stresses an imperative, sequential style of programming that is incompatible with parallelism, but this is not forced on you as it is in the imperative world.) From this point of view there is no loss, and considerable gain, in teaching functional programming from the start. It puts a proper emphasis on mathematically sane programming methods, but allows for mutation-based programming where appropriate (for example, in engendering “side effects” on users).
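To substantiate the claim, here is a short Standard ML sketch (names of my choosing) of an old-fashioned mutable linked list, null pointer and all:

```sml
(* NONE plays the role of the null pointer; ref provides the mutation. *)
datatype 'a cell = Cell of { hd : 'a, tl : 'a cell option ref }

(* Destructively append a cell at the end of a nonempty list. *)
fun append (Cell { tl, ... } : 'a cell, c : 'a cell) : unit =
  case !tl of
      NONE => tl := SOME c
    | SOME next => append (next, c)

val l = Cell { hd = 1, tl = ref NONE }
val () = append (l, Cell { hd = 2, tl = ref NONE })
```

The point, as said above, is not that one would want to program this way, but that nothing is lost by starting from the functional side.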
To learn more about these courses, please see the course web pages:
I encourage readers to review the syllabi and course materials. There is quite a large body of material in place that we plan to expand into textbook-level treatments in the near future. We also plan to write a journal article documenting our experiences with these courses.
I am very grateful to my colleagues Guy Blelloch, Dan Licata, and Frank Pfenning for their efforts in helping to develop the new introductory curriculum at Carnegie Mellon, and to the teaching assistants whose hard work and dedication put the ideas into practice.
Update: Unfortunately, the homework assignments for these courses are not publicly available, because we want to minimize the temptation for students to make use of old assignments and solutions in preparing their own work. I am working with my colleagues to find some way in which we can promote the ideas without sacrificing too much of the integrity of the course materials. I apologize for the inconvenience.
A new feature of this year’s summer school was a reduction in the number of lectures, and an addition of daily open problem sessions for reviewing the day’s material. This turned out to be a great idea for everyone, because it gave us more informal time together, and gave the students a better chance at digesting a mountain of material. It also turned out to be a bit of an embarrassment for me, because I posed a question off the top of my head for which I thought I had two proofs, neither of which turned out to be valid. The claimed theorem is, in fact, true, and one of my proofs is easily corrected to resolve the matter (the other, curiously, remains irredeemable for reasons I’ll explain shortly). The whole episode is rather interesting, so let me recount a version of it here for your enjoyment.
The context of the discussion was extensional type theory, or ETT, which is characterized by an identification of judgemental with propositional equality: if you can prove that two objects are equal, then they are interchangeable for all purposes without specific arrangement. The alternative, intensional type theory, or ITT, regards judgemental equality as definitional equality (symbolic evaluation), and gives computational meaning to proofs of equality of objects of a type, allowing in particular transport across two instances of a family whose indices are equal. NuPRL is an example of an ETT; CiC is an example of an ITT.
Within the framework of ETT, the principle of function extensionality comes “for free”, because you can prove it to hold within the theory. Function extensionality states that f = g : A→B whenever f(x) = g(x) : B for all x : A. That is, two functions are equal if they are equal on all arguments (and, implicitly, respect equality of arguments). Function extensionality holds definitionally if your definitional equivalence includes the η and ξ rules, but in any case does not have the same force as extensional equality. Function extensionality as a principle of equality cannot be derived in ITT, but must be added as an additional postulate (or derived from a stronger postulate, such as univalence or the existence of a one-dimensional interval type).
Regardless of whether we are working in an extensional or an intensional theory, it is easy to see that all functions of type ℕ→ℕ definable in type theory are computable. For example, we may show that all such functions may be encoded as recursive functions in the sense of Kleene, or, in a more modern formulation, we may give a structural operational semantics that provides a deterministic execution model for such functions (given such a function f and an argument n : ℕ, run f on n until it stops, and yield that as the result). Of course the proof relies on some fairly involved meta-theory, but it is all constructively valid (in an informal sense) and hence provides a legitimate computational interpretation of the theory. Another way to say the same thing is that the comprehension principles of type theory are such that every object deemed to exist has a well-defined computational meaning, so it follows that all functions defined within it are going to be computable.
This is all just another instance of Church’s Law, the scientific law stating that any formalism for defining computable functions will turn out to be equivalent to, say, the λ-calculus when it comes to definability of number-theoretic functions. (Ordinarily Church’s Law is called Church’s Thesis, but for reasons given in my Practical Foundations book, I prefer to give it the full status of a scientific law.) Type theory is, in this respect, no better than any other formalism for defining computable functions. By now we have such faith in Church’s Law that this remark is completely unsurprising, even boring to state explicitly.
So it may come as a surprise to learn that Church’s Law is false. I’m being provocative here, so let me explain what I mean before I get flamed to death on the internet. The point I wish to make is that there is an important distinction between the external and the internal properties of a theory. For example, in first-order logic the Löwenheim-Skolem Theorem tells us that if a first-order theory has an infinite model, then it has a countable model. This implies that, externally to ZF set theory, there are only countably many sets, even though internally to ZF set theory we can carry out Cantor’s argument to show that the powerset operation takes us to exponentially higher cardinalities far beyond the countable. One may say that the “reason” is that the evidence for the countability of sets is a bijection that is not definable within the theory, so that it cannot “understand” its own limitations. This is a good thing.
The situation with Church’s Law in type theory is similar. Externally we know that every function on the natural numbers is computable. But what about internally? The internal statement of Church’s Law is this: Π f:ℕ→ℕ. Σ e:ℕ. Π n:ℕ. {e}(n) ≃ f(n), where the notation {e}(n) ≃ f(n) means, informally, that e is the code of a program that, when executed on input n, evaluates to f(n). In Kleene’s original notation this would be rendered as Σ c:ℕ. T(e, n, c) ∧ Id_ℕ(U(c), f(n)), where the T predicate encodes the operational semantics, and the U predicate extracts the answer from a successful computation. Note that the expansion makes use of the identity type at the type ℕ. The claim is that Church’s Law, stated as a type (proposition) within ETT, is false, which is to say that it entails a contradiction.
When I posed this as an exercise at the summer school, I had in mind two different proofs, which I will now sketch. Neither is valid, but there is a valid proof that I’ll come to afterwards.
Both proofs begin by applying the so-called Axiom of Choice. For those not familiar with type theory, the “axiom” of choice is in fact a theorem, stating that every total binary relation contains a function. Explicitly,

(Π x:A. Σ y:B. R(x, y)) → Σ f:A→B. Π x:A. R(x, f(x)).

The function f is the “choice function” that associates a witness to the totality of R to each argument x. In the present case, if we postulate Church’s Law, then by the axiom of choice we have

Σ F:(ℕ→ℕ)→ℕ. Π f:ℕ→ℕ. Π n:ℕ. {F(f)}(n) ≃ f(n).
That is, the functional F picks out, for each function f in ℕ→ℕ, a (code for a) program that witnesses the computability of f. This should already seem suspicious, because by function extensionality the functional F must assign the same program to any two extensionally equal functions.
We may easily see that F is injective, for if F(f) is F(g), then that common code tracks both f and g, and hence f and g are (extensionally) equal. Thus we have an injection from ℕ→ℕ into ℕ, which seems “impossible” … except that it is not! Let’s try the proof that this is impossible, and see where it breaks down. Suppose that i : (ℕ→ℕ)→ℕ is injective. Define d(x) = i⁻¹(x)(x) + 1, and consider d(i(d)) = i⁻¹(i(d))(i(d)) + 1 = d(i(d)) + 1, so 0 = 1 and we are done. Not so fast! Since i is only injective, and not necessarily surjective, it is not clear how to define i⁻¹. The obvious idea is to send x = i(f) to f, and any x outside the image of i to, say, the identity. But there is no reason to suppose that the image of i is decidable, so the attempted definition breaks down. I hacked around with this for a while, trying to exploit properties of F to repair the proof (rather than work with a general injection, focus on the specific functional F given by Church’s Law and choice), but failed. Andrej Bauer pointed out to me, to my surprise, that there is a model of ETT (which he constructed) that contains an injection of ℕ→ℕ into ℕ! So there is no possibility of rescuing this line of argument.
(Incidentally, we can show within ETT that there is no bijection between ℕ and ℕ→ℕ, using surjectivity to rescue the proof attempt above. Curiously, Lawvere has shown that there can be no surjection from ℕ onto ℕ→ℕ, but this does not seem to help in the present situation. This shows that the concept of countability is more subtle in the constructive setting than in the classical setting.)
But I had another argument in mind, so I was not worried. The functional F provides a decision procedure for equality for the type ℕ→ℕ: given f, g : ℕ→ℕ, compare F(f) with F(g). Surely this is impossible! But one cannot prove within type theory that equality of functions is undecidable, because type theory is consistent with the law of the excluded middle, which states that every proposition is decidable. (Indeed, type theory proves that excluded middle is irrefutable for any particular proposition P: ¬¬(P ∨ ¬P).) So this proof also fails!
At this point it started to seem as though Church’s Law could be independent of ETT, as startling as that sounds. For ITT it is more plausible: equality of functions is definitional, so one could imagine associating an index with each function without disrupting anything. But for ETT this seemed implausible to me. Andrej pointed me to a paper by Maietti and Sambin that states that Church’s Law is incompatible with function extensionality and choice. So there must be another proof that refutes Church’s Law, and indeed there is one based on the aforementioned decidability of function equivalence (but with a slightly different line of reasoning than the one I suggested).
First, note that we can use the equality test for functions in ℕ→ℕ to check for halting. Using the T predicate described above, we can define a function in ℕ→ℕ that is constantly 0 iff a given (code of a) program never halts on a given input. We may then use the above-mentioned equality test, comparing against the function that is constantly 0, to check for halting. So it suffices to show that the halting problem for (codes of) functions and inputs is not computable to complete the refutation of the internal form of Church’s Law.
Specifically, assume given a halting tester h : ℕ×ℕ→ℕ that, given a code for a function and an input, yields 1 or 0 according to whether or not that function halts when applied to that input. Define d : ℕ→ℕ by d(n) = {n}(n) + 1 if h(n, n) = 1, and d(n) = 0 otherwise, the usual diagonal function. Now apply the functional F obtained from Church’s Law using the Axiom of Choice to obtain e = F(d), the code for the function d, and consider d(e) to derive the needed contradiction. Notice that we have used Church’s Law here to obtain a code for the type-theoretic diagonal function, which is then passed to the halting tester in the usual way.
As you can see, the revised argument follows along lines similar to what I had originally envisioned (in the second version), but requires a bit more effort to push through the proof properly. (Incidentally, I don’t think the argument can be made to work in pure ITT, but perhaps it would go through for ITT enriched with function extensionality.)
Thus, Church’s Law is false internally to extensional type theory, even though it is evidently true externally for that theory. You can see the similarity to the situation in first-order logic described earlier. Even though all functions of type are computable, type theory itself is not capable of recognizing this fact (at least, not in the extensional case). And this is a good thing, not a bad thing! The whole beauty of constructive mathematics lies in the fact that it is just mathematics, free of any self-conscious recognition that we are writing programs when proving theorems constructively. We never have to reason about machine indices or any such nonsense, we just do mathematics under the discipline of not assuming that every proposition is decidable. One benefit is that the same mathematics admits interpretation not only in terms of computability, but also in terms of continuity in topological spaces, establishing a deep connection between two seemingly disparate topics.
(Hat tip to Andrej Bauer for help in sorting all this out. Here’s a link to a talk and a paper about the construction of a model of ETT in which there is an injection from ℕ→ℕ to ℕ.)
The 2012 edition of the Oregon Programming Languages Summer School was another huge success, drawing a capacity crowd of 100 eager students anxious to learn the latest ideas from an impressive slate of speakers. This year, as last year, the focus was on languages, logic, and verification, from both a theoretical and a practical perspective. The students have a wide range of backgrounds, some already experts in many of the topics of the school, others with little or no prior experience with logic or semantics. Surprisingly, a large percentage (well more than half, perhaps as many as three quarters) had some experience using Coq, a large uptick from previous years. This seems to represent a generational shift—whereas such topics were even relatively recently seen as the province of a few zealots out in left field, nowadays students seem to accept the basic principles of functional programming, type theory, and verification as a given. It’s a victory for the field, and extremely gratifying for those of us who have been pressing forward with these ideas for decades despite resistance from the great unwashed. But it’s also bittersweet, because in some ways it’s more fun to be among the very few who have created the future long before it comes to pass. But such are the proceeds of success.
One goal of the summer school is to explain the true nature of programming languages research. Most people seem to think that PL research must consist of arguing about semicolons and curly braces, or chasing the whims of some company that’s promoting some language, making marginal contributions in order to be able to say you’re “relevant”. Indeed, in many quarters “relevance” seems to be all there is to the enterprise. But, to paraphrase Kane’s business manager, Mr Bernstein, it’s no trick to be relevant … if all you want to do is be relevant. Indeed, the standard method is to reimport into academics whatever is happening in industry at a given moment, et voila, you are relevant. And deadly boring! To me the goal of research is precisely to be irrelevant to whatever is happening now, with the idea that by the time your ideas become relevant you’ll have moved on to something far more interesting, and far more unimportant to current practices. So my advice to someone embarking on a research career is dare to be irrelevant.
What PL research is really about is nothing short of a general theory of computation that, unlike the narrow interpretation of “theory” that is dominant in the U.S., takes full account of abstraction and composition. Anyone who has ever written code understands the central importance of these concepts, but those who have not seem to regard them as not worthy of consideration: witness the over-emphasis on the Turing Machine as a model of computation, and its insistence on bit strings as the only form of data. A model that supports neither composition nor abstraction is not much of a model at all, in contrast to the (typed) λ-calculus, which takes both as a starting point. Computational trinitarianism is the proper framework for a full understanding of the theory of computation, and, accordingly, this was a central theme of this year’s summer school, with Frank Pfenning lecturing on proof theory, Steve Awodey on category theory, and me lecturing on type theory. The other lectures built on and extended this foundation to bring students up to the latest research topics in the field, including the use of proof assistants for language meta-theory and compiler verification.
Videos of the lectures, and course notes provided by the speakers, are all available at the OPLSS 12 web site.
Last fall it became clear to me that it was “now or never” time for completing Practical Foundations for Programming Languages, so I put just about everything else aside and made the big push to completion. The copy editing phase is now complete, the cover design (by Scott Draves) is finished, and it’s now in the final stages of publication. You can even pre-order a copy on Amazon; it’s expected to be out in November.
I can already think of ways to improve it, but at some point I had to declare victory and save some powder for future editions. My goal in writing the book was to organize as wide a body of material as I could manage in a single unifying framework based on structural operational semantics and structural type systems. At over 600 pages, the manuscript is at the upper limit of what one can reasonably consider a single book, even though I strove for concision throughout.
Quite a lot of the technical development does not follow along traditional lines. For example, I completely decouple the concepts of assignment, reference, and storage class (heap or stack) from one another, which makes clear that one may have references to stack-allocated assignables, or make use of heap-allocated assignables without having references to them. As another example, my treatment of concurrency, while grounded in the process calculus tradition, coheres with my treatment of assignables, but differs sharply from standard accounts (and avoids some of the complications in the treatment of process equivalences).
An explicit goal was to avoid the computational cladistics that characterizes many treatments of PL concepts. For example, there are not chapters on “paradigms” such as “functional programming” or “object-oriented programming”, although many of their underlying concepts are treated in the text. So there are chapters on higher-order functions and dynamic dispatch, for example, but these are not elevated to language design principles, but rather are analyses of computational phenomena “found in nature”.
With the first edition behind me, I intend to resume blogging. I have a few topics lined up in my head, including an update on our new undergraduate curriculum at Carnegie Mellon (going smashingly), and more posts on fundamentals of logic and PL’s. So stay tuned!
Владислав Натаров <firstname.lastname@example.org> has kindly translated my blog posts to date into Russian. I cannot vouch for the accuracy of the translation, but I am grateful for the effort, and hope that this will be of interest to my Russian readers.