PCLSRING in Semantics

July 11, 2016

PCLSRING is an operating system kernel design technique introduced in ITS for managing interruptions of long-running synchronous system calls.  It was mentioned in an infamous diatribe by Dick Gabriel, and is described in loving detail by Alan Bawden in an article for the ages.

Discussions of PCLSRING usually center on fundamental questions of systems design.  Is the ITS approach better than the Unix approach?   Should the whole issue be avoided by using asynchronous system calls, as in VMS?  And weren’t the good old days better than the bad new days anyway?

Let’s set those things aside for now and instead consider what it is, rather than what it’s for or whether it’s needed.  The crux of the matter is this.  Suppose you’re working with a system such as Unix that has synchronous system calls for file I/O, and you initiate a “large” read of n bytes into memory starting at address a.  It takes a while to perform the transfer, during which time the process making the call may be interrupted for any number of reasons.  The question is, what to do about the process state captured at the moment of the interrupt?

For various reasons it doesn’t make sense to snapshot the process while it is running inside the kernel.  One solution is to simply stop the read “in the middle” and arrange that, when the process resumes, it returns from the system call indicating that some m ≤ n bytes have been read.  You’re supposed to check that m = n yourself anyway, and restart the call if not.  (This is the Unix solution.)  It is all too easy to neglect the check, and the situation is made worse because so few languages have sum types, which would make it impossible to neglect the deficient return.
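
To make the caller’s obligation concrete, here is a minimal sketch in OCaml of the restart loop that the Unix discipline demands.  The function really_read is my own invention for illustration; it uses only Unix.read from OCaml’s standard Unix library, and error handling (EINTR and friends) is elided.

(* Read up to n bytes into buf, looping on short reads; returns the
   number of bytes actually obtained (less than n only at end of file). *)
let really_read (fd : Unix.file_descr) (buf : Bytes.t) (n : int) : int =
  let rec go off =
    if off >= n then off
    else
      let m = Unix.read fd buf off (n - off) in
      if m = 0 then off                 (* end of file: deficient return *)
      else go (off + m)                 (* restart at offset a+m for n-m bytes *)
  in
  go 0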

PCLSRING instead stops the system call in place, backs up the process PC to the system call, but with the parameters altered to read n-m bytes into location a+m, so that when the process resumes it simply makes a “fresh” system call to finish the read that was so rudely interrupted.  The one drawback, if it is one, is that your own parameters may get altered during the call, so you shouldn’t rely on them being anything in particular after it returns.  (This is all more easily visualized in assembly language, where the parameters are typically words that follow the system call itself in memory.)
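
As a toy rendering of the idea, the following OCaml sketch treats a pending system call as data: an interrupt never snapshots the kernel mid-transfer, but instead leaves behind a fresh, restartable call whose parameters have been backed up to reflect the progress made.  The types and names here are invented for illustration.

(* A "system call" as data: read count bytes into address addr. *)
type syscall = Read of { addr : int; count : int }

(* m bytes were transferred before the interrupt; the process resumes
   at the same system call, with parameters adjusted to a+m and n-m. *)
let interrupt (Read { addr; count }) ~(m : int) : syscall option =
  if m >= count then None                               (* call completed *)
  else Some (Read { addr = addr + m; count = count - m })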

While lecturing at this year’s OPLSS, it occurred to me that the dynamics of Modernized Algol in PFPL, which is given in Plotkin’s style, is essentially the same idea.  Consider the rule for executing an encapsulated command:

if m ↦ m′, then bnd(cmd(m); x.m″) ↦ bnd(cmd(m′); x.m″)

(I have suppressed the memory component of the state, which is altered as well.)  The expression cmd(m) encapsulates the command m.  The bnd command executes m and passes its result to another command, m″, via the variable x.  The above rule specifies that a step of execution of m results in a reconstruction of the entire bnd, albeit encapsulating m′, the intermediate result, instead of m.  It’s exactly PCLSRING!  Think of m as the kernel code for the read, think of cmd as the system call, and think of the bnd as the sequential composition of commands in an imperative language.  The kernel only makes partial progress executing m before being interrupted, leaving m′ remaining to be executed to complete the call.  The “pc” is backed up to the bnd, albeit modified with m′ as the new “system call” to be executed on the next transition.
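
The correspondence can be run in miniature.  Below is a hedged OCaml sketch of a small-step interpreter for an invented command language: stepping a bnd whose body has made only partial progress reconstructs the whole bnd around the intermediate command, exactly as the rule above prescribes.

type cmd =
  | Ret of int                    (* a finished command returning a value *)
  | Tick of cmd                   (* one unit of pending "kernel" work    *)
  | Bnd of cmd * (int -> cmd)     (* bnd(cmd(m); x.m″)                    *)

let rec step (c : cmd) : cmd option =
  match c with
  | Ret _ -> None                          (* final state *)
  | Tick m -> Some m                       (* one step of progress *)
  | Bnd (Ret v, k) -> Some (k v)           (* m finished: pass v to m″ *)
  | Bnd (m, k) ->
      (match step m with
       | Some m' -> Some (Bnd (m', k))     (* back the "pc" up to bnd(m′) *)
       | None -> None)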

I just love this sort of thing!  The next time someone asks “what the hell is PCLSRING?”, you now have the option of explaining it in one line, without any mention of operating systems.  It’s all a matter of semantics.



PFPL Commentary

June 3, 2016

I am building a web page devoted to the 2nd edition of Practical Foundations for Programming Languages, recently published by Cambridge University Press.  Besides an errata page, the web site features a commentary on the text explaining major design decisions and suggesting alternatives.  I also plan to include additional exercises and to make sample solutions available to faculty teaching from the book.

The purpose of the commentary is to provide the “back story” for the development, which is often only hinted at, or is written between the lines, in PFPL itself.  To emphasize enduring principles over passing fads, I have refrained from discussing particular languages in the book.  But this makes it difficult for many readers to see the relevance.  One purpose of the commentary is to clarify these connections by explaining why I said what I said.

As a starting point, I explain why I ignore the familiar concept of a “paradigm” in my account of languages.  The idea seems to have been inspired by Kuhn’s (in)famous book The Structure of Scientific Revolutions, and was perhaps a useful device at one time.  But by now the idea of a paradigm is just too vague to be useful, and there are many better ways to explain and systematize language structure.  And so I have avoided it.

I plan for the commentary to be a living document that I will revise and expand as the need arises.  I hope for it to provide some useful background for readers in general, and teachers in particular.  I wish for the standard undergraduate PL course to evolve from a superficial taxonomy of the weird animals in the language zoo to a systematic study of the general theory of computation.  Perhaps PFPL can contribute to effecting that change.



OPLSS 2016

April 11, 2016

The program for OPLSS 2016 has been announced, and we are open for registration!

This year’s instance is being organized by Zena Ariola, Dan Licata, and me, with renewed emphasis on programming language semantics and verification.

Oregon Programming Languages Summer School
June 20-July 2, 2016
Eugene, Oregon

There are still spaces available at the 15th Annual Oregon Programming Languages Summer School (OPLSS).

Please encourage your PhD students, masters students, advanced undergraduates, colleagues, and selves to attend!

This year’s program is titled Types, Logic, Semantics, and Verification and features the following courses:

* Programming Languages Background — Robert Harper, Carnegie Mellon University and Dan Licata, Wesleyan University
* Category Theory Background — Ed Morehouse, Carnegie Mellon University
* Logical Relations — Patricia Johann, Appalachian State University
* Network Programming — Nate Foster, Cornell University
* Automated Complexity Analysis — Jan Hoffmann, Carnegie Mellon University
* Separation Logic and Concurrency — Aleks Nanevski, IMDEA
* Principles of Type Refinement — Noam Zeilberger, INRIA
* Logical Relations / Compiler Verification — Amal Ahmed, Northeastern University

Full information on the courses, registration, and scholarships is available at https://www.cs.uoregon.edu/research/summerschool/.

For more information, please email summerschool@cs.uoregon.edu.


Practical Foundations for Programming Languages, Second Edition

April 11, 2016

Today I received my copies of Practical Foundations for Programming Languages, Second Edition from Cambridge University Press.  The new edition represents a substantial revision and expansion of the first, including the following:

  1. A new chapter on type refinements has been added, complementing previous chapters on dynamic typing and on subtyping.
  2. Two old chapters were removed (general pattern matching, polarization), and several chapters were very substantially rewritten (higher kinds, inductive and co-inductive types, concurrent and distributed Algol).
  3. The parallel abstract machine was revised to correct an implied extension that would have been impossible to carry out.
  4. Numerous corrections and improvements were made throughout, including memorable and pronounceable names for languages.
  5. Exercises were added to the end of each chapter (but the last).  Solutions are available separately.
  6. The index was revised and expanded, and some conventions systematized.
  7. An inexcusably missing easter egg was inserted.

I am grateful to many people for their careful reading of the text and their suggestions for correction and improvement.

In writing this book I have attempted to organize a large body of material on programming language concepts, all presented in the unifying framework of type systems and structural operational semantics.  My goal is to give precise definitions that provide a clear basis for discussion and a foundation for both analysis and implementation.  The field needs such a foundation, and I hope to have helped provide one.


It Is What It Is (And Nothing Else)

February 22, 2016

A recent discussion of introductory computer science education led to the topic of teaching recursion.  I was surprised to learn that students are being taught that recursion requires understanding something called a “stack” that is nowhere in evidence in their code.  Few, if any, students master the concept, which is usually “covered” only briefly.  Worst of all, they are encouraged to believe that recursion is a mysterious bit of esoterica that is best ignored.

And thus is lost one of the most important and beautiful concepts in computing.

The discussion then moved on to the implementation of recursion in certain inexplicably popular languages for teaching programming.  As it turns out, the compilers mis-implement recursion, causing unwarranted space usage in common cases.  Recursion is dismissed as problematic and unimportant, and the compiler error is elevated to a “design principle” — to be snake-like is to do it wrong.

And thus is lost one of the most important and beautiful concepts in computing.

And yet, for all the stack-based resistance to the concept, recursion has nothing to do with a stack.  Teaching recursion does not need any mumbo-jumbo about “stacks”.  Implementing recursion does not require a “stack”.  The idea that the two concepts are related is simply mistaken.

What, then, is recursion?  It is nothing more than self-reference, the ability to name a computation for use within the computation itself.  Recursion is what it is, and nothing more.  No stacks, no tail calls, no proper or improper forms, no optimizations, just self-reference pure and simple.  Recursion is not tied to “procedures” or “functions” or “methods”; one can have self-referential values of all types.
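
As one concrete illustration (a sketch of my own in OCaml, not from the original text), let rec can name a value, not just a function, yielding a circular datum with no stack in sight.

(* Self-reference is not tied to functions: here let rec names a
   value for use within itself, building a cyclic list. *)
let rec ones = 1 :: ones

let () = assert (List.nth ones 100 = 1)   (* traversal just follows the cycle *)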

Somehow these very simple facts, which date back to the early 1930’s, have been replaced by damaging myths that impede teaching and using recursion in programs.  It is both a conceptual and a practical loss.  For example, the most effective methods for expressing parallelism in programs rely heavily on recursive self-reference; much would be lost without it.  And the allegation that “real programmers don’t use recursion” is beyond absurd: the very concept of a digital computer is grounded in recursive self-reference (the cross-connection of gates to form a latch).  (Which, needless to say, does not involve a stack.)  Not only do real programmers use recursion, there could not even be programmers were it not for recursion.

I have no explanation for why this terrible misconception persists.  But I do know that when it comes to programming languages, attitude trumps reality every time.  Facts?  We don’t need no stinking facts around here, amigo.  You must be some kind of mathematician.

If all the textbooks are wrong, what is right?  How should one explain recursion?  It’s simple.  If you want to refer to yourself, you need to give yourself a name.  “I” will do, but so will any other name, by the miracle of α-conversion.  A computation is given a name using a fixed point (not fixpoint, dammit) operator:  fix x is e stands for the expression e named x for use within e.  Using it, the textbook example of the factorial function is written thus:

fix f is fun n : nat in case n {zero => 1 | succ(n') => n * f n'}.

Let us call this whole expression fact, for convenience.  If we wish to evaluate it, perhaps because we wish to apply it to an argument, its value is

fun n : nat in case n {zero => 1 | succ(n') => n * fact n'}.

The recursion has been unrolled one step ahead of execution.  If we reach fact again, as we will for a positive argument,  fact is evaluated again, in the same way, and the computation continues.  There are no stacks involved in this explanation.

Nor is there a stack involved in the implementation of fixed points.  It is only necessary to make sure that the named computation does indeed name itself.  This can be achieved by a number of means, including circular data structures (non-well-founded abstract syntax), but the most elegant method is by self-application.  Simply arrange that a self-referential computation has an implicit argument with which it refers to itself.  Any use of the computation unrolls the self-reference, ensuring that the invariant is maintained.  No storage allocation is required.
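
Here is a hedged OCaml sketch of that method: a fixed point operator built by self-application, with no let rec on functions.  The wrapper type self is an assumption of mine, introduced only to make self-application well-typed.

(* Self breaks the type-level circularity of self-application. *)
type ('a, 'b) self = Self of (('a, 'b) self -> 'a -> 'b)

(* fix f passes the computation to itself on each use, unrolling the
   self-reference one step at a time. *)
let fix (f : ('a -> 'b) -> 'a -> 'b) : 'a -> 'b =
  let unroll (Self s) x = f (fun y -> s (Self s) y) x in
  fun x -> unroll (Self unroll) x

(* The textbook factorial, written with fix instead of let rec. *)
let fact = fix (fun self n -> if n = 0 then 1 else n * self (n - 1))

let () = assert (fact 5 = 120)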

Consequently, a self-referential function such as

fix f is fun (n : nat, m:nat) in case n {zero => m | succ(n') => f (n',n*m)}

executes without needing any asymptotically significant space.  It is quite literally a loop, and no special arrangement is required to make sure that this is the case.  All that is required is to implement recursion properly (as self-reference), and you’re done.  There is no such thing as tail-call optimization.  It’s not a matter of optimization, but of proper implementation.  Calling it an optimization suggests it is optional, or unnecessary, or provided only as a favor, when it is more accurately described as a matter of getting it right.
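
For concreteness, here is the same accumulating factorial rendered in OCaml (a sketch of my own); the recursive call is in tail position, so a proper implementation runs it in constant stack space, exactly as a loop would.

let fact n =
  let rec go n m =
    match n with
    | 0 -> m
    | _ -> go (n - 1) (n * m)   (* tail call: nothing pending afterwards *)
  in
  go n 1

let () = assert (fact 10 = 3628800)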

So what, then, is the source of the confusion?  The problem seems to be a too-close association between compound expressions and recursive functions or procedures.  Consider the classic definition of factorial given earlier.  The body of the definition involves the expression

n * fact n'

where there is a pending multiplication to be accounted for.  Once the recursive call (to itself) completes, the multiplication can be carried out, and it is necessary to keep track of this pending obligation.  But this phenomenon has nothing whatsoever to do with recursion.  If you write

n * square n'

then it is equally necessary to record where the external call is to return its value.  In typical accounts of recursion, the two issues get confused, a regrettable tragedy of error.

Really, the need for a stack arises the moment one introduces compound expressions.  This can be explained in several ways, none of which needs pictures or diagrams or any discussion of frames or pointers or any extra-linguistic concepts whatsoever.  The best way, in my opinion, is to use Plotkin’s structural operational semantics, as described in my Practical Foundations for Programming Languages (Second Edition), published by Cambridge University Press.
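
In that spirit, here is a hedged OCaml sketch of my own: an abstract machine for a toy expression language in which the control stack records exactly the pending frames created by compound expressions, with recursion nowhere in the picture.

type exp = Num of int | Mul of exp * exp

(* A frame is the pending work surrounding the subexpression in focus. *)
type frame =
  | MulL of exp            (* evaluating the left argument; right waits *)
  | MulR of int            (* left argument done; evaluating the right *)

let rec eval (e : exp) (k : frame list) : int =
  match e with
  | Num n -> apply k n
  | Mul (e1, e2) -> eval e1 (MulL e2 :: k)   (* push a pending frame *)

and apply (k : frame list) (n : int) : int =
  match k with
  | [] -> n
  | MulL e2 :: k -> eval e2 (MulR n :: k)
  | MulR m :: k -> apply k (m * n)           (* discharge the pending * *)

let () = assert (eval (Mul (Num 6, Mul (Num 7, Num 2))) [] = 84)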

There is no reason, nor any possibility, to avoid recursion in programming.  But folk wisdom would have it otherwise.  That’s just the trouble with folk wisdom, everyone knows it’s true, even when it’s not.

Update: Dan Piponi and Andreas Rossberg called attention to a pertinent point regarding stacks and recursion.  The conventional notion of a run-time stack records two distinct things: the control state of the program (such as subroutine return addresses or, more abstractly, pending computations, or continuations), and the data state of the program (a term I just made up because I don’t know a better one, for managing multiple simultaneous activations of a given procedure or function).  Fortran (back in the day) didn’t permit multiple activations, meaning that at most one instance of a procedure could be in play at a given time.  One consequence is that α-equivalence can be neglected: the arguments of a procedure can be placed in a statically determined spot for the call.  As a member of the Algol-60 design committee, Dijkstra argued, successfully, for admitting multiple procedure activations (and hence, with a little extra arrangement, recursive/self-referential procedures).  Doing so requires that α-equivalence be implemented properly; two activations of the same procedure cannot share the same argument locations.  The data stack implements α-equivalence using de Bruijn indices (stack slots); arguments are passed on the data stack using activation records in the now-classic manner invented by Dijkstra for the purpose.  It is not self-reference that gives rise to the need for a stack, but rather re-entrancy of procedures, which can arise in several ways, not just recursion.  Moreover, recursion does not always require re-entrancy; the so-called tail call optimization is just the observation that certain recursive procedures are not, in fact, re-entrant.  (Every looping construct illustrates this principle, albeit on an ad hoc basis, rather than as a general principle.)

OPLSS 2015 Announcement

February 20, 2015

We are pleased to announce the preliminary program for the 14th Annual Oregon Programming Languages Summer School (OPLSS) to be held June 15th to 27th, 2015 at the University of Oregon in Eugene.

This year’s program is titled Types, Logic, Semantics, and Verification and features the following speakers:

  • Amal Ahmed, Northeastern University
  • Nick Benton, Microsoft Cambridge Research Lab
  • Adam Chlipala, Massachusetts Institute of Technology
  • Robert Constable, Cornell University
  • Peter Dybjer, Chalmers University of Technology
  • Robert Harper, Carnegie Mellon University
  • Ed Morehouse, Carnegie Mellon University
  • Greg Morrisett, Harvard University
  • Frank Pfenning, Carnegie Mellon University

The registration deadline is March 16, 2015.

Full information on registration and scholarships is available at https://www.cs.uoregon.edu/research/summerschool/.

Please address all inquiries to summerschool@cs.uoregon.edu.

Best regards from the OPLSS 2015 organizers!

Robert Harper

Greg Morrisett

Zena Ariola


Summer of Programming Languages

July 6, 2014

Having just returned from the annual Oregon Programming Languages Summer School, at which I teach every year, I am once again impressed with the growth in the technical sophistication of the field and with its ability to attract brilliant young students whose enthusiasm and idealism are inspiring.  Eugene was, as ever, an ideal setting for the summer school, providing a gorgeous backdrop for work and relaxation.  I was particularly glad for the numerous chances to talk with students outside of the classroom, usually over beer, and I enjoyed, as usual, the superb cycling conditions in Eugene and the surrounding countryside.  Many students commented to me that the atmosphere at the summer school is wonderful, filled with people who are passionate about programming languages research, and suffused with a spirit of cooperation and sharing of ideas.

Started by Zena Ariola a dozen years ago, this year’s instance was organized by Greg Morrisett and Amal Ahmed in consultation with Zena.  As usual, the success of the school depended critically on the dedication of Jim Allen, who has been the de facto chief operating officer since its inception.  Without Jim, OPLSS could not exist.  His attention to detail and his engagement with the students are legendary.  Support from the National Science Foundation CISE Division, ACM SIGPLAN, Microsoft Research, Jane Street Capital, and BAE Systems was essential for providing an excellent venue, for supporting a roster of first-rate lecturers, and for supporting the participation of students who might otherwise not have been able to attend.  And, of course, an outstanding roster of lecturers donated their time to come to Eugene for a week to share their ideas with the students and their fellow lecturers.

The schedule of lectures is posted on the web site; all of the lectures were taped and are available on the web.  In addition, many speakers provided course notes, software, and other supporting materials that are also available online.  So even if you were not able to attend, you can still benefit from the summer school, and perhaps feel more motivated to come next summer.  Greg and I will be organizing next year’s school, in consultation with Zena.  Applying the principle “don’t fix what isn’t broken”, we do not anticipate major changes, but there is always room for improvement and a need to freshen the content every year.  For me the central idea of the summer school is the applicability of deep theory to everyday practice.  Long a dream held by researchers such as me, these connections become more “real” every year as the theoretical abstractions of yesterday become the concrete practices of today.  It’s breathtaking to see how far we’ve come from the days when I was a student just beginning to grasp the opportunities afforded by ideas from proof theory, type theory, and category theory (the Holy Trinity) for building beautiful software systems.  No longer the abstruse fantasies of mad (computer) scientists, these ideas are the very air we breathe in PL research.  Gone are the days of ad hoc language designs done in innocence of the foundations on which they rest.  Nowadays serious industrial-strength languages are emerging that are grounded in theory and informed by practice.

Two examples have arisen just this summer, Rust (from Mozilla) and Swift (from Apple), that exemplify the trend.  Although I have not had time to study them carefully, much less write serious code using them, it is evident from even a brief review of their web sites that these are serious languages that take account of the academic developments of the last couple of decades in formulating new language designs to address new classes of problems that have arisen in programming practice.  These languages are type safe, a basic criterion of sensibility, and feature sophisticated type systems that include ideas such as sum types, which have long been missing from commercial languages, or provided only in comically obtuse ways (such as objects).  The infamous null pointer mistakes have been eradicated, and the importance of pattern matching (in the sense of the ML family of languages) is finally being appreciated as the cure for Boolean blindness.  For once I can look at new industrial languages without an overwhelming sense of disappointment, but instead with optimism and enthusiasm that important ideas are finally, at long last, being recognized and adopted.  As has often been observed, it takes 25 years for an academic language idea to make it into industrial practice.  With Java it was simply the 1970’s idea of automatic storage management; with languages such as Rust and Swift we are seeing ideas from the 80’s and 90’s make their way into industrial practice.  It’s cause for celebration, and encouragement for those entering the field: the right ideas do win out in the end, one just has to have the courage to be irrelevant.

I hope to find the time to comment more meaningfully on the recent developments in practical programming languages, including Rust and Swift, but also languages such as Go and OCaml that are also making inroads into programming practice.  (The overwhelming success and future dominance of Haskell is self-evident.  Kudos!) But for now, let me say that the golden age of programming language research is here and now, and promises to continue indefinitely.

Update: word smithing.