
On examination that sort of critique fails, because *a proof by contradiction is not a proof that derives a contradiction*. Pythagoras’s proof is valid, one of the eternal gems of mathematics. No one questions the validity of that argument, even if they question proof by contradiction.

Pythagoras’s Theorem expresses a negation: *it is not the case that* the square root of two can be expressed as the ratio of two integers. Assume that it can be so represented. A quick deduction shows that this is impossible. So the assumption is false. Done. This is a *direct proof* of a negative assertion; it is *not* a “proof by contradiction”.
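
For concreteness, the quick deduction runs as follows (the standard sketch, assuming the ratio is in lowest terms):

$$\sqrt{2} = \frac{p}{q} \implies p^2 = 2q^2 \implies 2 \mid p \implies p = 2r \implies q^2 = 2r^2 \implies 2 \mid q,$$

so both p and q are even, contradicting lowest terms. Refuting the assumption is precisely a direct proof of the negation.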

What, then, *is* a proof by contradiction? It is the *affirmation* of a positive statement by refutation of its denial. It is a *direct proof* of the negation of a negative assertion that is then pressed into service as a *direct proof* of the assertion, which it is not. Anyone is free to ignore the distinction for the sake of convenience, as a philosophical issue, or as a sly use of “goto” in a proof, but the distinction nevertheless exists and is important. Indeed, part of the beauty of constructive mathematics is that one can draw such distinctions, for, once drawn, one can selectively disregard them. Once blurred, forever blurred, a pure loss of expressiveness.

For the sake of explanation, let me rehearse a standard example of a proof by contradiction. The claim is that there exist irrationals a and b such that a to the power b is rational. Here is an indirect proof, a true proof by contradiction. Move number one, let us prove instead that it is impossible that every two irrationals a and b are such that a to the power b is irrational. This is a negative statement, so of course one proves it by deriving a contradiction from assuming it. But it is not the original statement! This will be clear from examining the information content of the proof.

Suppose, for a contradiction, that every two irrationals a and b are such that a to the b power is irrational. We know from Pythagoras that root two is irrational, so plug it in for both a and b, and conclude that root two to the root two power is irrational. Now use the assumption again, taking a to be root two to the root two, and b to be root two. Calculate a to the power of b, it is two, which is eminently rational. Contradiction.
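
The key step is nothing more than the law of exponents: taking a to be root two to the root two, and b to be root two,

$$a^b = \bigl(\sqrt{2}^{\sqrt{2}}\bigr)^{\sqrt{2}} = \sqrt{2}^{\sqrt{2}\cdot\sqrt{2}} = \sqrt{2}^{2} = 2,$$

which is rational, refuting the assumption.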

We have now proved that it is not the case that every pair of irrationals, when exponentiated, gives an irrational. There is nothing questionable about this proof as far as I am aware. But it does not prove that there are two irrationals whose exponentiation is rational! If you think it does, then I ask you, please name them for me. That information is not in this proof (there are other proofs that do name them, but that is not relevant for my purposes). You may, if you wish, disregard the distinction I am drawing; that is your prerogative, and neither I nor anyone has any problem with that. But you cannot claim that it is a *direct proof*; it is rather an *indirect proof*, one that proceeds by refuting the negation of the intended assertion.

So why am I writing this? Because I have learned, to my dismay, that in U.S. computer science departments–of all places!–students are being taught, *erroneously,* that any proof that derives a contradiction is a “proof by contradiction”. It is not. Any proof of a negative proceeds directly by deriving a contradiction from the assumption being refuted. A proof by contradiction in the long-established sense of the term is, contrarily, an indirect proof of a positive by refutation of the negative. This distinction is important, even if you want to “mod out” by it in your work, for it is only by drawing the distinction that one can even define the equivalence with which to quotient.

That’s my main point. But for those who may not be familiar with the distinction between direct and indirect proof, let me take the opportunity to comment on why one might care to draw such a distinction. It is entirely a matter of intellectual honesty: the information content of the foregoing indirect proof does not fulfill the expectation stated in the theorem. It is a kind of boast, an overstatement, to claim otherwise. Compare the original statement with the reformulation used in the proof. The claim that it is not the case that every pair of irrationals exponentiate to an irrational is uncontroversial. The proof proves it directly, and there is nothing particularly surprising about it. One would even wonder why anyone would bother to state it. Yet the supposedly equivalent claim stated at the outset appears much more fascinating, because most people cannot easily think up an example of two irrationals that exponentiate to rationals. Nor does the proof provide one. Once, when shown the indirect proof, a student of mine blurted out “oh that’s so cheap.” Precisely.

Why should you care? Maybe you don’t, but there are nice benefits to keeping the distinction, because it demarcates the boundary between constructive proofs, which have direct interpretation as functional programs, and classical proofs, which have only an indirect such interpretation (using continuations, to be precise, and giving up canonicity). Speaking as a computer scientist, this distinction matters, and it’s not costly to maintain. May I ask that you adhere to it?

*Edit:* rewrote the final paragraph, which was sketchy and irrelevant, and improved the prose throughout.


The more popular accounts of type theory nowadays emphasize the *axiomatic freedom* afforded by making fewer foundational commitments, such as not asserting the decidability of every type, but they give only an indirect account of the computational content of proofs, and then only in some cases. In particular, the computational content of Voevodsky’s Univalence Axiom in Homotopy Type Theory remains unclear, though the Bezem-Coquand-Huber model in cubical sets, carried out in constructive set theory, gives justification for its constructivity.

To elicit the computational meaning of higher type theory more clearly, emphasis has shifted to *cubical type theory* (in at least two distinct forms) in which the higher-dimensional structure of types is judgmentally explicit as the higher cells of a type, which are interpreted as identifications. In the above-linked talk I explain how to construe a cubical higher type theory directly as a programming language. Other efforts, notably by Cohen-Coquand-Huber-Mörtberg, have similar goals, but using somewhat different methods.

For more information, please see my home page, which links to two arXiv papers providing the mathematical details and to a 12-page paper summarizing the approach and the major results obtained so far. These papers represent joint work with Carlo Angiuli and Todd Wilson.


Discussions of PCLSRING usually center on fundamental questions of systems design. Is the ITS approach better than the Unix approach? Should the whole issue be avoided by using asynchronous system calls, as in VMS? And weren’t the good old days better than the bad new days anyway?

Let’s set those things aside for now and instead consider what it is, rather than what it’s for or whether it’s needed. The crux of the matter is this. Suppose you’re working with a system such as Unix that has synchronous system calls for file I/O, and you initiate a “large” read of *n* bytes into memory starting at address *a*. It takes a while to perform the transfer, during which time the process making the call may be interrupted for any number of reasons. The question is, what to do about the process state captured at the moment of the interrupt?

For various reasons it doesn’t make sense to snapshot the process while it is running inside the kernel. One solution is to simply stop the read “in the middle” and arrange that, when the process resumes, it returns from the system call indicating that some *m* ≤ *n* bytes have been read. You’re supposed to check that *m* = *n* yourself anyway, and restart the call if not. (This is the Unix solution.) It is all too easy to neglect the check, and the situation is made the worse because so few languages have sum types, which would make it impossible to neglect the deficient return.
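
As a concrete rendering of the obligation, here is a minimal sketch in Standard ML, in which read stands for a hypothetical wrapper around the system call (not any particular library function) returning the number of bytes actually transferred:

(* Restart a partial read until n bytes have arrived.
   read buf off k is a hypothetical syscall wrapper; it may
   transfer fewer than k bytes when interrupted. *)
fun readFully read buf off 0 = ()
  | readFully read buf off n =
      let
        val m = read buf off n        (* may return m < n *)
      in
        if m = 0
        then raise Fail "unexpected end of input"
        else readFully read buf (off + m) (n - m)
      end

With a sum type for the result, neglecting the deficient return would be a type error rather than a silent bug.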

PCLSRING instead stops the system call in place, *backs up the process PC* to the system call, but with the parameters altered to read *n-m* bytes into location *a+m*, so that when the process resumes it simply makes a “fresh” system call to finish the read that was so rudely interrupted. The one drawback, if it is one, is that your own parameters may get altered during the call, so you shouldn’t rely on them being anything in particular after it returns. (This is all more easily visualized in assembly language, where the parameters are typically words that follow the system call itself in memory.)

While lecturing at this year’s OPLSS, it occurred to me that the dynamics of Modernized Algol in *PFPL*, which is given in Plotkin’s style, is essentially the same idea. Consider the rule for executing an encapsulated command:

if *m* → *m’*, then *bnd(cmd(m);x.m”)* → *bnd(cmd(m’);x.m”)*

(I have suppressed the memory component of the state, which is altered as well.) The expression *cmd(m)* encapsulates the command *m*. The *bnd* command executes *m* and passes its result to another command, *m”*, via the variable *x*. The above rule specifies that a step of execution of *m* results in a reconstruction of the entire *bnd*, albeit encapsulating *m’*, the intermediate result, instead of *m*. It’s exactly PCLSRING! Think of *m* as the kernel code for the read, think of *cmd* as the system call, and think of the *bnd* as the sequential composition of commands in an imperative language. The kernel only makes partial progress executing *m* before being interrupted, leaving *m’* remaining to be executed to complete the call. The “pc” is backed up to the *bnd*, albeit modified with *m’* as the new “system call” to be executed on the next transition.
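
To see the analogy executed, here is a tiny sketch of such a dynamics in Standard ML, with a made-up command Read standing in for the kernel code of a system call:

(* A sketch of the dynamics of commands.  The last Bnd clause is the
   rule above: it reconstructs the whole bnd around the partially
   executed command, just as PCLSRING backs the PC up to a freshly
   parameterized system call. *)
datatype cmd
  = Ret of int                   (* a finished command and its result *)
  | Read of int * int            (* made-up kernel code: read n bytes at address a *)
  | Bnd of cmd * (int -> cmd)    (* bnd(cmd(m); x.m'') *)

fun step (Read (0, a)) = Ret a                  (* the call completes *)
  | step (Read (n, a)) = Read (n - 1, a + 1)    (* partial progress, one byte at a time *)
  | step (Bnd (Ret v, k)) = k v                 (* pass the result to m'' *)
  | step (Bnd (m, k)) = Bnd (step m, k)         (* reconstruct the bnd around m' *)
  | step (Ret v) = Ret v                        (* already a value *)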

I just love this sort of thing! The next time someone asks “what the hell is PCLSRING?”, you now have the option of explaining it in one line, without any mention of operating systems. It’s all a matter of semantics.


The purpose of the commentary is to provide the “back story” for the development, which is often only hinted at, or is written between the lines, in *PFPL* itself. To emphasize enduring principles over passing fads, I have refrained from discussing particular languages in the book. But this makes it difficult for many readers to see the relevance. One purpose of the commentary is to clarify these connections by explaining *why* I said what I said.

As a starting point, I explain why I ignore the familiar concept of a “paradigm” in my account of languages. The idea seems to have been inspired by Kuhn’s (in)famous book *The Structure of Scientific Revolutions*, and was perhaps a useful device at one time. But by now the idea of a paradigm is just too vague to be useful, and there are many better ways to explain and systematize language structure. And so I have avoided it.

I plan for the commentary to be a living document that I will revise and expand as the need arises. I hope for it to provide some useful background for readers in general, and teachers in particular. I wish for the standard undergraduate PL course to evolve from a superficial taxonomy of the weird animals in the language zoo to a systematic study of the general theory of computation. Perhaps *PFPL* can contribute to effecting that change.


- A new chapter on type refinements has been added, complementing previous chapters on dynamic typing and on sub-typing.
- Two old chapters were removed (general pattern matching, polarization), and several chapters were very substantially rewritten (higher kinds, inductive and co-inductive types, concurrent and distributed Algol).
- The parallel abstract machine was revised to correct an implied extension that would have been impossible to carry out.
- Numerous corrections and improvements were made throughout, including memorable and pronounceable names for languages.
- Exercises were added to the end of each chapter (but the last). Solutions are available separately.
- The index was revised and expanded, and some conventions systematized.
- An inexcusably missing easter egg was inserted.

I am grateful to many people for their careful reading of the text and their suggestions for correction and improvement.

In writing this book I have attempted to organize a large body of material on programming language concepts, all presented in the unifying framework of type systems and structural operational semantics. My goal is to give precise definitions that provide a clear basis for discussion and a foundation for both analysis and implementation. The field needs such a foundation, and I hope to have helped provide one.


And thus is lost one of the most important and beautiful concepts in computing.

The discussion then moved on to the implementation of recursion in certain inexplicably popular languages for teaching programming. As it turns out, the compilers mis-implement recursion, causing unwarranted space usage in common cases. Recursion is dismissed as problematic and unimportant, and the compiler error is elevated to a “design principle” — to be snake-like is to do it wrong.

And thus is lost one of the most important and beautiful concepts in computing.

And yet, for all the stack-based resistance to the concept, *recursion has nothing to do with a stack*. Teaching recursion does not need any mumbo-jumbo about “stacks”. Implementing recursion does not require a “stack”. The idea that the two concepts are related is simply mistaken.

What, then, is recursion? It is nothing more than *self-reference*, the ability to name a computation for use within the computation itself. *Recursion is what it is*, and nothing more. No stacks, no tail calls, no proper or improper forms, no optimizations, just self-reference pure and simple. Recursion is not tied to “procedures” or “functions” or “methods”; one can have self-referential values of all types.

Somehow these very simple facts, which date back to the early 1930’s, have been replaced by damaging myths that impede teaching and using recursion in programs. It is both a conceptual and a practical loss. For example, the most effective methods for expressing parallelism in programs rely heavily on recursive self-reference; much would be lost without it. And the allegation that “real programmers don’t use recursion” is beyond absurd: the very concept of a digital computer is grounded in recursive self-reference (the cross-connection of gates to form a latch). (Which, needless to say, does not involve a stack.) Not only do real programmers use recursion, there could not even be programmers were it not for recursion.

I have no explanation for why this terrible misconception persists. But I do know that when it comes to programming languages, attitude trumps reality every time. Facts? We don’t need no stinking facts around here, amigo. You must be some kind of mathematician.

If all the textbooks are wrong, what is right? How *should* one explain recursion? It’s simple. If you want to refer to yourself, you need to give yourself a name. “I” will do, but so will any other name, by the miracle of α-conversion. A computation is given a name using a *fixed point* (not *fixpoint*, dammit) operator: *fix x is e* stands for the expression *e* named *x* for use within *e*. Using it, the textbook example of the factorial function is written thus:

fix f is fun n : nat in case n {zero => 1 | succ(n') => n * f n'}.

Let us call this whole expression *fact*, for convenience. If we wish to evaluate it, perhaps because we wish to apply it to an argument, its value is

fun n : nat in case n {zero => 1 | succ(n') => n * fact n'}.

The recursion has been *unrolled* one step ahead of execution. If we reach *fact* again, as we will for a positive argument, *fact* is evaluated again, in the same way, and the computation continues. *There are no stacks involved in this explanation*.
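
In the notation of structural operational semantics the unrolling is a single rule (a sketch in the style of *PFPL*, using standard substitution notation):

$$\mathtt{fix}\;x\;\mathtt{is}\;e \longmapsto [\mathtt{fix}\;x\;\mathtt{is}\;e\,/\,x]\,e$$

Substituting the whole fix expression for x in e is exactly the one-step unrolling just described; no machinery beyond substitution is involved.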

Nor is there a stack involved in the implementation of fixed points. It is only necessary to make sure that the named computation does indeed name itself. This can be achieved by a number of means, including circular data structures (non-well-founded abstract syntax), but the most elegant method is by *self-application*. Simply arrange that a self-referential computation has an implicit argument with which it refers to itself. Any use of the computation unrolls the self-reference, ensuring that the invariant is maintained. No storage allocation is required.
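
For the curious, here is a minimal sketch of self-application in Standard ML. The datatype merely permits a function to receive itself as an argument; each use unwraps the argument to unroll the self-reference:

datatype ('a, 'b) self = Roll of ('a, 'b) self -> 'a -> 'b

(* a fixed point operator by self-application: g is handed itself,
   wrapped in Roll, and unrolls the self-reference at each use *)
fun fix f =
  let
    fun g (Roll h) x = f (fn y => h (Roll h) y) x
  in
    g (Roll g)
  end

(* the textbook example, with no self-referential declaration in sight *)
val fact = fix (fn self => fn n => if n = 0 then 1 else n * self (n - 1))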

Consequently, a self-referential function such as

fix f is fun (n : nat, m : nat) in case n {zero => m | succ(n') => f (n', n * m)}

executes without needing any asymptotically significant space. It is quite literally a loop, and *no special arrangement* is required to make sure that this is the case. All that is required is to implement recursion properly (as self-reference), and you’re done. *There is no such thing as tail-call optimization.* It’s not a matter of optimization, but of proper implementation. Calling it an optimization suggests it is optional, or unnecessary, or provided only as a favor, when it is more accurately described as a matter of getting it right.
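
For comparison, the same accumulating definition rendered in Standard ML (fact' is just an illustrative name):

(* the accumulating factorial: a loop in all but name *)
fun fact' (0, m) = m
  | fact' (n, m) = fact' (n - 1, n * m)

val onetwenty = fact' (5, 1)   (* evaluates to 120 in constant space *)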

So what, then, is the source of the confusion? The problem seems to be a too-close association between compound expressions and recursive functions or procedures. Consider the classic definition of factorial given earlier. The body of the definition involves the expression

n * fact n'

where there is a pending multiplication to be accounted for. Once the recursive call (to itself) completes, the multiplication can be carried out, and it is necessary to keep track of this pending obligation. *But this phenomenon has nothing whatsoever to do with recursion.* If you write

n * square n'

then it is equally necessary to record where the external call is to return its value. In typical accounts of recursion, the two issues get confused, a regrettable tragedy of error.

Really, the need for a stack arises the moment one introduces compound expressions. This can be explained in several ways, none of which need pictures or diagrams or any discussion about frames or pointers or any extra-linguistic concepts whatsoever. The best way, in my opinion, is to use Plotkin’s structural operational semantics, as described in my *Practical Foundations for Programming Languages (Second Edition)*, published by Cambridge University Press.
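
For instance, the rules for evaluating a compound multiplication record the pending obligation in the shape of the reconstructed expression itself, with no mention of a stack (a sketch in Plotkin’s style, writing overlined numbers for numerals):

$$\frac{e_1 \longmapsto e_1'}{e_1 * e_2 \longmapsto e_1' * e_2} \qquad \frac{e_2 \longmapsto e_2'}{\overline{n_1} * e_2 \longmapsto \overline{n_1} * e_2'} \qquad \overline{n_1} * \overline{n_2} \longmapsto \overline{n_1 \cdot n_2}$$

What a conventional implementation stores in a frame is already present in the surrounding expression being rebuilt at each step.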

There is no reason, nor any possibility, to avoid recursion in programming. But folk wisdom would have it otherwise. That’s just the trouble with folk wisdom, everyone knows it’s true, even when it’s not.

*Update*: Dan Piponi and Andreas Rossberg called attention to a pertinent point regarding stacks and recursion. The conventional notion of a run-time stack records two distinct things, the *control state* of the program (such as subroutine return addresses, or, more abstractly, pending computations, or continuations), and the *data state* of the program (a term I just made up because I don’t know a better one) for managing multiple simultaneous activations of a given procedure or function. Fortran (back in the day) didn’t permit multiple activations, meaning that at most one instance of a procedure could be in play at a given time. One consequence is that α-equivalence can be neglected: the arguments of a procedure can be placed in a statically determined spot for the call. As a member of the Algol-60 design committee, Dijkstra argued, successfully, for admitting multiple procedure activations (and hence, with a little extra arrangement, recursive/self-referential procedures). Doing so requires that α-equivalence be implemented properly; two activations of the same procedure cannot share the same argument locations. The data stack implements α-equivalence using de Bruijn indices (stack slots); arguments are passed on the data stack using activation records in the now-classic manner invented by Dijkstra for the purpose. It is not self-reference that gives rise to the need for a stack, but rather re-entrancy of procedures, which can arise in several ways, not just recursion. Moreover, recursion does not always require re-entrancy—the so-called tail call optimization is just the observation that certain recursive procedures are not, in fact, re-entrant. (Every looping construct illustrates this principle, albeit on an *ad hoc* basis, rather than as a general principle.)


A piece of the puzzle was put into place by Xavier Leroy and François Pessaux in their paper on tracking uncaught exceptions. Their idea was to use type-based methods to track uncaught exceptions, but to move the clever typing techniques required out of the programming language itself and into a separate analysis tool. They make effective use of the powerful concept of row polymorphism introduced by Didier Rémy for typing records and variants in various dialects of Caml. Moving exception tracking out of the language and into a verification tool is the decisive move, because it liberates the analyzer from any constraints that may be essential at the language level.

But why track uncaught exceptions? That is, why track *uncaught* exceptions, rather than *caught* exceptions? From a purely methodological viewpoint it seems more important to know that a certain code fragment *cannot* raise certain exceptions (such as the Match exception in ML, which arises when a value matches no pattern in a case analysis). In a closed world in which all of the possible exceptions are known, tracking positive information about which exceptions might be raised amounts to the same as tracking which exceptions cannot be raised, by simply subtracting the raised set from the entire set. As long as the raised set is an upper bound on the exceptions that might be raised, the difference is a lower bound on the set of exceptions that cannot be raised. Such conservative approximations are necessary because a non-trivial behavioral property of a program is always undecidable (Rice’s Theorem), and hence requires proof. In practice this means that stronger invariants must be maintained than just the exception information so that one may prove, for example, that the values passed to a pattern match are limited to those that actually do satisfy some clause of an inexhaustive match.

How realistic is the closed world assumption? For it to hold would seem to require a whole-program analysis, which is non-modular, a risky premise in today’s world. Even on a whole-program basis exceptions must be *static* in the sense that, even if they are scoped, they may in principle be declared globally, after suitable renaming to avoid collisions. The global declarations collectively determine the whole “world” from which positive exception tracking information may be subtracted to obtain negative exception information. But in languages that admit multiple instantiations of modules, such as ML functors, static exceptions are not sufficient (each instance should introduce a distinct exception). Instead, static exceptions must be replaced by *dynamic* exceptions that are allocated at initialization time, or even run-time, to ensure that no collisions can occur among the instances. At that point we have an *open world* of exceptions, one in which there are exceptions that may be raised, but which cannot be named in any form of type that seeks to provide an upper bound on the possible uncaught exceptions that may arise.
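
A small example in Standard ML makes the point (F, throw, and catches are illustrative names, not any standard library):

(* each application of the functor generates a distinct exception
   at instantiation time, so the "world" of exceptions is open *)
functor F () =
struct
  exception E
  fun throw () = raise E
  fun catches f = (f (); false) handle E => true
end

structure A = F ()
structure B = F ()

val caught  = A.catches A.throw                      (* true: A.E is A's own class *)
val escaped = B.catches A.throw handle A.E => true   (* A.E passes through B's handler *)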

For example, consider the ML expression

let exception X in raise X end

If one were to use positive exception tracking, what would one say about the expression as a whole? It can, in fact it does, raise the exception X, yet this fact is unspeakable outside of the scope of the declaration. If a tracker does not account for this fact, it is *unsound* in the sense that the uncaught exceptions no longer provide an upper bound on what may be raised. One maneuver, used in Java, for example, is to admit a class of *untracked* exceptions about which no static information is maintained. This is useful, because it allows one to track those exceptions that can be tracked (by the Java type system) and to not track those that cannot.

In an open world (which includes Java, because exceptions are a form of object) positive exception tracking becomes infeasible because there is no way to name the exceptions that might be tracked. In the above example the exception X is actually a *bound variable* bound to a reference to an exception constructor. The name of the bound variable ought not matter, so it is not even clear what the exception raised should be called. (It is amusing to see the messages generated by various ML compilers when reporting uncaught exceptions. The information they provide is helpful, certainly, but is usually, strictly speaking, meaningless, involving identifiers that are not in scope.)

The methodological considerations mentioned earlier suggest a way around this difficulty. Rather than attempt to track those exceptions that might be raised, instead track the exceptions that cannot be raised. In the above example there is nothing to say about X *not* being raised, because it *is* being raised, so we’re off the hook there. The “dual” example

let exception X in 2+2 end

illustrates the power of negative thinking. The body of the let does not raise the exception bound to X, and this may be recorded in a type that makes sense within the scope of X. The crucial point is that when exiting its scope it is *sound* to drop mention of this information in a type for the entire expression. Information is lost, but the analysis is sound. In contrast there is no way to drop positive information without losing soundness, as the first example shows.

One way to think about the situation is in terms of *type refinements*, which express properties of the behavior of expressions of a type. To see this most clearly it is useful to separate the exception mechanism into two parts, the *control* part and the *data* part. The control aspect is essentially just a formulation of *error-passing style*, in which every expression has either a normal return of a specified type, or an exceptional return of the type associated to all exceptions. (Nick Benton and Andrew Kennedy nicely formulated this view of exceptions as an extension of the concept of a monad.)
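
The control aspect on its own is easily rendered as a sum type, a minimal sketch of the familiar error monad that Benton and Kennedy enrich:

(* error-passing style: every computation yields a normal return
   or an exceptional one, and bind propagates the exceptional case *)
datatype ('a, 'e) result = Ok of 'a | Err of 'e

fun return v = Ok v

fun bind (Ok v)  k = k v
  | bind (Err e) _ = Err e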

The data aspect is, for dynamic exceptions, the type of *dynamically classified* values, which is written clsfd in PFPL. Think of it as an *open-ended sum* in which one can dynamically generate new classifiers (aka summands, injections, constructors, exceptions, channels, …) that carry a value of a specified type. According to this view the exception X is bound to a dynamically-generated classifier carrying a value of unit type. (Classifier allocation is a storage effect, so the data aspect necessarily involves effects, whereas the control aspect may, for reasons of parallelism, be taken as pure.) Exception constructors are used to make values of type clsfd, which are passed to handlers that can deconstruct those values by pattern matching.
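
Standard ML’s built-in exn type is exactly such an open-ended sum, as a small sketch illustrates (IntCls and StrCls are made-up classes):

(* new classes may be declared anywhere, each carrying a value of a
   specified type; handlers deconstruct them by pattern matching *)
exception IntCls of int
exception StrCls of string

fun describe (IntCls n) = "an int: " ^ Int.toString n
  | describe (StrCls s) = "a string: " ^ s
  | describe _ = "some other class"   (* the sum is open: a catch-all is obligatory *)

val d1 = describe (IntCls 3)      (* "an int: 3" *)
val d2 = describe (StrCls "hi")   (* "a string: hi" *)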

Type refinements come into play as a means of tracking the class of a classified value. For the purposes of exception tracking, the crucial refinements of the type clsfd are the *positive* refinement, which asserts that a classified value *is* of class a, and the *negative refinement*, which asserts that it is *not* of class a. Positive exception tracking reduces to maintaining invariants expressed by a disjunction of positive refinements; negative exception tracking reduces to maintaining invariants expressed by a conjunction of negative refinements. Revisiting the logic of exception tracking, the key is that the entailment

¬a ∧ ¬b ⊢ ¬b

is valid, whereas the “entailment”

a ∨ b ⊢ b

is not. Thus, in the negative setting we may get ourselves out of the scope of an exception by weakening the refinement (dropping a conjunct), an illustration of the power of negative thinking.


Guy and I have argued, through our separate and joint work, for the applicability of PL ideas to algorithms design, leading, for example, to the concept of adaptive programming that Umut Acar has pursued aggressively over the last dozen years. And we have argued for the importance of cost analysis, for various measures of cost, at the level of the code that one actually writes, and not how it is compiled. Last spring, prompted by discussions with Anindya Banerjee at NSF in the winter of 2014, I decided to write a position paper on the topic, outlining the scientific opportunities and challenges that would arise in an attempt to unify the two disparate theories of computing. I circulated the first draft privately in May, and revised it in July to prepare for a conference call among algorithms and PL researchers (sponsored by NSF) to find common ground and isolate key technical challenges to achieving its goals.

There are serious obstacles to be overcome if a grand synthesis of the “two theories” is to be achieved. The first step is to get the right people together to discuss the issues, formulate a unified vision of the core problems, and identify promising directions for short- and long-term research. The position paper is not a proposal for funding, but rather a proposal for a meeting designed to bring together two largely (but not entirely) disparate communities. In the summer of 2014 NSF hosted a three-hour conference call among a number of researchers in both areas with a view towards developing a workshop proposal in the near future. Please keep an eye out for future developments.

I am grateful to Anindya Banerjee at NSF for initiating the discussion last winter that led to the paper and discussion, and I am grateful to Swarat Chaudhuri for his helpful comments on the proposal.

[*Update:* word smithing, corrections, updating, removed discussion of cost models for fuller treatment later, fixed incoherence after revision.]


To this end the sources of the 1990 and 1997 versions of the definition are on the web site, with the permission of MIT Press, as is the type-theoretic definition formulated by Stone and H., which was subsequently used as the basis for a complete machine-checked proof of type safety for the entire language done by Crary, Lee, and H. It is to be hoped that the errors in the definition (many are known; we provide links to the extensive lists compiled by Kahrs and Rossberg in separate investigations) may now be corrected. Anyone is free to propose an alteration to be merged into the main branch, which is called “SML, The Living Language” and also known as “Successor ML”. One may think of this as a kind of “third edition” of the definition, but one that is in continual revision by the community. Computer languages, like natural languages, belong to us all collectively, and we all contribute to their evolution.

Everyone is encouraged to create forks for experimental designs or new languages that enrich, extend, or significantly alter the semantics of the language. The main branch will be for generally accepted corrections, modifications, and extensions, but it is to be expected that completely separate lines of development will also emerge.

The web site, sml-family.org, is up and running, and will be announced in various likely places very soon.

*Update:* We have heard that some people get a “parked page” error from GoDaddy when accessing sml-family.org. It appears to be a DNS propagation problem.

*Update: *The DNS problems have been resolved, and I believe that the web site is stably available now as linked above.

*Update:* Word smithing for clarity.
