There Is Such A Thing As A Declarative Language, and It’s The World’s Best DSL

July 22, 2013

In a recent post I asked whether there is any such thing as a declarative language. The main point was to argue that the standard “definitions” are, at best, not very precise, and to see whether anyone might offer a better definition. What I’m after is an explanation of why people seem to think that the phrase has meaning, even though they can’t say very clearly what they mean by it.  (One commenter analogized with “love” and “happiness”, but I would counter by saying that we’re trying to do science here, and we ought to be able to define our terms with some precision.)

As I mentioned, perhaps the best “definition” that is usually offered is to say that “declarative” is synonymous with “functional-and-logic-programming”.  This is pretty unsatisfactory, since it is not so easy to define these terms either, and because, contrary to conventional classifications, the two concepts have pretty much nothing in common with each other (but for one thing to be mentioned shortly). The propositions-as-types principle helps set them clearly apart: whereas functional programming is about executing proofs, logic programming is about the search for proofs. Functional programming is based on the dynamics of proof given by Gentzen’s inversion principle. Logic programming is based on the dynamics of provability given by cut elimination and focusing.  The two concepts of computation could not be further apart.
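
To make the functional side of the correspondence concrete, here is a minimal sketch in OCaml (standing in, for illustration only, for any typed functional language): a closed term of a type is a proof of the corresponding proposition, and evaluating it is proof reduction. Logic programming would instead search for such terms, which is a different computation altogether.

    (* Propositions-as-types in miniature: a program of type
       'a * 'b -> 'b * 'a is a proof of "A and B implies B and A".
       Running the proof is functional computation. *)
    let swap : 'a * 'b -> 'b * 'a = fun (a, b) -> (b, a)

    (* Modus ponens as a program: a proof of "A implies ((A implies B) implies B)". *)
    let apply : 'a -> ('a -> 'b) -> 'b = fun x f -> f x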

Yet they do have one thing in common that is usefully isolated as fundamental to what we mean by “declarative”, namely the concept of a variable.  Introduced by the ancient Hindu and Muslim mathematicians Brahmagupta and al-Khwarizmi, the variable is one of the most remarkable achievements of the human intellect.  In my previous post I had secretly hoped that someone would propose variables as being central to what we mean by “declarative”, but no one did, at least not in the comments section.  My unstated motive for writing that post was not so much to argue that the term “declarative” is empty, but to test the hypothesis that few seem to have grasped the importance of this concept for designing a civilized, and broadly applicable, programming language.

My contention is that variables, properly so-called, are what distinguish “declarative” languages from “imperative” languages. Although the imperative languages, including all popular object-oriented languages, are based on a concept that is called a variable, they lack anything that actually is a variable.  And this is where the trouble begins, and the need for the problematic distinction arises.  The declarative concept of a variable is the mathematical concept of an unknown that is given meaning by substitution. The imperative concept of a variable, arising from low-level machine models, is instead given meaning by assignment (mutation) and, by a kind of notational pun, allowed to appear in expressions in a way that resembles that of a proper variable.  But the concepts are so fundamentally different that I argue in PFPL that the imperative concept should be called an “assignable”, which is more descriptive, rather than a “variable”, a term whose privileged status should be emphasized, not obscured.
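
A small OCaml sketch of the two concepts side by side (OCaml’s ref cells standing in for assignables):

    (* A variable proper: x is an unknown given meaning by substitution.
       Evaluating double 3 substitutes 3 for x; x itself never changes. *)
    let double = fun x -> x + x
    let six = double 3                  (* by substitution: 3 + 3 *)

    (* An assignable: r names a mutable cell given meaning by assignment.
       The notational pun: !r lets the cell's current contents appear in
       expressions as though r were a variable. *)
    let r = ref 3
    let () = r := 4                     (* mutation: r now holds 4 *)
    let eight = !r + !r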

The problem with purely imperative programming languages is that they have only the concept of an assignable, and attempt to make it serve also as a concept of variable. The result is a miserable mess of semantic and practical complications. Decades of work have gone into rescuing us from the colossal mistake of identifying variables with assignables. And what is the outcome? If you want to reason about assignables, what you do is (a) write a mathematical formulation of your algorithm (using variables, of course), and (b) show that the imperative code simulates the functional behavior so specified.   Under this methodology the mathematical formulation is taken as self-evidently correct, the standard against which the imperative program is judged; it is not itself in need of further verification, whereas the imperative formulation invariably is.
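
The methodology in miniature, sketched in OCaml: the functional formulation is taken as the specification, and the imperative formulation is the thing that has to be proved to agree with it (here, by a loop invariant relating the accumulator to a partial sum).

    (* The mathematical formulation, self-evidently the sum of a list. *)
    let rec sum = function
      | [] -> 0
      | x :: xs -> x + sum xs

    (* The imperative formulation: correct only relative to the invariant
       that !acc equals the sum of the prefix processed so far. *)
    let sum_imp xs =
      let acc = ref 0 in
      List.iter (fun x -> acc := !acc + x) xs;
      !acc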

What an odd state of affairs!  The functional “specification” is itself a perfectly good, and apparently self-evidently correct, program.  So why not just write the functional (i.e., mathematical) formulation, and call it a day?  Why indeed!  Declarative languages, being grounded in the language of mathematics, allow for the identification of the “desired behavior” with the “executable code”.  Indeed, the propositions-as-types principle elevates this identification to a fundamental organizing principle: propositions are types, and proofs are programs.  Who needs verification?  Once you have a mathematical specification of the behavior of a queue, say, you already have a running program; there is no need to relegate it to a stepping stone towards writing an awkward, and invariably intricate, imperative formulation that then requires verification to ensure that it works properly.
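
For instance, here is a complete, runnable queue in OCaml, in the classic two-list representation: the mathematical description and the executable program coincide.

    type 'a queue = 'a list * 'a list         (* front, reversed back *)

    let empty : 'a queue = ([], [])

    let enq ((f, b) : 'a queue) x = (f, x :: b)   (* enqueue at the back *)

    let rec deq = function                        (* dequeue from the front *)
      | ([], [])    -> None
      | ([], b)     -> deq (List.rev b, [])
      | (x :: f, b) -> Some (x, (f, b))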

Functional programming languages are written in the universally applicable language of mathematics as expressed by the theory of types.  Such languages are therefore an integral part of science itself, inseparable from our efforts to understand and master the workings of the world.  Imperative programming has no role to play in this effort, and is, in my view, doomed in the long run to obsolescence, an artifact of engineering, rather than a fundamental discovery on a par with those of mathematics and science.

This brings me to my main point, the popular concept of a domain-specific language. Very much in vogue, DSL’s are offered as the solution to many of our programming woes. And yet, to borrow a phrase from my colleague Guy Blelloch, the elephant in the room is the question “what is a domain?”. I’ve yet to hear anyone explain how one decides where the boundaries of a “domain-specific” language lie. Isn’t the “domain” mathematics and science itself? And does it not follow that the right language must be the language of mathematics and science? How can one rule out anything as being irrelevant to a “domain”?  I think it is impossible, or at any rate inadvisable, to make such restrictions a priori.  Indeed, full-spectrum functional languages are already the world’s best DSL’s, precisely because they are (or come closest to being) the very language of science, the ultimate “domain”.


Referential transparency

February 9, 2012

After reading some of the comments on my Words Matter post, I realized that it might be worthwhile to clarify the treatment of references in PFPL. Most people are surprised to learn that I have separated the concept of a reference from the concept of mutable storage. I realize that my treatment is not standard, but one thing I learned in writing the book is to divorce the concept of a reference from that to which it refers.  Doing so allows for a nicely uniform account of references arising in many disparate situations, and is the basis for, among other things, a non-standard, but I think clearer, treatment of communication in process calculi.  Conventional accounts of references are, in my formulation, references to assignables; I also have references to symbols, to fluids, to channels, and to classifiers, all of which arise naturally and independently of assignment.

It’s not feasible for me to reproduce the entire story here in a short post, so I will confine myself to a few remarks that will perhaps serve as encouragement to read the full story in PFPL.

There is, first of all, a general concept of a symbol, or name, on which much of the development builds. Perhaps the most important thing to realize about symbols in my account is that symbols are not forms of value. They are, rather, indices for an infinite, open-ended (expansible) family of operators for forming expressions. For example, when one introduces a symbol a as the name of an assignable, what one is doing is introducing two new operators, get[a] and set[a], for getting and setting the contents of that assignable. Similarly, when one introduces a symbol a to be used as a channel name, one is introducing operators send[a] and receive[a] that act on the channel named a. The point is that these operators interpret the symbol; for example, in the case of mutable state, the get[a] and set[a] operators are what make it state, as opposed to some other use of the symbol a as an index for some other operator. There is no requirement that the state be formulated using “cells”; one could as well use the framework of algebraic effects pioneered by Plotkin and Power to give a dynamics to these operators. For present purposes I don’t care about how the dynamics is formulated; I only care about the laws that it obeys (this is the essence of the Plotkin/Power approach as I understand it).
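
A rough OCaml rendering of the idea, with symbols represented concretely only for the sake of the sketch and the types of contents fixed at int for brevity: a symbol never evaluates to anything; it occurs only as the index of a command former, and it is the command formers that give it its meaning.

    type sym = string    (* a stand-in; in PFPL symbols are primitive, not values *)

    type cmd =
      | Get  of sym            (* get[a]: read the assignable named a       *)
      | Set  of sym * int      (* set[a](e): write e to the assignable a    *)
      | Send of sym * int      (* send[a](e): transmit e on the channel a   *)
      | Recv of sym            (* receive[a]: receive on the channel a      *)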

Associated with each symbol a is a type that is, I hasten to emphasize, not the type of the symbol a itself (the symbol inhabits no type), but is merely associated with it. For example, if a has the associated type nat, then get[a] will be a command returning a number, and set[a](e) will take a number, e, as argument. Similarly for channels, and for other uses of symbols as indices of other operators.
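
Refining the sketch above, the associated type can be rendered as a phantom parameter on symbols, so that a symbol of associated type 'a is not itself a value of type 'a, yet governs the types of the operators indexed by it:

    type 'a sym   (* abstract: a symbol inhabits no type of values *)

    type _ cmd =
      | Get : 'a sym -> 'a cmd          (* get[a] is a command returning an 'a *)
      | Set : 'a sym * 'a -> unit cmd   (* set[a](e) takes an 'a as argument   *)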

Given a symbol a with associated type T, one may form a reference to a, written &a, as a value whose type, in the case of a reference to an assignable, will be, say, T assignable reference, or similar terminology. This type is interpreted by elimination forms that take a value as argument, and transition to the corresponding operator for that assignable: getref(&a) transitions to get[a], and setref(&a;e) transitions to set[a](e). Similar rules govern other uses of symbols. The “ref” operations simply dereference the symbol (determine which assignable it references, in the case of assignables), and defer to the underlying interpretation of the referenced symbol. In the case of process calculi channel passing is effected using references to channels, and there are operations that determine an event based on the referent of a value of channel reference type. (This greatly clarifies, in my view, some messiness in the π-calculus, and leads to a much better-behaved theory of process equivalence.)
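
Continuing the typed sketch, a reference is an ordinary value wrapping a symbol, and the eliminators simply defer to the operators indexed by the wrapped symbol:

    type 'a rf = Ref of 'a sym          (* the value &a, a reference to the symbol a *)

    let getref : 'a rf -> 'a cmd = fun (Ref a) -> Get a
    let setref : 'a rf -> 'a -> unit cmd = fun (Ref a) e -> Set (a, e)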

The scoping of symbols, or names, is handled independently of their interpretation. One may use either a scoped or a scope-free interpretation of symbols. Scoped symbols have a limited extent; free symbols have unlimited extent. The concept of a mobile type, adapted from our work on ML5, manages the interaction between scoped declarations and evaluation (for which see the book). One consequence is that one may form references to both scoped and unscoped assignables, independently of the scope discipline. In other words I am deliberately breaking the conventional (and, I think, mistaken) linkage between the extent of an assignable and the formation of a reference to it. Usually people conflate reference with state and with global extent; I am deliberately not following that convention, because I think it muddles things up. So, for example, in my account of Algol I have no difficulty in allowing references to scoped assignables, and passing them as arguments to procedures, for example. I found this surprising, at first, but then quite natural once I had dissected the situation more carefully.

Finally, let me mention that the critical property of assignables (in fact, symbols in general) is that they admit disequality. Two different assignables, say a and b, can never, under any execution scenario, be aliases for one another: an assignment to one will never affect the content of the other. Aliasing is a property of references, because two references, say bound to variables x and y, may refer to the same underlying assignable, but two different assignables can never be aliases for one another. This is why “assignment conversion” as it is called in Scheme is not in any way an adequate response to my plea for a change of terminology.  Assignables and variables are different things, and references are yet different from those.
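
OCaml’s ref type bundles a reference together with the assignable it refers to, but the asymmetry is still visible there:

    (* Two references may alias one assignable... *)
    let a = ref 0
    let x = a and y = a      (* x and y refer to the same assignable *)
    let () = x := 1
    let one = !y             (* = 1: assignment through x is visible through y *)

    (* ...but two distinct assignables can never alias each other. *)
    let b = ref 0 and c = ref 0
    let () = b := 1
    let zero = !c            (* = 0: b and c are forever disjoint *)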

