Bellman on “Dynamic Programming”

Everyone who has studied algorithms has wondered “why the hell is Bellman’s memoization technique called dynamic programming?”. I recently learned the answer from my colleague, Guy Blelloch, who dug up the explanation from Richard Bellman himself:

“I spent the Fall quarter (of 1950) at RAND. My first task was to find a name for multistage decision processes.

“An interesting question is, ‘Where did the name, dynamic programming, come from?’ The 1950s were not good years for mathematical research. We had a very interesting gentleman in Washington named Wilson. He was Secretary of Defense, and he actually had a pathological fear and hatred of the word, research. I’m not using the term lightly; I’m using it precisely. His face would suffuse, he would turn red, and he would get violent if people used the term, research, in his presence. You can imagine how he felt, then, about the term, mathematical. The RAND Corporation was employed by the Air Force, and the Air Force had Wilson as its boss, essentially. Hence, I felt I had to do something to shield Wilson and the Air Force from the fact that I was really doing mathematics inside the RAND Corporation. What title, what name, could I choose? In the first place I was interested in planning, in decision making, in thinking. But planning, is not a good word for various reasons. I decided therefore to use the word, ‘programming.’ I wanted to get across the idea that this was dynamic, this was multistage, this was time-varying—I thought, let’s kill two birds with one stone. Let’s take a word that has an absolutely precise meaning, namely dynamic, in the classical physical sense. It also has a very interesting property as an adjective, and that is it’s impossible to use the word, dynamic, in a pejorative sense. Try thinking of some combination that will possibly give it a pejorative meaning. It’s impossible. Thus, I thought dynamic programming was a good name. It was something not even a Congressman could object to. So I used it as an umbrella for my activities” (p. 159).

As with algorithms, so too with dynamic languages?

Update: why is it called “memoization” and not “memorization”?

Update: rewrite of the commentary.


40 Responses to Bellman on “Dynamic Programming”

  1. […] and matching different typing regimes is only possible in a dynamically typed context, type systems are often a second class citizens even in the languages that implement them well, […]

    • I think I understand your point, but I would say it differently. First, one has a structural type system defining what the programs are; a dynamic language has just such a type system, with one recursive type. Second, one has type refinements as predicates over the structural type system. One can have many of those for each structural type system, whether unityped or multityped. I think this way of saying things reconciles views that seem disparate otherwise.
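To make the comment's view concrete, here is a minimal Python sketch (all names are hypothetical): the structural type system of a "dynamic" language has just one recursive type, here "any Python value", and refinements are predicates carving out subsets of that one type.

```python
# One structural type (any Python value), with refinements as predicates
# over it -- a sketch of "type refinements over a unityped system".

def is_nat(v):
    """Refinement: the natural numbers, a subset of the one type."""
    return isinstance(v, int) and not isinstance(v, bool) and v >= 0

def is_even_nat(v):
    """A finer refinement of the same structural type."""
    return is_nat(v) and v % 2 == 0

def is_nat_list(v):
    """Refinements compose over the recursive structure of the one type."""
    return isinstance(v, list) and all(is_nat(x) for x in v)
```

Many such predicates can coexist over the same underlying type, which is the sense in which refinements are "secondary" to the structural typing.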

  2. blaisorblade says:

    > “oh, static languages are just fine, but I want something more dynamic,” the speaker not quite realizing the absurdity of what they are saying

    Question: isn’t NuPRL an answer to the actual concerns?
    I never tried NuPRL and should read more about it; but from an email of yours to the Types mailing list, I seem to understand that you write a program without a “syntactic” type system (or with a degenerate one, if you prefer), and separately prove that it implements some specification. And the reason you chose not to have a syntactic type system in NuPRL is that it would not be expressive enough — which might be what the speaker is complaining about (maybe with bad language).

    Of course, there’s the huge difference that in NuPRL you still prove that a term is “\in” some type, while “dynamic” languages don’t have support for that.

    • Right, these are called type refinements in PFPL2. They are secondary to structural types.

    • Yes, NuPRL’s type system is behavioral, i.e., it is based on realizability. But, no, dynamic languages as typically advocated are not remotely like NuPRL in terms of expressive power or verification capabilities.

      See my recent paper with Davies that draws on behavioral typing to make clear that oop has nothing to contribute to language design, but is instead a mode of use of sums, products, and, yes, abstract types (pace failed attempts to place them in opposition to one another).

  3. I’d like to second jbetzend’s point re. the origins of the term “memoization”. I did some digging way back in 1992, and here are my notes from that time:

    It was first coined by Donald Michie [1] during early work in AI
    at the University of Edinburgh in the late 1960s or early 1970s.
    He coined the term “memo function” to describe a general
    technique (first used in Lisp) to accelerate function calls by
    remembering arguments and results of previous calls in an
    internal lookup table. The paper’s title uses “memo-function”,
    but the text has many occurrences of the term “memoization”.

    [1] D. Michie, “‘Memo’ Functions and Machine Learning”,
    Nature, vol. 218, pp. 19–22, April 1968.


  4. You asked in your update why it is called ‘memoization’ instead of ‘memorization’. I looked around a bit and here’s what I found:

    Obviously the two words have the same origin and a similar meaning. Yet memoization refers to the specific process in computing while memorization is a much broader term.

    It appears that Donald Michie coined the term ‘memo function’ in 1968 with his “Memo Functions and Machine Learning” in Nature, ‘memo’ being short for memorandum (Latin for “what must be remembered”) and thus “memoization” meaning “to turn (a function) into a memo”.

  5. Peter Gerdes says:

    Don’t be a pedant. You know damn well what people mean when they say they want a more dynamic language the same way I know what people mean when they say “I literally felt like I could fly” to describe how they felt. You may feel the terminology is inapt, inelegant, a poor way to extend the language but don’t pretend it doesn’t make sense.

    Here, I’ll prove it to you. Go ask people in CS (theory or practice) which is more dynamic: C++ or Python, Obj C or C, Ruby or Obj C, Python or Lisp. Don’t let them weasel out; force them to pick (if they say they don’t know, ask what they think other people would say). To a man they will tell you Lisp > Ruby > Python >= Obj C > C++ >= C. Sure, the equalities indicate some possibility for disagreement, but THIS LANGUAGE IS MORE DYNAMIC THAN THAT IS A LINGUISTIC CONSTRUCT WHICH IS WIDELY AGREED UPON BY MOST USERS OF THE TERM. They might not be using it the way you like, but the fact that they use it consistently *DEFINES* a notion of more dynamic. Sure, people won’t agree perfectly and there will be edge cases. But the same thing applies to the term fearsome, and it doesn’t mean the concept doesn’t exist.

    The dirty secret is that the dynamicity of a language just isn’t a function of the type system it implements. That’s why you assume it can’t make sense. After all (viewed under a certain model), one can always take a Turing-complete, statically typed language and embed the untyped lambda calculus.

    What it means when someone says they want a more dynamic language is a combination of *easy* function dispatch based on the properties an object acquires at runtime. True, any language might have the ability via case statements and the like, but you can’t deny that it feels different to program in a language like Ruby and one like C++. True, C++ could do all the same things by encoding all its data in some untyped form, but it doesn’t provide *easy* mechanisms to define behavior based on runtime information.
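As an editorial illustration of the "easy runtime dispatch" the comment describes, here is a small Python sketch (the classes and method names are hypothetical): behavior is chosen by a property the object happens to have at run time, duck-typing style.

```python
# Dispatch on what an object can do *now*, not on a declared type.

class Duck:
    def quack(self):
        return "quack"

class Dog:
    def bark(self):
        return "woof"

def make_noise(obj):
    # inspect the object's runtime properties and pick a behavior
    if hasattr(obj, "quack"):
        return obj.quack()
    if hasattr(obj, "bark"):
        return obj.bark()
    return "silence"
```

The same effect is achievable in any language with case analysis over a sum type; the point at issue in the thread is only how *easy* the language makes it.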

    The other component is the level of reflection present in the language. This doesn’t have to be reflection narrowly defined, but any ability to create substantially different behavior for certain regions of code than one might expect from the base language. In other words, a language like Scheme is more dynamic than C++ because all parts of C++ look and behave pretty much the same, but deep into a Scheme program all the normal Scheme behavior might be hidden behind hygienic macros and higher order functions.

    Again, it’s not the pure ability but the ease; arguably, access to assembly language gives C++ the pure ability.

    • Thanks for the note. I think your critique applies in any technical context in which someone gives precise definitions to imprecise social constructs or relatively vague intuitions. Terms such as “energy” or “number” come to mind. My intent is reformist in the sense that drawing the distinctions I do eliminates confusions and reconciles seemingly disparate notions. I haven’t found it productive to take a show of hands to determine what a PL concept means, but you or others may disagree.

  6. Steven Shaw says:

    There must be a better word than “static” for static types. What is static anyway? One person’s static/compile-time is another person’s dynamic/runtime. So, _when_ is static might be a better question :).

    One way to think of it is early (detection) versus late (detection) of bugs/errors. Early is generally better than late and certainly in this case (but not always — for example delaying a decision to the last possible moment is sometimes powerful). I guess this relates to staging (which I know very little about).

    I haven’t come up with any good alternatives. Hope you have one or many.

  7. catnaroek says:

    While I understand and share your distaste for languages whose advocates prepend the adjective “dynamic” to the noun “type” to denote things that are not types, I do not understand your distaste for the adjective “dynamic” itself.

    Instead of fighting non-concepts denoting non-things, it would be far more productive to redefine the terms “static type” and “dynamic type” so that 1. both denote things that actually exist, and 2. both are instances of the general notion of “type”.

    (A draft from) your book PFPL mentions that “Most programming languages exhibit a phase distinction between the static and dynamic phases of processing.” (Chapter 4 from Revision 1.40 of 02.05.2014.) In the context of a language with such phase separation, we can define “static type” and “dynamic type” in a meaningful way:

    1. Static types are those whose meaning is fully determined in the static phase of processing. For example: int, string, type variables that can be monomorphized. All types in Standard ML and Haskell (as defined in the Haskell Report) are static types.

    2. Dynamic types are those whose meaning can only be fully determined in the dynamic phase of processing. For example, if a language has a dependent type of valid indices for an array, then the type of valid indices of an array that is constructed at runtime is clearly a dynamic type. GHC Haskell, OCaml and Scala have dynamic types in this sense.
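To make the comment's array-index example concrete, here is a runtime-checked approximation in Python (the helper name is hypothetical): the collection of valid indices depends on the array's length, which is known only once the array exists at run time.

```python
# A runtime-constructed predicate standing in for the comment's
# "type of valid indices of an array built at run time".

def valid_index_check(arr):
    """Return a predicate recognizing the valid indices of this array."""
    n = len(arr)  # determined only in the dynamic phase
    return lambda i: isinstance(i, int) and 0 <= i < n
```

A genuinely dependent type system would let the checker use this fact statically; here it is merely a dynamically checked refinement, which is the distinction the subsequent replies argue about.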

    • The entire point is that “dynamic types” are not types. I debunk the standard fallacies item-by-item in PFPL. It is NOT a matter of what is determined statically, because one can, in fact, compute types (not classes/tags) dynamically, so your proposed analogy, encouraged perhaps by sigfpe’s post, is simply not right.

    • catnaroek says:

      > because one can, in fact, compute types (not classes/tags) dynamically

      That was exactly my point. While the so-called “dynamic types” in practice are just dynamically computed classes, dynamic types actually worthy of the name would be true-blue dynamically computed types. “Dependent type” is simply too bad a marketing term.

    • Peter Gerdes says:

      But that gets it backward. Dynamic type doesn’t mean a type which is computable dynamically. It means a type which is not computable statically.

      Now this gets a bit murky because a given language only has facts about its semantic behavior. The authors may base that upon some type system, but you can equally well describe it by many others. One can always simply refuse to assign types and only describe the language via reduction rules, as in the lambda calculus.

      So if we are speaking pedantically THERE ARE NO FACTS ABOUT THE TYPES PRESENT IN A PROGRAM LANGUAGE. Type judgements occur in the metalanguage and many different type judgements can be supported by a given language.

      But unless you take it that far, you use the same shorthand as everyone else and talk about the types in language X to mean those types that are particularly salient and suited to typing language X. Given this paradigm, one could regard a more dynamically typed language as one in which fewer of its types can be computed without knowledge of the runtime graph, i.e., it’s not about what you can compute dynamically but what you can’t (elegantly) compute statically.

    • If I understand your point correctly, then I don’t agree with it. In the framework of PFPL the statics is not bolted on after the fact to a pre-existing PL; rather, it defines the language and determines its dynamics by the inversion principle. My recently drafted chapter on type refinements discusses post-hoc refinement systems that are choosable more or less arbitrarily, but I am at pains to distinguish these from types. I am sorry if you see this as pedantry; my experience has been that drawing these and related distinctions is productive and helpful in revealing the sources of apparent confusions and disagreements and in enabling a more integrated view of seemingly disparate ideas.

  8. Igor Bukanov says:

    The notion that a static language extends a dynamic one is not widely known because there are few if any popular static languages that allow any sensible programming relying on runtime type enforcement. For example, in Java one can in principle use Object and reflection to call any method, but the code to use that pattern is extremely verbose and unmaintainable. So if that has to be done on a large scale, people instead embed an interpreter for some dynamic language, reinforcing the belief that static languages are not powerful enough to do the dynamic stuff.

  9. […] InfoQ How Far Can We Get Being Purely Declarative Constraint Logic Programming Bob Harpers’ Post on Static vs Dynamic Typing Analogies Between Typing and Logic Programming Dependent Type Systems, e.g. Agda and Idris The […]


  11. arielbyd says:

    I would like to note that typical DT languages (e.g. Python) don’t really have a notion of class (or tag), except for the trivial case of primitive objects, which can recognize themselves. At the lowest level, all objects support the same set of operations, occasionally raising exceptions, which are valid behaviours and not cases of “getting stuck”.

    However, when programming in these languages, programmers typically find the notion of types useful for reasoning about these programs. Of course, these “types” are only present in comments and the programmers’ heads – which has the disadvantage that they are not checked. However, this informality has the useful property of programmers not needing to convince type checkers that their programs are type-safe – which is useful, because as of today proof checkers are rather imperfect, and the checkers found in popular languages are, to be blunt, stupid.
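The comment's point about types living only "in comments and the programmers' heads" can be illustrated directly with Python's annotations (function name hypothetical): they are written down as documentation, but the language itself never checks them.

```python
# The annotation below documents the programmer's intent; Python
# evaluates and stores it but never enforces it.

def length(xs: "list[int]") -> int:
    return len(xs)

# This call violates the annotation, yet runs without complaint:
bad = length("hello")
```

An external checker such as mypy can flag the mismatched call, which is the sense in which such "types" are informal unless a separate tool is brought in.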

  12. So what do preservation and progress look like in a dynamically typed language?

    • Excellent! Preservation is trivial (if you can transition at all, you definitely have transitioned to an expression of the one type), but you still have to prove progress (that you are never stuck without being finished).
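To illustrate the reply concretely, here is a toy unityped evaluator in Python (the mini-language and its forms are hypothetical). Every value inhabits the single type of tagged pairs, so preservation is immediate: evaluation, if it yields anything, yields another inhabitant of that one type. Progress is the real content: a failed tag check raises a *checked* error rather than the evaluator getting stuck.

```python
# A toy "dynamic" expression language over one type of tagged values:
# ('num', n) and ('fun', f).  Tag checks happen at run time.

def eval_dyn(e):
    op = e[0]
    if op in ('num', 'fun'):
        return e                       # values are self-evaluating
    if op == 'add':
        v1, v2 = eval_dyn(e[1]), eval_dyn(e[2])
        if v1[0] != 'num' or v2[0] != 'num':
            raise TypeError('add applied to a non-number')    # checked, not stuck
        return ('num', v1[1] + v2[1])
    if op == 'app':
        f, a = eval_dyn(e[1]), eval_dyn(e[2])
        if f[0] != 'fun':
            raise TypeError('application of a non-function')  # checked, not stuck
        return f[1](a)
    raise ValueError('unknown expression form')
```

Proving progress for such a language amounts to showing that every well-formed expression either is a value or takes a step, where raising one of these trapped errors counts as a legitimate transition, not as being stuck.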

  13. Mike Meyer says:

    I think I want to call shenanigans. While in today’s world “dynamic” is generally considered the superior adjective, there are circumstances where “static” would be the preferred option. For instance, you really, really, really want the rock you’re supporting your skyscraper with to be “static” (bedrock) rather than “dynamic” (not bedrock?).

    Likewise, while it’s true that I can’t think of a way to use “dynamic” as a pejorative, the same is true of “static”. And I can use either of them in a negative. Bellman’s trick only worked because he was dealing with an ignoramus (now *there’s* a pejorative for you!). In extending it to programming languages, you are tarring programmers with the same brush.

    Not that this invalidates your point – just this one argument. Before I decide whether I think static vs. dynamic typing is a false dichotomy, I’d like to know how you’re going to define them.

  14. I am familiar with your argument that dynamic languages are a special case of static languages, and am entirely convinced by it. However, it seems to me that you often use this observation as evidence that dynamic languages are not interesting from a software engineering perspective, which seems like a largely unrelated argument to me. From a software engineering perspective, it seems sensible to speak of languages that by their syntax and standard libraries encourage a richly-typed style of programming (SML, Haskell, Scala, …) as a distinct family from languages that discourage or completely disallow such styles (Python, Ruby, Javascript, Clojure, …). Do you find any such distinction absurd? Of course you could write “Pythonic” code in Haskell, but it would look ridiculous and approximately no one does that.

    The reason I’m interested is that, like Jeremy Siek, I believe there are solid software engineering arguments for writing carefully, precisely typed programs as well as more sloppily-typed programs. Dismissing one family or the other (or hybrids, or whatever) out of hand seems counterproductive to me.

    To get back to the naming theme of your post, would you be happier if we called them typeful and sloppy languages?

    • My main point is to debunk the supposed “opposition” between dynamic and static languages. It’s based on a fallacy, and I’d like to correct that. More broadly, I am in favor of richly expressive typed (there are no other) languages that encompass things like so-called dynamic typing or laziness as a special case, a mode of use of a proper static type discipline. Pretending that there is some sort of opposition between dynamic and static languages does not further this goal, it impedes it, which is why I object.

      There is a good reason why no one programs “pythonically” within ML or Haskell: once you have the freedom to express ideas properly, you realize that you absolutely do not want that! It’s only within the limited world of only one type that such ideas seem to make sense.

      Please see my response to Matthias for a brief explanation of my view of what is called “gradual typing” and suchlike. I see no value at all in starting out by ignoring types (blinkering oneself at the outset), then trying to go back and undo the mistakes you make at the outset by considering only one type. Also see PFPL, especially the recent revision that includes type refinements, for a worked example of what happens when you start with one type and examine closely what is going on using refinements. It’s laughable because the code you wind up with you could as easily have written in the first place, without the excursion through unityped programming.

      I am fully sympathetic to the position of the poor (hypothetical) person who has only ever seen Java or the like as an example of static typing, and recoils from it. But I’ve been dead against the wrong-headedness of Java from day one, and would never uphold that as an example of anything other than a big mistake backed by a huge company with way too much influence over what is taught at universities. (I’ve worked hard to eradicate that from the curriculum at Carnegie Mellon, as I’ve described in previous posts.) And it remains true, though much less so than it once was, that many people are unable to imagine the concept of type inference. It eliminates explicit mention of types, except exactly where you want them, namely to express your intent within the code itself, breathing the life of reason into it, rather than relying on post-hoc, inaccurate “documentation” of some idea that was once vaguely true of the code that’s actually there.

    • Depending on what exactly you mean by “opposition”, I do not agree. I understand your point about languages with non-trivial type systems being generalizations of languages with trivial type systems (and btw, have assigned several chapters from PFPL in a small seminar I’m leading this semester). But X being a generalization of Y does not immediately demonstrate that Y is crap.

      Your claim about “you realize that you absolutely do not want that!” is demonstrably false. I have written tens of thousands of lines of SML, OCaml and Haskell and still happily choose Python and Javascript for some programming tasks. And I personally know at least a handful of people who have had the same experience.

      I think the interesting technical point is that there seems to be some real convenience in a language implicitly tagging every value with its class and checking those tags at every use. In that style of programming you give up a lot in terms of performance, maintainability and large-scale reliability, but sometimes those aren’t important concerns.

    • See PFPL for a demonstration that a superficial convenience is nothing of the kind, particularly when you have type inference as in ML. And you are distorting my claim, which is that the supposed opposition between dynamic and static languages is wrong, which it is.

    • Apologies for misunderstanding what you called crap. It was not a deliberate attempt to misrepresent your views.

      I scanned over your analysis of dynamic typing in PFPL, but didn’t see anything that directly addresses the kind of convenience I have in mind. So I wonder if you would comment on this example? I’m curious if I have missed something, or if it’s just a difference in priorities.

      Consider the following scenario: We have some legacy function foo that takes a bunch of parameters and does a bunch of stuff. We decide to add a new feature to foo that requires a new parameter, and is only relevant in some special case. Here are sketches of how I would implement this in Python and SML. It’s worth noting that the code is quite similar in the two languages, but there are two differences I’ll comment on below.


      # exception version
      def foo( ... params ..., x ):
          if some special case:
              ... use x ...
      # checked version
      def foo( ... params ..., x ):
          if some special case:
              if x == None:
                  error "Sorry; need x"
              ... use x ...
      # normal case call
      foo( ... params ..., None )
      # special case call
      foo( ... params ..., xVal )


      (* exception version *)
      fun foo( ... params ..., x) =
          if some special case then
              ... use valOf(x) ...
      (* checked version *)
      fun foo( ... params ..., x ) =
          if some special case then
              case x of
                  NONE => error "Sorry; need x"
                | SOME xVal => ... use xVal ...
      (* normal case call *)
      foo( ... params ..., NONE )
      (* special case call *)
      foo( ... params ..., SOME(xVal) )

      The differences are:

      1. In the exception version in SML we need “valOf”, whereas in Python we just use x directly.
      2. In the special case call in SML we need SOME, whereas in Python we just pass xVal directly.

      These may seem like insignificant differences, but my belief is that these kinds of small differences add up to a noticeable convenience difference for quick and dirty programming. Have I missed something? Do you think this kind of convenience difference is just not worth thinking about?

    • Thanks for the followup. In this case I can’t see why not use options? The price you pay for the “convenience” of not doing so is that you have the infamous “null pointer” problem everywhere, a source of endless irritation. The purpose of refinement types, btw, is to ensure that your use of valOf is indeed valid at that point — it’s an instance of a potential “downcast error”. (I am pretty sure, though, that I have never used valOf in my entire life of programming with Standard ML.)

      My top-level point is that it doesn’t make sense to me to ignore the rich world of types that is inherent in computation. If you want or need dynamic typing (as it is called) in some situation, then by all means use it; a sufficiently rich static language puts this at your disposal as a special case. It seems to me, though, that once you have that choice, you stop making it, and you realize that confining yourself to one type is not at all an advantage, but is rather a clear disadvantage. That’s what I tried to show by a (necessarily small, but detailed) example in PFPL (Dynamic and Hybrid Typing chapters).

    • My top-level point is that using options is not literally free (compared to relying on implicit dynamic classification machinery). The syntax tree of your program is larger. In many, many situations I would argue that what you buy in terms of performance, maintainability and reliability is worth the price. But the tradeoff exists. I don’t know if it’s your intention or not, but I think your writings on the topic imply that there is simply no tradeoff at all.

      Stylistic choices somewhat amplify this issue. As you say in your comment, the checked version (avoiding valOf) is what someone writing in SML would choose 9 times out of 10 (999 out of 1,000, whatever), whereas the exception version is far more likely to show up in a Python, Lua or Ruby program. Again, there are reasons to prefer the checked version, but it does come at some cost in terms of program size.

      The other important reason that unityped languages are good for small, simple programs has to do with errors. Type errors exist in the universe of all possible program executions, whereas classification errors exist in the universe of specific executions. If you’re writing a large application, the former has a lot of value. If you’re writing a little program that reads some data in 4 different text log and database formats, does a little massaging and outputs some graphs, the latter makes it a lot easier to see where to apply a bandaid to get the darn thing working.

      Where I think we agree is that large, complex applications benefit quite a lot from smart use of typing discipline. One of the reasons that this conversation is interesting to me is that I think a lot of programmers (especially the large number of self-taught programmers) start with unityped languages, because they make it easier to get from zero to some code doing what you want. Some of those programmers go on to work on bigger systems where the use of unityped languages is much less appropriate. Getting that population to move to better languages for that kind of application is a lot easier when one starts from a place of understanding why programmers like unityped languages.

      (Semi-tangentially, while there’s a case to be made for the popularity of Java, C++, Objective C and C# being the work of corporate marketing machines, Lua, Python, Ruby, Clojure and Perl all grew up in the open source world and caught on because lots of programmers like using them.)

    • Your example prompted me to think of a couple of other points that I think are important. Your emphasis on whether the nth argument to that function/method is an option or not does not do justice to the real issue, which is what Yaron Minsky calls making undefined states unrepresentable (or words to that effect).

      First, notice that the fact that Python or other unityped languages have “multiple arguments” to functions/methods is already a concession to the need for more than the one true type; you need at least finite products of the one true type to pass more than one value to a function. Second, notice that the standard “multiargument” syntax is, in fact, an instance of pattern matching against a tuple pattern, another concession to better practices than offered by oop dogma.

      Which brings me to my third and main point, which can only be discussed by being a bit more specific. Let’s say that your “some special case” is that the first argument is zero. In your arrangement, whether you rely on the “null pointer” or not, you are making ill-defined situations representable, because I can provide the extra argument even when the first argument is zero (or non-zero), when in fact it should not be there at all. This is an instance of all that is wrong with structs in C: you agglomerate all the fields that could ever be there, even when the situation is such that some fields are in fact undefined for that scenario. And then you remind yourself, for now and forever into the future, that you mustn’t touch those fields in such situations. This never worked and will never work. It is of course possible to do C-like or “pythonic” programming in ML using options instead of the “null pointer”, but that’s not the way to write good code (and I am 99% sure you know that).

      What you want is (a) types to express the correlations between components, these are called sums, and (b) pattern matching to explicitly match the cases that are legal and allow the exhaustiveness checker to warn you when you’ve missed a case, either by mistake or by design or as a consequence of evolution of the code. The advantages of this approach are practically infinite, and the disadvantages are non-existent. The “pythonic” approach is a total loss on all counts, as becomes obvious once you admit, as even Python grudgingly does by having multi-argument functions, that more than one type in fact exists. So what I am saying is that your example is really a comparison of bad ML code with standard “pythonic” code, and even then the ML code looks slightly better to me. But when written properly, the ML code is quite a lot better. Yaron gives a talk (and may have written a paper) expanding on this point in great detail, so I won’t try to repeat it all here, but rather refer you there.

      Thanks for the thoughtful responses, I appreciate it.

    • I strongly agree with most of the points in your most recent comment, but I think your claims about the relative advantages and disadvantages are way off.

      First on the advantages side: Even in the context of medium- and large-scale applications, where I think rich type systems have compelling engineering advantages, those strengths are most definitely not “practically infinite”. If that were the case, it would be practically impossible to write large-scale software in unityped languages, which is obviously not the state of the world we live in. If the advantages were practically infinite it would be extremely surprising to see teams writing in C++ and/or Java win first prize (or share it?) at the ICFP programming contest (which has happened in 6 of the last 7 years). Compelling advantages, I’m with you; practically infinite, not by a long shot.

      On the disadvantages side: I get the value of making illegal states unrepresentable (Yaron’s presentation: minute 18), but it’s not free! It takes intellectual effort to encode non-trivial invariants in types. In my silly example, if the special case is weirder than one of the parameters being 0, it would take some non-trivial refactoring of the callers of foo to build the right value. For companies like Jane Street, where much of their code needs very high reliability and resilience in the face of evolution, this investment is a no-brainer. But if you look at all the code written around the world, the vast majority of it does not have such high reliability requirements. A good chunk of it is small, single-use, dead-end programs. In this context, (well-implemented) unityping really does provide convenience advantages.

      In case it’s not obvious, I’m not trying to claim that Python or Java or C is a better language than SML or Haskell or Scala. I would like to see more sophisticated language technologies used more widely. And I think it’s important for effective advocacy to stay on the well supported by evidence side of whatever claims one wants to make.

      (Random thought related to Yaron and your previous post: An offhand remark he made at a presentation helped me solidify my hypothesis that for most applications parallelism should really be done with processes (isolation by default) rather than threads (sharing by default). That’s how they do it at Jane Street.)

  15. Bob, one more person pointed me to your blogs on ‘dynamic programming’ and I decided I’d finally respond.

    I am the one academic PL researcher who has spent nearly his entire career working with dynamically typed languages and on improving them. I did so long before they became popular in the real world and I am still pursuing my vision of getting them right. I also spent my first sabbatical with you to study types and I am impressed to this day. I think this gives me the standing to push back in a rational way.

    Before I do so, let me remind you that programming in dynamically typed languages is still highly unpopular in academic circles. Everyone who touches this topic starts with an apology explaining that he really understands how important types are. So I think this debate is quite academic and probably irrelevant for practitioners and researchers alike. Having said that, here we go.

    Just like Bellman, PL people use the word “dynamic” in the phrase “dynamically typed” with “the classical physical sense” in mind. Something good happens at run time; you and I call this tag checking so that we can have meaningful conversations. But “they” call it dynamically typed (DT) and trying to fix this terminology problem is water under the bridge. Let it go.

    Next let me paraphrase Jim Morris’s dissertation. He suggested that the study of programming languages is about four concepts: syntax, types, semantics, and pragmatics. [Yes, he seems to identify the last two.] From a semantic perspective, your claim that a dynamically typed language is a statically typed language is totally correct (if you live in a closed world). Approaches as far apart as Scott semantics and the Wright-Felleisen method for type soundness support your idea; a type soundness theorem degenerates to a safety theorem when we move to a dynamically typed language. The problem for you and me as semanticists is that semantics is the least important aspect of a programming language. The English phrase “it’s just semantics” applies to our world in some ways, especially if you include all the people who program without ever having seen a semantics and people who wouldn’t recognize a semantics if it bit their behinds.

    The point is that most programmers care about pragmatics, and from a pragmatic perspective dynamically typed languages really are not statically typed languages. In a nutshell, statically typed languages commit to one type discipline; dynamically typed languages enable programmers to mix and match as many static type disciplines as they want.

    Here is an expanded explanation for your readers. As you know from How to Design Programs, (good) DT programmers use something like types to design and organize their programs. The big difference is that there is no explicit language of types as in explicitly, statically, and soundly typed languages such as ML and Haskell (I know about exceptions), all the way to Agda. This is both a curse and a blessing.

    The obvious curse is that DT programmers must rely on informal, unchecked data definitions and signatures to convey to the next reader of their code what they meant. Before they know it, the comments and the code are out of sync, which imposes an unbelievably high cost on maintenance and other SE activities. That’s what my 25-year program on ‘soft typing’ and ‘gradual typing’ is trying to fix, and I think we are slowly arriving at an acceptable place here.

    The hidden blessing is that DT programmers can import, super-impose, and integrate many different static type disciplines — informally — and use them to design their code and to reason about it. The result is a collection of programming idioms that no single ordinary static type system can uniformly check, and that is why DT programmers cannot accept the phrase “DT languages are statically typed.”

    From my 25-year study of this world and from the recent literature — now that others are finally paying attention — programmers actually do this. Most of the time, you see naive set theory [readers, this is a technical term] at work in DT programs. In others, you see ML-ish reasoning or, let’s use Cardelli’s word here, “typeful” programming; and that includes dependently typed ideas. When you get to object-oriented DT languages, you quickly recognize that they invent programming idioms that conventional OO type disciplines don’t even cover. And there is more.

    So yes, semantically dynamically typed languages are statically typed languages. But for the programmer on the street and the PL researcher who studies this phenomenon, this phrase is unhelpful. If we wish to help the millions of software engineers who commit to this approach with our research, we need to understand that dynamically typed languages are multi-disciplined statically typed languages, something the static world has yet to invent.

    • Thanks for the thoughtful response. It is wrong to think that static languages are committed to one type discipline! There is an important distinction to be made between structural type theories, which tell you what programs exist and what their dynamics are, and behavioral type theories, which tell you properties of the execution of a pre-existing well-typed program. This is precisely the distinction between proof theory, the boring formal propositions-as-types correspondence, and realizability theory, the fascinating and powerful semantic propositions-as-types correspondence. Most people don’t even know of the distinction, and make the mistake of emphasizing the former (the wrongly-named “Curry-Howard” “isomorphism”) over the latter (Brouwer’s actual conception of types).

      Concretely, the notion of type refinement, which we’ve been developing here over a couple of decades, is a form of realizability, and is precisely a behavioral type system layered on a structural type system, the right way to do things. You can and do have many different notions of refinement, and I encourage their further development. But they apply to statically typed languages, the only kind that exist. (I’m sorry, but dynamic “types” are not types and should not be called that.)

      Interesting that you should mention OOP, because I believe that (a) the extant static type systems for OOP are not very useful or interesting, and (b) they confuse structural with behavioral typing through the mistaken notion of subtyping. So I fully agree with you about that topic, and will have more technically precise things to say on it before very long.
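The layering of a behavioral property on a structural type can be caricatured in ordinary code. A sketch in Rust, with an invented `Percent` type; note this is only a run-time approximation of the idea, whereas real refinement type systems verify the property statically:

```rust
// Structurally, Percent is just an f64. The smart constructor checks
// the behavioral property (0 <= x <= 100) once at the boundary, so the
// rest of the program may rely on it. A refinement type system would
// discharge this check at compile time instead of at run time.
struct Percent(f64);

impl Percent {
    fn new(x: f64) -> Option<Percent> {
        if (0.0..=100.0).contains(&x) { Some(Percent(x)) } else { None }
    }
    fn value(&self) -> f64 { self.0 }
}

fn main() {
    // Out-of-range values are rejected at the boundary.
    assert!(Percent::new(150.0).is_none());
    let p = Percent::new(42.0).unwrap();
    assert_eq!(p.value(), 42.0);
    println!("ok");
}
```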

    • While I assumed a structural approach to typing in my response and (I know you know) I am perfectly aware of refinement types — our contracts are like those after all — I still don’t think I have ever seen a mix-and-match approach in the type refinement world the way I have encountered it in the “untyped” world. I am not saying that it is impossible, but I am saying that the static world is behind in this regard, especially as far as the programmer on the street is concerned, and even some PL people. The kind of type systems that your response assumes are still in research and (practically speaking) in their infancy. The “imaginary” type systems that DT programmers assume exist only in their heads. (Otherwise see my response above.)

      ;; —

      I understand that it is strange to call ‘tags’ and ‘tag checking’ ‘types’ and ‘dynamic type checking.’ But the terminology is accepted by many PL researchers and programmers. We should give up on fighting such entrenched terminological mistakes, even if they are a roadblock to a proper understanding. And if we don’t distinguish informal pragmatics from formal pragmatics we won’t even get that far.

    • No. Classes/tags are not types, and nothing has or ever will come of encouraging this basic confusion. It’s an error, plain and simple.

    • Let me put that another way: there is a very useful distinction between classes and types that must be held onto. Confusing the two concepts only hurts, it doesn’t help.

    • Igor Bukanov says:

      The confusion between classes and types comes from fundamental differences between classes in, say, Java/C# and those in C++. In a typical OOP language everything is derived from a common base class; as such, classes are indeed just runtime tags. In C++ a new class can be defined outside of any hierarchy, thus adding a new statically enforced partition between instances that can only be subverted through undefined behaviour. This is emphasised at the end of section 2.1 in [1].

      [1] –

  16. jamesclapper says:

    Frankly, I think the NSA has more to do with the unpopularity of static typing and machine-checkable proofs than anything else. They will fight to the death anything that can prevent Heartbleed-scale exploits. Snowden told all of us why Coq isn’t mainstream.
