Dynamic languages are static languages


While reviewing some of the comments on my post about parallelism and concurrency, I noticed that the great fallacy about dynamic and static languages continues to hold people in its thrall.  So, in the same “everything you know is wrong” spirit, let me try to set this straight: a dynamic language is a straitjacketed static language that affords less, rather than more, expressiveness.   If you’re one of the lucky ones who already understands this, congratulations, you probably went to Carnegie Mellon!  For those who don’t, or who think that I’m wrong, well, let’s have at it.  I’m not going to get very technical in this post; the full technical details are available in my forthcoming book, Practical Foundations for Programming Languages, which is available in draft form on the web.

So-called dynamic languages (“so-called” because I’m going to argue that they don’t exist as a separate class of languages) are perennially popular.  From what I can tell, it’s largely a matter of marketing.  Hey, we all want to be dynamic, don’t we?  A dynamic personality is one that makes things happen, takes risks, has fun, breaks the rules!  And who wants to be static?  It’s synonymous with boring, after all, and no one wants to be a bore (even if they are, or especially if they are).  So, hey, dynamic languages are cool, right?  Or, at any rate, cool people like dynamic languages.  Static languages are for drips and losers.

That’s one argument, and, to be honest, it’s quite a successful bit of marketing.  Like most marketing tactics, it’s designed to confuse, rather than enlighten, and to create an impression, rather than inform.

Another part of the appeal of dynamic languages appears to be that they have acquired an aura of subversion.  Dynamic languages fight against the tyranny of static languages, hooray for us!  We’re the heroic blackguards defending freedom from the tyranny of typing!  We’re real programmers, we don’t need no stinking type system!

Now I’m as susceptible to the seductions of subversion as the next guy (witness this very post), so I can appreciate wanting to strike one for the little guy, overturning the evil establishment.  But I’ve got news for you: when it comes to “dynamic typing”, it’s a futile enterprise.  Not because the establishment is so powerful.  Not because evil always wins.  But because the distinction makes no sense.  Dynamic typing is but a special case of static typing, one that limits, rather than liberates, one that shuts down opportunities, rather than opening up new vistas.  Need I say it?  Something can hardly be opposed to that of which it is but a trivial special case.  So, give it up, and get with the program!  There are ill-defined languages, and there are well-defined languages.  Well-defined languages are statically typed, and languages with rich static type systems subsume dynamic languages as a corner case of narrow, but significant, interest.

Wittgenstein said that all philosophical problems are problems of grammar.  And so it is here as well.  The root of the problem lies in the confusion between a type and a class.  We all recognize that it is often very useful to have multiple classes of values of the same type.  The prototypical example is provided by the complex numbers.  There is one type of complex numbers, which represent points in two-dimensional space.  In school we encounter two classes of complex numbers, the rectangular and the polar.  That is, there are two ways to present a complex number using two different systems of coordinates.  They are, of course, interchangeable, because they represent values of the same type.  But the form of presentation differs, and some forms are more convenient than others (rectangular for addition, polar for multiplication, for example).  We could, of course, choose a “coin of the realm”, and represent all complex numbers in one way, and consider coordinates as just a matter of how a number is given.  But it can also be convenient to consider that the type of complex numbers consists of two classes, each of which consists of two real numbers, but interpreted differently according to the coordinate system we are using.

Crucially, the distinction between the two classes of complex number is dynamic in that a given computation may result in a number of either class, according to convenience or convention.  A program may test whether a complex number is in polar or rectangular form, and we can form data structures such as sets of complex numbers in which individual elements can be of either form.  But this does not conflict with the basic fact that there is but one static type of complex numbers!  In type theoretic terms what I am saying is that the type complex is defined to be the sum of two copies of the product of the reals with itself.  One copy represents the class of rectangular representations, the other represents the class of polar representations.  Being a sum type, we can “dispatch” (that is, case analyze) on the class of the value of the type complex, and decide what to do at run-time.  This is no big deal.  In fact, it’s quite simple and natural in languages such as ML or Haskell that support sum types.  (Languages that don’t are hobbled, and this is part of the confusion.)
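To make this concrete, here is a minimal sketch in Haskell (the constructor and function names are my own choices, not anything from PFPL): the one type of complex numbers as a sum of two classes, each a pair of reals, with ordinary run-time dispatch on the class.

```haskell
-- The single static type of complex numbers, as a sum of two classes.
data Complex
  = Rect  Double Double   -- rectangular: real and imaginary parts
  | Polar Double Double   -- polar: modulus and argument
  deriving (Eq, Show)

-- Case analysis ("dispatch") on the class, at run time.
toRect :: Complex -> Complex
toRect c@(Rect _ _) = c
toRect (Polar r t)  = Rect (r * cos t) (r * sin t)

toPolar :: Complex -> Complex
toPolar c@(Polar _ _) = c
toPolar (Rect x y)    = Polar (sqrt (x * x + y * y)) (atan2 y x)

-- Rectangular form is convenient for addition...
add :: Complex -> Complex -> Complex
add a b = case (toRect a, toRect b) of
  (Rect x1 y1, Rect x2 y2) -> Rect (x1 + x2) (y1 + y2)
  _                        -> error "toRect always returns Rect"

-- ...and polar form for multiplication.
mul :: Complex -> Complex -> Complex
mul a b = case (toPolar a, toPolar b) of
  (Polar r1 t1, Polar r2 t2) -> Polar (r1 * r2) (t1 + t2)
  _                          -> error "toPolar always returns Polar"
```

Both classes inhabit the single type Complex, so a list of type [Complex] may freely mix elements of either class, exactly as described above.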

What does this have to do with the concept of a dynamic language?  The characteristics of a dynamic language are the same: values are classified into a variety of forms that can be distinguished at run-time.  A collection of values can be of a variety of classes, and we can sort out at run-time which is which and how to handle each form of value.  Since every value in a dynamic language is classified in this manner, what we are doing is agglomerating all of the values of the language into a single, gigantic (perhaps even extensible) type.  To borrow an apt description from Dana Scott, the so-called untyped (that is “dynamically typed”) languages are, in fact, unityped.  Rather than have a variety of types from which to choose, there is but one!
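Scott’s observation can be sketched directly: a “dynamic language” is a statically typed language with one big, recursive sum type.  The following Haskell type is my own illustrative approximation of such a universe of values, not the definition of any particular language.

```haskell
-- The single, gigantic type of a unityped language.  Each constructor
-- is a *class* tag; the static *type* of every value is just Value.
data Value
  = VNum  Double
  | VStr  String
  | VBool Bool
  | VNil
  | VPair Value Value        -- compound data
  | VFun  (Value -> Value)   -- functions take and return the one type

-- A run-time "type check" is ordinary case analysis on the sum.
className :: Value -> String
className v = case v of
  VNum _    -> "number"
  VStr _    -> "string"
  VBool _   -> "boolean"
  VNil      -> "nil"
  VPair _ _ -> "pair"
  VFun _    -> "function"
```

Nothing about this requires stepping outside a static type discipline; Value is an ordinary sum type.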

And this is precisely what is wrong with dynamically typed languages: rather than affording the freedom to ignore types, they instead impose the bondage of restricting attention to a single type!  Every single value has to be a value of that type, you have no choice!  Even if in a particular situation we are absolutely certain that a particular value is, say, an integer, we have no choice but to regard it as a value of the “one true type” that is classified, not typed, as an integer.  Conceptually, this is just rubbish, but it has serious, tangible penalties.  For one, you are depriving yourself of the ability to state and enforce the invariant that the value at a particular program point must be an integer.  For another, you are imposing a serious bit of run-time overhead to represent the class itself (a tag of some sort) and to check and remove and apply the class tag on the value each time it is used.  (See PFPL for full details of what is involved.)
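The run-time penalty can be seen in a cut-down sketch of the same idea (the names are again my own): even an addition that we know operates on numbers must check and strip the class tags on every call, re-tag the result, and can still fail at run time.

```haskell
-- A cut-down unityped value space.
data Value = VNum Double | VStr String
  deriving (Eq, Show)

-- Unityped addition: both tags are checked and removed, the result is
-- re-tagged, and failure is possible at run time.  Contrast this with
-- the typed (+) :: Double -> Double -> Double, which needs no tags,
-- cannot fail, and lets us state the invariant that its arguments
-- are numbers.
plus :: Value -> Value -> Value
plus (VNum a) (VNum b) = VNum (a + b)
plus _ _               = error "plus: arguments must be numbers"
```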

Now I am fully aware that “the compiler can optimize this away”, at least in some cases, but to achieve this requires one of two things (apart from unreachable levels of ingenuity that can easily, and more profitably, be expressed by the programmer in the first place).  Either you give up on modular development, and rely on whole program analysis (including all libraries, shared code, the works), or you introduce a static type system precisely for the purpose of recording inter-modular dependencies.  The whole-program approach does not scale, and flies in the face of the very advantage that dynamic languages supposedly have, namely dynamic evolution and development of components.  The static type system approach works beautifully, and makes my point nicely.

I am also fully aware that many statically typed languages, particularly the ones widely in commercial use, do not have a sufficiently expressive type system to make it feasible to work dynamically (that is, with multiple classes of values) within a statically typed framework.  This is an argument against bad languages, not an argument against type systems.  The goal of (static, there is no other kind) type systems is precisely to provide the expressive power to capture all computational phenomena, including dynamic dispatch and dynamically extensible classification, within the rich framework of a static type discipline.  More “researchy” languages such as ML or Haskell have no trouble handling dynamic modes of programming.

Why should you care?  Because, I argue, a fully expressive language is one that supports the delicate interplay between static and dynamic techniques.  Languages that force only one perspective on you, namely the dynamic languages, hobble you; languages that admit both modes of reasoning enable you and liberate you from the tyranny of a single type.  Let a thousand flowers bloom!


57 Responses to Dynamic languages are static languages

  1. Gary Hollis says:

    Dynamic languages are better understood as creating a difference between a variable and an object. Lisp illustrates this the best by making symbols a type, and having objects be indexed by symbols. However, each object has a single type, and assignment of a new object to a variable is not changing the type of the object, but rather the type of the object which the variable refers to. If you do it right (like CLOS and true generic functions do), you get both power and efficiency.

    • And it will still be statically typed. Please see PFPL for a discussion of symbols in both a statically and a (so-called) dynamically typed context.

      I do think it is important to distinguish carefully classes, which may be attached to objects, from types, which are static classifiers in this context. It helps avoid arguing cross-purposes.

      Thank you for the thoughtful response.

  2. Frank Adrian says:

    TL;DR version: Mo types => Mo problems. It’s just about descriptions of programs and whether the components of those descriptions are better expressed and more easily understood by having things tacitly expressed by the compilation system or by having them spelled out algorithmically. The algorithm and type description portions of programs are tradeoffs. In the end, it’s more important that we can easily change these descriptions, because the real world is dynamic.

    —–

    The unitypal aspect of dynamic languages that you talked about is a really interesting way of looking at things. If you think about it, the bit array is a unitype upon which we use additional layers of types as a conceptual filter, giving meaning to this otherwise meaningless sea of bits. You may scoff, but there was actually a language proposed by someone back in the late 1970s/early 1980s that looked at memory this way. (Sad that I don’t remember the name of the language – vague and perhaps apocryphal memories whisper U. Toronto, odd language in one of the better journals of that day, somehow linked with another language named Edison from around that time in my mind. But I digress…)

    If you look at types (including functional types) as a way through which we interpret bit patterns, you start to realize that all you’re doing with types when programming is making a tradeoff between how you’re going to structure (or elide) certain kinds of types – in short, even dynamic languages have their types built in, if only by the fact that when acting upon a certain sea of bits, this transformational filter, taken as a whole, will produce a meaningful sequence of bits when viewed through the further filters of screen, vision, and brain. In the end, we are all Turing machines and, really, we’re only talking about ease of use in programming.

    What we’re really talking about is how one expresses these filters. “Typed” languages have a more concise description of almost any static filter, but suffer when changes need to be made to that filter (honestly, it sort of reminds me of a Transformer changing from a car into a robot – it’s reallyreallyshinycool technology, but until the process is complete, you’re stuck with something that isn’t very useful and, if something goes wrong, good luck figuring out what, because there’s a fuckton of machinery there); in addition thinking about types is math and most people suck at math. “Dynamic” languages can take more space (and allow more errors) in describing the overall filter, but it’s a conceptual space more easily traversed by folks who really don’t need to think much about their programs and who follow an operational model; in addition, this filter description is more easily changed (more Gumby than Transformer) and can still be useful even if the transformation is not yet complete. In the end, the amount of code in each is about the same when you contact the real world. Type casting/checking at programming time isn’t that big of a part of any dynamic program and the overall filter (or description of the filter, since that’s what we’re really talking about) in whichever technology you’re using, isn’t much different, once it hits the vagaries of other systems. In the end, I still have to generate a crappy web page to show my result and, unless your shiny new pretty “typed” language targets that hideous untyped Javascript somehow, I have to deal with that anyway (and probably have to go into that level to debug whatever ugly abstraction – because, seriously, show me a typed something or other built on any part of Javascript that won’t be butt ugly – you build on it).

    So it’s simple – show me there’s an elegant set of types that will give me succor, from hardware to the highest language levels, that will hide the total suck and complexity when you come up against the other systems in the real world – I don’t think you can, because the real world is not typed. It’s a filter system that we build in a variety of ways and it’s hideously dynamic. Stop looking at static structures and start looking at how those structures transform and how to get feedback loops between observations of that world and changes to these filters. It’s the only way you’ll ever get to the fluidity of the real world, which ultimately, you’re trying to model. Because the only things in the real world that are static are dead.

    • If you think types (or programming languages, for that matter) are about interpreting bits, then you’ve completely missed their point. Except in the lowest-level languages, bits are an implementation detail, and largely (or even entirely) irrelevant on the level of abstraction that the language is supposed to provide.

  3. […] in this sense, but a less powerful version where you are not allowed to define any other types. Prof. Robert Harper of CMU has been making this argument for a few decades. Types are an extremely useful tool, and we […]

  4. … Until we had to write something that handles heterogeneous collections, then we ended up writing (essentially) a simulator of a dynamic language (in Scala) because the lowest common type in our collections was “Any.”

    • that’s scala’s problem

    • m50d says:

      You can use Shapeless’ HList if you know the types by position. It would be possible to do the same thing with maps (I’m not sure whether Shapeless does that yet), where the value type could depend in an arbitrary way on the key type.

      I agree that there’s value in programming the dynamic way though, and I do wish Scala would make it easier to do a “dynamic call” than x.asInstanceOf[{def meth(): String}].meth()

  5. […] Of course, you can circumvent this whole system by using prodigious casting or by using the id type for everything. Using a so-called “untyped” system, like Ruby or the id subset in Objective-C, is purposefully crippling yourself. Instead of pretending that there are no types in such a system, we need to realize that we’re actually shoehorning every semantic type into one. […]

  6. Though the argument is compelling, there are at least two important fallacies it implicitly promotes: one being that a computer program should be written in a single language (are there any real-world cases for this these days?) and the other that all parts of a computer program should have equal quality in terms of performance, and even stability and safety.

    Don’t write tight computational loops in Ruby. But then again you might not want to write those in Haskell either (why pay the overhead for what wants to be a very short piece of code?).

    That is not to say that there are not many use cases in which statically typed code is much easier to maintain over the long term, where you would really sleep easier if you had proof it works. Where you want change to be painful in order to be sure it is correctly implemented. There are moments for that.

    At some abstract level dynamic languages may very well be languages with a single type. And then again, at some abstract level a good glass of Bordeaux is just a specific form of alcohol; very limited indeed. But then again, please don’t offer to turn it into a cocktail.

  7. mihaibalint says:

    Here is an engineering view on dynamic language popularity: It facilitates that good old favorite reuse technique of the software industry: copy-paste programming. It’s immediate, it lets the boss know that the programmer produced something and “the app” works, right now!

    Sure, they could do proper reuse with creating abstractions, interfaces, design, bla, bla, but that requires effort and time which means that “the app” might only work tomorrow, which is not good because we need to ship yesterday. Oh and let’s not forget that good abstractions and interfaces require a better trained/more experienced/talented developer (more expensive, less replaceable, higher risk). So in the short term, it makes perfect business sense to “go dynamic”, and nowadays, nobody plans farther than the next quarterly shareholder meeting.

  8. pbazant says:

    A physicist’s view:
    I didn’t realize the fact that dynamic typing is just a very special case of static typing until about three years ago. Of course I had been aware that one can, in principle, implement C++ in Python and vice versa, so the distinction cannot be fundamental, but then I read an article about sum types and realized that the distinction actually doesn’t exist. No rant intended, but I think using C (sum types not possible) or C++ (maybe possible, but not standardized) hinders proper understanding of those issues. From the practical POV, however, languages restricted to a unitype may be easier to use for some tasks (serialization etc.) than languages with limited type composition capabilities like C. Interoperation of richly typed and unityped code is still very problematic today. However, the reason is not fundamental, but historical. IMHO, a properly designed language with rich first class types+type pattern matching+dynamic compilation can achieve the flexibility of dynamic languages and speed of C simultaneously, with all the benefits of a rich type system.

    • johnwcowan says:

      From the practical POV, however, languages restricted to a unitype may be easier to use for some tasks (serialization etc.) than languages with limited type composition capabilities like C.

      Just so. As I said, given the choice between static types without dynamic types and dynamic types without static types, I unhesitatingly affirm my preference for the ape, er, latter.

  9. millstone2 says:

    Here is an example of how dynamic languages saved the day, where a static language could not.

    We ship a library which contains a hash table class used by many client applications. While testing the next version of the library, we found that it revealed a latent bug in a high-profile client app: two equal objects could have different hash values, leading to data loss. Bad news!

    The client app had already shipped to customers, all of whom would be unhappy if we released the library. We could try to get those customers to update the app, but it would have been expensive and inevitably some customers would not update in time. Changing the library to accommodate the bug was another possibility, but a truly unpleasant one.

    Fortunately the library and client app were both written in a dynamic language, so we made the new version of the library detect the presence of the buggy class. If it finds it, it reaches in and outright replaces its hash method with a corrected one. Problem solved!

    Note that this had nothing to do with type checking (though we could have dynamically introduced a type error if we chose, casting doubt on the whole notion of a “provable static type system”). Instead it’s all about the compiler permitting static information to be wrong. If the compiler assumed that the object’s hash function at compile time would not change at runtime (say, by inlining it), this fix would have been much harder. But dynamic languages made it simple!

    • mihaibalint says:

      Quote:

      “Changing the library to accommodate the bug was another possibility, but a truly unpleasant one.” … bla … “so we made the new version of the library detect the presence of the buggy class”

      So, you went with the unpleasant solution; what’s your point in favor of dynamic languages again? Oh, it’s that Java and C/C++ don’t allow hot-swapping of library methods; well, for your particular use-case I think even old Java could hot-swap. In general I don’t see why statically typed languages could not do that.

  10. millstone2 says:

    I don’t think the author has a full appreciation of what dynamic languages are. Here is the most fundamental definition: a static language depends on correct static types to execute correctly, and a dynamic language does not.

    That does not mean that a dynamic language must have no static types. For example, Objective-C is a dynamic language with static types. Dynamic languages can admit as rich a static type system as you like.

    The crucial difference is that those static types do not have to be correct, i.e. do not have to accurately specify the true runtime type of the object. This is what allows dynamic languages to implement (for example) generic, transparent remoting, which is not possible in your examples of ML or Haskell (which therefore do NOT allow “dynamic modes of programming.”)

    • To most people who value type systems, that value is a type system’s ability to actually prove something about programs. If the type system is utterly unsound, this value is reduced to zero.

      “Generic transparent remoting”, if I understand it correctly, is entirely possible in a typeful language, and not particularly hard to support either.

    • Igor Stasenko says:

      Ever heard about Test-Driven Development?
      Here’s the proof: if your tests are not failing, then the program works correctly.

      And I think the author misses the point. Dynamic languages are called dynamic not because of the absence of static types, but because you can change the program’s behavior at run time, without needing to stop the whole world and run a compiler to do some static-type analysis for you.

    • johnzabroski says:

      Igor,

      Many functional languages support interactive programming trivially, due to the fact that most if not all computation is pure. That being said, not all functional languages are equal in how long it takes the interactive session to resolve changes or how well they can handle changes without requiring a “reload” command.

      Changing behavior at run-time isn’t even that particularly wild of a party trick. It also might not be the best way to describe what the “dynamic” program is doing; maybe what you are really trying to do is achieve an equilibrium state and abusing dynamic languages to achieve that goal? If so, why not spell out the goal state and let the runtime figure it out for you? Don’t you just love systems you can deploy and walk away from and not have to get paged into the office at 3am to fix, because you’ve proved they work?

    • millstone2 says:

      A static type system is unsound only if the compiler accepts type errors without a diagnostic. Both dynamic and static languages may have sound static type systems. The key difference is that in a dynamic language, the diagnostic can be a warning, while in a static language, the diagnostic is necessarily a fatal error.

      This has real practical implications, e.g. if you change some type in your Haskell code, you have to fix every use of that type before you can test anything. Surely you’ve found that annoying? Dynamic languages permit interleaved fixing and testing.

      Generic transparent remoting really is not possible to support in a static language. For example, see how Microsoft went about adding the TransparentProxy class to C# (http://blogs.msdn.com/b/cbrumme/archive/2003/07/14/51495.aspx), which required significant changes to the compiler and runtime. In every case where the compiler emits a method call, it has to construct a special stub that considers the possibility that the receiver could be a TransparentProxy. In short, they were forced to tack on some limited dynamicism to the runtime to support remoting (and note the consequence: compile time types are now permitted to be wrong).

      Had C# been a real dynamic language, this remoting could have been implemented entirely in a library, without requiring compiler or runtime modifications. That’s the strength of dynamic languages.

    • Scott Kilpatrick says:

      “This has real practical implications, e.g. if you change some type in your Haskell code, you have to fix every use of that type before you can test anything. Surely you’ve found that annoying?”

      This is one of the reasons I *prefer* static languages and real type systems.

    • @Igor:
      Testing can prove the presence, not the absence, of bugs. (Edsger Dijkstra)

      @millstone2:
      If I link against a library by someone who chose to ignore those warnings then my own program is broken without me getting any diagnostics.

      Re generic proxies: all you need is either a typecase or a type dynamic – both can be supplied without breaking the type system.

  11. abstract type says:

    It’s not a matter of oddity, it’s a matter of being wrong.

  12. paddy3118 says:

    Static languages are dynamic with an initial type check of the source at runtime.

    So what?

    • abstract type says:

      That is operationally incorrect. See my book for technical details.

    • paddy3118 says:

      Of course it would be odd to write a static language like that. I think you gain just as little by the title of this blog post.

  13. cwallenpoole says:

    I have to say that while I agree in part (a truly dynamically typed language only really has one “type” in the sense that you use it above), I really do think that Van Rossum wins this one.

    From the perspective of a mathematician, a method has a parameter (going by Church’s concepts here). Typing is the statement that, given this parameter, the source code should express definitively that this particular attribute will exist in that parameter. Unfortunately, in programming, there is no means whereby we can ensure that this parameter will still fall within the domain of the problem. This is why we have null pointer exceptions (however one’s language of preference handles it). And, while I will definitely agree that it is a convenience that a “type-system” works, that very problem can actually make it more deceitful.

    I’ve known of cases where a run-time variable is corrupted so that the type was no longer a reliable indication of what was actually there. This is hell to debug because it goes against the very assumptions that the strongly-typed languages impose on us.

    With weak typing, inheritance and interface are not strictly required. If I have a method which calls parameter.foo and parameter.bar, then I don’t need to worry about creating a specific interface which just includes those or implementing an interface which includes five extra functions to accommodate the myopia of the original programmer (and I think that all of us have found that we happen to share that programmer’s name from time to time). I can pass in a parameter to this method and the method will go on its merry way. Yes, it can cause unexpected behavior, but, once again, that happens in all languages: “Programmers are users, and users are destructive”.

    In weak typing, the contract the parameter has with the method is merely that certain properties exist as a part of the parameter, and that’s it. This makes dynamically typed languages much easier to test. You don’t need a com.whatever.you.fancy.but.its.long.and.AnnoyingImpl. You might even be able to get away with:
    class MyTestObject:
        pass

    def myFunc():
        print(1)

    f = MyTestObject()
    f.foo = myFunc
    f.bar = myFunc

    On a more philosophical note: you may wish to reconsider your metric. Philosophically, a system is generally considered to be both less elegant and less expressive when it requires more to accomplish the same task. I think I could say the same about mathematics without fear of contradiction. So, given that as a premise, can we really say that expressivity is what is measured here?

    • paddy3118 says:

      Not all Dynamic languages are weakly typed.

    • cwallenpoole says:

      Some are more strongly typed than others. But in most dynamically typed languages there is a good deal more leeway in typing. In Java only booleans and boolean expressions evaluate to false; Python and PHP, on the other hand, also treat NULL and 0 as false. And yes, Python will not let you do "1" + 1, but it will let you sort [1, "1"] and test if "1" < 1 (that last one is false, by the way). Java… doesn’t really do that.

    • paddy3118 says:

      In response to cwallenpoole: all that is Java is not universally good.

      And if you quote Python, you might quote the versions that have been released for several years and have corrected earlier decisions.

      In Python 3 we have:

      >>> sorted([1, "1"])
      Traceback (most recent call last):
        File "<stdin>", line 1, in <module>
      TypeError: unorderable types: str() < int()
      >>> if "1" < 1: pass
      ...
      Traceback (most recent call last):
        File "<stdin>", line 1, in <module>
      TypeError: unorderable types: str() < int()
      
    • cwallenpoole says:

      @paddy3118

      Oh please, no. Java is merely my go-to example of a strongly, statically typed programming language. Much of Java is needlessly opaque, and it could use a little house-keeping.

      Python 2.7 was released only 4 1/2 months ago. I maintain the validity of the example because while these are thankfully removed in Py3K, they exist in a widely used and still supported version of Python. I will admit, however, that I might have been more specific.

  14. Ashley Yakeley says:

    More “researchy” languages such as ML or Haskell, have no trouble handling dynamic modes of programming.

    This isn’t quite true. Dynamic typing in Haskell is ugly, if in particular you want to create a type that can represent values of any type, i.e. that someone else can use to store values of their type in and later retrieve them.

    The standard way to do this “open sum” typing is to use the Typeable class, but that is unsound (it is easy to write unsafeCoerce by writing your own Typeable instance). The approach I’ve been using is what I call open witnesses, but that involves extending the language via Template Haskell.

    Admittedly, it’s pretty rare that you actually need to do this.

  15. Wew, I hope you know what you were doing when you posted a rant about static vs dynamic to the Internet…

    I think for a static language to “be dynamic” in the way advocates of that faith interpret it, it has to provide at least a built-in type dynamic (i.e., an infinite sum type), or equivalently, a typecase. (An extensible sum type, aka exn, may be enough for some, but not all use cases.) So one could argue that a “dynamic language” is essentially one that chooses to trade parametricity for other forms of expressiveness.
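    (For concreteness: GHC Haskell’s standard library already provides such a built-in type dynamic as Data.Dynamic, with fromDynamic playing the role of the typecase. A small sketch; the list and function below are my own illustrations, not part of the library.)

```haskell
import Data.Dynamic (Dynamic, fromDynamic, toDyn)

-- A heterogeneous collection: every element is injected into the
-- "infinite sum" type Dynamic via toDyn.
bag :: [Dynamic]
bag = [toDyn (3 :: Int), toDyn "hello", toDyn True]

-- fromDynamic is the typecase: the projection succeeds only at the
-- type the value was stored at, and returns Nothing otherwise.
firstInt :: [Dynamic] -> Maybe Int
firstInt ds = case [n | Just n <- map fromDynamic ds] of
  (n : _) -> Just n
  []      -> Nothing
```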

    • So one could argue that a “dynamic language” is essentially one that chooses to trade parametricity for other forms of expressiveness.

      I like this take on it a lot. It makes sense that when we add, say, System F’s type system to the lambda calculus, we gain parametricity at the cost of giving up general recursion; if we want to be able to type the Y combinator we need a more sophisticated type system, and if we want to be able to type, say, variadic functions, we have to work even harder. It seems to me that we eventually hit diminishing returns in that direction, at least at present, and that part of the case for dynamic (or unityped) languages is that there exist useful, good programs out in the wild such that the benefit we’d get from being able to assign types to them is not worth the amount of effort it takes today. I honestly think that call/cc, for instance, is easier to understand when one doesn’t have to try to think about what its type is. Even more so for, say, variadic map. On the other hand, it’s easy to go too far with that kind of thing, leading to an extremely irritating “you just have to let it wash over you!” mentality about understanding code. So I try not to evangelize one way or the other. (Also, as a researcher, I personally am interested in trying to give appropriate types to more programs, not necessarily because the result is “worth the effort” according to someone’s metric, but because the process of trying is worthwhile (and fun!) in itself — but that’s a bottom-up motivation that’s not a lot of help to people in the trenches writing code.)

  16. [...] Dynamic languages are static languages While reviewing some of the comments on my post about parallelism and concurrency, I noticed that the great fallacy [...]

  17. marklillibridge says:

    Another “dynamic mode” is replacing an existing built-in function with a new version so that all code uses the new version rather than the old one. Moreover, the replacement might change the function’s effective type (e.g., add an extra optional argument or allow more “types” for an existing argument).

    In essence, in some of these dynamic languages everything is assignable, which makes static type checking beyond the unitype impractical except in very special cases.

    Consider also the use of “eval”, which can create arbitrary new code at runtime from strings…
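The built-in-replacement point above can be sketched in Python (`forgiving_len` is an illustrative name, not from the comment):

```python
# Sketch: in Python even built-ins are just assignable names, so running
# code can replace them -- and change their effective "type", e.g. by
# adding an extra optional argument.
import builtins

original_len = builtins.len

def forgiving_len(x, default=0):      # extra optional argument: new "type"
    try:
        return original_len(x)
    except TypeError:
        return default

builtins.len = forgiving_len          # every subsequent caller sees this one
assert len([1, 2, 3]) == 3
assert len(42) == 0                   # would have raised TypeError before
builtins.len = original_len           # restore the original
```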

    • marklillibridge says:

      Another dynamic feature is dynamic binding where for a period of time a variable is replaced with a different value (possibly of a different “type”).

    • abstract type says:

      This is discussed in my book. It is unrelated to dynamic typing or dynamic languages. Altogether I find it to be of very limited utility.

    • marklillibridge says:

      I think you have to agree that Ruby and Lisp are dynamic languages and they both support this feature. In Ruby, there is a major design pattern involving creating domain specific languages that relies crucially on this feature. Very briefly:

      HTML.gen {
        title "my title"
        body {
          "some body text"
          button("OK") { handle_ok() }
        }
      }

      The method call title (really self.title(“my title”)) here is undefined in the global scope; when HTML.gen calls its passed-in closure (the outer {…}s), it changes self for that closure call; this has the effect of a fluid let and makes title a defined method during that closure.

      Fluid let is also in widespread use in GNU Emacs’s Elisp, so I don’t think you can claim it is of very limited utility.
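The fluid-let mechanism described above can be sketched in Python with a context manager (`fluid_let` is a hypothetical helper, not a standard function):

```python
# Sketch of a "fluid let": temporarily rebind a name -- possibly to a value
# of a different "type" -- and restore it afterwards, even on exceptions.
from contextlib import contextmanager

@contextmanager
def fluid_let(namespace, name, value):
    saved = namespace[name]
    namespace[name] = value
    try:
        yield
    finally:
        namespace[name] = saved       # old binding always comes back

x = 1
with fluid_let(globals(), "x", "temporarily a string"):
    assert x == "temporarily a string"
assert x == 1                          # binding restored on exit
```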

  18. marklillibridge says:

    > More “researchy” languages such as ML or Haskell, have no trouble handling dynamic modes of programming.

    [devil's advocate] Did they add overloading to ML when I wasn’t looking? Does “+” work on complex numbers, strings, arrays, and objects that define the “+” method? Does it use double dispatch so that you can do things like add integers to (user-defined) complex numbers?

    Does ML (or a standard ML library) have a standard unitype with conversion functions to and from it? Otherwise, each programmer wanting to use dynamic features has to roll his own incompatible unitype.

  19. paddy3118 says:

    A lot of hand-waving, but no meat! Give more examples and comparisons to back your contentious claim if you truly want to change opinions. If all you want to do is sell a future book, then I hope people try before they buy.

  20. jlouisa says:

    One thing about which I would like to pick the readers’ brains is the concept of dynamically linking code into a statically compiled system. In particular, you may envision Erlang or Common Lisp (dynamic languages), where you can “hot deploy/hot swap” new code into the already-running system.

    You may also consider the distributed variant: Code is hot-loaded on a cluster of multiple machines and has to give static guarantees that it will be safe.

    The related and important problem is that of evolving code: we expect that code will be extended over time, so adding new functionality must work, possibly by specifying migration policies for data that is live in the running program or persistent on storage media. What we would like is safety at the point of dynamic linkage: if the link would lead to a type error, we abort the code load and report the error.

    To the best of my knowledge, the “local” problem is solvable to a certain extent, either by employing existential types or by doing something akin to runtime type checking before code is swapped (see for instance Don Stewart’s Ph.D. thesis, reachable from http://corp.galois.com/don-stewart/ ). Another point worth mentioning is the Alice ML project ( http://www.ps.uni-saarland.de/alice/ ), namely its package concept.

    The distributed case is harder, and I have no good ideas, apart from perhaps using modality, as inspired heavily by Tom Murphy VII’s Ph.D. thesis ( http://www.cs.cmu.edu/~tom7/ ), to make sure that each distributed “world” into which we try to hot-swap code is safe for doing so.

    And now to the brain pick: does anybody else know of further references in the same vein?

    • abstract type says:

      As you mention, Rossberg has done a lot of useful and interesting work in this direction, as has Hicks at Maryland.

  21. xilun0 says:

    For one, you are depriving yourself of the ability to state and enforce the invariant that the value at a particular program point must be an integer.

    What about e.g. in Python: assert isinstance(a, int)

    This construct qualifies, IMO, as an ability to state and enforce the invariant that the value at a particular program point must be an integer, even if I can see some obvious shortcomings:
    - it is only evaluated at runtime, and so is arguably a lot less powerful than static typing, especially when trying to prove properties of the program. It could still feed a static analysis, but such analyses are not provided by the default tools and would depend on a convention of restricting oneself to simple constructs like the example above; moreover, their effectiveness is reduced in the presence of third-party code not following said conventions, unless the user qualifies every variable at the interface with such code, which is even less practical
    - it cannot state (and there is no way to state) that a variable will only ever contain integers
    - the verbosity, and its obvious clash with the language’s “philosophy”, make it unlikely to be used on many variables of a program, further restricting its usefulness
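The second limitation above can be seen in a short sketch: the isinstance check holds only at the program point where it runs, not for the variable’s lifetime.

```python
# An isinstance assertion constrains one program point, not the variable.
a = 1
assert isinstance(a, int)      # true here...
a = "no longer an int"         # ...but nothing prevents a later rebinding
assert isinstance(a, str)      # the earlier "invariant" is gone
```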

  22. Bob — you’ve been making this argument for at least 20 years now, and it would be foolish to disagree with its basic spirit (and I certainly don’t!). But I think that the whole “I can define a gigantic datatype and effectively program in Racket by putting in the injections and projections in the right places” line really misses how painful it is to put those injections and projections in, and how this kind of pain can stand in the way of effective programming.

    Instead, I think that Sam Tobin-Hochstadt’s Typed Racket forms a much more modern (and plausible) basis for your argument.

    • abstract type says:

      That I’ve been saying this for 20 years says more about the audience than it does about the speaker. The real point is that when you program in a sufficiently expressive static language, such as ML or Haskell, you immediately realize that you neither want nor need the overhead that dynamic languages force on you, and, more importantly, that expressing and enforcing invariants in the program itself is so helpful that it’s just ludicrous to deprive yourself of them. Then in those limited cases where dynamic values are useful, you are free to use them, with 100% faithfulness to what the dynamic languages force on you against your will.

  23. mhelvens says:

    The reason some people (such as myself) argue that dynamic languages can be more flexible than static languages is because they allow code like this:

    x: int;

    ...

    if (x >= 0) {
      y: nat <- x;
    }

    We’re assigning a value of the supertype to a variable of the subtype here (nat <: int), but we do it at a point where we can be sure the value in x is also of type nat. Yet a program like this would never type-check unless complicated control-flow analysis were involved.

    In short, (static) type-checkers sometimes have to throw out the baby with the bathwater.

    • Dan Doel says:

      I can write that program in Agda, except it will not be a boolean test followed by an extra definition. It will rely on the fact that integers in the library are defined as a sum type:

      f : Integer -> ...
      f (+ y) = ...
      f (-1- y) = ...

      where (+ y) constructs a non-negative integer from a natural y, and -1- constructs a negative one, avoiding 0.

      In general, even if you don’t simply define integers that way, you’ve just re-asserted the problem laid out in the Boolean Blindness post: a test x >= 0 shouldn’t return a boolean; it should return either an equivalent natural number (or a way to get one; if nat is more like a refinement type, then it can be represented simply as an integer and a proof of non-negativeness), or an equivalent negative number.

      So, your exact program might not ever type check, but one much like it could.
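The “return evidence, not a boolean” idea can be sketched even in a unityped setting (`as_nat` is an illustrative name):

```python
# Instead of a boolean test x >= 0, return a usable witness (or None):
# the caller gets the non-negative value itself, not a flag to keep in sync.
from typing import Optional

def as_nat(x: int) -> Optional[int]:
    """Return x itself if it is a valid natural number, else None."""
    return x if x >= 0 else None

y = as_nat(5)
if y is not None:
    # y *is* the non-negative value; no separate boolean to trust
    assert y + 1 == 6
assert as_nat(-3) is None
```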

  24. johnwcowan says:

    Most of the arguments for dynamically typed languages are propaganda, to be sure, but what you’re doing here is constructing counter-propaganda. (That may be a legitimate pedagogical strategy; as I said, I have nothing to say about that, at least not at present: I’m not a beginner and I don’t try to teach beginners. I’ll take this remark as given in any future comments.)

    It’s quite correct that dynamically typed languages have only one manifest static type; indeed, it is a sum type, which means that it must be tagged (as sum types must be in statically typed languages as well). However, type inference need not be an all or nothing proposition; it may be possible to compile away some but not all tags, depending on the relative costs of compile time and run time. What is more, when you must squeeze out the last increment of performance, there is no substitute for whole-program compilation — Haskell type classes aren’t free either, and neither is run-time dynamic linking.

    I’m willing to live without static types before I’ll live without dynamic types. I’d like even better to be able to add just enough static type information to achieve specific goals.

    • Ashley Yakeley says:

      I’m willing to live without static types before I’ll live without dynamic types. I’d like even better to be able to add just enough static type information to achieve specific goals.

      Odd, I take the exact opposite approach. I try to use just enough dynamic type information, often just sum types, if that even counts. The specific goals I have for static type information include eliminating bugs at compile-time, so the more the better. As someone-or-other (Simon Peyton-Jones?) pointed out, when you prove properties of your program, you end up finding bugs, almost regardless of what properties you are trying to prove.

    • abstract type says:

      The moral of my story is that your position makes no sense. Your “dynamic language” has static types; the problem is that it has only one. A good language should have a rich variety of types supporting all manner of programming techniques; such a language is inherently and unavoidably statically typed.

  25. bowaggoner says:

    I found the post really interesting, but a little confusing — it would be great to hear a little more on the subject!

    Your point, I think, is that dynamic languages (or, if you like, languages whose single static type is classified at runtime) allow less expression or freedom. But you also end up focusing somewhat on performance penalties rather than expression in code.

    So, could you elaborate on how being able to use both styles allows for better expression or reasoning? For example, would you like complex numbers to be statically typed as such, but dynamically classified as rectangular or polar? And in this case why is static typing useful/necessary at all?

    • Dan Doel says:

      One of the things he mentions besides performance is the expression of invariants, and that applies fine to the complex number case. If complex numbers are dynamically classified as either rectangular or polar, then if you see:

      f : Complex -> ...

      You may not know if the argument to f is polar or rectangular until runtime, but you at least know it is one of the two. You have ruled out an infinity of other values that could otherwise show up there if everything were thrown into one big type, and provably so.
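The two-case invariant described above can be sketched as a sum type (class and function names are illustrative):

```python
# Complex as a statically known sum of two dynamically classified
# representations: only Rect or Polar can ever reach magnitude().
from dataclasses import dataclass
from typing import Union
import math

@dataclass
class Rect:
    re: float
    im: float

@dataclass
class Polar:
    mag: float
    arg: float

Complex = Union[Rect, Polar]

def magnitude(c: Complex) -> float:
    # Exactly two cases to consider -- an infinity of others is ruled out.
    if isinstance(c, Rect):
        return math.hypot(c.re, c.im)
    return c.mag

assert magnitude(Rect(3.0, 4.0)) == 5.0
assert magnitude(Polar(2.0, 0.5)) == 2.0
```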

      But even that isn’t all that types can do. I can’t find where, but I recall Conor McBride saying that people who only talk about types ruling out errors are selling them short. Sufficiently fancy types can contain more and more information about what your program is supposed to be doing. This can force you to write correct code, but it can also mean that the compiler uses that information to write ‘obvious’ pieces of code for you. Then you only have to fill in enough detail for the system to bridge the gap between what you’ve said is supposed to happen and how to actually accomplish it.

      An elementary example of this is Lennart Augustsson’s djinn, which will take types like:

      (a -> b) -> Maybe a -> Maybe b

      and write code that has the type for you. One of the favorite demonstrations is to feed it:

      ((a -> Cont r b) -> Cont r a) -> Cont r a

      where

      Cont r a = (a -> r) -> r

      The code it spits out is the definition of call-with-current-continuation, which can be non-trivial to write if you’re just thinking about how it should behave. However, the type completely determines the implementation, so it is trivial for the djinn to infer the right definition.

  26. dmbarbour says:

    There are too many holy wars on this subject, and I don’t want to start another on this blog.

    Just keep in mind that principle of least power applies, also, to expressiveness in a type system.
