The Point of Laziness


As I’ve discussed previously, there are a number of good reasons why Haskell is not suitable for teaching introductory functional programming.  Chief among these is laziness, which in the context of a pure functional language has fatal side effects.  First, Haskell suffers from a paucity of types.  It is not possible in Haskell to define the type of natural numbers, nor the type of lists of natural numbers (or lists of anything else), nor any other inductive type!  (In Carrollian style there are types called naturals and lists, but that’s only what they’re called, it’s not what they are.)  Second, the language has a problematic cost model.  It is monumentally difficult to reason about the time, and especially the space, usage of a Haskell program.  Worse, parallelism arises naturally in an eager, not a lazy, language: for example, computing every element of a finite sequence is fundamental to parallel computing, yet is not compatible with the ideology of laziness, which specifies that we should only compute those elements that are required later.

The arguments in favor of laziness never seem convincing to me.  One claim is that the equational theory of lazy programs is more convenient; for example, beta reduction holds without restriction.  But this is significant only insofar as you ignore the other types in the language.  As Andrzej Filinski pointed out decades ago, whereas lazy languages have products, but not sums, eager languages have sums, but not products.  Take your pick.  Similarly, where lazy languages rely on strictness conditions, eager languages rely on totality conditions.  The costs and benefits are dual, and there seems to be no reason to insist a priori on one set of equations as being more important than the other.

Another claim is that laziness supports the definition of infinite data types, such as infinite sequences of values of some type.  But laziness is not essential, or even particularly useful, for this purpose.  For example, the type nat->nat is a natural representation of infinite sequences of natural numbers that supports many, but not all, of the operations on finite sequences (not, for example, operations such as reverse, which make no sense in the infinite case).  More generally, there is no inherent connection between laziness and such infinitary types.  Noam Zeilberger has developed an elegant theory of eager and lazy types based on distinguishing positive from negative polarities of type constructors, the positive including the inductive and the negative including the coinductive.  Coinductive types are no more about laziness than inductive types are about pointers.
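
To make this concrete, here is a minimal sketch in Standard ML of sequences-as-functions; the names are illustrative only, not drawn from any particular library:

(* an infinite sequence of naturals, represented as a function *)
type seq = int -> int

val nats : seq = fn n => n                         (* 0, 1, 2, ... *)
fun shead (s : seq) : int = s 0                    (* first element *)
fun stail (s : seq) : seq = fn n => s (n + 1)      (* drop the first element *)
fun smap (f : int -> int) (s : seq) : seq = f o s  (* apply f pointwise *)

val evens = smap (fn n => 2 * n) nats              (* 0, 2, 4, ... *)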

I wish to argue that laziness is important, not for pure functional programming, but only in conjunction with effects.  This is the Kahn-MacQueen Principle, introduced in the 1970’s by Gilles Kahn and David MacQueen in their seminal paper on recursive networks of stream transducers.  Dan Licata and I have emphasized this viewpoint in our lectures on laziness in our new course on functional programming for freshmen.

Let’s use streams as a motivating example, contrasting them with lists, with which they are confused in Haskell.  A list is an example of a positive type, one that is defined by its membership conditions (constructors).  Defining a function on a list amounts to pattern matching, giving one case for each constructor (nil and cons), and using recursion to apply the function to the tail of the list.  A stream is an example of a negative type, one that is defined by its behavioral conditions (destructors).  Defining a stream amounts to defining how it behaves when its head and tail are computed.  The crucial thing about lists, or any positive type, is that they are colimits; we know as part of their semantics how values of list type are constructed.  The crucial thing about streams, or any negative type, is that they are limits; we know as part of their semantics how they behave when destructed.
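
The contrast shows up directly in code.  As a rough sketch in Standard ML (representing streams as thunks is one possible choice, not the only one):

(* positive: a list is defined by its constructors; a function on lists
   proceeds by pattern matching, one case per constructor *)
fun length nil = 0
  | length (_ :: xs) = 1 + length xs

(* negative: a stream is defined by how it responds to observation *)
datatype 'a front = Nil | Cons of 'a * 'a stream
withtype 'a stream = unit -> 'a front

(* a stream is given by saying what happens when it is exposed *)
fun from (n : int) : int stream = fn () => Cons (n, from (n + 1))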

Since we have no access to the “inside” of a stream, we should think of it not as a static data structure, but as a dynamic process that produces, upon request, successive elements of the stream.  Internally, the stream keeps track of whatever is necessary to determine successive outputs; it has its own state that is not otherwise visible from the outside.  But if a stream is to be thought of as given by a process of generation, then it is inherently an ephemeral data structure.  Interacting with a stream changes its state; the “old” stream is lost when the “new” stream is created.  But, as we have discussed previously, ephemeral data structures are of limited utility.  The role of memoization is to transform an ephemeral process into a persistent data structure by recording the successive values produced by the process so that they can be “replayed” as necessary to permit the stream to have multiple futures.  Thus, rather than being a matter of efficiency, memoization is a matter of functionality, providing a persistent interface to an underlying ephemeral process.

To see how this works in practice, let’s review the signatures PROCESS and STREAM that Dan Licata and I developed for our class.  Here’s a snippet of the signature of processes:

signature PROCESS = sig
  type 'a process = unit -> 'a option  (* NONE signals that the process is finished *)
  val stdin : char process             (* Unix standard input *)
  val random : real process            (* random number generator *)
end

A process is a function that, when applied, generates a value of some type, or indicates that it is finished.  The process stdin represents the Unix standard input; the process random is a random number generator.  The signature of streams looks essentially like this:

signature STREAM = sig
  type 'a stream
  datatype 'a front = Nil | Cons of 'a * 'a stream  (* what exposing a stream yields *)
  val expose : 'a stream -> 'a front                (* observe the next element, if any *)
  val memo : 'a Process.process -> 'a stream        (* persistent stream from an ephemeral process *)
  val fix : ('a stream -> 'a stream) -> 'a stream   (* recursive networks of streams *)
end

The type ‘a front is the type of values that arise when a stream is exposed; it can either terminate, or present an element and another stream.  The memo constructor creates a persistent stream from an ephemeral process that generates its elements.  The fix operation is used to create recursive networks of streams.  There are other operations as well, but these illustrate the essence of the abstraction.
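
To give a feel for how memo might be implemented, here is one possible sketch (the course code may well differ), assuming a structure Process matching PROCESS and the thunk representation of streams from the earlier sketch, with expose s = s ().  Each stream node hides a ref cell that caches the front computed on first exposure, so the underlying process is consulted at most once per position:

fun memo (p : 'a Process.process) : 'a stream =
  let
    val cache : 'a front option ref = ref NONE
  in
    fn () =>
      case !cache of
        SOME f => f                      (* replay the recorded front *)
      | NONE =>
          let
            val f = case p () of
                      NONE => Nil
                    | SOME x => Cons (x, memo p)
          in
            cache := SOME f; f           (* record it for all futures *)
          end
  end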

Using these signatures as a basis, it is extremely easy to put together a package of routines for scripting.  The fundamental components are processes that generate the elements of a stream.  Combinators on streams, such as composition or mapping and reducing, are readily definable, and may be deployed to build up higher levels of abstraction.  For example, Unix utilities, such as grep, are stream transducers that take streams as inputs and produce streams as outputs.  These utilities do not perform input/output; they merely transform streams.  Moreover, since streams are persistent, there is never any issue with “buffering” or “lookahead” or “backtracking”; you just manipulate the stream like any other (persistent) data structure, and everything works automagically.  The classical Bell Labs style of intermixing I/O with processing is eliminated, leading not only to cleaner code, but also to greater flexibility and re-use.  This is achieved not by the double-backflips required by the inheritance mechanisms of OOPLs, but rather by making a crisp semantic distinction between the processing of streams and the streaming of processes.  True reuse operates at the level of abstractions, not at the level of the code that gives rise to them.
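
For example, map and filter on streams might be written as follows (again a sketch against the thunk representation above; a grep-like utility is then just filter applied to a line predicate, with memo supplying the persistence):

(* stream transducers: streams in, streams out, no I/O performed *)
fun map (f : 'a -> 'b) (s : 'a stream) : 'b stream =
  fn () =>
    case expose s of
      Nil => Nil
    | Cons (x, s') => Cons (f x, map f s')

fun filter (p : 'a -> bool) (s : 'a stream) : 'a stream =
  fn () =>
    case expose s of
      Nil => Nil
    | Cons (x, s') =>
        if p x then Cons (x, filter p s')
        else expose (filter p s')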

Update: It seems worthwhile to point out that using memoization to create a persistent data structure from an ephemeral one is a premier example of a benign effect, the use of state to evince functional behavior.  But benign effects are not programmable in Haskell, because of the segregation of effects into the IO monad.

Update: Lennart Augustsson gives his reasons for liking laziness.

56 Responses to The Point of Laziness

  1. roshanjames says:

    I am puzzled by this remark: “As Andrzej Filinski pointed out decades ago, whereas lazy languages have products, but not sums, eager languages have sums, but not products.”

    Could I know which of Filinski’s papers is being referred to here?

    • Abstract Type says:

      If I remember correctly, it was in his MSc thesis from around 1990. I may well have learned this from him directly, however, I’m not sure.

    • summermute says:

      >I am puzzled by this remark
      There is a discussion on reddit (sorry) which tries to explain this point.

    • summermute says:

      Some comments on reddit were deleted, so the arguments can not be inferred from that discussion. In short, in a lazy language we have the equation
      fst (x,y) = x (where (x,y) is of product type), which does not hold in a language with strict semantics (evaluation of ‘y’ may not terminate). Sum types have dual behaviour, as illustrated here in other comments, but an example giving clear evidence of it seems to be trickier to construct.
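
      A tiny ML illustration of the failure (a sketch with hypothetical names):

      (* in an eager language, fst (x, y) = x fails when y diverges *)
      fun loop () : int = loop ()
      val z = #1 (0, loop ())   (* diverges, though the law predicts 0 *)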

  2. ee8a91jjf says:

    I believe the primary (only?) benefit of laziness is in fact social rather than mathematical. No strict language with any significant user base has been able to resist the siren song of impurity.

    Laziness isn’t really a feature so much as an excuse to keep the language pure.

    (What appears above is my interpretation of SLPJ’s famous “Hair Shirt” speech).

    If you really want people to stop using lazy-by-default languages, you’ll need to produce a language which is all of these: strict, pure, popular.

    Coq doesn’t quite count because, as a total language, its semantics are agnostic to evaluation policy.

  3. Franklin says:

    I’m a little disappointed by how many commenters did not seem to enjoy or get the punny title “the point of laziness”. Did you intend that pun, Bob?

  4. mcandre says:

    My mistaken impression that the author was a Haskell newbie was not an indictment. Many actual newbies fail to understand how laziness is anything but a bother.

    The natural numbers are traditionally defined by Peano numbers. Can other MLs do better than data Peano = Zero | Succ Peano?

    I do not, in fact, know the finer points of comparative algebraic type systems, just that (through Haskell) algebraic type systems are amazing. I’ll let myself out now. :)

    • Abstract Type says:

      Yes, but unfortunately the algebraic type system in Haskell is fundamentally broken, rendering it useless for my purposes (and creating a lot of problems for everyone).

  5. pchiusano says:

    I always thought laziness was better for modularity – callers do not need to know about the strictness policy of callees in a lazy language – the callee can evaluate whatever it needs to. In a strict language you often end up with code duplication due to having to propagate these policies to callers, their callers, and so on…

    • Kenneth B says:

      I always thought impurity was better for modularity – callers do not need to know about the purity of callees – the callee can evaluate whatever it needs to. In a pure language you often end up with code duplication due to having to propagate these policies to callers, their callers, and so on…

      I always thought the lack of type-safety was better for modularity – callers do not need to know about the type-safe-ness of callees – the callee can evaluate whatever it needs to. In a type-safe language you often end up with code duplication due to having to propagate these policies to callers, their callers, and so on…

      … ad nauseam

  6. ezyang says:

    The characterization that, because all types in Haskell are lifted, we don’t have inductive types (and the implied assertion that we cannot use induction), is a little disingenuous, both on practical and theoretical grounds. Haskell programmers regularly use induction to prove properties about their code, assuming the presence of total arguments (the same way a higher-order function in a strict language might assume the presence of total functions). Furthermore, as Danielsson, Jansson and Gibbons point out, it is probably the case that “Fast and Loose Reasoning is Morally Correct” for even a language as complicated as Haskell.

    Thus, the only true difficulty is characterizing the evaluatedness of arguments being passed around, and I’d argue that this (very real) difficulty is one and the same as the opaque cost model for Haskell.

  7. qelt says:

    I have to thank you for finally forcing me to admit that infinite data structures have nothing to do with lazy evaluation.

    I’m not sure that’s all there is to be said about laziness and “big” structures, however. Your Stream datatype is an excellent substitute for Haskell’s lists, but one has to explicitly choose to use it instead of native ML lists. The advantage of lazy evaluation by default is that unneeded data won’t be processed even if you didn’t anticipate its presence.
    Of course, I’m also not sure this is worth having to anticipate weird space leaks instead.

    I have to disagree much more strongly with regard to parallelism. Parallel evaluation does not arise naturally from eager evaluation any more than it does from lazy evaluation (unless you’re advocating explicit nondeterministic threads, in which case impurity is a prerequisite). Parallel evaluation is a third, different evaluation strategy, and it is introduced in analogous ways in eager and lazy languages (“Do this and this at the same time” vs. “When this needs to be done, do this at the same time as well”).

    If anything, I think lazy evaluation is favorable to parallelism because it encourages programmers to write code that is independent of evaluation strategies. I am aware of the fallacy here (“Lazy evaluation has benefits, but the main point is that it makes you write code that doesn’t rely on any evaluation strategy – including lazy.”), but the fact remains that Haskell is pure (if you don’t stick your nose too deep) and ML isn’t. I find it incredibly sad that you would teach a course that puts an emphasis on static types using a language that doesn’t have statically typed effects; I’d argue that’s the most important thing static typing can do for functional programming.

    I’d rather program in a language with only the two types Pure and IO than in ML’s type system.

    • Abstract Type says:

      Read my posts on parallelism please.

      ML has statically typed effects in the sense you suggest to exactly the same extent as does Haskell.

    • Abstract Type says:

      I plan to explain this remark in more detail in a later post.

  8. glorkspangle says:

    Can you enlarge on the ‘fix’ function? Online lecture notes? Examples? I can construct a function with that type, but not a useful one. Maybe it’s too early in the morning.

    • Abstract Type says:

      The trick is to use backpatching. First, create a cell containing a function that, when applied, raises the exception BlackHole (which you declare, of course). The stream you will return is one that, when exposed, exposes the result of applying the function stored in that cell. Second, assign to that cell the function that, when applied, exposes the application of the stream transducer to the stream just described. If the transducer is not productive, you will get an uncaught exception BlackHole, which is the correct behavior. If it is productive, it will evaluate to a “stream front”, which is then returned as the result of the expose. Third, return the stream in question.

      (I could give the code, but I think you can write it given this.)
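
      Sketching it anyway, here is one possible transcription of that recipe, assuming streams are represented as thunks (type 'a stream = unit -> 'a front, with expose s = s ()):

      exception BlackHole

      fun fix (f : 'a stream -> 'a stream) : 'a stream =
        let
          (* step 1: the cell initially holds a function that raises BlackHole *)
          val cell : (unit -> 'a front) ref = ref (fn () => raise BlackHole)
          (* the returned stream applies whatever the cell currently holds *)
          val s : 'a stream = fn () => (!cell) ()
          fun run () =
            let
              (* blackhole while computing: an unproductive transducer that
                 demands s during this call raises BlackHole rather than looping *)
              val () = cell := (fn () => raise BlackHole)
              val front = expose (f s)
            in
              cell := (fn () => front);   (* memoize the computed front *)
              front
            end
        in
          cell := run;   (* step 2: backpatch; step 3: return the stream *)
          s
        end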

    • glorkspangle says:

      Of course. I’m out of the habit of thinking this way. This is just like Y, n’est-ce pas?

    • Abstract Type says:

      Yes, except that it shares properly, whereas Y would just unwind and repeat computations.

  9. will87 says:

    Why is this not an inductive definition of the naturals?

    data Nat = Zero | Succ !Nat

    For the ML programmers: ! makes the constructor argument strict. So omega = Y Succ would evaluate to _|_.

  10. Frank Atanassow says:

    > But if a stream is to be thought of as given by a process of generation, then it is inherently an ephemeral data structure. Interacting with a stream changes its state; the “old” stream is lost when the “new” stream is created.

    Robert, I know you know this is not accurate: in a coinductive formulation, the “old” stream is not lost. That is precisely why it is wrong to call coinductive types ephemeral.

    You even demonstrate this yourself: Your PROCESS sig is an ephemeral data structure, but not a coinductive type; to build a coinductive type from it, you implemented your expose operation (which is isomorphic to ‘a stream -> 1 + ‘a * ‘a stream, a coalgebraic signature), and you yourself admitted that STREAM is persistent.

    I am sympathetic to your criticisms of laziness and ephemeral data structures, but this is a square peg in a round hole.

    By the way, the exponential is a coinductive type, degenerate (like products) in the sense that it is not (co)recursive. Its state is the variables in the closure together with a given argument. And I know you would not claim that the exponential is ephemeral, or that this state is “lost” when a function is applied, which is plainly disingenuous.

    • Frank Atanassow says:

      I should have written: “Its state is the variables in the closure” (full stop).

    • Abstract Type says:

      Re-reading, it may be that I haven’t worded things well. But my intention was to distinguish coinductive types, which can be modeled in the pure language without laziness, from (I need another word) lazy types, which model interaction with stateful agents.

    • Frank Atanassow says:

      I reread it.

      OK, I retract my hasty remark. It was I who conflated coinductive types with ephemeral data structures, not you.

    • Frank Atanassow says:

      Having just seen your reply, I’m confused again.

      You just distinguished between “coinductive types” and “lazy types”. I take it you would characterize implementations of STREAM as lazy but not coinductive; am I right?

      Because, inasmuch as Haskell’s algebraic types are “broken” by lazy evaluation semantics, I assume you would agree that ML’s eager semantics are not sound for coinduction, cf. eager sums, lazy products.

    • Abstract Type says:

      You are right about ML and Haskell being dual with respect to products and coproducts in the pure fragment.

      I’m not sure that I’ve made the argument well, but I wanted to separate issues, and to argue that the black box view of laziness is appropriate for managing interaction (and not relevant to supporting coinductive types in the pure part).

  11. mcandre says:

    By the way, a natural number type can indeed be constructed in Haskell using Peano numbers.

    • Abstract Type says:

      No, you cannot.

    • Luke says:

      Yes, you can.

      data Nat = Zero | Succ !Nat

      Mind the Bang. Just because the language is lazy does not mean you cannot construct noncompact data types, it just means it isn’t the default.

    • Abstract Type says:

      I’m sorry, you are wrong. I know perfectly well that there are “strictness annotations” in Haskell.

    • nwfilardo says:

      One can’t, even with the use of strictness annotations in type definitions? I realize that this is somewhat lacking (the kinds of strict constructors and lazy constructors do not differ), but it should (AFAIUI) let you define exactly the Peano numbers as data, not their codata dual, no?

    • Abstract Type says:

      Nope.

    • gasche says:

      Robert Harper is here referring to the *lifting* behavior of Haskell data types, where you may have exotic values involving ⊥ when you form a sum. For example, the type `Either a b` has all the elements of a, all the elements of b, plus a ⊥ element. So this is not exactly a disjoint sum (for a precise definition of what he means by “sum”, see the categorical definitions of sum and product).

      For a more introductory discussion of Haskell data types, you should have a look at Edward Yang’s Hussling Haskell types into Hasse diagrams, which makes it very clear, in particular, that there are more “elements” in the Haskell type Nat than the natural numbers.

    • Abstract Type says:

      More precisely, all types in Haskell are pointed (not necessarily lifted). But, yes.

    • To expand on Bob’s reply: his point (and what he complains about in his post) is that this type in fact does not represent the natural numbers in a lazy language, because the type is unavoidably inhabited by additional “values”. For example,

      nonnat :: Nat
      nonnat = Succ nonnat

      Despite the type, this is not a natural number, and induction principles don’t apply. Strict semantics, as in ML, precludes such examples.

    • Abstract Type says:

      Exactly. I’m amazed at the misconceptions surrounding Haskell, in particular (and also those surrounding dynamic languages, but that’s another post).

    • asajeffrey says:

      Haskell can model Peano arithmetic just as well as ML can; you just need to use strictness annotations, which steps you outside of the purely lazy fragment of Haskell. You could argue that neither of these is “really” the naturals due to divergence, but that leads to languages with termination checkers, which really aren’t suitable for introductory teaching ( yet :-).

    • Abstract Type says:

      Alan, I must’ve missed your point. I’m simply saying that there are no inductive types in Haskell, which I believe to be the case.

    • Kenneth B says:

      A lot of commenters here seem to think that the definition

      data Nat = Zero | Succ !Nat

      disallows “nonstandard” inhabitants, but this is obviously false. The “strictness” annotation in !Nat just requires that the argument to Succ be in head normal form, not fully evaluated. Thus, you can easily define:

      loop :: Nat
      loop = Succ loop

      without running afoul of the “strictness” annotation. Indeed, you can also define:

      bad :: Nat
      bad = Succ (Succ undefined)

      and GHC will not complain about the inner Succ being applied to undefined.

      Please, before you assume that Bob Harper (frikkin Bob Harper!) doesn’t know what he’s talking about, run some basic experiments of your own.

    • sclv says:

      @Kenneth: I suggest you run your own tests.


      -- the strict definition under discussion
      data Nat = Zero | Succ !Nat

      -- force the head.
      discrim :: Nat -> ()
      discrim (Succ x) = ()
      discrim x = ()

      -- various ways of writing bottom
      bad = Succ undefined
      bad2 = Succ (Succ undefined)
      bad3 = Succ (Succ (Succ undefined))
      bad4 = Succ bad4

      {-
      *Main> discrim undefined
      *** Exception: Prelude.undefined
      *Main> discrim bad
      *** Exception: Prelude.undefined
      *Main> discrim bad2
      *** Exception: Prelude.undefined
      *Main> discrim bad3
      *** Exception: Prelude.undefined
      *Main> discrim bad4
      ^CInterrupted. (nontermination)
      -}

    • glorkspangle says:

      sclv: you are making Bob’s point for him.

      In your own code, what is the type of bad or bad2 or bad3 or bad4? Plainly Nat is not the type of natural numbers, because bad-bad4 are not natural numbers.

      So any function on Nat is not (just) a function on the natural numbers, and any function returning Nat may return one of bad-bad4. All of this makes reasoning about programs – every program – harder.

      Contrast with Standard ML.

    • will87 says:

      In Haskell the strict definition of Nat means that fix Succ (omega/infinity) is intensionally equal to _|_ (which ML types also contain, since ML is a Turing-complete language, i.e. you can write non-terminating definitions). The values which can inhabit this type are therefore equivalent to the least fixed point of its definition, and hence are the inductive naturals, and equivalent to the ML type.

    • Abstract Type says:

      I’m sorry, but this is not correct.

    • will87 says:

      Please elaborate. Why is a function that returns fix Succ for a strict Succ any different from a non-terminating function definition in ML?

    • Abstract Type says:

      In a lazy language non-termination is a value, whereas in an eager language it is an effect. In particular, variables in a lazy language range over computations, including the non-terminating computation, whereas in a strict language they range only over values. Given a variable x of type nat, if natural number induction were valid, you could prove that x is either zero or a successor. This is true for ML, but false for Haskell, because x could be “bottom”, i.e. the non-terminating computation, which is a value. For this reason no inductive type, not even the Booleans, is definable in Haskell. It is not a matter of strictness annotations on data declarations; it is a matter of the most fundamental level of semantics of the entire language itself.

    • sclv says:

      @glorkspangle: In ML, any function typed as returning Nat may not terminate. Six of one, half a dozen…

    • sclv says:

      And just to add, my point with the code was that with a strict datatype, undefined == bad..bad4. So of course the blog post is correct that every type has a bottom. But strict datatypes can ensure that every value has at most one bottom, which, as far as I can tell, is contrary to what Kenneth B was arguing.

    • will87 says:

      Well then, I rescind my earlier point and agree that, unlike ML, Haskell does not have natural numbers. Learnt a lot thanks :)

    • tpnyberg says:

      In addition to bottom, doesn’t Haskell also include exception values for every type, ostensibly to allow pure code to “throw” exceptions?

    • sclv says:

      In GHC Haskell (which is an extension of Haskell98, mind you), there do exist imprecise exceptions. However, denotationally, bottom is simply identified with the set of all possible exceptions. So you don’t have imprecise exceptions in addition to bottom — you still just have bottom. In genuine pure Haskell, these bottoms are all indistinguishable. It’s only in verybad-no-good-impure-anything-goes-IO-land that one can catch them. And even then, there’s a deliberate nondeterminism encoded in the semantics of catching exceptions. See “A semantics for imprecise exceptions” by Jones, Reid, Hoare, Marlow and Henderson for more.

  12. mcandre says:

    I agree that Haskell may not be the best choice of language for teaching functional programming, but for different reasons. Haskell uses functional programming brilliantly, constantly encouraging programmers to think and program functionally. For example, function composition: why roll a complicated function when you can just string functions together, or even partially apply functions? Also, strictness and laziness force the programmer to treat functions that should be pure functions as pure functions. In other words, they are distinct from functions with side effects by the very syntax of Haskell. If anything, Haskell is too alien from imperative programming, which is one reason Lisp may be better for teaching FP (Lisp can be written functionally, imperatively, or both).

    Haskell’s type system is beautifully consistent and powerful. You can build incredibly complex systems that still boil down to basic types; you can use the type system to enforce error correction (Maybe and Either); you can even perform mathematical operations with the type system. I’m new to declarative programming, but I’ve never seen anything quite as powerful yet simple as Haskell’s type system. It’s one reason Erlang feels kludgy.

    Just because the type system can’t do a particular thing (natural numbers) doesn’t mean it’s lacking or useless.

    Yes, it is difficult to reason about time and space in a lazily evaluated language. However, when time and space do matter, the big O of the algorithm in question is far more important than language details. And you can still force evaluation by using “evaluate” and IO.

    Parallelism DOES arise naturally in a pure, strictly typed, lazily evaluated language such as Haskell. You’re under a beginner’s impression that you can’t force evaluation in Haskell. Nothing could be further from the truth: just use <- instead of let.

    There’s an extremely simple parallel example at YelloSoft that demonstrates this using pseq.

    Lazy languages have products, not sums? I’m probably mistaken in taking that description literally, but I will. Haskell has both product and sum in the Data.List module.

    Again, laziness is optional. You can use <-‘s throughout a Haskell program to force evaluation. And laziness offers many benefits that outweigh its apparent flaws. Laziness allows you to have infinite lists. E.g., [1..] is the list [1, 2, 3, …] up to infinity. Only a fool would force its evaluation. Rather, the Haskell programmer takes only what he needs from it. “take 5 [1..]” would yield [1, 2, 3, 4, 5]. Prime number generators can thus go up to any arbitrary n with ease.

    • jedaifou says:

    @mcandre > You’re mistaken when you think that the author is a beginner who doesn’t understand how Haskell works… In fact, compared to him, you are the beginner.
    When he says you can’t define the type of natural numbers or the type of lists in Haskell he is wrong (I think, since strict types can be defined with a !) but not in the way you think he is: the Peano numbers you’re thinking about aren’t exactly the naturals, since infinity is one of the possible values; similarly, the list [] type isn’t really a list type, since infinite lists are possible, and those should more properly be called streams (but that’s a matter of vocabulary on which you’re free not to agree).

    Still it seems to me that:

      > data Nat = Zero | Succ !Nat

    is the type of naturals? (ok, there still is undefined, but I think you’ll agree that we can ignore this value in a Haskell context if that’s the only possibility outside legitimate naturals)

    His arguments against laziness are very high level; there are plenty of people who understand those arguments and don’t agree with him (I’m not sufficiently clear on that to count myself among them), but you should probably acquire a better understanding of categories, inductivity, coinductivity and type system theory before you try to defend Haskell against him.

    • glorkspangle says:

      I infer from your comments that you are a newcomer to functional programming and to type theory. How familiar are you with Standard ML? How would you characterise its type system, in comparison to Haskell’s?

    > you can even perform mathematical operations with the type system
    This is more-or-less the definition of a type system.

    > You’re under a beginner’s impression
    Coffee down nose. Ow. I take it you don’t know whose blog this is? Hint: http://mitpress.mit.edu/catalog/author/default.asp?aid=857

  13. Luke says:

    I think you argue very well by symmetry that neither laziness nor strictness is fundamental to pure functional programming.

    I think the “advantage” of laziness has in fact been the reasoning complexity it induces when dealing with effects. Haskell could not “fall back” on the traditional call-by-value mechanism for effects, because that would break all sorts of desirable properties of the language and make it extremely difficult to program in. Thus Haskell was forced to search for a more well-abstracted solution, and out of that exploration popped the tower of standard typeclasses that we know and love. A call-by-value language could just as well have come up with these same abstractions, but it would have had no need to, because cbv effects were “good enough”. Laziness forced Haskell to maintain its purity in the face of real-world problems.

    • Ashley Yakeley says:

      That’s pretty much the “social benefit” point that ee8a91jjf makes earlier, that SPJ himself claims.

      It might be true, but I think one can still want an eager pure language.
