Why Lisp is a Big Hack and Haskell is Doomed to Succeed (2011) (axisofeval.blogspot.ca)
127 points by mmphosis on March 30, 2014 | hide | past | favorite | 117 comments


For a language that prioritizes "safety" above all things, there is an awful lot of flying blind and dangerously in Haskell. It's so, so easy to write Haskell code that's safe until you change something distant in the system, which changes when things get lazily evaluated, and now you have a very serious resource leak. And because of the IO restrictions, you aren't likely to put logging in your code - and if you do, the logs will themselves change the lazy evaluation behavior of your code. I've seen Haskell programs that stop crashing when you pass in --debug!

If the Haskell environment were more like a virtual machine - as in Java - where you could connect to a side channel and see what types of data were persisting in memory as the program ran, you'd at least have a chance of debugging this sort of thing. But instead it compiles to machine binaries.

There doesn't seem to be any interest in the Haskell community in making tools to deal with this sort of thing - they say "you should learn not to make resource-leaking code". Which is the same thing the Lisp hackers say - "just learn not to make type errors".


It is true that Haskell is about denotational safety (correctness) not operational safety (performance limits).

You do not need IO limitations for logging. You can use Debug.Trace.
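A minimal sketch of what that looks like (the factorial example is mine, not from the thread): trace takes a message and a value, prints the message to stderr when the value is forced, and returns the value, so pure code can log without an IO type.

```haskell
import Debug.Trace (trace)

-- trace :: String -> a -> a
-- Prints its message to stderr when the returned value is forced,
-- letting pure code log without threading IO through its type.
factorial :: Int -> Int
factorial 0 = 1
factorial n = trace ("factorial called with " ++ show n)
                    (n * factorial (n - 1))
```

Note that the caveat from the parent comment still applies: because the message prints only when the surrounding thunk is forced, tracing can itself perturb evaluation order.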

Haskell does have a heap analyzer. Like C, you can choose to compile an instrumented binary with debug symbols.

There is a huge suite of tools arriving this year. Debugging and analysis have been a huge theme recently. Simon Marlow's recent book gives a taste, as does the latest Communities And Activities Report.


The book mentioned seems to be "Parallel And Concurrent Programming In Haskell", available online here: http://chimera.labs.oreilly.com/books/1230000000929


How is a program correct if resource leaks cause it not to perform the intended task?


I think this is more of an implementation issue than a language issue. A Haskell compiler could choose to reevaluate everything every time it's demanded, and it would run in very little space. GHC uses thunks to represent computations which haven't been run yet, which are replaced once any thread evaluates them. This can lead to space leaks, but I don't believe there's any requirement that this be how the results are evaluated. GHC chooses this evaluation strategy because it usually results in faster code, but obviously it can lead to problems if you don't know what you're doing. Like all languages, it takes experience to avoid whatever deficiencies exist in your language of choice; it won't be obvious to a beginner why the naive implementation of fibonacci works for some values but overflows the stack on others, but they will eventually learn.
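The classic small-scale illustration of this tradeoff (my example, not the commenter's) is foldl versus foldl': the lazy fold accumulates a chain of thunks, while the strict variant forces the accumulator at each step.

```haskell
import Data.List (foldl')

-- foldl defers every (+): the accumulator becomes the thunk
-- ((((0 + 1) + 2) + 3) + ...), which only collapses when the final
-- result is demanded. On a large list this is a space leak.
lazySum :: [Int] -> Int
lazySum = foldl (+) 0

-- foldl' forces the accumulator at each step, so it runs in
-- constant space regardless of input length.
strictSum :: [Int] -> Int
strictSum = foldl' (+) 0
```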


Haskell the language is concerned with proving small theorems about the program as specified by the source code. Actually running it is outside the scope of the language specification; it's an implementation detail. GHC (the de facto compiler for Haskell) makes a lot of resource tradeoffs to try to achieve better performance.


"There doesn't seem to be any interest in the Haskell community in making tools to deal with this sort of thing"

Actually there's a great deal of interest in dealing with lazy evaluation and making it more explicit so that you're less likely to leak resources.

http://hackage.haskell.org/package/pipes


If you take a little care to e.g. make data types' fields strict, this tends to be a non-problem. It is harder to debug the code since it is lazily evaluated (and it's not imperative!), but recent versions of GHC provide pretty powerful profiling, stack traces, etc.

I'm not sure that I would agree that you're flying "blind or dangerously" either way--at least not when it comes to building up large unevaluated thunks. I would agree somewhat if you were referring to code using unsafePerformIO, which makes it the programmer's responsibility not to break referential transparency. While that doesn't tend to be an issue either, Safe Haskell does mostly solve it.
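For illustration (a deliberately contrived value of my own, not something from real code), the hazard with unsafePerformIO is that the hidden effect's timing and multiplicity depend on things the programmer does not control:

```haskell
import System.IO.Unsafe (unsafePerformIO)

-- A deliberately bad "pure" value: whether (and when) the putStrLn
-- runs depends on sharing, inlining, and evaluation order -- exactly
-- the break in referential transparency being discussed.
leakyOne :: Int
leakyOne = unsafePerformIO (putStrLn "side effect!" >> return 1)
```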


I disagree. I've never seen a program fail because of an unsafePerformIO call, but I've never seen a Haskell project that didn't suffer from at least one mysterious, hard-to-solve unevaluated-thunks-filling-up-RAM bug.

Even the company that was on HN recently cites thunk leaks as their biggest problem with Haskell: http://engineering.imvu.com/2014/03/24/what-its-like-to-use-...


Hi. I wrote that article.

I've never seen a Java application that hasn't succumbed to an unexpected NullPointerException at least once over its entire development cycle. That doesn't mean that the language isn't a reasonable choice. It's just a common problem that you have to accept on that platform.

Space leaks in Haskell are similar. You're going to run into them every so often, and you'd really rather not, but they're easy enough to deal with.


I'd take a NullPointerException over a memory (edit: space) leak any day of the week; the former is instantly resolved, the latter, god knows.

Anyway, in Java 8 they've started taking steps to address the null problem with the new Optional type, and, FWIW, in Scala nulls are more or less a non-issue when you use the FP side of the language.


I think there may be some confusion about what a space leak in Haskell is.

When you apply a function f to a, that is not actually evaluated. Rather, a "thunk" is created that will evaluate `f a` only when that value is actually needed.

If, in your program, you never need anything, or don't "force" your function calls and data structures ("pretend" to need something) in intermediary stages, then the thunks may take up a non-trivial amount of memory. This is not a memory leak in the traditional sense, just temporarily increased memory usage.

It is very easy to (pre-emptively) handle most space leaks in Haskell, but you do need to know how they arise.

There are two very simple rules you can follow that take care of the vast majority of space leaks:

1. Make data fields strict unless you actually want them to be lazy, i.e. instead of:

    data Foo = Foo
        { bar :: String
        , baz :: Int
        }
write

    data Foo = Foo
        { bar :: !String
        , baz :: !Int
        }
2. When you write recursive functions that depend on values which are not forced (e.g. pattern matched against) in each function call, use either `seq`/$! or bang patterns to make sure the value is evaluated (to WHNF) rather than building up excessive thunks. For example, instead of:

    acceptLoop :: Socket -> Int -> IO ()
    acceptLoop sock connNum = do
        econn <- accept sock
        _     <- case econn of
            Left err   -> printf "Error accepting connection %d: %s" connNum err
            Right conn -> forkIO $ runConn conn
        acceptLoop sock (connNum+1)
write either

    acceptLoop :: Socket -> Int -> IO ()
    acceptLoop sock !connNum = do
        econn <- accept sock
        _     <- case econn of
            Left err   -> printf "Error accepting connection %d: %s" connNum err
            Right conn -> forkIO $ runConn conn
        acceptLoop sock (connNum+1)
or

    acceptLoop :: Socket -> Int -> IO ()
    acceptLoop sock connNum = do
        econn <- accept sock
        _     <- case econn of
            Left err   -> printf "Error accepting connection %d: %s" connNum err
            Right conn -> forkIO $ runConn conn
        acceptLoop sock $! connNum+1
to make sure that connNum is always just a single value rather than a chain of unevaluated thunks. That way you won't get a space leak even when errors accepting connections are rare (i.e. when the error branch, the only place connNum is forced, almost never runs).


Thanks, helpful explanation.

Now, that begs the question, why lazy by default and not opt-in lazy?

From the outside looking in it seems that deep expertise is required in order to launch a Haskell production app with any degree of confidence (i.e. to quickly dig yourself out of runtime issues like space leaks where the means to avoid them may be known, but the means to resolve them when they occur, non-trivial).


> Now, that begs the question, why lazy by default and not opt-in lazy?

Very good question. Actually, I think most haskellers agree that laziness complicates things more often than not, and if we could start over we wouldn't make Haskell lazy by default. (Although that's not to say there won't be even simpler ways to "strictify" things in the future. Also, many libraries already provide functions that are strict in their arguments by default.)

However, there is also agreement that Haskell's laziness is the reason the language got purity right: there was simply no other way, since laziness meant evaluation order was unclear.

> From the outside looking in it seems that deep expertise is required in order to launch a Haskell production app with any degree of confidence (i.e. to quickly dig yourself out of runtime issues like space leaks where the means to avoid them may be known, but the means to resolve them when they occur, non-trivial).

As someone who writes Haskell for a living, I really just follow a few rules like this without thinking too much about laziness, and I tend to not have any problems. I have had maybe one nasty space leak in the past five years.

Yes, sometimes they do come up, but it takes ~5-10 minutes to pinpoint the problem spot with the heap profiler. It is not nearly as messy as using Valgrind to find actual memory leaks.


> I think most haskellers agree that laziness complicates things more often than not, and if we could start over we wouldn't make Haskell lazy by default.

Following Haskell's evolution somewhat from the outside, this is surprising. (And also somewhat disappointing, as laziness always seemed an important part of Haskell's elegance.) Is laziness now considered something of a failed experiment?


> laziness always seemed an important part of Haskell's elegance

It's part of it, but to a lesser extent than you would expect. The much more important part of Haskell is its no-corners-cut separation of effects, which happened chiefly because, without it, laziness meant that there was no way to know when fireTheMissiles() would actually happen.

> Is laziness now considered something of a failed experiment?

To some extent, yes. No one is saying laziness doesn't make the implementation of some algorithms and data structures extremely elegant, just that, most of the time, you don't actually gain much by leaving your function calls and data types lazy.

I enjoy being able to make infinite/self-referencing data structures, or leaving fields lazy and "performing" all of the function calls in the initialization of the "struct", but have only the functions producing results that will actually be needed matter performance-wise. However, if you don't use any strictness annotation, that benefit doesn't outweigh the problems that can be caused by space leaks.

If you do use strictness annotation, I don't think it matters too much whether the language is lazy or strict, you just have to write strictness annotation instead of laziness annotation.


I think this might be overstated. It is certainly an opinion with some mindshare; I would hesitate to guess whether it's in the majority or minority, and it's definitely not universal.


FWIW, Simon Peyton Jones agrees laziness isn't the boon it's made out to be.

While not universal, I feel confident stating that the opinion is shared by the majority of long-time Haskell programmers.


Because the downsides of strictness are great as well. Laziness lets you "decomplect" production from consumption, with a lot of safety, most of the time. I think few who've spent most of their time working in strict-by-default languages recognize the pain of it. Those who do end up writing opt-in lazy language features like generators.

So the question becomes whether strict-by-default or lazy-by-default is better given that both kinds of evaluation have their place. I personally have come to believe that lazy-by-default is nicer since there are fewer places where I really demand strictness... but it also basically forces your language to be pure.
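A tiny example of that decomplection (mine, not the commenter's): the producer below is infinite, but laziness means only the elements the consumer demands are ever computed, much like a generator in a strict language.

```haskell
-- [1 ..] is an unbounded producer; take 5 is the consumer. Only the
-- five demanded even squares (and the odd ones skipped along the way)
-- are ever evaluated.
firstEvenSquares :: [Int]
firstEvenSquares = take 5 (filter even (map (^ 2) [1 ..]))
```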


Without laziness the Haskell community would likely not have maintained functional purity for very long. Looking at the history of all other programming languages it's difficult to find one that hasn't succumbed to the temptation of impurity at some point or another. Without this enforced purity, there would not have been the same pressure to develop technologies such as Applicatives and Monads.

The other advantage of laziness is that it is a big aid to composition, modularity and concision. It allows you to perform common sub-expression elimination which can be a major boon for code readability. See this article for some examples:

http://augustss.blogspot.ca/2011/05/more-points-for-lazy-eva...
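One of the patterns that article describes can be sketched like this (my example): under laziness you can factor a shared expression out into a where binding for readability, without paying for it on branches that never use it.

```haskell
-- `total` is named once for readability, but thanks to laziness it is
-- computed at most once, and not at all when the first guard matches.
describe :: [Int] -> String
describe xs
  | null xs     = "empty"
  | total > 100 = "big"
  | otherwise   = "small"
  where
    total = sum xs
```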


    data Foo = Foo
        { bar :: !String
        , baz :: !Int
        }
This is actually a great illustration of why "making data fields strict" is a lot more tricky than it sometimes looks. All that

> bar :: !String

is going to get you is the first cons cell in weak head normal form.

If my string is "Hello", then in unevaluated form I have

<THUNK>

Using the strict data field gets you weak head normal form, which is

<THUNK>:<THUNK>

Probably not what you had in mind. As far as Haskell is concerned those two thunks might be

undefined:undefined

Using a strict Text will get you where you want to go, because when you reduce that to weak head normal form you will get a fully formed Text.

Watch out with lists, which is all that a String is.


Right, to fully evaluate something you'd use deepseq. With data structures like Data.HashMap.Strict, Data.Text, etc., the WHNF gained from strict data fields tends to be sufficient to avoid most issues caused by space leaks -- but even for something like !String, i.e. ![Char], you'd have to try pretty hard to blow up the stack. (Evaluating the first cell is also more useful than it might seem at first glance.)

I find myself using deepseq the most when I want to be sure a data structure (and any exceptions any pending operations might throw) has been fully evaluated before passing it off to another thread, not to prevent space leaks.
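A small sketch of the WHNF boundary being discussed (a contrived example of mine): seq stops at the outermost constructor, so an error buried in a list's tail is never reached; forcing the whole structure would require deepseq or an explicit traversal.

```haskell
-- seq forces only weak head normal form: here that is the outermost
-- (:) cell, so the error lurking in the tail is never evaluated.
whnfOnly :: Int
whnfOnly = (1 : error "tail never forced") `seq` 42
```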


I am surprised that your comment was downvoted. I upvoted you. I feel that you are 100% correct when you write:

"I'd take a NullPointerException over a memory (edit: space) leak any day of the week; the former is instantly resolved, the latter, god knows."

If someone really feels that your remark should be downvoted, I hope they post an explanation about why.


I think it was because - in the words of The Dude, "Yeah, well, you know, that's just, like, your opinion, man."

> I'd take a NullPointerException over a memory (edit: space) leak any day of the week.

That's your preference and I can understand that preference from your point of view because in order to debug a space leak in a Haskell program you would first have to learn Haskell.

I'm not the one who downvoted, but I also didn't think it contributed much =(


Sorry, it's not an opinion, it's a fact ;-)

Why? NPEs are easily solved, even for beginners. Stack trace says blah blah blah occurred at line X in class Y. Easy fix, totally low hanging fruit for beginners and experts alike.

The space leak issue, OTOH, may be easily avoided (if one is well versed in Haskell best practices), but resolving them when they occur is something else entirely. Seriously, you have to break out a memory profiler in order to find out where the issue _may be_, not exactly where it is on line X of class Y.

So, yes, I never get them (in Scala), but I'll stand by taking an NPE over a space leak any day of the week.


thirsteh summed up my feelings in his response to your comment but I'd also like to add that a preference is an opinion and not a fact. You may prefer apples over oranges. It is not a fact that apples are better than oranges.

For instance, what if your null pointer in language x causes a silent fail? Now your apples are beginning to look like oranges.


This is really quite silly since the choice between NullPointerExceptions and space leaks is a false one. You can have both in either language.


The choice between NullPointerExceptions or occasional space leaks isn't really apples-to-apples. You could similarly ask if you'd rather have solely impure functions and no isolation of effects, or occasional space leaks. Clearly (probably?) the answer would be the latter.


Yeah, an apt comparison would be race conditions/spooky action at a distance vs. space/resource leaks. Pick your poison.


If you're going to have errors anyway (and you are), then it's better to have errors that fail fast and early and in a clearly identifiable way. NullPointerExceptions are easy.


And that's why the 'war' has just moved to the JVM: Clojure is a Lisp, and Scala steals a whole lot of things from Haskell, trying to make it actually practical.

I think Haskell's ultimate role is a bit like Ruby's: It can't really win, but it's destined to be influential. That's a much harder road for Lisp, as its greatest strengths are also its greatest flaws.


> Scala steals a whole lot of things from Haskell, trying to make it actually practical.

LINQ is "practical Haskell," so is most of Rust (if "practical" means approachable to imperative programmers.)

Besides being on the JVM (which is a big plus), Scala hardly makes anything more practical than it is in Haskell. In fact, Scala programmers tend to migrate to Haskell (and ML languages) rather than the other way around.


If by LINQ you mean the map/filter/fold crew, I think this may be selling Haskell a little short. You could say the same about Python. At least F#'s "computation expressions" give you the full power to create your own monads, whereas in LINQ things are more statically locked down.


Right -- I wasn't sure whether F# would be considered "practical" in this case.


I guess F# isn't practical if you're building a WPF application (I agree the tooling is lacking). Otherwise it seems like it can do everything C# does (including mutation, classes, interfaces, properties), only with a slightly unfamiliar (at first) syntax.


Doesn't Scala suffer from JVM constraints (like no tail-call elimination, and everything being an object), such that compile times become enormous working around them?

Haskell is indeed influential, but it is not remaining stagnant.


> Doesn't Scala suffer from JVM constraints (like no tail-call elimination, and everything being an object), such that compile times become enormous working around them?

Yes. Yes it does. Scala is a bodge that is destined to be replaced (or at least rewritten), and I say that as a huge fan of the language. I would happily bet on Haskell outlasting Scala in the long run.

(But I use Scala today, because in the long run we're all dead. Scala inherits a lot of useful production infrastructure from the JVM, and on recent progress it looks like Scala can get faster compiles quicker than Haskell can get better infrastructure. Which means that today, in many environments, Scala is the better choice)


The author of scala at one point made some very good responses to questions about compile times on a stackoverflow question here: http://stackoverflow.com/questions/3606591/why-does-intellij...


The way I understand it, Scala does support tail-call recursion by compiling it to a loop. Compilation was definitely a pain when I experimented with it.


Scala supports simple recursive tail-call optimization, but is less elegant in handling mutual recursion (due to JVM constraints).

Clojure is probably the one you're thinking of that doesn't support tail-call elimination at all. Rich Hickey thought that if you couldn't do it cleanly in all cases (like mutual recursion), you may as well just come up with something else. So instead of optimizing recursive calls, Clojure has the recur special form.


I recall that scala uses "trampolines" by compiling to what is more like a state machine.


I thought Actors does that, but not Scala in general.(?)


Rúnar Bjarnason talked about it in his presentation "FP Programming is Terrible" [0] (he makes up for that title in the end)

I asked him about it on twitter since it reminded me of thunks, he said " a trampoline executes a sequence of thunks, yeah" [1]

[0]: https://www.youtube.com/watch?v=hzf3hTUKk8U [1]: https://twitter.com/runarorama/status/449070763421618176


Only self recursion, which is a special but common case.


I'm not sure whether I am parsing you correctly, but did you just call Lisp not influential?


I concur when it comes to lazy evaluation. I once lost 10% of my grade on a project because I removed a debugging print statement, which in turn made a data structure lazily evaluated, which in turn meant that a significant portion of my type-checking was never evaluated.


What? This means that you must have been using exceptions to communicate type-checking information. That's just a terrible way to write Haskell. Exceptions only have nice semantics when they're e.g. thrown in the IO monad.

    -- No, no, no, no!
    checkValid :: Int -> ()
    checkValid x = if x >= 5
                   then error "Invalid number!"
                   else ()

    -- This works fine.
    checkValid :: Int -> Either String ()
    checkValid x = if x >= 5
                   then Left "Invalid number!"
                   else return ()
This may sound a bit harsh, but I think you deserved to lose that 10% if you had such a fundamental misunderstanding of "lazy evaluation".


Astute observation; I agree that I should not have used runtime errors. I quickly converted the code to a Monad based solution. That being said, I was in the middle of trying to understand both Haskell and my task at hand, so I was quite thankful to have it working at all. With no one that understood Haskell enough to explain Monads to me, I had to slowly come to the understanding of them on my own.


It's really not a bug in Haskell that you ran out of time to learn what you needed before an arbitrary unrelated deadline. I hope you didn't let a silly grade stop you from continuing to learn.


I didn't. I just completed the first phase in which we produce assembly code and will be finishing the project up this month. I hope to put the code up on Github. Perhaps someone else will find it curious and/or learn from my mistakes.

This project has been the most learning-filled project of my college career. I wouldn't trade that for any easy A. :)


[deleted]


> If this opinion is a reasonable example of Haskell programmers' attitudes, I would expect some significant portion of programmers would want to stay the heck away from it.

The opinion that people should get less of a grade if they don't know what they're doing is unreasonable? I'm not a compiler writer, but using exceptions for regular control flow in a type checker sounds very iffy to me, Haskell or not. Why would exceptions even show up in a type checker, for that matter?


Telling a kid in college (I was a sophomore when I was writing Haskell) that he deserves to lose 10% over something like that is crazy. The obvious to us, many years into programming professionally, is not always obvious to newer folks.


The ideal purpose of grades is teaching. It's a way of informing you of the parts of the curriculum that you do not yet fully understand. This sounds like an entirely appropriate situation to lose grade points on.

Unfortunately, grades now also serve as some poor measure of intelligence or competency in many people's minds.


Marking work is a way of teaching, grades are used as a measure of understanding. I had a friend in college that, at the start of a semester, was typically a mediocre to borderline failing student if you looked at the marks on his assignments. By the end of the semester his understanding would be on par with mine (typically As in CS/math courses with the occasional B). However, his grade reflected his poor early start with the material and he'd end up with Cs. Now, that's good enough to pass to the next class, but it still helped to screw him over when he started applying to jobs after college.


A grade is also how they decide if you get your undergraduate degree. It would be a strange shame if that decision had nothing to do with your intelligence or competency as you imply.


I think that depends on the project.

If the project was the entire grade, then yes that's a bit much. But if we're talking about 1 of 10 assignments, then 10% really isn't that much.

I've had plenty of assignments where the granularity was just something like excellent, good, pass, fail.


Grades are for showing (to teacher and pupil) which students can do the activity in question better than others. There's a lot of BS in modern education that would have you believe otherwise.


You can connect to the run-time system e.g. using the ekg package.


> they say "you should learn not to make resource-leaking code". Which is the same thing the Lisp hackers say - "just learn not to make type errors".

Errors are unavoidable, no matter what language you use. The important thing is that you can catch them in testing. I don't value static typing very highly because type errors show up quickly. Pass the wrong type, and execution fails. Ideally this happens in your unit tests. During development. Because you're doing TDD.

Resource leaks, on the other hand, are usually very hard to catch, and often don't show up until production.


"I've seen Haskell programs that stop crashing when you pass in --debug !"

Haskell has a self correcting compiler.


> they say "you should learn not to make resource-leaking code". Which is the same thing the Lisp hackers say - "just learn not to make type errors".

<sarcasm> We should just stop writing the bugs. If, instead, we focused in writing code that runs correctly, everything would be much better. </sarcasm>


I cannot speak for Lisp, but I am in the throes of writing a compiler from scratch in Haskell (while concurrently learning Haskell). I feel/felt very much as this article described—constrained. And then I learned eta-reduction. Then I learned monads. And then I learned … The list just keeps going.

As I slowly learned the language the proper way, I became able to do anything I did in an imperative/OOP/whatever language, but now with that foundation of type safety. Like many will say, if it compiles, it most likely just works. Coupling this with automatic checking via ghc-mod, I am just as performant (in terms of writing code) and encounter half as many bugs as in any other language I've ever used. Haskell isn't a panacea, but it's a very good language.


> if it compiles, it most likely just works

I believe the more you learn, the more this becomes true as well. To begin with, you'll write a lot of code that has potential bugs (like missing cases when pattern matching, using types that you really shouldn't, using error when something like Maybe or Either would be more flexible, etc.). Eventually you realise how to write code that avoids a lot of these problems.
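A small instance of the error-versus-Maybe point (my illustration): moving the failure into the return type makes the compiler force every caller to consider the empty case.

```haskell
-- Partial: crashes at runtime when handed [].
unsafeHead :: [a] -> a
unsafeHead (x : _) = x
unsafeHead []      = error "empty list"

-- Total: the Maybe in the type makes the empty case impossible
-- for callers to forget.
safeHead :: [a] -> Maybe a
safeHead []      = Nothing
safeHead (x : _) = Just x
```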


An interesting, to me, capacity of Lisp is to be represented uniformly - so-called homoiconicity. For example, having a Lisp program, you can relatively easily add - statically - a debug statement after each statement, and then use that resulting program instead. That would be harder to do in Haskell - because the Haskell syntax is much richer.

Another doubt about "eating all languages' lunches" comes from having multiple different paradigms in programming languages. I can imagine Haskell eating, say, Prolog's lunch - for example, Norvig has shown how Prolog could be implemented as embedded in Lisp. But I suspect it will be harder to repeat in Haskell the strong points of Forth (stack computations), Tcl (strings as universal media?) or J (composability of primitives), even though some approximations could be made.

It's a fishy idea to search for a singular "perfect" language - unless that's something like English, with all its imperfections built-in.


The singular "perfect" language would have to have pluggable syntax - so that anyone can have their perfect syntax as long as it generates the same parse tree.

At the same time, its type system will be pluggable too, so that people can keep improving the type system (dependent types, dynamic types, etc.) without changing the language.

Oh yeah, and of course its code generation/execution model will be pluggable as well. Do you want to interpret it? Sure! Do you want to compile it so it performs better on your target computers? No problem! Compile it to JavaScript? Why not?

In that way everyone can be programming in the same language that's flexible enough for everyone's use, but at the same time can contain DSLs. Pluggable syntax/custom type checking means you can embed SQL code and make sure it's valid before running it. It also means SQL could be a custom SQLStatement data type with its own semantics.

In short, only a language that allows every type of programming can be the "one true language"


Doesn't this just describe lisp, though? Most of your questions have a dialect or library set that is specifically made to address them.


No, lisp doesn't allow you to specify your own syntax. You can't just start writing stuff without parens and define what that parses to later.


You'd be somewhat surprised on that point. Especially if you just take it such that everything is just trying to build up an s-expression. While the homoiconicity of the language is incredibly cool for macros and whatnot, I don't think you strictly need it. Especially not at the top level. (That is, if you made a language that "compiled" down to s-expressions, what is missing?)

At the extreme end, take a look at Dylan.

Though, I was really referring to your other points. It seemed every one of your "questions" is directly addressed.


But everybody defining their own syntax would not make a language "the perfect language". Quite the opposite, because nobody would be able to understand each other's code anymore. It would be the ultimate fragmented language.

And because of that, I don't think a perfect language is even theoretically possible. Everybody has their own syntactical preferences, and allowing all of them means fragmentation.


Yes, it does. Look for "reader macros" in the Common Lisp spec, do a search for "lisp infix notation", etc.


How do you evaluate Common Lisp's reader macros?


Funny you should mention Forth; the "concatenative" languages are very Forth-like, and if you constrain yourself to what they call "point-free" style in Haskell, it's also rather Forth-like; at least, past simple cases, you have to play the sort of weird games you do in Forth to make up for not being able to name parameters.
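A quick illustration of that resemblance (my example): in point-free style, data flows through a composed pipeline without ever being named, much as values flow across the Forth stack.

```haskell
-- No parameter is ever named: the input String flows through words,
-- filter, and length the way a Forth value flows through a sequence
-- of words on the stack.
countLongWords :: String -> Int
countLongWords = length . filter ((> 4) . length) . words
```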


Coincidentally, I just saw a reddit comment where someone showed a way to statically add a debug statement to every function definition. The trick is to use a "pattern guard" to evaluate the trace, and then compare the result to "undefined" (never matches), so that the program continues on to look for a pattern that matches.


do you mean the:

    foo x | trace (show x) False = undefined
          | otherwise = ... x ...
? That's not comparing anything to undefined, nor is it a pattern guard; it's executing the trace function, which returns False, leading to the next guard being evaluated and the actual computation being performed. The undefined is just there because it always type checks, and will never execute.


Your logic and reasoning seem to be all correct except for one thing... "Better languages" don't necessarily succeed. I mean, just look at PHP and JavaScript... Would you say that they were doomed to succeed because of their language features? There's more to "language success" than the technical characteristics of the language.


Yes, PHP and JavaScript show that deployment is everything. A language that works well with your target platform is better than one that doesn't.

(With PHP, it was apparently a bit subtle; ISP's can host it easily and it works well with live updates via ftp, and that beat out language features.)

Objective C is a third example: It's popular because iPhone.


The author raises valid points about Haskell, but I disagree with his statement about a static-typed lisp. Typed Racket is useful and enjoyable to program with, and includes a handy optimization coach.


Using the type system to create less buggy code beyond what's done in mainstream statically typed languages (that are not types-all-the-way-down) is an interesting, but still very much open, research question. The type system, besides being expressive, would need to be easy to understand and debug, and require significantly less effort to wield than debugging less-typed code.

On a related note, I'm watching with interest how the new Java 8 pluggable type systems[1][2] will play out (I understand the project is expected to have a big release on April 1st). These are pluggable intersection types that can be inferred and injected into legacy code that was written without them.

[1] http://docs.oracle.com/javase/tutorial/java/annotations/type...

[2] http://types.cs.washington.edu/checker-framework/



i would argue that clojure is very successful (probably more a statement of the obvious). language success depends equally on the libraries available for it

clojure has a lot of libraries, and leiningen

and echoing the "worse is better" slogan, i think that languages that offer a little more than the current mainstream languages will succeed more than languages that offer a lot more

most programmers are doers, they prefer to spend more time doing rather than learning

languages that are too smart ... are less likely to succeed not until the day ... they become only a little bit better than the mainstream

we move slowly from c to c++ to java to ruby ... the next big language is one that is only a little bit better than ruby ... not a lot better

i think clojure fit the bill


If clojure is only a little bit better than ruby, is there a language you think is a lot better?


well, i guess ... having second thoughts about it, clojure is very different from ruby, being a lisp

i am sure most of the ideas in clojure won't be alien to most rubyists ... but still it's a fairly large departure from ruby

a language that is only one or a few steps above ruby will have to use closer syntax ... be focused more on OO rather than functional programming


Can someone explain why anyone would think that dynamic typing is "clearly-on-the-horizon" for Haskell? (Not saying it isn't; just wondering.)

Also, FTA:

> Haskell is clearly moving towards dependent typing, which in theory, allows the expression of arbitrary invariants that are maintained statically, without having to run the program.

Well, "arbitrary" within limits. Dynamic type checking is still strictly more powerful than static type checking -- in the sense of what Boolean statements it can test about particular values -- no matter how you slice it.

(No, I'm not arguing for dynamic type checking.)


So Haskell will eventually be able to do everything lisp does, but it's impossible for lisp to do everything Haskell does. I smell bullshit.

Both languages have their place. Sometimes functional and type-safe isn't the best way to go about something. Sometimes it is. Sometimes it depends on the programmer.

There is no One Right Way or One True Language.


Also, it's a weird comparison because Haskell is one, crisply-defined thing -- but what does he mean by "Lisp"? Common Lisp? Scheme? Racket? Clojure? Does Dylan count? The only Lisp he mentions specifically is Emacs' Elisp -- which would be a straw man to pick for a comparison.

Things like Typed Racket (and its Clojure port, core.typed) show that you can have a lisp with static typing, as well as the traditional strengths of a lisp.

You can also have a lisp like Dylan or Pyret that doesn't even use s-expressions, but is most definitely a lisp.


haskell evangelism? I mean I like haskell, but this theory or prophecy is based on no facts.


It seems weird to think that there exists any language currently which will wind up being the one-true-language.

As an industry (and a research area) there still doesn't seem to be any real consensus around "what's best"; just a bunch of differing opinions and trade-offs. I don't think any amount of evidence (were it even to exist) would convince someone not amenable to strongly-enforced static types to see their value.

The entire practice of software development seems oriented around feelings and past experience. I can appreciate that there are groups of people doing research to try and bring rigor and quantified data to the process, but if at the end of the day, a developer can spend a weekend putting together a node.js web app and have that take off and prove successful, you've pretty much lost any opportunity to convince them that they should stop using their tools and switch to some different tools.

I don't actually think there's anything wrong with that either; good for them for being suspicious.

I decided to investigate Haskell about 8 months ago when I had an opportunity to write a big system for my job. It fit well within my constraints and requirements, and the little I knew of it at the time seemed like it would be a good language to spend time getting to know.

I liked that everything in the language seemed like it got there through reasoned debate and experimentation, and that it seemed like a language-feature sandbox that more mainstream languages were eventually pulling from (Perl advocates say the same thing about Perl, mainly that it already has all the features that other languages are now trying to figure out how to implement). I liked that they don't seem to punt on the hard problems (which over time become more and more of the problems left for languages to address), even if that means that doing complicated things in Haskell is complicated.

I don't know how I'd feel about Haskell suddenly becoming super popular though. Even aside from the "God I hate that this band I've been into for a while is now suddenly popular" trendiness, I don't think the community would be able to handle sanely what an influx of massive amounts of new users would do to things. It's hard enough getting all the category theorists and abstract algebra professors to deal with the fact that Cabal takes lower and upper bounds on dependencies.

If I could spend the entirety of my career using Haskell for everything, maybe that would be great. I haven't gotten good enough yet to have strong opinions about its failings, so I'm still very much in the honeymoon period.

But that seems like a silly thing to shoot for, even if I feel the same way about Haskell in 10 years that I do now. And it seems silly to expect that everyone else would feel the same way.


I love Bob Harper's view that something like Type Theory will eventually become the one-true-language. His arguments arise from a POV pretty different from the standard argument here: it's not that some particular implementation will win, but that the entire design space of languages will eventually gravitate to type theory, because it's just right.


I think type theory is supremely interesting, and I'm curious to see how far you can take it. I worry a little that at some point your type system needs its own type system, and then we've just moved the argument about typing one level of abstraction deeper.

I'm not willing to concede that strong static typing is a universal truth though. Too many people much smarter than me seem to disagree.


This is a solved problem; they are called Universe Types.


At the end of the day who wants dependently typed bash?

It's a great intellectual position for happy hour at the campus pub. Yet from a practical standpoint, it's hard to see how programming languages requiring more attention to the type system will facilitate banging out code for ordinary problems more quickly.

There are times when it is really important to be able to prove code is correct and times when it is enough to just provide a plausible answer. The market for ML on Rails remains without validation.


With something like Haskell (very good type inference), the types will only bother you if you're writing wrong code (or particularly complex expressions; I'm not necessarily against compilers complaining about things that are complicated).

It's not a free lunch, but damn is it cheap.
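To make that concrete, a small sketch (the function and names are mine): the annotation on `average` is optional, since GHC infers it, yet ill-typed calls are still rejected at compile time with no annotations anywhere.

```haskell
-- The type signature below is optional; GHC infers an equivalent one.
average :: [Double] -> Double
average xs = sum xs / fromIntegral (length xs)

-- The following line would be rejected at compile time, again with
-- no annotations needed anywhere:
--   oops = average "not a list of numbers"

main :: IO ()
main = print (average [1, 2, 3, 4])  -- prints 2.5
```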


The Haskell compiler will complain if I am writing wrong Haskell code, or more generally when the types are indeterminate and can't be inferred.

This is not the same as 'wrong code' in the abstract. The code will run fine if I don't pass in mismatched data, or more generally bad data. And if I am passing in bad data, static typing doesn't give me good answers, it just keeps the program from crashing. Don't get me wrong, there are times when crashing is bad. But there are times when the cost of a runtime error is nominal and the value of flexible code is high.

Static typing trades one type of cognitive overhead for another. The Java program of 500 classes is its manifestation.


What is the cognitive overhead Haskell is introducing? Types? Because as far as I can tell we use types in all OOP, they're just implicit and not checked.


Static type checking, regardless of language, requires thinking about programs in a particular way because one possible mode of failure is prioritized over all other modes. It does so regardless of whether absolute type safety deserves to be prioritized given the purpose of a particular program and it does so regardless of whether absolute type safety is an appropriate concern at a particular stage of the program's development.

Static typing can make "how do I get this to compile?" a design criterion. Consider the year 2038 problem. In MySQL, various date types are coerced to the timestamp type by design; otherwise the program would not compile. Compilation takes precedence over problem solving.

http://dev.mysql.com/doc/refman/5.0/en/datetime.html


I've personally found that static typing is an aid to comprehension and thought. I spend more time fiddling with untyped code than typed code. I also disagree that static typing prioritizes a particular mode of failure—the notion of failing to typecheck is a rather general one.


There are two contexts in which one can think about data types. The first is choosing among or constructing data types as abstractions. The data type as metaphor is useful regardless of language. An important property of this context is that it's not just useful externally (an automobile class in a used-car-lot application) but internally (ports and pipes for I/O, threads and locks and semaphores for processes, and so on).

But the other context in which we select and choose and construct data types is because a language insists upon it. Here our choices are not based on how to best represent the world, but by how to package our metaphor into a pre-existing schema. The very first time we compile our code, we have been forced by the compiler to crystallize our code based on an early guess.

When a flat roofed building uses scuppers to provide emergency overflow drainage, it is good if water passing through them makes a mess of the plantings below and perhaps stains the facade. It indicates that the primary drains are clogged before the roof collapses. Likewise, runtime type errors might be preferable to zeros silently inserted into a database.

Static and dynamic typing each catch some types of errors at the expense of masking other types of errors at runtime.


I think runtime errors are a fine way of detecting such failings. I don't understand why typing is at odds with that.

I think types make us write out the why next to the what. That why might be a domain model justification, or something much more trivial. It's also completely possible to encode an untyped regime in a type system. You're always crystallizing your design; you can either provide good information to understand its failings and be more prepared to fix them, or not, and chase logic errors throughout an undocumented, dynamic system.


The holy grail of type theory is to write propositions as types and have proofs fall out of those propositions naturally. Those proofs comprise your program. So-called dynamic typing throws that all out the window and just says "I'll permit any program you want to write and bail out at the first opportunity". It's a proposition that is trivially true and thus it's not useful at all.
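A small taste of that idea in plain Haskell (a hand-rolled sketch; modern base ships a similar `Data.List.NonEmpty`): encode the proposition "this list has at least one element" in a type, and the head function needs no failure case at all.

```haskell
-- A list that is non-empty by construction: the proposition
-- "xs has at least one element" is witnessed by the value itself.
data NonEmpty a = a :| [a]

-- Because emptiness is ruled out statically, taking the head is
-- total: no runtime check, no Maybe, no possibility of crashing.
neHead :: NonEmpty a -> a
neHead (x :| _) = x

main :: IO ()
main = print (neHead (1 :| [2, 3]))  -- prints 1
```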


Who says it has to feel much at all like DTLs of today? I probably do want to be at least warned when the compiler can't be sure my bash is terminating.

And I'd love Bash embedded in a DT. Even if I never write the DT components being able to nicely map Bash -> DT is powerful.

Finally, I'd love Bash with sum types.


Pretty much every program humanity has produced is grossly defective. Certainly there's nothing I use heavily that hasn't failed. IMHO shipping more garbage more quickly is not a problem worth solving.


<sarcasm>That seems to be the principal use case for Javascript, and that is (arguably) the lingua franca of our generation.</sarcasm>


> It seems weird to think that there exists any language currently which will wind up being the one-true-language.

Odd, but not uncommon. Programmers are prone to falling in love with their tools. I've seen a lot of one-language-programmers proclaiming the advantages of their language choice over every other programming language. Luckily, this is not the case with this article. Still, I don't think there is One Language to Rule Them All.


i never understood the sentiment, that people are afraid that their tool of choice becomes "mainstream" popular

i think people argue a lot about which language is better, because programming languages need to be popular

they need to be popular, because this is the best guarantee they will have lots of good quality libraries

and a bad language with good libraries trumps a good language with no libraries

if haskell becomes "ruby" popular, and haskell i believe is actually one of the more popular functional languages, you will only benefit


I think people's negative reaction to "hipness" is just human nature, and while I completely agree it's a terrible reason to make decisions about things, I'm not certain it's able to be overcome completely.

I think people argue about which language is better for much the same reasons they argue about which mobile phone operating system is better, or which console, or which text editor. They've invested (in the case of programming languages, potentially many years) in something, and want to feel like they've invested wisely. I think that's also probably just human nature.

I disagree with the notion that programming languages need to be popular, at least, as popular as they needed to be in the past.

20 years ago it was important for languages to be popular because that was the only way they'd reach enough critical mass to be discoverable. Growing up, there was only one book store within driving distance of my house that had any programming books. That meant if you wanted to learn programming, you were limited to what had been "blessed" by industry (in my case, one book on C++ that was actually just a syntax reference). That seems to no longer be true (the internet has made discovery of programming languages easier in much the same way it's made the discovery of everything easier).

There is certainly a benefit to having critical mass, but that mass seems to be much smaller than has ever been required in the past. I've seen Haskell developers on several occasions talk about the fact that Haskell's popularity has been in this nice "happy medium" that provides enough eyeballs to flush out issues and provide feedback, but also allows them to not worry about breaking things.

A language getting popular is not all benefits with no drawbacks. It gets harder to make breaking changes to the language (Javascript can be considered a good cautionary tale about what can happen when a language becomes so popular that you are held hostage by backwards compatibility). And you can't read a single article on here about some less popular language that doesn't prompt at least one comment about how because the language isn't popular, the quality of developers using it is higher (which is always posited as a plus for hiring developers in that language). Paul Graham mentioned it regarding Python explicitly in one of his essays.

More people being part of a community means more everything. More of what's good (libraries, eyes looking at bugs, ideas on solving problems) and more of what's bad (bike-shedding, shitty libraries, conflicting programming idioms).

I have no idea if there's some magical sweet spot, where you get all the benefits of a language being popular without any of the detriments; but I suspect it doesn't exist. As it stands, yes, I'd like there to be more Haskell libraries. I'd like there to be more blog posts about using Haskell for things that aren't abstract math. But I'm not necessarily itching for Haskell to become much more popular.

We're already starting to see some of the industry people that use Haskell start to explicitly target evangelism (like the work FP Complete is doing). I think it's great that they want other people to use this thing that they like, but I am worried anytime a community switches from "stealth" mode to "evangelism" mode. Incentives around addressing problems start to become perverse when you have to worry about messaging, and when you're trying to convert other people. It seems like it eventually devolves into cargo-culting.


well, most open source projects have a small set of core developers, a larger set of testers (users who discover bugs and report them properly), and a larger set of passive users

being more popular will only slightly enlarge the core, and keep it healthy (if a member drops, someone else comes in)

popularity is good for OSS projects ... i also disagree that a larger community will increase the bad; again, the core developer group will probably remain small enough to work coherently

more testers cannot be bad, more libraries can never be bad; it will just increase the likelihood of good ones happening

another reason languages need popularity is that it really takes a lot of time and effort to master one. if you spend years mastering a tool, you sure want it to stay popular for more years ... to get a chance to use this knowledge


More users means more people who feel breaking changes. If you have one user, and someone demands a change that might be breaking, there's a pretty good chance everyone who feels that is willing to go to the effort of fixing up any breakage. If you have ten million users and someone requests a change that might be breaking, you've got to treat it much more carefully.


I fail to see the concrete examples you came up with to back your claims up. Or...were there none?


Lisp can do that stuff. Point two depends on the implementation, but point one is standardized.


Why controversial blog post titles get clicked and why...


The impression I have from reading the comments seems to indicate that many believe Common Lisp does not have types!

With CLOS (and :before methods!), and optional type declarations, and the awesome implementations which do type inferencing, you tend to catch far more bugs compared to say, something like Python.

Yes, dear brethren, Common Lisp code (with SBCL) will not compile if the ftype conflicts with the inferred types of the arguments.

Edit: Besides, the author does not seem to have the same opinion about Lisp vs Haskell today.


It really is amazing how many errors SBCL catches during compilation. At first it was annoying me coming from a more lax CLISP, but then I found that it was always my mistake that resulted in them.


How is something which needs a >700 MB runtime doomed to succeed? I think even Java is smaller than that. Not sure though.



700 MB runtime implies every compiled binary is 700+ MB, which is obviously untrue. You probably mean the compiler and standard libraries.


700MB? Considering my SSD is 400GB, why would I care?


Because some devices don't have 400GB of SSD?



