
Programming Language Performance Part XIII: Haskell

January 27, 2013 - Programming

This is part of an occasionally updated series on various programming languages. It should not be interpreted as a benchmark, but rather as a casual look at various programming languages and how they might be used by somebody for a practical purpose.
Currently, there are articles written regarding Python, C#, Java and VB6 (merged for some reason), Scala, F# & Ruby, Perl, Delphi, PHP, C++, Haskell, D, VB.NET, QuickBASIC, and Rust.

Haskell is one of those languages that, to paraphrase Rodney Dangerfield, can’t get any respect. I mean, sure, it gets a lot of use in academia and by some people, but by and large it is ignored in favour of other languages such as C++, Java, and C#.

One of the reasons Haskell has such difficulty finding use as a mainstream language is that it leans heavily on the functional programming paradigm. We saw a bit of this in the F# example. Haskell, however, abandons almost all imperative stepping-stones; while F# has various imperative constructs and is essentially a multi-paradigm language as a result, Haskell provides very little that could be used to fake imperative styles, and what it does have is tedious enough to discourage its use.

Like the other examples, this one implements an algorithm to find all the anagrams in a dictionary list of words. The Haskell implementation:
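(The listing below is a rough sketch of the approach rather than a definitive implementation; in particular, the dict.txt filename is just a placeholder for whatever word list you feed it, and the variable names are mine.)

    import qualified Data.Map as Map
    import Data.List (sort)

    main :: IO ()
    main = do
      contents <- readFile "dict.txt"   -- placeholder path to the word list
      let ws = lines contents
          -- key: the word's letters in sorted order; value: the words sharing that key
          grouped  = Map.fromListWith (++) [ (sort w, [w]) | w <- ws ]
          -- keep only the keys with more than one word, i.e. the actual anagram groups
          anagrams = Map.filter ((> 1) . length) grouped
      mapM_ (putStrLn . unwords) (Map.elems anagrams)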

(I’ll be surprised if the above is syntax highlighted through the plugin I use…)

This is the part where I have to admit ignorance. I did write this, but at this point I have absolutely no clue how it works. This is arguably because I tend to think in terms of imperative programming styles rather than functional ones, so some of this is just gobbledygook to me. That is particularly unhelpful given that I’m supposed to be explaining how it works. Most of the work is done in the last few lines, which sort each entry and place it in a Map appropriately:
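(Roughly speaking, the heavy lifting comes down to a single grouping line along these lines, as in the sketch above; again, the names are mine rather than necessarily the original’s.)

    grouped = Map.fromListWith (++) [ (sort w, [w]) | w <- ws ]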

This line takes each element of the word list and builds a map from it; each word in the sequence is used as x, producing a map entry whose key is the sorted equivalent of x (the word) and whose value is a list containing the word. To my understanding, elements that map to the same key get appended onto that list, but if not, well, I guess it’s being done elsewhere. Most likely the work to properly “condense” everything is done in the following line, which filters the aforementioned listing.
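For what it’s worth, that appending behaviour does seem to be exactly what Data.Map’s fromListWith does when two pairs share a key; a quick GHCi session along these lines (the words here are just made-up examples) shows the lists being merged and then filtered down to the real anagram groups:

    ghci> import qualified Data.Map as Map
    ghci> import Data.List (sort)
    ghci> Map.fromListWith (++) [ (sort w, [w]) | w <- ["eat", "tea", "dog"] ]
    fromList [("aet",["tea","eat"]),("dgo",["dog"])]
    ghci> Map.filter ((> 1) . length) it
    fromList [("aet",["tea","eat"])]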

One of the other things Haskell gets some flak for is speed; that is, a lack of it. This implementation executes in 3.1 seconds, which is about equal to the Perl implementation. So some of the speed concerns might be justified, since Haskell is in fact compiled, not interpreted, and merely being comparable to an interpreted language is a bit of a worry. On the other hand, it’s still quite a bit faster than a few of the other implementations, and really, I don’t know Haskell, so an “expert” Haskell programmer could almost certainly come up with a blazing-fast implementation. The language itself is terse and very expressive; the code that actually runs here is shorter than the current shortest implementation, Ruby. It’s worth noting that the shortest implementations are all very expressive languages, which is unlikely to be a coincidence.
