
Programming Language Performance Redux

January 1, 2013 - General Computing, Programming

This is not another entry in the series regarding anagrams, but a much simpler synopsis using a very basic algorithm. The performance of programming languages and technologies is cited by a lot of people as being “slow” or “fast” or any number of terms of the sort. Recently, I’ve been told that Flash/ActionScript is “slow”. Now, I am not doing this to prove that wrong (in fact, I think the results may prove it to be the case), but rather to show something that has been repeated several times by others.

First, the algorithm. To keep things simple, the algorithm itself is trivial: generate a random number a million times and put each one into an array or list.

The ActionScript version. This is ActionScript 2.0 since I only have Flash 8 installed:

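A minimal sketch of that kind of test in ActionScript 2.0, assuming getTimer() for the timing and Array.push() to build the list (the exact listing may have differed):

```actionscript
var arr:Array = new Array();
var start:Number = getTimer(); // milliseconds since the player started

for (var i:Number = 0; i < 1000000; i++) {
    // generate a random number and add it to the end of the array
    arr.push(Math.random());
}

trace("Elapsed time was " + (getTimer() - start) + " milliseconds");
```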
Elapsed time was 7586 milliseconds (about 7.6 seconds).

Now, on to another language: let’s go with Python for the moment.

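Roughly, something like this, where the list is built by concatenation (timing with time.time() is an assumption):

```python
import random
import time

start = time.time()
arr = []
for i in range(1000000):
    # Builds a brand-new list on every iteration and rebinds arr to it
    arr = arr + [random.random()]
print("Elapsed time was %d milliseconds" % ((time.time() - start) * 1000))
```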
This doesn’t even finish in 10 minutes. Clearly we’ve proven that Python is really slow, correct? Not really, no:

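A sketch of the corrected version, using the list’s append() method instead:

```python
import random
import time

start = time.time()
arr = []
for i in range(1000000):
    # Appends to the end of the same list; no new list is allocated
    arr.append(random.random())
print("Elapsed time was %d milliseconds" % ((time.time() - start) * 1000))
```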
The reason the previous version was slow comes down to how Python handles that expression. The line arr = arr + [random.random()] allocates a brand-new list on every iteration and rebinds arr to it; the old list is left for garbage collection. The big problem is that, as the loop goes on, those allocations get larger and larger. The version above uses the list’s append() method instead, which adds the element to the end of the same list rather than creating a new one each time. The resulting execution time? 340 milliseconds or so.

OK, let’s move on to C:

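A minimal sketch of the C version, timing with GetTickCount() as described below; rand() as the random-number source and the exact output format are assumptions:

```c
#include <stdio.h>
#include <stdlib.h>
#include <windows.h>

int main(void)
{
    /* allocate the array on the heap rather than the stack */
    int *arr = (int *)malloc(1000000 * sizeof(int));
    DWORD start = GetTickCount();
    int i;

    for (i = 0; i < 1000000; i++) {
        /* rand() stands in for whatever random source was actually used */
        arr[i] = rand();
    }

    printf("Elapsed time was %lu milliseconds\n",
           (unsigned long)(GetTickCount() - start));
    free(arr);
    return 0;
}
```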
This one is a bit messy: I couldn’t get the clock() approach working (it always gave me a negative value), so I just included windows.h and used GetTickCount() instead. An interesting side note is that I originally declared the array of ints as a fixed-size local, as in something roughly like:

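```c
/* a million-int local array; as a local, this lives on the stack */
int arr[1000000];
```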
Hilariously, though, that failed catastrophically, causing a stack-overflow exception in the run-time itself as soon as the program started. I had to switch to dynamic allocation as shown above. The error makes sense, since locals are normally stored on the stack; allocating with malloc() puts the array on the heap instead. Unsurprisingly, this implementation runs in 34 milliseconds.

What do we conclude? Well, first, it’s important to realize that C is simply less maintainable than most other languages; this means that the “faster” implementation is going to take quite a bit longer to write, and it’s going to be more effort to maintain and to add features to. Additionally, because it leaves far more responsibility in the hands of the programmer, bugs are more likely to creep in. Various controls, such as code reviews and coding standards, can mitigate these concerns, but they cannot eliminate them completely. My point is that using C over a high-level language is not a magic bullet for a faster application: it requires more programmer time to write and maintain the code in the first place, and it is more susceptible to bugs through that maintenance.

In this instance, C is about ten times faster than the corrected Python version and over two hundred times faster than the ActionScript one. But the only important question is whether that speed difference actually matters. The Python implementation could likely be made even faster, too (I didn’t fully explore that), either through code changes or through the use of another Python interpreter or compiler technology. Basically, only when speed is actually an issue, can be traced directly to the chosen language platform, and cannot be mitigated any other way does it make sense to choose a language like C, or even C++.
