This is part of an occasionally updated series on various programming languages. It should not be interpreted as a benchmark, but rather as a casual look at various programming languages and how they might be used for a practical purpose.
Currently, there are articles written regarding Python, C#, Java and VB6 (merged for some reason), Scala, F# & Ruby, Perl, Delphi, PHP, C++, Haskell, D, VB.NET, QuickBASIC, and Rust.
In order to compare various languages, I have been implementing an anagram search program in a variety of languages and testing and analyzing the resulting code. This entry provides some conclusions on what I discovered.
Returning to my previous attempts, I looked to my C# implementation. Second fastest so far, but can it be made faster? Probably. My attempt to do so was to upgrade it to C# 5.0 and use the Task Parallel Library to get some measure of parallel execution in the core loop. I had to essentially put a lock around the Dictionary modification code, meaning that, fundamentally, the only thing that happened in parallel was the sorting of the words, which is still better than nothing. I managed to get it under half a second using this approach. This is the 'new' loop that replaced the foreach:
```csharp
Parallel.ForEach(readlines(readfile), new ParallelOptions(), (line, pls, tl) =>
//foreach (String line in readlines(readfile))
{
    String sorted = sortchars(line);
    lock (anagrams)
    {
        if (!anagrams.ContainsKey(sorted))
        {
            anagrams.Add(sorted, new List<string>());
        }
        anagrams[sorted].Add(line);
    }
});
```
I only changed the foreach loop (left commented out above) and replaced it with a Parallel.ForEach call. Most of this code is just adding items to a Dictionary, which isn't easily parallelizable due to race conditions. The best alternative I could think of was to use ConcurrentDictionary and ConcurrentBag in place of Dictionary and List, respectively, but this made performance drop significantly, taking several dozen seconds.
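For the curious, a rough, self-contained sketch of that lock-free variant looks something like the following; the word list path and the SortChars helper here are placeholders rather than the helpers from my actual implementation:

```csharp
// Rough sketch of the ConcurrentDictionary/ConcurrentBag variant described above.
// "words.txt" and SortChars are placeholders, not the real program's helpers.
using System;
using System.Collections.Concurrent;
using System.IO;
using System.Linq;
using System.Threading.Tasks;

class ConcurrentAnagrams
{
    static string SortChars(string word) =>
        new string(word.OrderBy(c => c).ToArray());

    static void Main()
    {
        var anagrams = new ConcurrentDictionary<string, ConcurrentBag<string>>();

        Parallel.ForEach(File.ReadLines("words.txt"), line =>
        {
            string sorted = SortChars(line);
            // GetOrAdd is atomic per key, so no explicit lock is needed here.
            anagrams.GetOrAdd(sorted, _ => new ConcurrentBag<string>()).Add(line);
        });

        Console.WriteLine("Distinct letter sets: " + anagrams.Count);
    }
}
```

No explicit locking, but in my testing the synchronization overhead inside the concurrent collections more than ate up whatever was gained.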
Performance results
Here is a listing of the languages, with the execution time and length of each solution (the two measures I used to create a rough speed index).
Language | Execution Time (ms) | Length (characters) |
---|---|---|
Python | 1,200 | 399 |
C# | 700 | 1,333 |
Java | 1,050 | 1,471 |
VB6 | 9,000 | 2,202 |
Scala | 1,400 | 1,055 |
F# | 500 | 899 |
Ruby | 10,000 | 320 |
Perl | 3,100 | 394 |
Delphi | 1,000 | 1,848 |
PHP | 114,000 | 1,071 |
C++ | 400 | 984 |
D | 7,520 | 173 |
VB.NET | 610 | 1,436 |
Edit: After researching some information on the Java VM, I reran this test using the Server profile (java -server anagram). This sped it up from 1.8 seconds to just a smidgen over 1 second.
What can we conclude from this table? Well, nothing, really. Like most statistical analyses of programming language performance, it's almost meaningless. A high-level view, however, tells us there are two kinds of languages: those that require the programmer to do a lot of work, and those that don't. Because many high-level constructs are built into the latter kind, those constructs get used for all sorts of purposes, and as a result it's difficult for a single implementation to be optimized for every case. In the former kind of language, however, you often implement such structures yourself (and can therefore decide on the best design based on how you intend to use the data structure), or, if you are lucky, you get to choose from a set of generic implementations. For example, the .NET Framework has several classes built on the basic collection interfaces (LinkedList, Queue, Stack, etc.) and a number of other implementations that use various algorithms.
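As a quick illustration (this snippet is not from any of the anagram implementations), several of those .NET collection classes can all be consumed through the same IEnumerable<T> interface, even though each has very different performance characteristics underneath:

```csharp
// Illustration only: different .NET collections behind a common interface.
using System;
using System.Collections.Generic;

class CollectionChoices
{
    // Works with any collection that exposes IEnumerable<string>.
    static void PrintAll(IEnumerable<string> items)
    {
        foreach (string item in items)
            Console.WriteLine(item);
    }

    static void Main()
    {
        var list   = new List<string> { "cat", "act" }; // array-backed, fast random access
        var linked = new LinkedList<string>(list);      // cheap insert/remove at known nodes
        var queue  = new Queue<string>(list);           // first-in, first-out
        var stack  = new Stack<string>(list);           // last-in, first-out

        PrintAll(list);
        PrintAll(linked);
        PrintAll(queue);
        PrintAll(stack);
    }
}
```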
An omission one could note in the .NET Framework is that it lacks a built-in, generic Tree structure. However, this is not really an omission so much as a design choice. While a list interface and its assorted implementations have broadly generic uses, a general-purpose generic tree is mostly of academic value: when creating a data structure, sometimes you don't even fully realize that the structure you are creating has tree-like properties, and even when you do, the various parts of that tree tend to require extra information.
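As a made-up example of what I mean, the moment you actually need a tree, the node type tends to grow fields specific to your problem, which a generic Tree<T> class couldn't usefully anticipate:

```csharp
// Hypothetical example: a tree node for a directory listing. Name and SizeBytes
// are specific to this problem, not something a generic tree class would provide.
using System;
using System.Collections.Generic;

class DirectoryNode
{
    public string Name;
    public long SizeBytes;
    public List<DirectoryNode> Children = new List<DirectoryNode>();
}

class TreeExample
{
    static void Main()
    {
        var root = new DirectoryNode { Name = "/", SizeBytes = 0 };
        root.Children.Add(new DirectoryNode { Name = "home", SizeBytes = 4096 });
        Console.WriteLine(root.Children.Count);
    }
}
```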
Some make the argument that programming languages that do things for the programmer are "for beginners"; for example, C# and, by extension, the .NET Framework provide a myriad of data structures, such as linked lists, lists, dictionaries (hash maps), queues, and so on. Some argue that these are basic concepts a programmer should know how to write. I agree 100%. However, what I don't think we need is for every programmer to have their own personal implementation of various data structures; while usage over time will iron out problems, those are exactly the sorts of things that ought to be in the standard library. Most popular programming languages have these or similar constructs available via their built-in libraries. Of course, this does not prevent you from writing your own implementations, and could even be seen to help, since you can compare your own implementation to the well-tested library code.
I suppose what it boils down to is an argument of speed of development versus speed of execution; languages like Ruby offer very quick development, easy reading, and concise syntax to do what you need. However, one pays for this with a higher execution time compared to other languages. Personally, I err on the side of programming languages being geared towards speed of development; it doesn't really matter how fast your program runs if it never exists. Additionally, computers can always be made faster, as can language implementations: the concise Ruby version I provided might run faster in the future because of enhancements to the interpreter, as well as simply being run on a faster computer. However, adding the language features I exploit there to other languages won't magically make my implementations in those languages faster, because the code would have to be revised, or even rethought.
Add to this that most people who cite evidence against their least favourite language could very well simply not be very good at it; for all I know, my Python implementation is slower than my C# version simply because I suck at Python. However, the fact is that most people who state "I tested several languages…" and then draw a conclusion that coincidentally matches their original stance are oddly tight-lipped when it comes to actually sharing the code they used to test.
This write-up is not designed to rank the relative performance of different languages, but rather to point out something that should have been obvious from the beginning: all programming languages are different, and they differ in expressive power as well as execution speed, but the latter is a less important concern than the former, all things considered.
One thing that comes up often when you have programmers from different language and platform backgrounds is suggestions that come from those backgrounds. This is a mixed bag: a person familiar with functional programming languages may give you new insight into the problem you are working on, or somebody may bring, from those other platforms and systems, a way of doing things that you can apply to your language or technology. On the other hand, you also sometimes see suggestions or criticisms based on how the person understands their own language, rather than the one actually being used. A good example I've witnessed first-hand (as the recipient of the critique) was when I wrote a small Python program that used dictionaries to make a rather short and concise Rock/Paper/Scissors game. The criticism was that I was using "Rock", "Paper", and "Scissors" as string keys into the Python dictionary, and that if I used numeric constants instead, it would be faster. However, this was not the case: as I demonstrated shortly afterwards, the changed code was in fact the same speed but much harder to read. The critic was assuming that strings would be slower, but Python dictionaries are heavily optimized for string keys, and, thanks to string interning, the key comparison is often just a pointer comparison, so the two versions performed identically.
I guess my overall point is that a language itself is never going to make a big enough performance difference over another to make it worth switching to, or even choosing, a language purely on that basis. Different paradigms and idioms aside, programming languages ought to be judged on their merits as a translation layer between humans and the machine. Additionally, if the speed of a language (C or C++ being the most common example) really matters that much compared to the myriad other languages, then surely hand-tuned assembly code matters even more. Compared to Python, Ruby, Java, or C#, C and C++ typically require a lot more design, planning, experience, and development and maintenance time; the counter-argument is always that C and C++ require that because they are low-level (they aren't; they are high-level languages) and are faster than the other languages. However, that same trade-off exists between C/C++ and hand-tuned assembly, which makes the argument more or less silly. The counter to this is that assembly is a lot harder than C/C++, but that just reinforces my point: writing proper C/C++ code for relatively simple tasks can be exponentially harder than doing so in Python, Ruby, Java, or C#, and the argument completely ignores the additional development and maintenance time that using C/C++ requires.