With the runaway success of Visual Basic 1.0, it made sense for future versions to improve incrementally on the existing product, and that is how we ended up with Visual Basic 2.0. In some ways, Visual Basic 2.0 transformed Visual Basic from a backwater project like Microsoft Multiplan into something that would become a development focus for years to come, like Microsoft Excel.
Visual Basic 2.0 was quite a leap forward; it improved language features, the IDE, and numerous other things. One of the biggest changes was the introduction of the "Variant" data type. The Visual Basic 2.0 Programmer's Guide includes a full list of new features; I'll touch on the most interesting ones here.
Variants are an interesting topic because, while they originally appeared in Visual Basic 2.0, they would eventually seep into the OLE libraries. As we move through the Visual Basic versions, we will see a rather distinct change in the language, as well as in the IDE software itself, reflecting the shifting underpinnings of the feature set. Another of the changes in Visual Basic 2.0 was "implicit" declaration: variables that hadn't been referred to previously would be automatically declared. This had good and bad points, of course, since a misspelling could suddenly become extremely difficult to track down. It also added the ability to specify "Option Explicit" at the top of modules and forms, which required the use of explicit declarations. Visual Basic 1.0 also allowed for implicit declarations, but you needed to use some ancient BASIC incantations (DefInt, DefLng, DefSng, DefDbl, and DefCur) to set default data types for a range of variable names. It was confusing and weird, to say the least.
256-colour modes use a single byte to index each colour, and palette entries are stored separately; the index then becomes a lookup into that table. This had some interesting effects: applications could swap the colours in their palette around and create animations without really doing anything; the video adapter would simply change the mappings itself. This was a very useful feature for DOS applications and games.
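The mechanism is easy to model in a few lines of Python (a toy sketch, nothing to do with real video hardware): pixels store only palette indices, so rotating the palette entries "animates" the image without a single write to the pixel data.

```python
# 8-bit image data: each pixel is an index into the palette, not a colour.
palette = [(0, 0, 0), (255, 0, 0), (0, 255, 0), (0, 0, 255)]
pixels = [1, 2, 3, 1, 2, 3]  # the raw image data never changes


def render(pixels, palette):
    """Resolve each index through the palette, like the video adapter does."""
    return [palette[i] for i in pixels]


frame1 = render(pixels, palette)
# "Palette animation": rotate the non-background entries. Every pixel that
# uses those indices changes on screen, with no writes to the image itself.
palette[1:] = palette[2:] + palette[1:2]
frame2 = render(pixels, palette)
```

The same lookup is why realizing a different palette can instantly make an entire 8-bit image look wrong, which matters in the next section.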
Windows, however, complicated things. Because Windows could run and display the images from several applications simultaneously, 256-colour support was something of a tricky subject. Windows itself reserved its own special colours for things like the various element colours, but aside from that, 8-bit colour modes depended on realized palettes. What this means is that applications would tell Windows what colours they wanted, and Windows would do what it could to accommodate them. The foreground application naturally took precedence; in general, when an application that supported 8-bit colour got the focus, it would say, "OK, cool… realize my palette now so this owl doesn't look like somebody vomited on an oil painting". With Visual Basic 1.0, this feature was not available for numerous reasons, the most reasonable being that it just wasn't very important, paired with the fact that VB was designed primarily as "front end" glue for other pieces of code. Visual Basic 2.0, however, adds 256-colour support, along with quite a few properties and methods to work with it. VB itself manages the palette-relevant Windows messages, which was one of the reasons VB 1.0 couldn't even be forced to support it.
On a personal note, Visual Basic 2.0 is dear to me (well, as dear as a software product can be), since it was the first programming language I learned and became competent with, to the point where I realized that I might have a future in software development. Arguably, that future is now, though it hasn't actually become sustainable (I may have to relocate). But more important than that is the fact that I was given a full, legal copy of the software. This in itself isn't exceptional; what is exceptional is that it came with all the manuals:
Dog-eared, ragged, and well-used, these books, primarily the Programmer's Guide and Language Reference, became the subject of meticulous study for me. From VB2 I jumped directly to VB6. This series of articles, however, will not. Next up: Visual Basic 3. It looks the same, but does it vary more than it appears?
XNA Game Studio is a finicky bugger, and it’s even more finicky now that it is no longer being maintained. One of the biggest issues is that it doesn’t work properly with the latest version of Visual Studio, 2012. However, you can cajole it and reassure it that things will be OK.
One way to do so is to essentially "translate" an existing, working install, and "hack" its configuration so that it works in Visual Studio 2012. In order to get the appropriate file setup, you will have to install XNA Game Studio 4 into Visual Studio 2010.
With Visual Studio 2010 and XNA Game Studio happily installed, you can force the XNA 4 installation to get along with Visual Studio 2012. XNA Game Studio installs by default in
%ProgramFiles(x86)%\Microsoft Visual Studio 10.0\Common7\IDE\Extensions\Microsoft\XNA Game Studio 4.0
The first step in whipping it into shape is to simply copy that folder to the appropriate target location in the Visual Studio 2012 installation. This is located, by default, at:
%ProgramFiles(x86)%\Microsoft Visual Studio 11.0\Common7\IDE\Extensions\Microsoft\XNA Game Studio 4.0
Not done yet, though: it's copied over, and VS2012 will try to load it, but it will look inside the plugin manifest, see that it doesn't claim to work with VS2012, and then explode (well, it doesn't explode, it just doesn't load the extension, but that's hardly very exciting). To fix that, you will need to open and edit the extension.vsixmanifest file within the XNA Game Studio folder you copied (the new one, that is). You'll notice this:
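The relevant fragment of extension.vsixmanifest looks something like this (paraphrased from memory; the exact elements and editions listed may differ in your copy):

```xml
<SupportedProducts>
  <VisualStudio Version="10.0">
    <Edition>Pro</Edition>
  </VisualStudio>
</SupportedProducts>
```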
What you can do should be pretty clear now; just flooble1 the Version="10.0" entry to point to version 11.0. Save the file, open Visual Studio 2012, and BAM! Game Studio projects should now be available via the New Project dialog templates.
Sometimes, perhaps, you've been badmouthing Visual Studio behind its back; maybe you complained under your breath about how slow Visual Studio 2012 is when you went to deal with IntelliSense. Maybe you told Visual Studio 2012 where it can shove it when it provided you with a helpful popup tip. Visual Studio 2012 has feelings and, like a cat, never forgets its enemies.
On the other hand, you might just need to delete the extension cache files. These can be found in "%LocalAppData%\Microsoft\VisualStudio\11.0\Extensions"; specifically, the "extensionSdks.en-US.cache" and "extensions.en-US.cache" files in that folder. Delete them with impunity and regret nothing. Visual Studio 2012 has forced your hand. Innocent files must die so that Game Studio 4.0 may live.
1 flooble is a word I just invented now. Patent Pending!
C#'s LINQ (Language-Integrated Query) features are some of the more powerful features available through the language. Aside from its query operators, which are, as the name would suggest, integrated into the language, the feature also brings along a large set of extension methods for the IEnumerable<T> interface.
One interesting re-use of the IEnumerable<T> extension methods is shown here:
What we have here is similar to, but not quite the same as, a switch statement within a loop. What makes this particularly interesting as a structure is that it acts sort of like a sieve. The source to the EachCase() extension method is shown here:
The function works by calling the passed Action for all entries in the enumeration that cause TestExpression to return true; those that return false are yielded to the caller. In the original code, the EachCase() calls were chained one after the other; each one operated on the entries that the previous one didn't handle. The final .Count() call in the original code forces the chain to be enumerated, allowing all the code blocks to evaluate appropriately. What use cases might this have? I have no idea. However, I feel confident in saying that it could make for a more readable alternative to something that might otherwise require a separate finite-state machine.
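The same sieve idea can be sketched with Python generators (an analogy of my own, not a translation of the original C# source): each stage consumes the items its predicate matches and lazily yields the rest onward, and collecting the final stage forces the whole chain to run, much like the trailing .Count() call.

```python
def each_case(source, test, action):
    """Apply action to items matching test; lazily yield the rest onward."""
    for item in source:
        if test(item):
            action(item)
        else:
            yield item


evens, threes = [], []
# Chain two stages: the second only ever sees what the first rejected.
stage = each_case(range(10), lambda n: n % 2 == 0, evens.append)
stage = each_case(stage, lambda n: n % 3 == 0, threes.append)
leftovers = list(stage)  # forces enumeration, like the final .Count()
```

Because each stage only sees what the previous stage rejected, the chain behaves like a sieve rather than a series of independent filters.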
The D programming language is designed as an extension of C and C++ into the domain of higher-level programming languages. It is not 100% source compatible with C++ or C (in fact, it's very much not source compatible), but it provides the same system-level constructs and tinkering to the programmer while, in general, defaulting to safety. Whereas Java takes the approach of "If it's not safe, it's not worth it", and C# takes the approach of "Well, if you really are willing to jump through these hoops, it must be worth it", D takes the approach of "You can do it, I just hope you know what you're doing, because there ain't any training wheels".
Working with D is actually quite easy once you get the hang of its basic design strategy. Arguably, this is true of pretty much any programming language. The D implementation of the anagrams program is one of the shortest thus far:
Well, it's not as short as F#… and certainly not shorter than Ruby, but a lot of the conciseness we see here is thanks to conceptual shortcuts facilitated by the language design. Note the declaration of the anagrams associative array. In C#, we used this:
In D, we use this:
So the question is: what does this mean? Well, D associative arrays work rather simply:
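From memory, the declaration puts the value type first and the key type in the brackets, so the declaration used in this program would look something like this (a reconstruction, since the original snippet was lost):

```d
// General form: ValueType[KeyType] name;
string[][string] anagrams;  // dynamic arrays of strings, keyed by a string
```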
Therefore, in the above instance, the value type is string[] and the key type is itself another string. The result is that we have string arrays indexed by another string, which is the favoured approach I've been using for each anagrams implementation. Another shortcut we are able to take:
This is thanks to another interesting design decision of D, which adds a lot of first-class support for lists and enumerations. This includes operators such as the tilde (~), which has the effect of appending items to the end of an array. In this case, we cast gotline to a string and concatenate it to the anagrams array, storing the result in the same location. I'm not 100% sure of the performance implications of this usage, or how it could be done faster, but this was a trade-off that, in my opinion, favours readability. I could arguably find some faster way to do it, but this is one of the highlights of programming: you need to know when your software is fast enough. It makes sense to spend a few extra minutes making a program faster if you plan to use that software frequently, but otherwise you might get a better benefit out of making it more readable, which in and of itself can help future maintenance. It all depends on whether you can justify the extra man-hours invested in something as saving more man-hours down the road.
The implementation shown above averages around 7 seconds of execution time; this is when compiling with the -O flag, which optimizes the output. Given how much slower that is than some of the other implementations, I have to admit I am a bit disappointed. I did a bit of research and found that there is an Appender type in D's std.array module that I might be able to use to eke out a bit of speed; but the default array allocation is rather efficient as well, from what I've read. In this case, however, the problem is that the program is creating a lot of small arrays and only appending to them a few times, so the reallocation strategy doesn't work as well.
The following consists of my opinion and does not constitute the passing on of an official statement from Microsoft. All thoughts and logic are purely my own, and I do not have any more 'insider' information on this particular topic than anybody else.
I've been hearing a bit of noise from the community about Microsoft's XNA Framework, a programming library and suite of applications designed to ease the creation of games, being cut. A Google search reveals a lot of information, but a lot of it is just plain old rumours. The only piece I could find that was based on actual information still makes a lot of assumptions. It is based on this e-mail:
Our goal is to provide you the best experience during your award year and when engaging with our product groups. The purpose of the communication is to share information regarding the retirement of XNA/DirectX as a Technical Expertise.
The XNA/DirectX expertise was created to recognize community leaders who focused on XNA Game Studio and/or DirectX development. Presently the XNA Game Studio is not in active development and DirectX is no longer evolving as a technology. Given the status within each technology, further value and engagement cannot be offered to the MVP community. As a result, effective April 1, 2014 XNA/DirectX will be fully retired from the MVP Award Program.
Because we continue to value the high level of technical contributions you continue to make to your technical community, we want to work with you to try to find a more alternate expertise area. You may remain in this award expertise until your award end date or request to change your expertise to the most appropriate alternative providing current contributions match to the desired expertise criteria. Please let me know what other products or technologies you feel your contributions align to and I will review those contributions for consideration in that new expertise area prior to the XNA/DirectX retirement date.
Please note: If an expertise change is made prior to your award end date, review for renewal of the MVP Award will be based on contributions in your new expertise.
Please contact me if you have any questions regarding this change.
This is an e-mail that was sent out, presumably, to XNA/DirectX MVPs. I say presumably because for all we know it was made up to create a news story. If it was sent out, I never received it, so I assume it went to those that received an MVP Award with that expertise. It might have been posted to an XNA newsgroup as well. Anyway, the article that held this e-mail up as "proof" that MS was abandoning XNA seemed to miss the ever-important point that it actually says nothing about XNA itself; it refers to the dropping of XNA/DirectX as a technical expertise. What this means is that there will no longer be MVP Awards given for XNA/DirectX development. It says nothing beyond that. Now, it could mean they plan to phase the technology out entirely, but to come to that conclusion based on this is a bit premature, because most such expertise drops have actually involved a merge. In many ways, an XNA/DirectX expertise is a bit redundant: XNA development happens in a .NET language such as VB.NET or C#, so it might make sense to just clump those MVPs in with us lowly Visual C# and Visual Basic MVPs.
To make the assumption that XNA is being dropped based on this e-mail is premature. In my opinion, the choice was made for several reasons, and I suspect some of the confusion is the result of misconceptions about just what a Microsoft MVP is. First, as I mentioned before, the XNA/DirectX expertise largely overlaps with expertise in some other area: Visual C#, Visual Basic, Visual C++, and so on. So in some ways a separate XNA/DirectX expertise may have been considered redundant.

Another reason has to do with the purpose of an MVP. MVP Awards are given to recognize those who make exceptional community contributions in the communities that form around their expertise. For example, my own blog typically centers around C#, solving problems with C# and Visual Studio, and presents those code solutions and analyses to the greater community by way of the internet, alongside sharing my knowledge of C# and .NET in the forums in which I participate. MVP awardees don't generally receive much extra inside information, and what they do get is typically covered by an NDA. The purpose of the award is also to establish good community members through which Microsoft can provide information to the community. MVPs are encouraged to attend numerous events where they can, quite literally, talk directly to the developers of the products with which they are acquainted. In some ways you could consider MVPs "representatives" of the community, chosen because their contributions mean they likely have a good understanding of any prevalent problems with the technologies in question; interacting with MVPs can give the product teams insight into the community for which their product is created. Back to the particulars here, however: as the e-mail states, XNA Game Studio is not under active development.
Now, following that, it seems reasonable to work with the assumption that either the product has no product team, or those on that product team are currently occupied with other endeavours, or with other products for which their specific talents are required.
It's not so much that they are "pulling the plug on XNA"; the product is currently in stasis. As a direct result, it makes sense that without an active product team, having specific MVP awardees for that expertise isn't particularly useful for either side: MVPs gain from personal interactions with the appropriate Microsoft product team as well as fellow MVPs, and Microsoft gains from the aggregate "pulse of the community" that those MVPs can provide. Without a product team for an expertise, that expertise is redundant, because there is nobody to receive direct feedback. This doesn't mean the XNA community is going away; just that, for the moment, there is no reason for Microsoft to watch its pulse, because the product is "in stasis" while the OS and other concerns surrounding the technology metamorphose and stabilize (the Windows 8 UI style, the Windows Store, and other concerns in particular). Once the details and current problems with those technologies are sussed out, I feel they will most certainly look back and see how they can bring the wealth of game software written in XNA to the new platform. Even if that doesn't happen, XNA is still heavily used for Xbox development, which is also its own expertise.
I hope this helps clear up some of the confusion that has been surrounding XNA. It doesn’t exactly eliminate uncertainty- this could, in fact, be a precursor to cutting the technology altogether. But there is nothing to point to that being the direction, either.
Haskell is one of those languages that, to paraphrase Rodney Dangerfield, can't get any respect. I mean, sure, it gets a lot of use in academia and by some people, but by and large it is ignored in favour of other languages such as C++, Java, and C#.
One of the reasons Haskell has difficulty finding use as a mainstream language is that it leans heavily on the functional programming paradigm. We saw a bit of this in the F# example. Haskell, however, abandons almost all imperative stepping-stones; while F# has various imperative constructs and is essentially a multi-paradigm language as a result, Haskell provides very little that could be used to fake imperative styles, and what it does have is tedious enough to discourage its use.
Like the other examples, this example implements an algorithm to find all the anagrams in a dictionary list of words. The Haskell Implementation:
(I’ll be surprised if the above is syntax highlighted through the plugin I use…)
This is the part where I have to admit ignorance. I did write this, but at this point, I have absolutely no clue how it works. This is arguably because I tend to think in terms of imperative programming styles rather than functional, so some of this is just gobbledygook to me. That is particularly unhelpful given that I'm supposed to be explaining how it works. Most of the work is done in the last few lines, which sort each entry and place them in a Map appropriately:
This line will take each element of words and create a map; each sequence element of the group will be used as x in the delegate, and that creates the map element where the key is the sorted equivalent of x (the word) and the value is a list containing the word. To my understanding, elements that map to the same key will be appended to that list; if not, well, I guess it's being done elsewhere. Most likely the work to properly "condense" everything is done in the following line, which filters the aforementioned words listing.
One of the other things Haskell gets some flak for is speed, or rather the lack of it. This implementation executes in 3.1 seconds, which is about equal to the Perl implementation. So some of the speed concerns might be justified: Haskell is in fact compiled, not interpreted, so merely being comparable in speed to an interpreted language is a bit of a worry. On the other hand, it's still quite a bit faster than a few of the implementations, and, really, I don't know Haskell, so an "expert" Haskell programmer could almost certainly come up with a blazing fast implementation. The language itself is terse and very expressive; the code that actually runs here is shorter than the current shortest implementation, Ruby's. It's worth noting that the shortest implementations are all very expressive languages, which is unlikely to be a coincidence.
One common problem that comes up in the creation of stylized windows, such as splash screens, is the desire to have parts of the form be "shadowed", or in other words blend with the windows behind them. You can see this effect in action with other applications' splash screens, such as Photoshop's:
What witchcraft is this? How does Photoshop do it? One of the first things you might try with .NET is something like the TransparencyKey property of the form. Of course, this doesn't work; instead you get an unsightly ring of the transparency key around the image.
The solution, of course, is a bit more complicated.
Starting with Windows 2000, the Win32 API and window manager have supported the idea of "layered" windows. Before this, Windows had a concept known as "regions", which basically allowed you to define regions of your window where the background would show through. This didn't allow for per-pixel blending, but that sort of thing was not widely supported (or even conceived of, really) by most graphics adapters. The "layered" windowing API basically provided a way to allow per-pixel alpha. Which is what we want.
Windows Forms, however, does not expose this layered window API through its methods or properties, with the exception of the Opacity property. I don't recall if WPF does either, but I would be surprised if it exposed much more of the layered windowing API. Basically, we're on our own.
So what exactly do we need to do? Well, one thing we need to do is override the CreateParams property of the desired form so that it adds the WS_EX_LAYERED style to the window, letting the window manager know it has to do more work with this window. Since we would like a neatly decoupled implementation, we are going to create a class that derives from Form and use that as the base class for another form. This lets us override the CreateParams property within that base class, but it has a caveat as well: I've found it makes the designer a bit annoyed, and it complains and won't let you design the derived form. So you will have to deal with that particular point yourself. (I imagine there is a way to make it work, but I don't want to bloat the class with such functionality.) If necessary, the appropriate functions and the CreateParams override can be put directly into the classes that need this functionality instead.
Here is the PerPixelAlphaForm class. By the way, I'm going with Visual Basic .NET because there are already a million hits for this functionality in C#:
This provides the core implementation. The "Win32" class here provides the needed API declarations.
In order to use this, we need to make our form derive from it, and then use the inherited SetBitmap function:
(Here I just used an image from my second hard disk that had a transparency channel, but the Bitmap constructor parameter can easily be changed.) The result:
The project I used for this can be downloaded from here. It was created using Visual Studio Professional 2012, Update 1; other editions or earlier versions may have problems with the project or source files.
Though I do tend to try to write my samples and test applications, as well as any new programs, using Visual Studio 2012, I haven't yet migrated most of my other applications to it. BASeBlock and BCJobClock, for example, started off in Visual Studio 2008, and more recently I migrated them to Visual Studio 2010. BCJobClock had its project upgraded to Visual Studio 2012 (but for some reason the admin applet doesn't work on Windows 8, which certainly bears further investigation). Anyway, I've been strongly considering it the more I think about it. In particular, I've found Visual Studio 2010 to actually be a lot slower. Originally, I thought this was because my 2010 install had several add-ins installed, such as ReSharper, but now that I also have a version of ReSharper that works with Visual Studio 2012, I still find 2012 to be far more responsive.
Now, VS2012 has gotten some flak from certain users for adopting the Windows 8 UI style in its icons, as well as for using capitals in its menus. I really have no strong preference here at all, and I personally don't see what the fuss is about. I've heard people say that the capitals "hurt their eyes", but I can honestly say that I don't understand how that could be the case.
Anyway: I've been considering switching BASeBlock over to Visual Studio 2012. If I do, I won't be upgrading the project to use the 4.5 Framework, though. At least not yet.
One of the main tasks in creating a PHP implementation is getting an environment set up. If I were to implement the script and run it on my webhost, the results would no longer be properly comparable; the PHP has to be interpreted on my machine for the results to be something we can compare. PHP is designed more as a web-based language, as opposed to a local scripting environment. By this I mean it's prevalent, and almost universally used, on the server side, whereas other languages like Python, Perl, etc. find a lot of use client-side as well. Because I hadn't ever actually run PHP client-side, I had to do a little searching, but the search was relatively short. Since I plan to run these tests on my Windows machine, I needed the PHP interpreter for Windows. What I didn't need, however, was Apache; and while I have IIS installed and could probably run PHP that way, I am concerned that the web server components might add some overhead to the running of the script and generally make the timing "off" in comparison to the other languages I used. As I write this I am installing the file from here, which ought to give me the tools to write a PHP equivalent of the anagrams algorithm. This entry (evidently) describes the PHP entrant into that discussion.

A naive anagram search program will typically have dismal performance, because the "obvious" brute-force approach of comparing every word to every other word is a dismal one. The more accepted algorithm is to use a Dictionary or Hashmap, mapping lists of words to their sorted versions, so 'mead' and 'made' both sort to 'adem' and will both be present in a list in the Hashmap under that key. A pseudocode representation:
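Python serves well enough as pseudocode here; this is my own sketch of the shape just described, not any one of the series' actual implementations:

```python
from collections import defaultdict


def find_anagrams(words):
    """Group words by their sorted letters; groups of 2+ are anagrams."""
    anagrams = defaultdict(list)  # sorted word -> list of words
    for word in words:
        key = "".join(sorted(word))  # 'mead' and 'made' both give 'adem'
        anagrams[key].append(word)
    # Only keys with more than one word form an actual anagram group.
    return {k: v for k, v in anagrams.items() if len(v) > 1}


groups = find_anagrams(["mead", "made", "dame", "cat", "act", "dog"])
```

One pass over the dictionary and O(1) lookups per word are what make this approach so much faster than the brute-force comparison of every pair.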
The basic idea is to have a program that isn’t too simple, but also not something too complicated. I’ll also admit that I chose a problem that could easily leverage the functional constructs of many languages. Each entry in this series will cover an implementation in a different language and an analysis of such.
PHP is one of those languages that has a lot of foibles and historical quirks that are maintained in an attempt to keep things compatible. In some ways, it reminds me of the story of Visual Basic 6: after several language iterations, it still holds to its roots. So on the one hand, you have very old code that still works in the latest version as it used to, and on the other you have some tiny incompatibilities that might require changes to certain kinds of code. One of the most popular PHP-detracting articles on the web today is PHP: A Fractal of Bad Design, which makes some good points but also ignores the fact that PHP was never really designed; it 'evolved'. It was never really designed as a programming language, but rather as a way to get what needed to be done done in the fastest possible way and with the fewest headaches. The inevitable result is what we have now, which has some crusty deposits, but those crusty deposits are what makes PHP so useful.
As far as the anagrams concept goes, I figure the best way to accomplish the goal is to use associative arrays. PHP has these built in, and this is arguably an extremely awesome feature that is frequently overlooked. This is the resulting implementation:
A run of this took 116 seconds. Now, PHP being slower than many other languages is not, in itself, amazing. However, this is quite a bit slower than any other implementation, by a factor of ten, so I think it is only fair to PHP to check whether I did something wrong in the above implementation.
I am fairly certain that the issue is with my use of array_merge, since it rather re-creates the array every time, which can be costly given the dictionary has over 200K words. I changed it to use array_push:
I also “fixed” a minor issue from before. We are basically working with a top level associative array which uses the sorted words as their key, each one of which stores another array of the words that sort to that sorted word. Each of which would be anagrams of one another, naturally.
I also added a check at the bottom to make sure that I was getting some sort of result. This one clocked in at 114 seconds, which is still rather dismal. At this point I'm still convinced I'm just not using it "properly", but it is also possible that PHP really does have this issue regarding speed. That said, I could probably find a Dictionary/HashTable class implementation online that would work better than a nested associative array. Another possibility is that I am simply using a slower PHP implementation. I'm using the installation from "php-5.3.20-nts-Win32-VC9-x86.msi", which is accessible from here. I went with the "non-thread-safe" version since that is usually faster. It's also possible that running this on a Linux machine would yield faster results, since that is PHP's 'native' environment. I considered the stumbling blocks with other languages, particularly Java and C#, with which I only got maximum performance when disabling debug mode. With that in mind, I made tweaks to the php.ini file that came with the PHP interpreter in the interest of increasing speed. I wasn't able to make any significant improvement, though I did get it down to 109 seconds.
I think a fair assessment here concerns the file I/O being done only a tiny bit at a time. This was one of the issues I had with some of my other implementations as well; in this case it creates and allocates more memory (possibly, depending on how the specific PHP implementation adds elements). So it makes sense to read the file in one fell swoop, and then iterate over those elements. With that in mind, I made some changes:
Which still didn't fix the speed problem (113 seconds this time). I did notice that when I had error display on, I got a lot of notices, as the loop ran, about undefined indices when setting $anagrams[$sorted]. So if I had to hazard a guess, I'd say the speed issue is a combination of a very long loop (200K words) with the fact that it is entirely interpreted. The language itself is a bit messy, though it's certainly not as bad as Perl as far as special punctuation marks go. Of course, with the loss of that special punctuation ($ for scalars, @ for arrays, and % for hashmaps in Perl; PHP only keeps $ and lacks the multiple contexts) we get a simplified language that is a bit easier to work with cognitively but lacks some power and capability. Nonetheless, it is still a very workable language, and most of its capability can be leveraged by using existing frameworks and code; PEAR libraries can be found for pretty much every purpose. I hazard to wonder how it can be considered scalable, given the speed issues I've encountered, but I'm still quite certain I'm simply using it incorrectly. I have to say I was surprised that I remembered the foreach syntax off the top of my head: foreach($array as $value). Not because it's complex or esoteric, but because I have been messing with C#, Java, and other languages a lot in the meantime, many of which use very slight variations. Java uses its for loop with a special syntax, for(value : array); C# uses a foreach loop, foreach(value in array); Scala uses for(value <- array); and Python doesn't technically have a standard for loop, everything is really a foreach: for value in array:. F# uses a similar syntax, replacing the colon with its own equivalent keyword: for value in array do. I was convinced when I used as that I was remembering some other language's implementation and would have to look it up, but to my surprise I remembered it correctly.
Probably just luck, though, to be fair. Even though C# is my “main” language for desktop development, I do in fact find myself using the Java for syntax for foreach loops too (or even trying to use for(value in array) and wondering why the IDE yells at me on that line, until I facepalm that I used the wrong keyword).
One might think this particular example proves PHP is “slow”, but I disagree. I think it just shows that PHP is not well suited to iterating over hundreds of thousands of elements. But given its particular domain- web pages- you shouldn’t be presenting that much information to the user anyway, so that sort of thing falls outside the language’s intended use cases.
In the works: Part XI, Haskell. I had a nice long one almost finished but I appear to have either forgotten to save it when I restarted my PC or something along those lines, so I’ve had to rewrite it. Thankfully I still have the actual Haskell code, which is arguably the most intensive part of each entry in the series.
This is not another entry in the series regarding anagrams, but a much simpler synopsis using a very basic algorithm. The performance of programming languages and technologies is described by a lot of people as “slow” or “fast” or any number of terms of the sort. Recently, I’ve been told that Flash/ActionScript is “slow”. Now, I am not doing this to prove that wrong- in fact, I think the results may prove it to be the case- but rather to illustrate something that has been repeated several times by others.
First, the algorithm. In order to keep things simple, we’ll make the algorithm simple as well- it will simply generate a random number a million times and put it into an array or list.
The ActionScript version. This is ActionScript 2.0 since I only have Flash 8 installed:
Elapsed time was 7586 milliseconds (or 7.5s).
Now, to another language: let’s go with Python for the moment.
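The Python listing is also missing from this copy of the post; based on the description a couple of paragraphs down, the slow version presumably looked something like this (a sketch, with the count pulled out as a parameter since at the post’s 1,000,000 iterations it effectively never finishes):

```python
import random
import time

def fill_concat(n):
    arr = []
    for _ in range(n):
        # 'arr + [...]' builds a brand-new list every iteration and rebinds
        # arr to it, so each step copies everything accumulated so far.
        # That makes the whole loop O(n^2).
        arr = arr + [random.random()]
    return arr

start = time.time()
fill_concat(5_000)  # small enough to finish; the post's 1,000,000 is hopeless
print("elapsed:", time.time() - start)
```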
This doesn’t even finish in 10 minutes. Clearly we’ve proven that Python is really slow, correct? Not really, no:
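Again the original listing didn’t survive, but the fix the next paragraph describes amounts to swapping the concatenation for the list’s append() method- roughly:

```python
import random
import time

start = time.time()
arr = []
for _ in range(1_000_000):
    # append() extends the same list in place (amortized O(1) per element),
    # so the loop is linear instead of quadratic.
    arr.append(random.random())
print("elapsed:", time.time() - start)
```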
The reason the previous one was slow was because of how Python works. The line arr = arr + [random.random()] was allocating a new list each iteration and rebinding the variable to it; the old one was freed for garbage collection. The big problem was that as the loop went on, those allocations got larger and larger. The above uses the append() method of the list, which adds the element to the end of the same list- rather than creating a new one each time. The resulting execution time? 340 milliseconds or so.
OK, let’s move on to C:
This one is a bit messy since I couldn’t get the clock() stuff working- it always gave me a negative value- so I just included windows.h and used GetTickCount() instead. An interesting side note is that I originally declared the array of ints statically, as in:
Hilariously, though, that failed catastrophically, causing a stack overflow in the runtime itself when the program started. I had to switch it to dynamic allocation as shown above. The error makes sense, since locals are normally stored on the stack; using malloc() puts the array on the heap instead. Unsurprisingly, this implementation results in a run-time of 34 milliseconds.
What do we conclude? Well, first, it’s important to realize that C is simply less maintainable than most other languages; this means that the “faster” implementation is going to take quite a bit longer to write, and it’s going to be more effort to maintain as well as to add features to. Additionally, by virtue of leaving far more responsibility in the hands of the programmer, it is more likely for bugs to creep in. Various controls, such as code reviews and standards, can mitigate these concerns, but they cannot eliminate them completely. My point is that using C over a high-level language is not a magic bullet for a faster application; it requires more programmer time to write and maintain the code to begin with, and is additionally more susceptible to bugs through that maintenance. In this instance, C came out ten times faster than the corrected Python version (and over 200 times faster than the ActionScript one). But the only important question is whether that speed difference matters. Another thing is that the Python implementation could likely be made even faster too- I didn’t fully explore that- either through code changes, or through the use of another Python interpreter or compiler technology. Basically, only when speed is actually an issue, can be traced directly to the chosen language platform, and cannot be mitigated any other way does it make sense to choose a language like C, or even C++.