21 Sep 2015 @ 10:56 PM 

Unit Testing. Books have been written about it. Many, many books: about unit testing, about testing methodologies, about unit testing approaches, about unit test frameworks, and so on. I won't attempt to duplicate that work here, as I haven't the expertise to approach the subject at anywhere near that depth. But I felt like writing about it. Perhaps because I'm writing unit test code.

Currently what I am doing is writing unit tests, as mentioned. These are written in C# (since they are testing C# code, it makes sense), and they make use of the built-in Visual Studio/.NET unit testing features.
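For illustration, a minimal test with that built-in framework looks something like this (PriceCalculator and its method are made-up stand-ins for whatever code is actually under test):

```csharp
using System;
using Microsoft.VisualStudio.TestTools.UnitTesting;

// Hypothetical class under test; stands in for real application code.
public static class PriceCalculator
{
    public static decimal ApplyDiscount(decimal price, decimal fraction)
    {
        if (fraction < 0m || fraction > 1m)
            throw new ArgumentOutOfRangeException("fraction");
        return price - (price * fraction);
    }
}

[TestClass]
public class PriceCalculatorTests
{
    [TestMethod]
    public void ApplyDiscount_TenPercent_ReducesPrice()
    {
        // The built-in Assert class verifies the expected result.
        Assert.AreEqual(90m, PriceCalculator.ApplyDiscount(100m, 0.10m));
    }
}
```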

In my mind there are some important 'cornerstones' of unit tests which make them useful. Again, I'm no expert with unit tests; this is simply what I've reasoned as important through my reading and interpretation of the subject so far.


Coverage

Making sure tests engage as much of your codebase as possible applies to any type of testing, and unit tests are no exception to that rule. It is important to make sure that unit tests execute as much of the code being tested as possible and, as with any test, they should make an effort to cover corner cases as well.


Maintainability

When it comes to large software projects, one of the more difficult things to do is encourage the updating and creation of unit tests so that coverage remains high. With looming deadlines, required quick fixes, and frequent "emergency fixes", it is entirely possible for unit test code to quickly get out of date. This can cause working code to fail a unit test, or failing code to pass one, because of changing requirements, because of redesigns, or because the code simply isn't being tested at all.


Automation

While this in part fits into the maintainability aspect, it pertains more to the automation of build processes and the like. In particular, with a Continuous Integration product such as Jenkins or TeamCity, any change to source control can trigger a build process and even deploy the software automatically into a testing environment. Such a Continuous Integration product can also run unit tests on the source code or resulting executables and verify operation, marking the build as a failure if tests fail; that failure then serves as a jumping-off point to investigate which recent changes caused the problem. This encourages maintenance (if a code change causes a failure, then either that code is wrong or the unit test is out of date and needs updating) and is certainly valuable for finding defects sooner rather than later, minimizing damage in terms of customer data and particularly in terms of customer trust (and company politics, I suppose).

I repeat once more that I am no unit test magician. I haven't even created a completely working unit test framework that follows these principles yet, but I'm in the process of creating one. There are a number of books about unit tests- and many books that cover software development processes will include sections or numerous mentions of unit test methodologies- which will likely be from much more qualified individuals than myself. I just wanted to write something, and couldn't write as much about Velociraptors as I originally thought.

Posted By: BC_Programming
Last Edit: 21 Sep 2015 @ 10:56 PM

Comments Off on Unit Testing
 15 Aug 2015 @ 4:18 PM 

Windows 10 has been out for a few weeks now. I’ve upgraded one of my backup computers from 8.1 to Windows 10 (the Build I discussed a while back).

I've tried to like it. Really, I have. But I don't, and I don't see myself switching my main systems any time soon. Most of the new features I don't like, and many of those features cannot be shut off very easily. Others are quality-of-life impacts: not being able to customize my title bar colour, for example, and the severe reduction of customization options in general, I cannot get behind. I am not a fan of the Start Menu, nor do I like how they changed the Start Screen to mimic the new Start Menu. I understand why these changes were made- primarily due to all the Windows 8 naysayers- but that doesn't mean I need to like them.

Windows 10 also introduces the new Universal App Framework, which is designed to allow the creation of programs that run across Windows platforms- "Universal Windows Application" referring to the application being able to run on any system that is running Windows 10.

If I said "I really love the Windows 8 Modern UI design and application design", I would be lying, because I don't. This is likely because I dislike mobile apps in general, and that style of application was not only brought to the desktop but brought along the same sorts of limitations I find distasteful. I tried to create a quick Windows 8 style program based on one of our existing WinForms programs but hit a brick wall: I would have had to extract all of our libraries, turn them into a web service, and have that service running in the background of the program itself. I wasn't able to find a way to say "I want a Windows 8 style with XAML, but I want the same access as a desktop program". It appeared this might have been rectified with the Windows 10 framework, as it is possible to target a Universal app and make it, errm- not universal- by setting it to be a Desktop Application. I hoped so, at least- though I have as yet been unable to determine whether that is possible, and it is looking more and more like it isn't. This makes my use case- providing a Modern UI 'App' that makes use of my company's established .NET class libraries- impossible, because for security reasons you cannot reference standard .NET assemblies that are not in the GAC. I thought they might work if they were signed in some fashion, but I wasn't able to find anything to indicate that is the case.

The basic model, as I understand it, mimics how typical smartphone "apps" work. Typically they have restricted local access and will access remote web services in order to perform more advanced features. This is fairly sensible, since most smartphone apps are based on web services. Of course, the issue is that this means porting any libraries that use those sorts of features to portable libraries which access a web service for the required task. (For a desktop program, I imagine you could have the service running locally.)

I'm more partial to desktop development. Heck, right now my work involves Windows Forms (beats the mainframe terminal systems the software replaces!), and even moving to WPF would be a significant engineering effort, so I keep my work with WPF and new Modern UI applications 'off the clock'.

Regardless of my feelings regarding smartphone 'apps', or how it seems the desktop has been taking a back seat or even being replaced (it's not; it's just not headline-worthy), Microsoft has continued to provide excellent SDKs, developer tools, and documentation, and is always working to improve them. And even with the focus on the Universal Framework and Universal Applications, the current development tools still provide for the creation of Windows Forms applications, allowing the use of the latest C# features for those who might not have the luxury of jumping to new frameworks and technologies willy-nilly.

For those interested in keeping up to date who also have the luxury of time to do so (sigh!), the new Windows development tools are available for free. One can also read about what's new in the Windows development ecosystem with Windows 10, and there are online courses regarding Windows 10 at the Microsoft Virtual Academy, as well as videos and tutorials on Channel9.

Posted By: BC_Programming
Last Edit: 30 Dec 2015 @ 08:01 PM

Comments Off on Windows 10
 01 Jul 2015 @ 9:01 AM 

A while ago, I noted in my post about remapping keys how I got a new laptop. Though at the time I had not used the system enough to feel it fair to provide any sort of review on the product, I’ve been using it for a month now and feel that should be enough to offer my thoughts on the product.


It is worth noting that the T550, like Lenovo's other Thinkpad models, offers a lot of customization options. In my case, I configured it with a 2.6GHz i7-5600U processor, 16GB of RAM, a 2880×1620 multi-touch display, a fingerprint reader, a 16GB SSD cache, and a 500GB HDD. Since then I have replaced the hard disk with a 480GB Sandisk Ultra II SSD. It is somewhat notable that the system does not feature any sort of discrete graphics capability. My purpose for the machine was primarily work tasks- Visual Studio, text editors, pgAdmin, browsers, Excel, Skype, and so forth. "Gaming" would be pretty much off the table- though I imagine some games would run admirably- and the lack of dedicated graphics means that desktop applications are the main beneficiary.

I am quite impressed with the system and how well it holds up. It has amazing battery life- over twice that of my previous laptop, which now serves as a clock on my nightstand. The high resolution of the screen makes it easy to have a lot of different applications open, and while I've found I needed to increase the DPI scaling to be able to read anything, the added definition is amazing to see on a laptop. It has a higher resolution than my desktop screen (which is 2560×1440) in about a quarter of the area, so the pixel density is amazing.

I’ve taken to trying to use the system as my primary development system. This allows me to segregate some of my personal stuff and my work stuff. Realistically I’ve ended up using both my desktop and my laptop for development tasks- simply because it is faster to do so. I’ve also installed some prerelease VS versions for testing purposes, which I haven’t done on my desktop mostly due to disk space considerations (a 480GB SSD is only large if you don’t install a lot of stuff on it, it turns out)

Arguably, the one complaint I can think of would be how difficult it is to access the system's innards. With my older Thinkpad 755CDV, getting access to things like the hard disk was incredibly straightforward- the keyboard tray basically lifted up, and you could remove and replace components without tools. With this new T550, I had to release several captive screws and pry apart the bottom panel with a spudger, and even then it took quite a bit of force to remove it and get to the insides. Not a massive dealbreaker- I don't exactly intend to be constantly replacing components- but it was something of a surprise to see that accessibility has actually decreased with more recent models!

Of note, perhaps, is the expandability that requires said disassembly. Internally it can support up to 16GB of RAM and has three M.2 slots. In my case, one holds the wireless card and another the 16GB cache SSD, with the third remaining empty. This leaves some room for expansion, with the option of replacing or upgrading one of the existing M.2 cards or even adding a whole new one. It should be noted that things are tightly packed, though, and larger M.2 cards may not fit.

All in all, I've found the Thinkpad T550 to be an excellent machine that, while lacking a bit of oomph compared to "gaming" PCs, has excellent build quality and (most important to me) a Trackpoint. The Trackpoint has actually "ruined" me, in the sense that using the AccuPoint on my old Toshiba now feels odd, simply because the nub on the Toshiba is far smaller and has to be operated slightly differently. With this more recent system I hold my finger over top and gently push down and in the direction I want to move the cursor; with the AccuPoint this sort of works, but it lacks grip, and typically you would push it from the side, or at an angle from one side, depending on the direction you want to send the cursor.

Posted By: BC_Programming
Last Edit: 01 Jul 2015 @ 09:01 AM

Comments Off on Thinkpad T550 Review
 13 Apr 2013 @ 3:55 AM 

Integrated Development Environments. These are the programming tools that most of us have come to almost take for granted. They provide the ability to write, debug, edit, and otherwise develop our software within a cohesive software product. You can find development environments for nearly every language platform. Microsoft Visual Studio can handle C#, VB.NET, F#, and C++ out of the box, while also providing a wealth of other language capabilities through aftermarket software. Java has Eclipse and Netbeans, just to name two- the list goes on.

However, for every IDE user, there is a person who 'looks down' on the lowly IDE user; "they aren't actually writing code," they grumble into their bag of Cheetos, "they are simply selecting things from an autocomplete list." These types of people are, perhaps in a statistically significant way, usually users of *nix based systems. They extol the virtues of their favourite text editors- emacs, vim, nano- and look down on IDEs, which they will sometimes refer to as "glorified text editors".

If my patronizing tone in the previous paragraph was not obvious- or if you've never read one of my programming-oriented blog entries- I'm firmly on the side that supports and uses IDEs wherever they are available. The arguments against them are often lame; arguing that getting used to features like autocomplete, parameter lists, dynamic help windows, and tips makes us "soft" is absurd- the same could be said of keyboards, which by extension just make inputting data easier, so clearly we should be flipping switches manually. My point is that IDE software aims to make the task of software development easier and more productive. And IMO it does this in spades.

There are even add-ins for many IDEs that basically provide all sorts of world-class capabilities in the IDE. Resharper is one exceptional such plugin that is available for Visual Studio- at this point, if it’s missing, it’s like I’m missing an appendage. It has made my work in Visual Studio so much more enjoyable and productive, that I almost feel it’s a crime not to have it installed. Similar addons are available for all sorts of IDEs; even Visual Basic 6 has things like MZTools(Free) or AxTools CodeSMART(Commercial).

Of course, IDEs lose a lot of their edge in certain languages, particularly those that lean towards the dynamic end of the spectrum. So much information about the program is only known at run-time that it's a tricky proposition for a piece of software to figure out what is going on and provide any sort of tips or suggestions. Unsurprisingly, most of those who find IDEs childish use languages such as Python, Perl, Ruby, and PHP; I myself do not use an IDE for these languages either, primarily because I couldn't find one (VS 2012 has an add-in available called "PHP Tools" that apparently brings support for PHP, though I do not know the extent of its assistance). However, if there were a software product that provided the same level of assistance for languages like Ruby and Python as I currently get from Visual Studio or Eclipse, I would jump on it like Mario on a Goomba.

We ought to think of such software not only as our tools, but as our own assistants. Most people wouldn't raise any objections to being given a personal assistant for their work or daily tasks. In the case of Resharper specifically, that assistant is also an 11.

Posted By: BC_Programming
Last Edit: 15 Apr 2013 @ 03:49 AM

Comments Off on On IDEs
Categories: Programming
 21 Nov 2012 @ 1:04 AM 

Recently I started working on what turned out to be a much bigger project than originally intended. It is a Bukkit plugin- Bukkit being a server "replacement" for Minecraft. Since Bukkit is written in Java, the plugins are as well. As a result, I've been working frequently with Java. In a previous blog post on the subject, I noted how C# was, in almost every discernible way, better than Java. I still hold to that. That post did not, of course (and I believe I mentioned this in the post itself), mean to say that Java was useless. Even though C# is cross-platform, it's not marketed as such; additionally, many Windows-centric frameworks have appeared (WPF, WCF, etc.) which aren't available on other platforms, causing C# code to be coupled to the platform. Of course this can happen with Java code as well, but it's probably more common with C# code.

Anyway, since I've been working with Java, both its strengths and weaknesses have popped out at me. This is more a list of opinions, really, but that's stating the obvious.

Classes have to be in their own files

I'm not 100% sure about this one. It can really work both ways. With BASeBlock, some of my files have way too many classes, and it makes navigation difficult. On the other hand, a lot of those issues are alleviated by using the Solution Explorer to navigate as a class view rather than a file view. Additionally, I think the Java limitation of one public class per file is one of those design choices that serve more to make compilers easy to write, as well as to "force" what the designers of the language thought of as good design. I prefer having the ability to declare classes in the same file; a single C# file can add classes to various namespaces, add extension methods to existing classes, and so forth. Of course, it's quite possible to go overboard with this, but I think that is something that should be dealt with by the programmer, not prevented entirely in the interest of good design. It also makes file management easier.

Java also enforces a system where a class is either an inner class, or it MUST have the same name as the "compilation unit" (file). For example, a public class named MonkeyGizzards must be in a file named MonkeyGizzards.java. I have to say I really do not like the way Java uses the file system to manage packages; it just feels… weird, to me. It also has the odd side effect that package names need to conform to file system rules as well as the rules of Java, which is admittedly easy to avoid by simply not giving your packages stupid names, but is still a weird effect.

No Delegates

Personally, I actually find this, well, kind of infuriating. It's one thing to lack lambdas and closures, but completely lacking any functional data type is just weird to me, now that I've grown used to them. There are workarounds, but the problem with those is that they are just that- workarounds. They don't fix the underlying omission. For example, let's go with a relatively simple function: it takes a List, and filters out elements based on the second parameter.

Of course, with C#, this is relatively simple- in fact, I believe equivalent functionality is provided within LINQ, but for the sake of the exercise let's assume we need one:
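A sketch of such a routine might look like the following- the name, and the choice of Func<T, bool> as the delegate type, are illustrative:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

static class ListFiltering
{
    // Returns the elements of source for which filterOut does NOT return true.
    public static List<T> FilterList<T>(List<T> source, Func<T, bool> filterOut)
    {
        return source.Where(item => !filterOut(item)).ToList();
    }
}
```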

A call to this function might look something like this…
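Assuming a FilterList routine of the shape just described (a List plus a Func<T, bool> delegate; the names are illustrative), the call might be:

```csharp
// Filter out the odd numbers; the lambda converts implicitly to the delegate type.
var evens = ListFiltering.FilterList(new List<int> { 1, 2, 3, 4, 5, 6 }, x => x % 2 != 0);
// evens now holds 2, 4, 6
```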

One line of code within the function; one line to actually use it. This leverages several C# features that Java doesn't have: first, it uses a delegate as the argument type of the routine; second, the lambda in the call is implicitly converted to that delegate type. It's also notable that C# allows primitive types to be passed as type arguments, which in Java requires the use of one of the boxing types, such as Integer.

The Java equivalent is more verbose. First, you need to define an interface:
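Something like the following, perhaps- a single-method interface standing in for the delegate (the names here are my own invention):

```java
// A stand-in for C#'s Func<T, bool>: implementors decide whether an
// item should be filtered out of the list.
interface FilterComparison<T> {
    boolean isFiltered(T item);
}
```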

Then you need to write the implementation- in this case, a Filter routine:
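A sketch of that routine (the interface is repeated inline so the example stands on its own; names are illustrative):

```java
import java.util.ArrayList;
import java.util.List;

// Filter routine: copies source, skipping any element the comparison
// says should be filtered out.
class Filtering {
    interface FilterComparison<T> {
        boolean isFiltered(T item);
    }

    static <T> List<T> filter(List<T> source, FilterComparison<T> comparison) {
        List<T> result = new ArrayList<T>();
        for (T item : source) {
            if (!comparison.isFiltered(item)) {
                result.add(item);
            }
        }
        return result;
    }
}
```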

This is of course but one possible implementation. Other frameworks often provide much better functionality (Apache Commons has some excellent extensions of this sort). Anyway, using something like this would simply require the use of an anonymous class declaration. Behold!
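The call site might look like this- the interface and filter routine are repeated inline so the sketch compiles on its own:

```java
import java.util.Arrays;
import java.util.List;

class FilterDemo {
    interface FilterComparison<T> {
        boolean isFiltered(T item);
    }

    static <T> List<T> filter(List<T> source, FilterComparison<T> comparison) {
        List<T> result = new java.util.ArrayList<T>();
        for (T item : source) {
            if (!comparison.isFiltered(item)) result.add(item);
        }
        return result;
    }

    public static void main(String[] args) {
        // Anonymous class implementing the interface, just to drop odd numbers.
        List<Integer> evens = filter(Arrays.asList(1, 2, 3, 4, 5, 6),
                new FilterComparison<Integer>() {
                    public boolean isFiltered(Integer item) {
                        return item % 2 != 0;
                    }
                });
        System.out.println(evens); // prints [2, 4, 6]
    }
}
```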

Quite a lot of code, for something so basic. It is worth noting that writing this sort of thing is almost an initiation for Java programming. I can see it being useful to learn how to write these things in an academic setting, but this is still the sort of thing that should be supported out of the box. The above could likely be condensed to use the Iterable (java.lang.Iterable) interface instead of List, since it certainly doesn't need anything specific provided by the List interface. But that is a fringe concern.

The C# version is supported by the System.Linq namespace. LINQ adds Where, All, Select, SelectMany, and various other methods to the core collection interfaces; the result is that these methods appear to be on those interfaces but are implemented elsewhere. This is good, because those methods really don't fit anywhere else.

For some time, I wondered two things: one was why Eclipse was adding imports one by one, importing each specific class; the other was why "import blah.blah.*;" was so frowned upon. Fundamentally, it is because of a lack of flexibility in the import statement as compared to the C# equivalent, "using". The C# using directive supports aliasing; for example:

using CollectionsList = System.Collections.ArrayList;

would 'import' the System.Collections.ArrayList type, allowing you to use it as CollectionsList. Java's import statement lacks this feature, so you need to import more carefully to avoid collisions. The primary concern for those that push importing only the types you use is that an asterisk import can cause issues when packages change. Looking this up online, it seems there was a "tribulation" some time ago, around the time generics were introduced and the generic java.util.List came into heavy use. Many Java programs started with:
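That is, with a pair of asterisk imports along these lines:

```java
// Both packages are pulled in wholesale; once both contain a class named
// List, an unqualified reference to List becomes ambiguous.
import java.awt.*;
import java.util.*;
```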

What ended up happening was that the simple name List became ambiguous. This had no bearing before, but there was now a List class within java.util as well as java.awt (java.awt.List being the ListBox implementation), so code referring to an unqualified List no longer compiled cleanly. Proponents of the "import only classes you use" idea cite this rather often; what they don't cite quite as frequently is that the problem is easily averted by simply adding a new import:
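For code that wants the AWT List (the old ListBox), a single explicit import resolves it, because single-type imports take precedence over asterisk imports:

```java
import java.awt.*;
import java.util.*;
// Single-type import wins: an unqualified List now always means java.awt.List.
import java.awt.List;
```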

This is because import precedence goes to explicit (single-type) imports first, then to classes in the same package, and last to asterisk imports. Arguably, explicit imports would have prevented any issues, but then again, programmers usually move to a new SE platform to use its new features- so they might actually want the then-new Generics features, meaning they have the same problem either way: two classes with the same name declared in different packages and no way of aliasing, so they end up referring to one of them by its fully qualified name.

IDEs such as Eclipse will happily add an explicit import for each class you reference in code, but I don't see this as a good thing; imports you are no longer using don't go away on their own, and good tooling that covers up deficiencies in a language is not something I feel should be encouraged. In this case it's arguably not even a deficiency in the language, but rather a failure to recognize what is realistic: a person usually upgrades a project to a new version of a platform to use new features, and the above issue is fixed with a single import anyway. Overall, the best compromise is, if you are using an IDE with the ability, to list all the imports you use explicitly. If you are writing with a text editor or another tool, just use asterisk imports. The sky will not implode in on itself if you do.

Anyway, a lot of C# features really just make code shorter- they are, in many ways, syntactic sugar. But arguably any programming language is syntactic sugar, so trying to use the term derogatorily doesn't really work; I've had Java programmers tell me that lambdas and delegates are just syntactic sugar for anonymous interface implementations. I agree. But I like quite a bit of sugar in my coffee- why should my tastes in programming language syntax be different? When you write 20 times as much code, you are going to have roughly 20 times as many bugs, and that is part of my issue with Java as a whole.

Now, don't get me wrong; Java really does provide enough base functionality to work with. And it's important to realize that Java has been around for quite a long time, from well before we had such prevalent access to features such as Generics. This has the advantage that there is quite a lot of Java code available, in various libraries- but also the downside that those libraries might be outdated, replaced by new features in the JVM, and so on.

Posted By: BC_Programming
Last Edit: 21 Nov 2012 @ 01:04 AM

Comments Off on More stuff about Java
 21 Jun 2012 @ 11:50 AM 

Call me old fashioned, or possibly slow, but for some reason I never seem to be using the latest version of a piece of software. Until recently I was doing all my .NET work with Visual Studio 2008; this was because VS2010, bless its heart, felt sluggish to me.

With the pending release of Visual Studio 2012- which, as I write this, is available for free download as a Release Candidate- I decided I'd bite the bullet and start switching. This was also because I wanted to dip into XNA, and as near as I could tell the latest version only worked in conjunction with VS2010. I had to reinstall Resharper to get proper VS2010 support, since I had installed Resharper before I installed VS2010, and after applying my own preferences to both Visual Studio and Resharper, I was able to get back into coding. (Am I the only person who hates the preference IDEs have for automatically completing parentheses and braces and such? I always find myself typing the ending parenthesis, ending up with doubles, so I delete the extras, and then I forget where I was in the nesting; and if you get used to that behaviour, suddenly you find yourself not typing ending parentheses in plain-text editors. You can't win! I'm not a big fan of that sort of autocomplete. Actually, I don't really like any form of autocomplete, but that sounds like material for another post altogether.)

The End result is BCDodgerX, which is available on my main downloads page. It is essentially a rewrite of BCDodger, with an unimaginative X added onto the end that means pretty much nothing.

Overall, VS2010 is actually quite good. Call it a belated review; I almost purposely fall several versions behind for some reason. I cannot say I’m overly fond of the use of 3-D Acceleration within a desktop application, but at the same time all the Controls still have the Windows Look and Feel (which is my main beef with Java’s Swing libraries, which have a look and feel all their own), and the desktop itself is accelerated with Aero anyway so I suppose it’s only a natural progression. (Besides, I don’t play games very often and this 9800GT should get some use…).

The tricky question now is when I should start migrating my VS2008 projects to 2010, and whether I should switch to the latest framework. I can switch to VS2010 without using the latest framework, of course, but I wonder what benefits I will see? One day I’m sure I’ll just say “screw it” and open say, BASeBlock in VS2010 and jump in; I’m falling behind, after all (What with the aforementioned release of 2012 on the horizon). And VS2010 is definitely an improvement both tool and functionality wise over 2008, so there isn’t really a good reason not to switch now. No doubt I’ll keep making excuses for myself. Oh well.


At first, I thought I hated XNA; but now I know that what I actually hate is 3-D programming. I imagine this is mostly because I got extremely rusty at it; additionally, I had never truly done 3-D programming, at least in the context of a game. My experience at that point was pretty much limited to adding 3-D graphic capabilities to a graphing application that I wrote (and never posted on my site, because it hasn't worked in ages, is old, and uses libraries/classes/modules I have since updated in source-incompatible ways). Of course, that didn't have constantly changing meshes, used DirectX 7, and it was shortly after I finished that feature that I abandoned the project, for whatever reason. I had never dealt with 3-D in a gaming capacity.

The purpose of XNA is to attempt to simplify the task of creating games- both 3D and 2D, for Windows as well as XBox 360. And it definitely does this; however you can really only simplify it so much, particularly when dealing with 3D Programming. My first quick XNA program was basically just to create a bunch of cubes stacked on one another. This is a very common theme given the popularity of games like Minecraft, but my goal was to eventually create a sorta 3-D version of Breakout (or, rather, BASeBlock 3D).

I was able to get the blocks visible after a lot of cajoling and doing the work on paper (visualizing 3-D space and coordinates is not my forte). But it ran at 10fps! This was because I was adding every single block's vertices to the VertexBuffer. For a set of blocks in a "standard" arrangement of around 1920 blocks (which is probably a number that would make the 2-D version run at around 10fps too, to be absolutely fair), that is 11520 faces, each of which actually consists of a triangle list of 6 points (I tried a triangle fan, but it didn't seem to even exist (?), oh well), meaning that I was loading the VertexBuffer with over 69120 texture-mapped vertices. That's a lot to process. The big issue here is hidden surface removal; obviously, if we have a cube of blocks like this, we don't need to add the vertices of blocks that aren't visible. I'll admit this is the part where I sort of gave up on that project for the time being; it would involve quite a bit of matrix math to determine which faces were visible on each block, which ones needed to be added, and so on, based on the camera position, and I kind of like to understand what I'm doing- and I, quite honestly, don't have a good grasp of how exactly matrices are used in 3-D math, or dot products (at least in 3-D), and I prefer not to fly blind. So I've been reading a few 3-D programming books that cover the basics; one, I believe, goes through the creation of a full 3-D rasterization engine, with a lot of depth on the mathematics required; this, paired with concepts from Michael Abrash's "Graphics Programming Black Book", should give me the tools to properly determine which blocks and faces should be added or omitted.

Anyway, scrapping that project for the time being, I decided to make something 2-D; but since I was more or less trying to learn some of the XNA basics, I didn’t want too much of the concepts of the game itself getting in the way, so I chose something simple- I just re-implemented BCDodger. I added features, and it runs much better this way, but the core concept is the same.

Some minor peeves

XNA is quite powerful- I have no doubt about that. Most of my issues with it are minor. One example is that XACT doesn't seem to support anything other than WAV files, which is a bit of a pain (this is why BCDodgerX's installer is over twice the size of BASeBlock's, despite having far less content). Another minor peeve is that there is no real way to draw lines or geometry; everything has to be a sprite. You can fake lines by stretching a 1×1 pixel texture as needed, but that just feels hacky to me. On the other hand, it's probably pretty easy to wrap some of that up into a class or set of classes to handle "vector" drawing, so it's probably just me being used to GDI+'s lush set of 2-D graphics capabilities. Another big problem I had was with keyboard input- that is, getting text entry "properly", without constant repeats and so forth. Normally, you would check whether a key was down in Update() and act accordingly. This didn't work for text input for whatever reason, and when it did, it was constrained to certain characters. I ended up overriding the Window Procedure and handling the key events myself to get at key input data as needed, then hooked those to actual events and managed the input that way.
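As a sketch, wrapping the stretched-pixel trick into a helper might look like this ('pixel' is assumed to be a 1×1 white Texture2D created elsewhere; the names are mine):

```csharp
using System;
using Microsoft.Xna.Framework;
using Microsoft.Xna.Framework.Graphics;

static class LineDrawing
{
    // Fake a line: rotate a 1x1 texture toward 'end' and stretch it to the
    // line's length, using the thickness as the vertical scale.
    public static void DrawLine(SpriteBatch spriteBatch, Texture2D pixel,
        Vector2 start, Vector2 end, Color color, float thickness)
    {
        float length = Vector2.Distance(start, end);
        float angle = (float)Math.Atan2(end.Y - start.Y, end.X - start.X);
        spriteBatch.Draw(pixel, start, null, color, angle,
            Vector2.Zero, new Vector2(length, thickness), SpriteEffects.None, 0f);
    }
}
```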


Overall, I have to conclude that XNA is actually quite good. There are some intrinsic drawbacks- for example, it isn't cross-platform (to Linux or OS X)- and there were the aforementioned basic problems, which were probably just me adjusting my mindset. It's obviously easier than using Managed DirectX yourself, or DirectX directly (if you'll pardon the alliteration), and it is designed for rapid creation of games. With the exception of the high-score entry (which took a bit for me to implement properly), BCDodgerX was a single evening of work.

Posted By: BC_Programming
Last Edit: 21 Jun 2012 @ 11:50 AM

VS2010, XNA, and BCDodgerX potpourri
 04 Jun 2012 @ 12:16 AM 

The currently released version of BASeBlock is 2.3.0. I have made a lot of changes to the game, added a few blocks, abilities, and other fun stuff, and refactored various parts of the code to make things work better since then. One of the biggest new features is “framerate independence”. 2.3.0 and earlier versions basically did velocity like this for every game tick:
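In essence, every tick did nothing more than this (a reconstruction of the idea, not the literal BASeBlock source):

```csharp
// Pre-2.4: the velocity is applied verbatim once per game tick, so a loop
// that runs twice as fast moves everything twice as fast on screen.
Location = new PointF(Location.X + Velocity.X, Location.Y + Velocity.Y);
```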

However, the faster the game loop ran, the more often this code would run; typically, the higher the fps, the faster the game loop ran, too. This meant the internal speed of objects could be the same while the objects visibly moved at wildly different speeds. The "fix" is relatively simple: instead of simply adding the velocity to the location, we take some other factors into account. First, we analyze the problem: what do we want to achieve? The quick answer is "we want the movement of objects to remain equal regardless of how fast the game ticks go". The best way I've found is to choose a given framerate as the "ideal" framerate; if the game runs at this fps, the velocity is added verbatim. If the framerate is lower, we add "more" to compensate; with an ideal of 60, for example, a framerate of 30 would double all speed additions, and a framerate of 120 would halve them.

BASeBlock already tracks the FPS, so the solution was three-fold: first, create a routine that retrieves the appropriate multiplier from the current and desired framerates; second, create a routine that simplifies incrementing a location by a velocity, taking that multiplier into account; and third, change all the code that simply adds the two to use the new routine.

Implementing this in BASeBlock was something I had been putting off for quite some time; it seemed a lot more involved than it really was. Eventually I just decided to try; if things went sour I could always roll back to a previous SVN commit anyway.

First, I added the routine for getting the game multiplier. This required the current FPS of the game. Since that seems like something best dealt with in the presentation layer (and since the main game form was already tracking FPS for the FPS counter), I simply added a property to the IClientObject interface, which is designed to let the form and the game logic communicate without either requiring explicit knowledge of what it is communicating with. With that property in place, I implemented the multiplier routine as a basic division: the DesiredFPS divided by the current FPS (with an exception for a retrieved FPS of 0, where it returns 1 for the multiplier). One very interesting side effect is that I could, if I wanted, "fake" slow motion by munging around with the CurrentFPS returned by the clientObject, though that is probably not a good use of this design.

I then implemented a simple routine for incrementing the location; not surprisingly, I called it "IncrementLocation". It adds the velocity, multiplied by the multiplier derived from the current and desired FPS.
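Put together, the two routines amount to something like this (a sketch; apart from `IncrementLocation`, the names are illustrative):

```csharp
using System.Drawing;

static class FrameRateIndependence
{
    const float DesiredFPS = 60f;

    // Desired over current: at 30fps this returns 2 (add twice as much per tick),
    // at 120fps it returns 0.5; a reported FPS of 0 falls back to 1.
    public static float GetMultiplier(float currentFPS)
    {
        return currentFPS == 0 ? 1f : DesiredFPS / currentFPS;
    }

    // Replaces every old "location += velocity" call site.
    public static PointF IncrementLocation(PointF location, PointF velocity, float currentFPS)
    {
        float m = GetMultiplier(currentFPS);
        return new PointF(location.X + velocity.X * m, location.Y + velocity.Y * m);
    }
}
```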

This worked rather well, once I found and replaced all the old direct-addition code with calls to this routine. However, there were still some odd behaviours, mostly related to velocity decay. Some objects- particles, the game character's jumping, some falling items, and whatnot- would reduce or increase their speed by multiplying components of it by a set factor. For example, a particle might "slow down" after it spawned by multiplying its X and Y speed by 0.98 each frame. I needed to make similar adjustments to the multiplication factors there, in much the same manner as for the additions.
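For these, the per-frame factor has to be raised to the power of the multiplier rather than multiplied by it, so that two ticks at 30fps decay a speed by exactly as much as four ticks at 60fps (a sketch; the method name is hypothetical):

```csharp
using System;

// A decay factor of e.g. 0.98 "per ideal frame", adjusted for the actual framerate.
// With a multiplier of 1 (running at the desired FPS) nothing changes; with a
// multiplier of 2 (half the desired FPS) each tick applies two frames' worth
// of decay, i.e. 0.98^2.
static float ApplyDecay(float speed, float factorPerFrame, float multiplier)
{
    return speed * (float)Math.Pow(factorPerFrame, multiplier);
}
```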

I still encounter minor issues that are a direct result of the change to a "managed" framerate concept; on the other hand, a nice benefit over 2.3.0 is that I was able to remove the silly Thread.Sleep() call that slept for 5 or 50 milliseconds (I forget which), so the framerate is typically higher. In the "Spartan" level-set builder, the framerate is usually close to 200, which is pretty good for GDI+, and that's the debug build, too, which is slower than release.

After this, I tried to improve the platforming elements of the game a bit more. I added some new powers, fixed a few minor issues with some of the powerup management code, and added a new interface that lets blocks draw something "special" when shown in the editor; this is used by the powerup block to show its contents and to modify the tooltip. Another change was "block tracking" at the PlatformObject level. This also sounded a lot more complex than it was. The idea is simple: when the character, or anything, is standing on a block, we want it to move with that block. This was done by having the platform object track the block it is on and then, each frame, add whatever distance that block moved to its own location as well. This has worked spectacularly. I also added an interface so blocks can receive notifications from a PlatformObject when they are stood on.
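The tracking itself is only a few lines in the PlatformObject's per-frame update (a sketch; the field names are hypothetical):

```csharp
// Carry this object along with whatever block it is standing on.
if (standingOnBlock != null)
{
    float dx = standingOnBlock.Location.X - lastBlockLocation.X;
    float dy = standingOnBlock.Location.Y - lastBlockLocation.Y;
    Location = new PointF(Location.X + dx, Location.Y + dy);
    // Remember where the block is now, for next frame's delta.
    lastBlockLocation = standingOnBlock.Location;
}
```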

There is a bit of a downside to this idea, though, based on how I implemented some other "moving" block features for performance reasons. I have a few blocks that give the illusion of moving when hit, but in fact destroy themselves and spawn another object in their place that looks the same. These include BlockShotBlock, BallDirectShotBlock, and the "magnetAttractor" block; the first gives the appearance of shooting upwards when hit, breaking all blocks in its path; the second goes in the direction the ball that hit it was travelling; and the third works in tandem with another magnetAttractor instance to create the illusion of the two blocks flying towards each other and exploding, or flying apart. These rely on GameObjects to control their behaviour after they are hit, allowing the blocks themselves to be destroyed while the rest of their "action" is governed by those objects- most specifically the "BoxDestructor", which is used to create a block-shaped projectile that can destroy other blocks. The magnetAttractor creates two such blocks when necessary and controls them with yet another GameObject that handles their velocity changes and detects when they meet, creating the requisite explosion. I did it this way because my animatedBlock "architecture" is terrible and annoying to work with- or at least it was at the time. This means a game character cannot stand on such a block and be "fired" along with it, which would have been an awesome gameplay mechanic for level design. I did create a movingplatform block that opens up some neat possibilities, though. And it causes some really goofy gameplay when I replace all the blocks in a level with them.

My next endeavour related to the editor. With the new platforming component, it was possible to create a platform-oriented level, with or without a paddle, by adding the appropriate triggers and components to a level. I forgot to add some of these more than once; in fact, in the second level of the "testplatforming5.blf" levelset included with 2.6, I forgot to set the autorespawn field of one of the spawner blocks, meaning that once you die you cannot beat the level, since only the paddle respawns, not the character. To help alleviate this, I decided to create "templates": when adding a new level, as well as being able to add a blank level, one can create a new level copied from a template. This really added a richness to the editor. Templates are loaded from the templates directory and can be shown either in a categorized drop-down or in a categorized dialog; the "category" design derives from the template concept used by tools such as Visual Studio itself, or VB6, which separate their templates into categories. This should make the creation of custom levels, particularly platforming levels, far easier. Templates can also add sounds or images to the loaded set (a possible revision might be to warn when a template object conflicts with an existing resource, rather than replacing it).

I also fixed a myriad of other bugs and UI issues that I encountered while working on other features. The newer version is really shaping up to be a great update.

Posted By: BC_Programming
Last Edit: 04 Jun 2012 @ 12:16 AM

BASeBlock 2.4.0 Dev notes
 02 Dec 2010 @ 4:34 PM 

As with most of my projects, progress is usually slow, because it's something I do "when I feel like it" rather than on a schedule. Because of this I often have a lot of "design-time", where I don't actually write code but idly wonder about what I could do to make it better.

One consideration I recently decided on was supporting the packaging of the various files — images, sounds, etc — into a zip file. This came about as a result of my (easy enough) experience of using GZipStream to make the serialized LevelSet data a tad smaller (the default levelset, which is 5 levels, still weighs in at about 145KB; most of this is because it stores a lot of PointF structures from the PathedMovingBlock that I recently added, but I won't get into that). In any case, I wondered at the same time: why not allow the loading of files from a zip?
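The GZipStream part really is as easy as it sounds: wrap whatever stream the serializer writes to (a sketch, using BinaryFormatter as a stand-in for however the LevelSet is actually serialized):

```csharp
using System.IO;
using System.IO.Compression;
using System.Runtime.Serialization.Formatters.Binary;

static class CompressedSerializer
{
    public static byte[] Save(object data)
    {
        using (var ms = new MemoryStream())
        {
            // The GZipStream must be closed before ToArray() so the gzip
            // footer gets flushed into the underlying MemoryStream.
            using (var gz = new GZipStream(ms, CompressionMode.Compress))
                new BinaryFormatter().Serialize(gz, data);
            return ms.ToArray();
        }
    }

    public static object Load(byte[] compressed)
    {
        using (var ms = new MemoryStream(compressed))
        using (var gz = new GZipStream(ms, CompressionMode.Decompress))
            return new BinaryFormatter().Deserialize(gz);
    }
}
```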

Of course, this is hardly a new idea, and I never thought it was. Placing the application/game data into one monolithic (and more easily managed, file-system-wise) file is ancient; games have been doing it for ages. In any case, since I had decided to use ZIP (rather than, say, define my own package format like I did with my Visual Basic BCFile project), I had to be able to work with zip files.

Thankfully, there are a lot of free solutions for reading, extracting, and otherwise dealing with zip files from .NET. After some searching, I decided on DotNetZip. The reasons were simple: it's easy to use, and it supports creating zips as well as extracting them- I'd rather learn one library than several to perform the various tasks in the future. The first step was to decide how it was all going to be done.

A little overview of how I had things arranged in the code may help. The sounds and images are managed by classes called cNewSoundManager and ImageManager, respectively. Recent changes caused me to rewrite the SoundManager to create the "New" SoundManager, which supports different adapter classes for using various sound libraries (of which BASS.NET has become my top choice).

Thankfully, during the creation of both ImageManager and SoundManager, I made them able to accept a number of paths in their constructors via an overload accepting an array of Strings. In fact, that was the "low-level" implementation for loading; the other overloads simply transformed their data into an array and called it, so I knew that worked. Although my ImageManager could probably be modified to load files from a stream (the ZipFile class can retrieve an input stream for each file in a zip), the SoundManager could not feasibly do so; many sound libraries will only load from files, and since most of them are really wrappers around even lower-level libraries, I couldn't optimistically assume I would always be able to convert a stream into something I could use. The "driver" classes could always take the passed-in stream and write it out to a file, but that merely redirects the issue. Instead, I decided to leave it as-is (loading from files) and make the bootstrap phase (where images and sounds are loaded) also extract the contents of any zip files in the application data folder.

I chose to change the extension I would use to bbp (Base Block Package) so that the game won't get confused or screwed up if a standard ZIP happens to be in the same folder. The first question, however, was where I would extract these files to; it is obviously a temporary operation, so I opted for the system temp folder, easily obtained in .NET via Path.GetTempPath(). I then decided that each zip would be extracted not into that temp folder directly, but into its own folder created beneath it; this way, the files in each zip file can have conflicting names and still extract properly (although at this time that will cause both ImageManager and SoundManager to throw a fit, I decided it best not to add a third class that didn't understand that files in different folders can have the same name). The next problem was easy: I simply took all the various folder names and added them to the arrays I passed to the SoundManager and ImageManager constructors. Easy as pie.
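With DotNetZip, the bootstrap extraction comes out quite short; roughly like this (a sketch- `ZipFile.Read` and `ExtractAll` are DotNetZip calls, while the folder naming is illustrative):

```csharp
using System.Collections.Generic;
using System.IO;
using Ionic.Zip; // DotNetZip

static class PackageLoader
{
    // Extracts every .bbp package into its own temp subfolder and returns the
    // folder list, ready to hand to the ImageManager/SoundManager constructors.
    public static List<string> ExtractPackages(string dataFolder)
    {
        var folders = new List<string>();
        string tempRoot = Path.Combine(Path.GetTempPath(), "BASeBlock");
        foreach (string package in Directory.GetFiles(dataFolder, "*.bbp"))
        {
            // One subfolder per package, so identically-named files in
            // different packages cannot collide on disk.
            string target = Path.Combine(tempRoot, Path.GetFileNameWithoutExtension(package));
            using (ZipFile zip = ZipFile.Read(package))
                zip.ExtractAll(target, ExtractExistingFileAction.OverwriteSilently);
            folders.Add(target);
        }
        return folders;
    }
}
```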

Now I needed to make sure that the files were deleted when the program terminated. During startup it detects whether the special temp folder exists and deletes it, but it would be ideal if that folder could also be deleted when the program closes normally. The problem is that I was initializing all of this in a static constructor (well, not really, since apparently Mono didn't like me using static constructors for this, but it is still a static routine), and there is no static "destructor" I could use. So I opted to create a small private class implementing IDisposable, which I create as a static variable initialized with the name of the temporary folder to delete; its Dispose() method then deletes it. Easy as pie.
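The cleanup class comes out to a handful of lines (a sketch; note that Dispose() still has to be invoked explicitly from the shutdown path, since the runtime won't dispose a static field on its own, which is why the startup-time sweep stays in place as a fallback):

```csharp
using System;
using System.IO;

// Deletes a temp folder when disposed; held in a static field so the normal
// shutdown path can dispose it and remove the extracted package files.
sealed class TempFolderCleanup : IDisposable
{
    private readonly string folder;

    public TempFolderCleanup(string folder) { this.folder = folder; }

    public void Dispose()
    {
        if (Directory.Exists(folder))
            Directory.Delete(folder, true); // recursive delete
    }
}
```

Held as something like `static readonly TempFolderCleanup cleanup = new TempFolderCleanup(tempRoot);`, with `cleanup.Dispose()` called on normal exit.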

However, upon testing, I encountered an error: apparently a handle was still open in the Dispose routine. After a little digging, it was clear that the problem was at least partially a result of the Image.FromFile() method, which apparently remembers that it was loaded from a file- and the filename- and will keep the file open as long as the Image is around. Since I couldn't always be sure the temporary cleanup class would be disposed after the ImageManager (and therefore after the various Images it holds), it was difficult to make sure the files were closed.

So I decided to change my use of the FromFile() method to something that doesn't pass the Image class any filename data at all; that way the Image class can't possibly hold the file open, as long as I close it properly myself.

To do so, I replaced the Image.FromFile() call with a stream-based load.


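The replacement boils down to reading the bytes myself and handing GDI+ a MemoryStream, so there is no filename- and no file handle- for the Image to hang on to (a sketch of the pattern rather than the literal diff):

```csharp
using System.Drawing;
using System.IO;

// Image.FromFile() keeps the source file locked for the Image's lifetime.
// File.ReadAllBytes() closes the file as soon as it returns, and GDI+'s
// requirement that the stream stay alive is harmless for an in-memory stream.
static Image LoadImageUnlocked(string path)
{
    byte[] bytes = File.ReadAllBytes(path);
    return Image.FromStream(new MemoryStream(bytes));
}
```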
And so far, it’s worked a treat.

Posted By: BC_Programming
Last Edit: 17 Dec 2010 @ 12:09 AM

The locked image
 03 Nov 2009 @ 12:31 PM 

I have been using Visual Basic 6 for many years; I have reached the point where using it is effortless- nearly any problem I have, I can design and program a solution for with Visual Basic 6.

However, Visual Basic 6 is over 10 years old. Mainstream support ended a few years ago, and as of Vista, Microsoft makes no promises that programs created with Visual Basic 6 will work. Even creating programs that support the various UI features of XP could be a chore: with Vista, not only does one need to include the proper manifest resource or file to force a VB6 application to link with the newer version of comctl32, but it is almost always necessary to include an additional set of directives in the manifest to make the program request administrator permissions. I have yet to determine why some of my less trivial programs crash before they even start when run with the same security as the user, but I imagine it's directly related to COM components, their usage, and the permissions surrounding them.

Another area of concern is the use of proper icons; Visual Basic complains when you try to use an icon with an alpha channel. However, through a few API techniques and some resource-editor trickery, it's possible to have your application use 32-bit icons both as the program file icon and as the icon for your forms. Rather than repeat the material here, I will point you in the right direction if this sort of thing piques your interest: www.vbaccelerator.com- I cannot praise that site and its creator enough. While I had personally attempted many of the projects and controls he has online before finding the site (I had a somewhat-working toolbar control, and a control I called "menuctl" that allowed moving the main menu around as a toolbar), the sheer number of completed, documented, and well-written controls on his site is simply mind-blowing. There is a .NET section to his site as well, which brings me to my next point.

There are only a few reasons why a programmer would choose Visual Basic 6 for a new project today. The main reason is simply that we are stubborn. The fact that .NET is better in many ways than VB6 does not sway us to use it. The fact is, we all feel "betrayed", in a way, by the shift to .NET: millions of lines of code that were dutifully compatible through all six versions of Visual Basic now break when loaded into VB.NET. But I believe the majority of VB6 programmers have simply been blinded to the number of problems Microsoft would have faced in continuing with the same COM-oriented framework that VB4 and higher used.

COM, or Component Object Model (sometimes referred to as "Common Object Model", which is dead wrong), is a binary-compatible method of providing interoperability between applications. COM was essentially designed to prevent what was known as "DLL hell": at that point in time, DLLs provided their functionality through exposed functions, with some versions not compatible with previous versions, meaning it might be necessary to have, for example, five different versions of MFC41.dll on one's PC. The idea was that each version of a COM component would be binary-compatible with the previous version; a program designed for version 1 of "FooComponent" could still run with, and use, version 4, just without version 4's new features. This was implemented through interfaces: each version of a component would add a new interface- for example IFooComponent, IFooComponent2, IFooComponent3, and so on- and client applications wanting to use FooComponent would use the interface appropriate to the version they were written against.
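In code terms, the versioning scheme looks something like this (an illustrative C# sketch of the idea, using the hypothetical FooComponent from above):

```csharp
// Version 1 of the published contract; once shipped, it is frozen forever.
interface IFooComponent
{
    void DoWork();
}

// Version 2 adds members on a NEW interface instead of changing the old one,
// so clients compiled against IFooComponent keep working unmodified.
interface IFooComponent2 : IFooComponent
{
    void DoWorkFaster();
}

// The component implements every interface it has ever published; old clients
// ask for IFooComponent, new clients ask for IFooComponent2.
class FooComponent : IFooComponent2
{
    public void DoWork() { /* original behaviour */ }
    public void DoWorkFaster() { /* version 2 behaviour */ }
}
```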

There was, however, one problem: most of the maintenance between versions was left to the programmer of the component. They had to create the new interface, make sure the previous interfaces still worked, that old clients could still instantiate their objects, and so on. Basically, it made the mistake of putting the user of the technology (in this case, the programmer) in a critical position, with a number of responsibilities, to get things to work properly.

Microsoft, of all companies, should know that putting the programmer in a position of such responsibility is prone to failure; hell, many of them can't even be bothered to actually read standard API documentation- something that has resulted in hundreds of man-hours of programmer time being consumed by the creation of "compatibility shims" to let such programs keep working (otherwise, installing a new Windows OS would break these programs; they worked before, so as far as the user can see, the new operating system is to blame). Anyway- this failed miserably. Programmers would sometimes simply change their interfaces rather than implement both new and old ones, meaning that, like the DLLs before them, new versions were incompatible with the old ones.

It was clear that COM- or at least COM as it was then designed- depended far more on the programmer to "do the right thing" than was reasonable. So, at some point, Microsoft decided they needed a new object framework architecture.

VB6, as a COM-based language, would have required extensive changes to support this new architecture. The prospect of such a huge revision probably made them take a second look at the language itself, and the cruft it retained from previous iterations of the BASIC language. Aside from retaining archaic constructs such as "GoSub…Return", VB6 also "failed", in a sense, in a number of other areas. Error handling, for example, was still done using "On Error" statements, which redirected flow to another segment of code. It was up to that block of code to evaluate the error using the "Err" object (in VB1-3 there was only Err, the error number, and Error$, the description) and then either resume the statement that caused the error (Resume), skip that line and continue with the next (Resume Next), or raise the error again, causing it to cascade up through the call stack.

This error architecture had a critical flaw: flow could change to the error block for any reason, at any time. This meant that if the procedure dealt with resources, such as file handles or memory blocks, it would have to keep track of what needed to be undone, so that the error code could double as partial cleanup code. Another flaw was simply that it was ugly; it looked and functioned nothing like the Try…Catch statements of many other languages. Also, it could become impossible to trace exactly where an error occurred when errors cascaded; an error handler might be forced to handle an error from three levels down the call stack, so even if it understood the error in the context of its own procedure, the context the original error occurred in- and exactly what it meant- was lost.

My main language is Visual Basic 6, but I am not so blind as to reject VB.NET, or .NET as a whole, merely because it essentially replaced VB6. The truth is, we VB programmers made a large number of requests of the VB developers. VB.NET answered and fixed a huge number of those requests, and yet it is still shunned; it is clear to me that it is not merely the loss of backwards compatibility that causes such antagonism among VB6 programmers, but also the human element of resistance to change.

With previous versions of Visual Basic, one could migrate all their code to the new version with little or no difficulty.

This, however, had a price: since each new version made few, if any, demands for conversion, antique code would often simply be upgraded and imported into the new environment. Since backwards compatibility was the rule, old elements such as line numbers, GoTos, and GoSubs remained in the language. Antiquated concepts such as type-declaration characters remained in the language. Such vestiges of a forgotten era had no place in a modern language.

All the above being said, VB6 is still a language capable of creating modern applications; however, it is important for the programmers who still use it to realize that they aren't using it because it is superior, or because .NET or any other language "sucks" by comparison, but rather as a result of their own stubbornness and unwillingness to learn new programming concepts.

An anecdote, if I may, can be found in my introduction to "class modules" within Visual Basic. At first, I had no idea what they were; I simply shied away from them and stuck to forms and code modules. I used all sorts of excuses- class modules are slower, they bloat the code, and so on- all of which were almost universally fabricated, or found on the web, written by grade 8 students who barely understood the meaning of the word "class" in the context of programming or objects.

After creating ActiveX controls using the UserControl object, however, I realized the similarities, and the possibilities that could arise. My first conversion attempt was on my then-"flagship" program, the game I called "Poing". At that time, the entire game was designed using user-defined types and functions that operated on them. I understood the concept of encapsulation and managed to convert the entire architecture to a class-based object hierarchy- and it worked. My design still contained flaws, such as including critical game logic in down-level objects, but for the most part my understanding was sound.

As my understanding of the concepts improved, my antagonism disappeared. It was clear that my distaste for classes stemmed from not understanding them- the old adage that one is "afraid" of what one doesn't understand was at least partly true. This, I feel, is at the very core of the antagonism towards .NET: the main detractors of the framework are often people who neither understand the concepts involved nor realize how those concepts add possibilities and ease maintenance.

Even so, .NET has, in my opinion, one critical flaw: the IDE is slow. Even on my quad-core machine I see huge delays as IntelliSense is populated, or during any number of other operations. Perhaps it is the result of a mere 7200RPM hard drive? Perhaps I need more than my current 8GB of RAM? Who knows. I think that using a 10-year-old program, and expecting and receiving quick responses from it, has perhaps jaded me about what the extra features of the new IDE actually cost in performance; the delays feel like minutes, but in general they are only a few seconds. On the other hand, a few seconds is a lot longer than necessary to make one lose a train of thought. Then again, this same argument was used against the initial introduction of IntelliSense, and there is no denying that although the initial display of some IntelliSense lists can take time, subsequent use is nearly instantaneous, and the lists provide far more function information than the VB6 or Visual C++ 6 IDE could.

This, in addition to the ease of sharing assemblies between multiple .NET languages, is not something that should be passed up out of an egocentric desire to prevent change. The IT industry changes constantly. The fact that VB6 is now a thing of the past should not dissuade us from moving forward out of a snobbish or fictitious affection for what was the corridor of our programming efforts for many years. The complaints about VB6 when it was introduced were very vocal; it is no different with VB.NET. Yet the very complaints made about VB6 that have been remedied in .NET are now passed off as inconsequential (since, in many cases, programmers have devised ways of working around the limitations, or even of forcing behaviour that VB6 was not designed for- such as the creation of command-line programs).

The mistake Microsoft made was not the creation of .NET, but rather the belief that any sane person would move to a new platform simply because it was superior. They forgot to take account of the psychological factors involved.

Posted By: BC_Programming
Last Edit: 03 Nov 2009 @ 12:31 PM

Thoughts on VB6
