24 Apr 2015 @ 7:21 PM 

BASeBlock was one of my first projects written in C#. Before it I had written a program that worked somewhat like HijackThis, and an INI file reader class- which I discussed five years ago (that long? Amazing)- but BASeBlock was my first attempt at creating a project using a language that wasn’t horribly lobotomized (by which I mean Visual Basic 6).

I’ve found myself avoiding it somewhat for quite a long time. It uses code that I have since extracted, repurposed, and improved for other projects (such as my update library). Another problem is that there are some glaring architectural issues in its design (when aren’t there?), which came from my being unfamiliar with certain design patterns. For example, my game state management is a switch on a game state enumeration inside the drawing and game tick routines, whereas a better design is to use a game state interface of some sort and keep the different game states properly compartmentalized.

However, I’ve more recently come to rather like one aspect of BASeBlock: I designed it. I wrote it. While I imagine there are some helper classes which I may have acquired elsewhere, the bulk of the implementation is mine. There is an aspect of that which I find particularly tantalizing, especially compared to my work. Don’t get me wrong- I love what I do, and I have a lot of input into the engineering decisions that affect my work- but like any software it is designed to a certain requirement and on a certain timeline, and a lot of that simply isn’t mutable.

Thinking of it this way has cast my personal projects in a new light. For some time I’ve only thought of them, in the back of my mind, as a burden- after working 10 hours dealing with invoice printing considerations, I’ll remember my personal projects and feel bad that they have fallen so far by the wayside they’re stuck in the radiator. But I also forget that, unlike my work, my personal projects have no external director. I can add, change, remove, refactor, and redesign anything I want, and I have to answer to nobody- I’m “in charge” of what they become and how they get there.

With that in mind I’ve taken it upon myself to actually start making BASeBlock more palatable to work on. The GameState oddity is like an unsightly mole or other minor annoyance- you don’t notice it until it is pointed out to you (or, in my case, until I learned of the better pattern and used it successfully in my poorly named Prehender and in BCDodger). It is a major refactoring that I wasn’t keen on doing even when I was working on the game almost daily and had a fairly good understanding of its architecture and design, so trying it now is perhaps putting too much faith both in my abilities and in my abilities at the time I wrote it. But if I want to improve the game, I need to eliminate that unsightly implementation, or I’ll just keep avoiding it.

What is this “GameState” implementation?

In my haste, I rather forgot to explain the particulars of what I mean by “Game States”. Game states are basically what state a game is in, and they are rather commonplace. A game will have a different “state” for being paused, showing a menu, showing an introduction sequence, and so on. An “Enumeration-based” approach like the one I described might implement things like so (half-pseudocode):
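Roughly, the enumeration approach looks like this (a sketch with illustrative names, not the actual BASeBlock code):

enum GameStates { MainMenu, Running, Paused, GameOver }

GameStates CurrentState = GameStates.MainMenu;

void GameTick()
{
    switch (CurrentState)
    {
        case GameStates.Running:
            // move the ball, paddle, powerups...
            break;
        case GameStates.Paused:
            // do nothing- the game is frozen
            break;
        // ...and so on for every other state
    }
}

void Draw(System.Drawing.Graphics g)
{
    switch (CurrentState)
    {
        case GameStates.MainMenu:
            // draw the menu
            break;
        case GameStates.Running:
            // draw the blocks, ball, paddle...
            break;
        case GameStates.Paused:
            // draw the game, then a "paused" overlay
            break;
    }
}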

The basic architecture is to have a switch, or a similar style of conditional processing, within any of the routines that act differently based on the game’s current state. This does the task and works as intended, but it tends to lead to long, rambling routines, and all your different states get mixed up in one function. The interface-based approach creates an interface, and each state becomes a class that implements that interface. This can be implemented in a number of ways, but the effect is to separate the information and implementation required for each state into a separate class. This lets you add new features to a state without affecting the “namespace” of other states or interfering with their variables, which is sort of the entire point of having separate scopes to begin with.

It also allows some interesting chaining abilities. If the Running state stores all the information about game objects, for example, and knows how to draw and tick those objects, a “Slow Motion” state could simply delegate to a provided standard state but only call its tick once for every two calls; a Pause screen could skip calling the Tick function of a provided composite state entirely, instead calling its draw method and then rendering a 25% shaded box atop it, or applying a colour transformation to wash out the result, or drawing a pause logo of some sort on top to provide the “Pause” effect. An “IntroductionImage” type state might have a single purpose: you give it an image and a GameState, and it fades the image in, waits a few seconds, fades it out, and then sets the current state to the provided state. Games often show various startup logos, and this can be used to easily fade in and out through numerous images and then go to the main menu (presumably a Menu state), and so on and so forth. All of this can be implemented with the enumeration method too, of course, but it requires a lot of plumbing, which tends to get in the way of everyday development and iterative design.
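The interface-based version of the same idea might look something like this sketch (again with illustrative names- a PausedState wrapping another state to get the composite behaviour described above):

using System.Drawing;

interface IGameState
{
    void GameTick();
    void Draw(Graphics g);
}

class RunningState : IGameState
{
    // owns the balls, blocks, paddle, and so on
    public void GameTick() { /* move everything */ }
    public void Draw(Graphics g) { /* draw everything */ }
}

class PausedState : IGameState
{
    private readonly IGameState _underlying;
    public PausedState(IGameState underlying) { _underlying = underlying; }

    // never calls _underlying.GameTick(), so the game stays frozen
    public void GameTick() { }

    public void Draw(Graphics g)
    {
        _underlying.Draw(g);                               // draw the frozen game as-is
        using (var shade = new SolidBrush(Color.FromArgb(64, Color.Black)))
            g.FillRectangle(shade, g.ClipBounds);          // wash it out
        // ...then draw a "Paused" logo on top
    }
}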

Posted By: BC_Programming
Last Edit: 24 Apr 2015 @ 07:21 PM

Comments Off on Back to BASeBlock
Categories: Programming
 16 Oct 2014 @ 7:30 AM 

Console applications are fairly common for handling simple, routine tasks; sometimes the task is straightforward and otherwise runs on its own with little to no user input. Sometimes it generates a lot of output that isn’t really necessary. But inevitably, we find we want to launch one of those console applications from a Windows program, and in order to integrate it well into your program you figure you should do something other than just have it appear in a console window. Enter standard handle redirection.

Launching a Process in C# is fairly straightforward:
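For example (using cmd.exe here purely as a stand-in for whatever program you actually want to run):

using System.Diagnostics;

// Start the program and get a Process object back for it.
Process proc = Process.Start("cmd.exe", "/c dir");
proc.WaitForExit();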

This is just one way to do it; another common approach is to create a ProcessStartInfo structure and use that as part of Process.Start(). This won’t give you the Process object until it launches, though- that can be troublesome, particularly if you decide you want to redirect the standard output, input, or error streams.

There are a few different approaches to redirecting the standard handles from within the .NET Framework. You can set the RedirectStandardOutput, RedirectStandardInput, and RedirectStandardError properties of the ProcessStartInfo to true, and then manipulate the StandardOutput, StandardInput, and StandardError streams yourself. This can make for very complex code if you want a responsive design. A better alternative is actually built into the Process class itself: an event-based approach, where an event fires when, and only when, one of your redirected output streams has output. There are a few gotchas in this approach that can trip you up if you aren’t careful, though. The appropriate RedirectStandard* property on the ProcessStartInfo should be set to true; UseShellExecute should be false; and EnableRaisingEvents on the Process should be set to true. Connect event handlers to OutputDataReceived or ErrorDataReceived, start the program, and then begin asynchronous reading of the output and error streams with BeginOutputReadLine and BeginErrorReadLine. The events you set up will fire, and e.Data will contain, for example, the line of text that was output. Here is a quick example:
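Something along these lines- "SomeTool.exe" standing in for whatever console program you want to run, with error handling omitted:

using System;
using System.Diagnostics;

class RedirectExample
{
    static void Main()
    {
        var psi = new ProcessStartInfo("SomeTool.exe")
        {
            UseShellExecute = false,            // required for redirection
            RedirectStandardOutput = true,
            RedirectStandardError = true,
            CreateNoWindow = true
        };

        var proc = new Process { StartInfo = psi, EnableRaisingEvents = true };
        proc.OutputDataReceived += (sender, e) =>
        {
            if (e.Data != null) Console.WriteLine("OUT: " + e.Data);
        };
        proc.ErrorDataReceived += (sender, e) =>
        {
            if (e.Data != null) Console.WriteLine("ERR: " + e.Data);
        };

        proc.Start();
        proc.BeginOutputReadLine();   // start the asynchronous reads
        proc.BeginErrorReadLine();
        proc.WaitForExit();
    }
}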

With this you can fairly easily redirect the output and error streams from a command line (or any other) program, and do with them what you like. You could even parse each output line and present standard input to “emulate” a user running the program. This could be useful for test harnesses, as well as for when you use a command line tool but would rather that not be apparent- the command line program can be run hidden, and you can parse and interpret the standard output as needed to update your UI. (This is in some part how many Linux GUI applications function.)

Posted By: BC_Programming
Last Edit: 16 Oct 2014 @ 07:30 AM

Comments Off on Redirecting Standard Handles – C#
Categories: .NET, C#, Programming
 30 Jul 2014 @ 8:56 PM 

This post is inspired mostly by an interesting thread posted to the Computer Hope Forum. To summarise, this individual is looking to effectively re-enable Windows’ ability to run USB flash drives automatically. There is quite a bit of griping about change, about how the user should be in control, etc. Fairly understandable.

While there is certainly some merit to the idea that a user should be able to configure their system how they want, to imply that it is not possible is rather silly. Though the software as provided has had that feature removed, there is always the option of writing your own solution. Of course, without programming expertise, you may be led to ask for such a program on a forum. In fact, that is exactly what the post is about- it asks for a free segment of batch script based on slightly vague but mostly understandable requirements, which are unfortunately interspersed in a personal, contrived rant about how things just aren’t what they used to be.

Security is not something that should only concern people in top secret facilities, working for governments, etc. It should concern any person that uses a PC. While the ability to toggle an option in order to force USB Flash Drives to autorun (via autorun.inf) like CD-ROM drives when they are detected certainly makes sense from the standpoint of somebody who requires or wants that feature in their everyday computing, it does not make sense at the grander scale. An option exposed to the user is an option any program can change at will.

That is less of a concern when you think about it- if a program has already run and is able to change those settings, then it doesn’t really need to change those settings, since you are already compromised. And this is true. However, software has a habit of interacting in unexpected ways. In fact, the very feature being discussed was never explicitly added; the feature was for CD-ROM installations, and in those days discs were stamped and always provided by the manufacturer. There was little reason to ever suspect such a disc might contain malware, and back then, making things smoother and easier to use was considered fairly important. Burned discs raised further concerns- this is why Windows XP and Vista display a prompt. The addition of USB flash drives was the same deal. The biggest issue is simply attack surface. There are a number of attacks which exploit autorun capabilities that are enabled by default. Conficker was one such program: autorun would automatically install and infect any system the flash drive was plugged into, which would then reach across the network and infect further systems, which would infect any flash drives plugged into them, and so on. Even the addition of a “default to disabled, but allow people to enable it” feature only goes so far. OS security is about reducing attack surface; various exploits could easily trick the system into running the autorun program. A prompt or dialog of options is not a security solution by any stretch of the imagination, particularly if it isn’t running from a secure desktop.

That is, at least, why they have so far been unable to create the functionality they desire, which, as I understand it, they expect to achieve through simple modification of a batch script. Even so, there are some ways to provide this feature. For one, while it is not possible to enable autorun functionality for anything other than CD-ROMs or DVD-ROMs, that doesn’t prevent emulated CD-ROM and DVD-ROM drives from working in that manner. This is how U3 Smart drives work, for example- unfortunately, drives that aren’t U3 Smart enabled don’t work that way as-is and would require modifying the protected areas of the flash memory. I am unsure whether that is possible.

In their postings we see they found and used one alternative, AP USB 47. The claim, however, is that it is very limited for what they are looking for. This seems- odd. It can be configured to launch automatically and it can be configured on all their systems, so it would work just fine for exactly what they want- and it supports the same autorun.inf features as well. If I were to reply I might inquire how it falls short of their requirements, because they didn’t specify- it does exactly everything they described.

The best I can gather is that they think the capability should not require running an additional program. This is a rather shallow, and perhaps even ignorant, perspective, because the capabilities of an OS are usually better kept to a minimum. When Microsoft was loading Windows with bells and whistles, that was, security-wise, the worst time for them. And we aren’t just talking about corporations; the fact that any XP machine can be infected, instantly, by plugging in a USB flash drive is exactly that sort of problem. Removing that capability entirely reduces the attack surface by removing the surface itself, rather than trying to coat it with armour-all and hoping the water keeps beading. That is the reason. Whether they want to accept it is their problem. But suggesting that a very specific requirement involving a side-hobby- one that has nothing to do with actually running a business- is somehow of paramount importance to a vendor whose software is designed for solving real-world problems, and that not being able to do it “makes you sick”, is perhaps astigmatic.

We see their requirement stated somewhat more plainly- and, in particular, why they don’t want to run the program in question- here:

i can save time not clicking buttons. And limiting how much software is installed and starting up on my little duo.

As such we see their requirement is that they don’t want to click buttons (and yet they don’t want to simply launch a program when the drive is inserted, so I’m not sure what they want here) and, more importantly, they “want to limit how much software is installed and starting”.

That second requirement is always baffling. The program in question perfectly duplicates the older autorun.inf behaviour- I am unsure why they keep referring to it as autorun.ini; I hope it is because they are misspelling it in posts and not because they have been trying to create the wrong file this entire time. The only reasons they don’t want to use it are that they don’t want to use the disk space- a full, what, 100KB?- and they don’t want to use processor power on an extra program running in the background. The first is invalid: such a small amount of disk space is simply inconsequential. And the latter is not really valid either. They are working, I believe, on the assumption that more processes means a slower system, which they cannot support with any sort of evidence, because no such evidence exists- it is not true. In this case particularly, the program is going to spend almost all of its time sleeping. It is just listening for the same system events that the OS listens for for its own autorun features, but instead of ignoring the attachment of fixed-disk type drives, it will emulate the standard autorun.inf capabilities and attempt to autorun them.

Thus I am left to conclude that this individual is either a troll- supported somewhat by the fact that their first post is practically the furthest thing from making people want to help them, all while effectively feeling entitled to that assistance because, hey, it’s the internet, and making grandiose false analogies to support their ill-constructed perception of “how it should work”- or simply a poor communicator, because based on what they’ve written I have no idea what they want, particularly given that the aforementioned program they already tried hits every single one of their points, save the one implied point that is found between the lines.

The problem, in this case, is not with the software they are using. It is, in my opinion, with their approach to what they think is the problem. In their eyes, the problem is clearly the autorun. But why can the problem not be something further back, something that has led them to require the ability to arbitrarily plug in USB drives in order to use and run software? This is particularly true since they cannot seem to tell the difference between autorun, the features of AutoRuns (which has nothing to do with autorun at all), and features like the local registry options to auto-start applications- something which, ironically, is used by the very program they reject, for precisely the purpose they claim to require. From my reading, they just want a USB flash drive that triggers an action, and they really don’t need any software on said drive at all. That software is easy to write. I could write it now: just have it register a listener to watch for device-added and device-removed events, and on device-added parse and act on the autorun.inf contents if present, irrespective of whether the drive is an optical drive or not.

However, I’m not going to, because that isn’t going to solve the problem, which is a rather loose- I might even say poor- understanding of the underlying technologies involved, such that their question doesn’t really make complete sense, particularly given the rejected options.

Posted By: BC_Programming
Last Edit: 30 Jul 2014 @ 08:56 PM

Comments Off on When you don’t bother to find the problem, you won’t find the right solution
 10 Jun 2014 @ 6:54 AM 

It is an interesting phenomenon to consider: our human ability to turn that which we find extraordinary into something mundane, or even undesirable, through repetition.

For example, for as far back as I can remember, I would have given anything for a job that lets me deal with computers and/or programming all day long. I even promised myself I would never take it for granted.

For the last year that has been a reality- and that reality has turned from a dreamy visage, one I kept expecting to be forced out of and back into the harsh reality where it did not exist, into something more like the constant cold feet you get from not wearing socks during winter. It’s something we might not notice without reflection, but something I think we should all be vigilant about, to prevent stagnation.

I recently noticed that about myself. And it has an implicit duality that makes it both dumb and valid simultaneously. On the one hand, we have a case of comparing what we thought to reality- and what we think something entails seldom fully reflects what that something is in reality. After all, even the best job will leave you frazzled and worn out at times.

At first I thought this was a result of it not being a fit. Perhaps, I reasoned, I simply cannot do this. On further thought, though, I think it is simply a good indication that one is working. It’s possible to love doing something while at the same time not enjoying it. This also has an interesting mixin (no pun intended) when we look at the overall message of books like “Clean Code”. Many of the common frustrations of programmers are the result of, well, programming. For example, we are constantly bumping into some code limitation we built into a library, or we are frequently writing and adapting to new viewers and wrappers and filters and other programs that call other programs whose syntax is frequently changing, and so on and so forth. Just yesterday I (and a few others) spent the entire day trying to figure out a single problem that turned out to be a comparison method that was not transitive- and it was used in completely redundant debugging code. At the time I wasn’t able to appreciate the irony of something written to aid debugging later causing problems with debugging, but I can do that now.
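To illustrate the sort of thing that bit us (not the actual code, of course)- a comparison that treats “close enough” values as equal is not transitive, and a sort can behave very strangely when fed one:

using System;

class NonTransitiveExample
{
    // Not transitive: a ~ b and b ~ c does not imply a ~ c.
    static int FuzzyCompare(double a, double b)
    {
        const double epsilon = 0.5;
        if (Math.Abs(a - b) < epsilon) return 0;   // "equal enough"
        return a.CompareTo(b);
    }

    static void Main()
    {
        var values = new[] { 1.0, 1.4, 1.8, 1.2, 1.6 };
        // 1.0 ~ 1.4 and 1.4 ~ 1.8, but 1.0 < 1.8; Array.Sort can produce an
        // inconsistent ordering (or complain) when given a comparison like this.
        Array.Sort(values, FuzzyCompare);
        Console.WriteLine(string.Join(", ", values));
    }
}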

It is an interesting contrast to compare the sort of solid, well-engineered, modern software you expect to see running businesses’ mission-critical operations with the software actually being used. As an example, at my previous job we clocked in and out using a touchscreen system. The software was reasonable on the surface, but I was able to copy and decompile it and see the guts, and it featured innumerable things that I found abhorrent: hard-coded passwords and usernames, poor flow control, and all sorts of other oddities that seemed completely out of place for a software suite with its intended level of usage.

Of course that perspective fails to take into account the possibility that the software had to remain compatible in some way with older software; or perhaps at some point it was moved from ISAM to another database until finally being moved to MySQL and accessed from C#. So it has a lot of legacy.

I have yet to finish the book “Clean Code”, or really even get started. But one message that seemed clear was the idea that software should be rewritten at intervals. This flies in the face of my more conventional understanding that rewriting an application or program is bad, especially considering how many examples we have of rewrites going wrong. But the reasoning it presents seems quite sound, and it certainly seems to fit with my own experience. The fundamental idea is to prevent too much legacy from carrying over from decades previous. For example, if we have a MySQL table called “RESVPOSX”, then we have a very clear smell. And in the long run it is always best to eliminate as many smells as possible. It just makes sense.

The problems appear when airing out the code in this fashion- refactoring it, renaming things, moving things around, and adjusting them- starts to stack up in terms of man-hours and causes even more problems. It can seem almost counter-productive: one day it is agreed that the configuration names need to be standardized; the next you are dealing with the problem of moving old systems using the non-standard names to the new system without breaking other systems at the same site that are running the old version, and without breaking new versions when those old versions run. It’s an endless game of cat and mouse, and fundamentally there is no end to it. Any piece of non-trivial code can always be made faster, cleaner, easier to read, or easier to use from other code. The real question is where you draw the line.

As a direct example I’ll take BASeBlock. For the moment we will ignore that I haven’t touched it in ages, since that isn’t related to my point. It was effectively a rewrite of an earlier game that was pretty much the same. I learned new things while writing that first one and applied them to BASeBlock; when I wrote BASeBlock I learned even more and applied that to “Prehender”, a game whose name had some clever backstory I don’t remember at all, but which has at least succeeded in being unique. This is important, I think- learning new things about the language and technologies you use. It’s even cooler when you get to use those technologies and programs in your work later on- there is something fulfilling about having update code I wrote over 5 years ago, in my spare time, when I was making minimum wage, be applied and reconstituted with a dose of liquid refactoring into a real live project used by real live people. At the same time, however, it can also be scary. To take another direct example: early on I felt this website could use an updater. Or, rather, the creation of an updater to meld with the PHP of the site seemed like an interesting challenge. And that is what I did- I created a web-based update system that I found to be quite powerful. It existed in a few of my programs, but to be honest nobody is going to be affected financially if their version of BASeBlock is out of date. There is something- uneasy- about having that code placed in an environment where it working, and working well, is of the utmost importance. This I have done, and if there is one thing I can say, it’s that we have somehow moulded it into something that works great and presents a pretty cool looking sort of “front-end first” experience.

My use of the word “we” rather than “I” above is important. If there is one thing that is important to realize, it is that in any development team there are no islands of one. Without the ability to work with others you are going to fall on your face; and without the ability to interact with others without exposing yourself as socially inept you will also experience problems. The worst mistake any developer can make at any point is to ever think they are, for whatever reason, smarter than their co-workers. That is an incredibly stupid and perhaps ironically small-minded approach. I won’t get into that too far except to state that if at any point you think that, you are wrong. Wrong wrong wrong. Additionally, it’s worth noting that if you are in such a state of mind where you are thinking such things about your other team members, they are almost certainly thinking the same thing about you. A team where every person thinks they are the smartest will end up being the dumbest- it’s anti-productive, and trading the productivity of a team for some form of validation of your own hubris is one of the biggest mistakes you can make.

A much better approach seems to be that expressed by Scott Hanselman: fundamentally, if you just assume you are the dumbest, the worst outcome is that you learn. On the flipside, this does not mean you should let your team members blame-shift everything onto you. One needs to strike a healthy balance- which is difficult, depending on the size and makeup of the team in question- and work towards a common goal. Too often you see development teams and IT teams turn into a game of office politics where individuals blame-shift onto other members of their team in order to position themselves in some entirely imaginary “pecking order”. This is counter-productive because it doesn’t establish any form of teamwork; instead it drives wedges between members of the team and creates rivalries.

Another consideration I think is important is to keep your mind working. Speaking personally, the challenges I face through my work are wholly different in many ways from any I would face in my work on BASeBlock or Prehender, or many of my other projects- still programming, of course, but the problem sets are entirely different. Where one focuses on legacy compatibility and data integrity, another focuses on what particle timeout and acceleration or alpha falloff makes the coolest explosion. There is something to be said for being able to write an application or program without having to care about whether it ever works- this is something I did back with VB6; I had so many program ideas over time (it’s a shame I lost them)- I came up with entire MenuStrip-type frameworks that used hooks, wrote custom tab controls, and all sorts of things like that, but at no point did those things need to work. That is part of what might have made it most enjoyable. That is not to suggest that working on programs with goals that need to be met is much different- the difference is simply that the goalposts are more rigid, but they also won’t get smushed under the weight of not caring, because you need to care on a professional level.

Posted By: BC_Programming
Last Edit: 10 Jun 2014 @ 06:55 AM

Comments Off on Anything can be the norm
Categories: Programming
 14 Oct 2013 @ 12:21 AM 

This is a topic that recurs between myself and my non-programmer friends, as well as with people providing feedback on some of the open-source repositories I contribute to and help maintain: questions such as “what drives you?” or “if you aren’t getting paid, why do you do it?”.

My normal response is “Why do painters paint?”- the old standby. But I think it’s an interesting question too. The reasons that I like to develop, design, and implement software can be traced to the same things. I enjoy programming for a multitude of reasons. It’s something I’ve found that I’m good at, and that I can continue to grow better at over time; there is no glass ceiling of skill that I will hit. Something about it makes it far more expressive than it would seem at first glance. There is more than one way to write a program for a non-trivial task, and with experience and skill you become familiar both with the language in question and with how you use it; and you can carry some of that skill and ability over into everything you do.

There is something innately fulfilling about looking around and seeing people benefit from software you’ve written, regardless of what it is. Arguably, depending on the software, there is also a responsibility that you implicitly take on regardless of any “Terms of use” that disclaim such warranty. For example, even though you are legally in the clear if a piece of software does something it shouldn’t or causes a problem for a user, you are still bound by the contract of not being a massive douchebag to at least try to help- especially in terms of fixing the bug, if not helping to recover any time or data the user lost.

Even in the most boring and otherwise dry application, there is some consideration to be made for how maintainable the software is going to be. If you go into a large project and just code what you need by the seat of your pants, you are just going to make more work for yourself in the long run; so you have to try to get an idea of the larger picture, and then focus on drawing out the details in each area so it flows smoothly- like the charcoal pre-painting step of an accomplished painter, you are able to see whether it will work in the overall composition as per the software requirements. Additionally, even within the most restrained set of requirements there are choices and decisions to be made by the programmer in terms of the actual make-up of the code and logic- e.g. what will be represented by classes and how those classes will interoperate. A common quotation (though I’ve forgotten the attribution) is “The definition of insanity is doing the same thing twice and expecting different results”. I guess you could say the definition of art is getting two people to do the same thing and getting vastly different results.

This takes me to the almost separate topic of whether Software Development is an “art”. I’d say that it is in many ways- the aforementioned tongue-in-cheek definition being a good example of why this is. But at the same time, how is it “Computer Science” when it’s actually an Art? This is something that baffles onlookers to the subject- some Programmers feel they are artists. Others feel they are academics working in a professional Science.

The answer is simply that they are separate. For example, when you study light and colour and pigments in chemistry and science classes- that is science. It is when you apply that understanding, as well as an understanding of how we interpret colour, that it becomes an art; and in terms of painting (or any graphical art) the “art” is more about using skill with the medium to convey a message. When it comes to programming, we have a bit of a different thing going on; I’d say that for software development you actually have to deal with two audiences- the compiler/interpreter/computer that is compiling and running your code, and the people who will read your code later on, including yourself. It becomes an “art” to balance the requirements of the program and try to come up with the most elegant, easy-to-read source code to describe an otherwise complex problem.

I imagine this is partly why I find it so fascinating. Another reason is that once I’ve written something well and moved on, I can always either revisit that project or class to make improvements, or simply touch it up and post it here on my blog (which I’ve done for a few classes)- which has another interesting sidebar in that those improvements often come as a result of opening the project or class to touch it up for a new blog post. In that way I build up a library of small libraries and classes that I can use for a variety of common tasks, making complex requirements such as high-level automatic support for ListView sorting UI handling almost trivial. I think this sort of “art” helps the end-user, because it means, for example, that otherwise complex functionality, which now takes a single line to implement, basically needs a good reason not to exist. You cannot argue that it would take too much time to add a single line of code, and the class itself being reasonably well-tested is a sort of assurance of how well it will work.

That isn’t to say it’s always fun and games. To be honest I’m actually a bit disappointed with how little work I’ve done- I mean, I do have a good number of projects, but I feel I could have a lot more, and even then the projects I do have suffer from neglect, because unfortunately there are only 24 hours in the day and I’ve decided against moving to Venus, because I prefer not to have my skin boiled off and corroded by a real-life dutch oven of a planet, even if that would give me a day that is 100 times longer. And that joke had far too much setup and a delivery that makes me glad I didn’t pay shipping. Anyway- I don’t even remember when I last opened BASeBlock, and even then it was probably to see how I did something, to try to adapt it for a work project; I would have to look back even further to find the last time I actually opened it to get some work done on it. Another issue I’ve found for those projects recently is that, now that I do this sort of thing for my actual work, I find that- unlike at my previous jobs- at the end of the day I simply don’t want to see Visual Studio anymore. This is actually fine by me- it tells me that what I am doing is mentally engaging and I’m still learning.

This actually makes me understand some of the weird habits I’ve seen from career programmers. I’ve noticed, for example, that many career programmers who have hobby projects will write those hobby projects in a different language from their work, which I had never entirely understood. But now I can see it- they want to distance those hobby projects from the work they do to make their living, if only in their own mind. Even using a separate IDE for the same language can sometimes make the difference. For some Java work projects I used NetBeans, but for my hobby programming in Java I used Eclipse (and have now finally switched to IntelliJ IDEA). The fact that you are in a different environment is a “cue” to your brain to do stuff your own way, I suppose.

That said, I still hope to eventually get back into developing Prehender. It’s fun to learn new stuff and even deal with OpenGL while still being able to rely on my skills with C#. And it would have the advantage of being a completely different playing field from anything else I do or have done.

Speaking of “Prehender”, whose name’s origin I’ve forgotten (though there was a reason for it)- I ought to write a blog post on the gameplay ideas I had for it. It’s not revolutionary, but it’s simple enough to be something I feel capable of, even with my limited 3-D experience, while still being something I think would be a playable and engaging game experience.

Posted By: BC_Programming
Last Edit: 14 Oct 2013 @ 12:21 AM

Comments Off on Software Development: What makes it fun?
Categories: .NET, C#, Programming
 06 Oct 2013 @ 6:39 AM 

Within my updating component, each element is given a little progress bar right within the ListView, drawn using a gradient background. I’ve given passing thought to the idea of figuring out how to draw the standard themed progress bar within the ListView instead. Today I decided to delve into the seedy underbelly that is the Theme API and sort out how to do exactly that.

The Theme API

The Theme API resides in uxtheme.dll. Using a ‘theme’ component involves three steps:

  1. using the OpenThemeData() function to get a handle to the Theme.
  2. using the DrawThemeBackground() and DrawThemeText() functions to draw applicable parts of that theme element.
  3. closing the theme handle with the CloseThemeData() function.

At its core, themes are really just groups of images; OpenThemeData() grabs a block of images, and you use the parameters of DrawThemeBackground() and DrawThemeText() to select which portion of the image to use. The Theme API refers to these as “parts” and “states”. The first step to using the Theme API is, of course, to declare the functions you will be using. Unfortunately, while the functions themselves are well documented, the actual constants for using them are less available; so while I was able to grab some useful declarations from PInvoke.NET, I had to use the Windows SDK to recreate the enumerations. Since I am only interested in the progress bar portions at this time, I only recreated the appropriate enumerations for them. Here is the class I came up with.
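Boiled down, the declarations amount to something like this (a condensed sketch of that class; the part and state values are the ones I pulled from vsstyle.h, so double-check them against the SDK headers if you use this):

using System;
using System.Runtime.InteropServices;

public static class ThemePaint
{
    [StructLayout(LayoutKind.Sequential)]
    public struct RECT { public int Left, Top, Right, Bottom; }

    // Progress bar "parts" of the PROGRESS theme class (from vsstyle.h)
    public enum ProgressParts { Bar = 1, BarVert = 2, Chunk = 3, ChunkVert = 4, Fill = 5 }
    // "states" for the Fill part
    public enum ProgressFillStates { Normal = 1, Error = 2, Paused = 3, Partial = 4 }

    [DllImport("uxtheme.dll", CharSet = CharSet.Unicode)]
    public static extern IntPtr OpenThemeData(IntPtr hWnd, string classList);

    [DllImport("uxtheme.dll")]
    public static extern int CloseThemeData(IntPtr hTheme);

    [DllImport("uxtheme.dll")]
    public static extern int DrawThemeBackground(IntPtr hTheme, IntPtr hdc,
        int partId, int stateId, ref RECT rect, IntPtr clipRect);
}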

It’s worth noting there is actually a lot in common between some of the theme elements, so it might make sense for a “full API” sort of implementation to handle all the different cases in an early-bound fashion that lets you choose the appropriate component. They all use the same functions, so it would be a matter of separating each specific type into a different implementation of an abstract class of some sort. But that is for another post for sure. Here we are focused on the progress bar. This code draws the progress bar on the given Graphics object by grabbing its DC and using the Theme API functions. Also note the “Default” action, which draws a plain-style progress bar box without the Theme API. This is to make sure it still works if themes are disabled on the system it runs on.
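The drawing routine itself amounts to something like the following sketch- grab the DC from the Graphics object, draw the Bar part and then the Fill part, and fall back to a plain rectangle when no theme is available:

// Inside the ThemePaint class from above (requires using System.Drawing;)
public static void DrawProgressBar(Graphics g, Rectangle bounds, float percent)
{
    IntPtr hTheme = OpenThemeData(IntPtr.Zero, "PROGRESS");
    Rectangle fill = new Rectangle(bounds.X, bounds.Y, (int)(bounds.Width * percent), bounds.Height);

    if (hTheme == IntPtr.Zero)
    {
        // "Default" action: no theme available, so draw a plain-style bar instead.
        g.DrawRectangle(Pens.DarkGray, bounds);
        g.FillRectangle(Brushes.LimeGreen, fill);
        return;
    }
    try
    {
        IntPtr hdc = g.GetHdc();
        try
        {
            RECT rcBar = new RECT { Left = bounds.Left, Top = bounds.Top, Right = bounds.Right, Bottom = bounds.Bottom };
            RECT rcFill = new RECT { Left = fill.Left, Top = fill.Top, Right = fill.Right, Bottom = fill.Bottom };
            DrawThemeBackground(hTheme, hdc, (int)ProgressParts.Bar, 0, ref rcBar, IntPtr.Zero);
            DrawThemeBackground(hTheme, hdc, (int)ProgressParts.Fill, (int)ProgressFillStates.Normal, ref rcFill, IntPtr.Zero);
        }
        finally { g.ReleaseHdc(); }
    }
    finally { CloseThemeData(hTheme); }
}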

So now the question is: how do we utilize this for something like a ListView subitem? The answer is, surprisingly (or perhaps relatively), easily- we simply set the OwnerDraw property and handle the appropriate events. I created a sample project which simply advances and shrinks the progress bar values over time, by different degrees, in a number of list items. To do this I created a relatively simple wrapper class around some simple data. This is my preferred pattern when working with the ListView- I typically attach a data class to each ListViewItem through the Tag property, which lets me carry all sorts of useful data along with it; this can be particularly useful in cases where delegates or Actions are passed ListViewItems.
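The wrapper is about as simple as it sounds- something along these lines (the names here are illustrative rather than the exact ones from the sample):

using System.Windows.Forms;

class ProgressItem
{
    public string Name { get; set; }
    public float Progress { get; set; }    // 0.0 through 1.0
    public float Delta { get; set; }        // how much to advance on each tick

    public ListViewItem BuildItem()
    {
        var lvi = new ListViewItem(Name);
        lvi.SubItems.Add("");               // subitem 1 is where the progress bar will be drawn
        lvi.Tag = this;                     // attach the data class through the Tag property
        return lvi;
    }
}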

With that out of the way, I could work on the bulk of things. The sample program exists to strip away the surrounding faff that would come with simply releasing the Updater as-is, which wouldn’t work well as a simple demonstration of the progress bar functionality. Here is the code behind the Form itself:
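In outline it looks something like this (a sketch, assuming a two-column Details-view ListView named lvwDisplay with OwnerDraw set to true, and the ProgressItem wrapper above):

using System;
using System.Threading;
using System.Windows.Forms;

public partial class frmProgressDemo : Form
{
    public frmProgressDemo() { InitializeComponent(); }

    private void frmProgressDemo_Load(object sender, EventArgs e)
    {
        for (int i = 1; i <= 8; i++)
            lvwDisplay.Items.Add(new ProgressItem { Name = "Item " + i, Delta = 0.01f * i }.BuildItem());

        var worker = new Thread(() =>
        {
            while (true)
            {
                Invoke(new Action(() =>
                {
                    foreach (ListViewItem lvi in lvwDisplay.Items)
                    {
                        var pi = (ProgressItem)lvi.Tag;
                        pi.Progress += pi.Delta;
                        // advance until full, then shrink back down, and so on
                        if (pi.Progress >= 1f || pi.Progress <= 0f) pi.Delta = -pi.Delta;
                    }
                    lvwDisplay.Refresh();
                }));
                Thread.Sleep(50);
            }
        }) { IsBackground = true };
        worker.Start();
    }

    private void lvwDisplay_DrawColumnHeader(object sender, DrawListViewColumnHeaderEventArgs e)
    {
        e.DrawDefault = true;
    }

    private void lvwDisplay_DrawSubItem(object sender, DrawListViewSubItemEventArgs e)
    {
        if (e.ColumnIndex == 1)
            ThemePaint.DrawProgressBar(e.Graphics, e.Bounds, ((ProgressItem)e.Item.Tag).Progress);
        else
            e.DrawDefault = true;
    }
}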

It is reasonably short. In short, the Form creates a thread that iterates over all items, advances (or decrements) their progress each time, and then forces the ListView to refresh. The actual drawing logic is in lvwDisplay_DrawSubItem, which basically just draws the item with the given progress within the set bounds for index 1.


Behold! The result! Beautiful, really. The ThemePaint class can also paint the Error and Paused progress bars. One thing I’ve tried is getting the “marquee” effect working properly- there appears to be a way to do so, but I’ve yet to work out the best way to get the appropriate effect. Perhaps in a future post I will generic-ize the ThemePaint class into a set of classes for the various theme-able things that can be drawn; though such a set of classes may have reasonably dubious value, it might be a good exercise.

Posted By: BC_Programming
Last Edit: 06 Oct 2013 @ 06:39 AM

Comments Off on Drawing Themed Progress Bars manually
Categories: API, C#, Programming
 16 Aug 2013 @ 2:17 AM 

Discussions about programming and programming languages are frequent. They can get heated. One interesting notion is the notion of language “prettiness”; what makes it interesting is that it is heavily subjective. For example, a Programmer may say (hopefully with hyperbole) that “BASIC makes my eyes bleed”; while another programmer could easily say the same thing about C.

I’ve given this topic some thought, and I figured I would share some of my conclusions on the matter. First, it’s clear that the notions are entirely subjective. This is clear because if they were in fact objective, there would be some strong factual basis and measurement for “code beauty” or “prettiness”. Since there isn’t one, and since not everybody agrees on what is or is not pretty, it’s subjective.

Let’s consider a very short program, written in a few languages. This isn’t anything as complicated as the Anagrams Search Program that I’ve written about before for various languages; this one will simply count from one to 100 and print out every 6th even number. Let’s start with Minimal BASIC, which is a restricted subset of BASIC. We’ll ignore the fact that we could simply get the same output by counting by 12.

What can we say about that? Early programming languages, such as those that would require this limited subset, were from another time. A lot of the focus of software then was simply getting things to work; we didn’t yet have the luxury of fancy structured programming techniques, OO techniques, or anything of the sort. We had to get by with what we had; sort of like primitive humans, who had a vocabulary of guttural noises and thus didn’t have the verbal dexterity to express themselves accurately or convey certain nuances. Of course we still have that problem with modern languages, but we can do a lot better at expressing our thoughts than with a series of grunts now. As such, the logic here is basically lowest common denominator; this was a dialect that lacked even IF…THEN, FOR…NEXT, and WHILE…WEND; in the above, loop and IF-block functionality is “emulated” using GOTO. I added comments in a few locations, but that doesn’t help completely. Another problem is that in those early implementations all lines had to be numbered, so if you had to insert lines of code between two other lines, you needed to either renumber every single line and every single GOTO that referred to those lines from that point downward, or use a lax line numbering scheme that left room for lines of code to be added later. Most conventions go with 10; I went with 100 for no particularly strong reason.

Let’s upgrade this to a more Modern dialect- Structured BASIC programming:

It’s hard to argue that this is not clearer and easier to read and understand than the previous example. BASIC- even in the form of Visual Basic or Visual Basic.NET today- gets something of a stigma attached to it. Part of this is due to Edsger Dijkstra’s “Go To Statement Considered Harmful”. However, I think it’s fair to point out that the dialects of BASIC often mentioned with such chagrin are typically what the original creators of BASIC refer to as “Street” BASIC. What this means is that they aren’t truly the type of language the creators had in mind. When many people think of old versions of BASIC, some of the first things that come to mind are line numbers and excessive GOTO usage, but this was not the vision the creators had. It’s simply what ended up becoming a “standard”.

As we move forward through different programming languages, we see that they all modernize in different ways. Like an evolutionary tree, some languages die off; others find their own evolutionary niche and prosper there. Still others are able to expand and “dominate” the general programming consciousness, simply because of their reliability and general-purpose applications.

A good example would be to show the above example written in C#:
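Something along these lines (a sketch- the exact ranges and predicates in the original listing could have been arranged a little differently):

using System;
using System.Linq;

class EverySixthEven
{
    static void Main()
    {
        var values = Enumerable.Range(1, 100)                    // count from one to 100
            .Where(i => i % 2 == 0)                              // keep only the even numbers
            .Where((value, index) => (index + 1) % 6 == 0);      // then take every 6th of those

        foreach (var value in values)
            Console.WriteLine(value);                            // 12, 24, 36, ... 96
    }
}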

Arguably, this isn’t even easier to read than the original BASIC version. Though in practice it is slightly easier to follow, once you understand the functions and constructs being used in each. In this case, it uses LINQ (Language-Integrated Query) which I believe I’ve used liberally in my other blog posts covering C# topics. Here the code uses Enumerable.Range() to create a sequence of elements from 0 to 100; then it uses the Where() extension method to transform that sequence into a sequence that only returns every 2nd element; then it calls Where() again on that result to take every 6th element.

When it comes to programming language analysis, one will find that the languages they currently know (to a reasonable, functional degree) will heavily influence both their opinions on other languages as well as how they try to learn those languages. This is something that needs to be taken into account when looking at Programming languages and trying to come to a reasonable description and analysis. This post itself was primarily inspired by a forum post I saw that decried another user for using FreeBASIC. Their argument consisted primarily of “it’s ugly”; but that’s purely subjective. I’ve seen others issue complaints about languages being ugly because they lack square brackets, or vice versa. I would argue that such a consideration and analysis- steeped heavily in one’s own experiences- needs to be combatted at the rational level. For example I work primarily with Java, and C#, but I also started with Visual Basic. Therefore, I can understand the considerations on each end. When I was using VB I thought C-style language and block statements were ugly; when I first started using C# the main thing that cropped up was I wasn’t putting semicolons at the end of statements. Now, when I go back to VB.NET or VB, the lack of semicolons and braces can be weird; instead of End If I will try to put in a }, for example; or I accidentally insert semicolons at the end of statements without thinking.

Even so, I can still see the capabilities and interesting design decisions that went into Visual Basic, VB.NET, and languages like FreeBASIC, and try to capitalize on their strengths. The big problem when it comes to people learning new programming languages appears to be a result of them trying to write their favourite language’s style into that new language. This is a refusal to adapt to the new language- clinging to familiar idioms by translating them into the new language without actually understanding the specific capabilities that new language provides.

In conclusion- don’t be a single-language programmer, and don’t restrict yourself to some specific set of languages. Always look to expand the language understandings you have as well as seek out new programming languages to study and learn from. You can bring a lot of the things you learn into the languages you already know. I ironically learned a lot about C# by learning a bit about Scala and F#, for example.

Posted By: BC_Programming
Last Edit: 16 Aug 2013 @ 02:17 AM

Comments Off on Programming Languages and Subjectivity
 03 Aug 2013 @ 7:08 PM 

“Zebra-striping” is the name for a common technique, used for reports or long lists of items, where each row is given a colour distinct from those adjacent to it. The most common form is rows alternating gray and white backgrounds. The Windows Forms ListView control does not come with this ability built in, so you have to add it yourself. The problem is that the brute-force approach of setting the background and foreground (if desired) of every item in the control is fraught with peril, because future changes to the ListView, such as sorting it, will result in funky colourations- or at the very least you will need to perform the same logic again to colour everything correctly.

One way around this is to exploit the ListView’s OwnerDraw functionality. This allows you to change the background and foreground of an item based on its positional index, with the change made only when necessary. Then you can tell it to draw using the default method and forget about it all.

 

This is the basis for the ZebraStriper class.
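Conceptually the class just cycles through arrays of colours by item index- a trimmed-down sketch of it (the real one has a few more conveniences):

using System.Drawing;

public class ZebraStriper
{
    private readonly Color[] _backColors;
    private readonly Color[] _foreColors;

    // default: alternating white and light gray backgrounds with black text
    public ZebraStriper() : this(new[] { Color.White, Color.Gainsboro }, new[] { Color.Black }) { }

    public ZebraStriper(Color[] backColors, Color[] foreColors)
    {
        _backColors = backColors;
        _foreColors = foreColors;
    }

    public Color GetBackColor(int index) { return _backColors[index % _backColors.Length]; }
    public Color GetForeColor(int index) { return _foreColors[index % _foreColors.Length]; }
}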

The given class allows you to zebra stripe your ListView instances. Here are some examples and what they look like. lvwSortTest is the ListView Control, and zs is a “ZebraStriper” member variable:
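The wiring amounts to something like this, in the form’s constructor or Load handler (again, a sketch of the idea rather than the exact sample code):

zs = new ZebraStriper();
lvwSortTest.OwnerDraw = true;
lvwSortTest.DrawColumnHeader += (s, e) => e.DrawDefault = true;
lvwSortTest.DrawItem += (s, e) =>
{
    // choose colours by position, then let the default drawing take over
    e.Item.BackColor = zs.GetBackColor(e.ItemIndex);
    e.Item.ForeColor = zs.GetForeColor(e.ItemIndex);
    e.DrawDefault = true;
};
lvwSortTest.DrawSubItem += (s, e) => e.DrawDefault = true;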

This looks like this:

ListView Zebra Stripes: Example 1

But how do we customize it? What if we want it to switch between red, green, and yellow, for that Christmassy theme we all love? We’ve got you covered, though we won’t be held liable for your garish design choices:
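With the sketch above, that just means handing in different colour arrays:

zs = new ZebraStriper(
    new[] { Color.Red, Color.Green, Color.Yellow },   // background rotation
    new[] { Color.White, Color.Black });              // foreground rotation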

Which gives us:

Zebra Striped ListView: Example 2

Of course, in general it’s a good idea to choose colours that are not so high-contrast: a light gray with white, a dark gray with black, and so on- as well as making sure the text is readable (here, the white text sometimes appears on a yellow background due to the way the sequences line up, which is of course hard to read).

Either way, this particular method works a lot better than simply looping through the control; it only fires for items that need to be drawn, so it doesn’t actually loop through every item. It also segregates the logic into a separate, reusable class, which can be helpful.

Posted By: BC_Programming
Last Edit: 03 Aug 2013 @ 07:08 PM

Comments Off on Zebra-Striping your Windows Forms ListView
Categories: .NET, C#
 21 May 2013 @ 2:37 AM 

Sometimes you need to create temporary files. Usually, you can discard those temporary files by opening them in a fashion that deletes them when they are closed. However, in some cases you are dealing with a library or other class that is very picky about what you give it. Other times, you create an entire directory and want that directory to be deleted when your application is closed.

Whatever the case, there are several approaches to this. The first and most obvious (to me) was to try to use C#/.NET’s IDisposable interface pattern. By creating a static list of those objects, we can ensure their Dispose() methods or finalizers are run when the application is terminated (finalizers for objects still referenced by static fields will normally run as the application is torn down). Then the logic to delete- and to retry deleting- the file can be placed in the Dispose() method as needed. My implementation originally encountered problems with sharing violations, since the application, early on, may have many handles open. Primarily this is likely because of the non-deterministic order in which those static objects are finalized; if a file is opened and held by another static member, it might not have been released when our Dispose method is called. As a result I’ve added a delayed invoke, which, if the delete fails initially, will try again after a second (up to five times).

This implementation also gives up after a few tries and then tries to schedule the file for deletion at the next reboot. I considered a rather insane mechanic whereby the class would store a data file in the temporary folder; then, when first constructed (e.g. in the static constructor), it could check for that file and either create and dispose a DeletionHelper for each filename stored in it, or add those files to the existing list for deletion when the application terminates. However, after considering it, I figured such a feature might make things more complicated than necessary.
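Boiled down, that first approach looks something like this (a sketch; the real class has more error handling and a few niceties):

using System;
using System.Collections.Generic;
using System.IO;
using System.Runtime.InteropServices;
using System.Threading;

public class DeletionHelper : IDisposable
{
    // the static list keeps the helpers alive until the application is torn down
    private static readonly List<DeletionHelper> PendingDeletions = new List<DeletionHelper>();

    [DllImport("kernel32.dll", CharSet = CharSet.Unicode, SetLastError = true)]
    private static extern bool MoveFileEx(string existingFile, string newFile, int flags);
    private const int MOVEFILE_DELAY_UNTIL_REBOOT = 4;

    private readonly string _path;

    public DeletionHelper(string path)
    {
        _path = path;
        PendingDeletions.Add(this);
    }

    public void Dispose()
    {
        TryDelete();
        GC.SuppressFinalize(this);
    }

    ~DeletionHelper() { TryDelete(); }

    private void TryDelete()
    {
        for (int attempt = 0; attempt < 5; attempt++)
        {
            try
            {
                if (Directory.Exists(_path)) Directory.Delete(_path, true);
                else if (File.Exists(_path)) File.Delete(_path);
                return;
            }
            catch (IOException)
            {
                Thread.Sleep(1000);   // something still has it open- wait a second and retry
            }
        }
        // give up and schedule it for deletion at the next reboot instead
        MoveFileEx(_path, null, MOVEFILE_DELAY_UNTIL_REBOOT);
    }
}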

The other idea for automatic deletion would be to use the CreateFile() API with full share permissions, and pass the FILE_FLAG_DELETE_ON_CLOSE flag to it, then close that file in the Dispose method. Here is one possible implementation:
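Something along these lines- the constants are the usual values from the Win32 headers:

using System;
using System.Runtime.InteropServices;
using Microsoft.Win32.SafeHandles;

public class DeleteOnCloseFile : IDisposable
{
    private const uint GENERIC_READ = 0x80000000;
    private const uint DELETE = 0x00010000;
    private const uint FILE_SHARE_READ = 0x1, FILE_SHARE_WRITE = 0x2, FILE_SHARE_DELETE = 0x4;
    private const uint OPEN_EXISTING = 3;
    private const uint FILE_FLAG_DELETE_ON_CLOSE = 0x04000000;

    [DllImport("kernel32.dll", CharSet = CharSet.Unicode, SetLastError = true)]
    private static extern SafeFileHandle CreateFile(string fileName, uint access, uint share,
        IntPtr security, uint creation, uint flags, IntPtr template);

    private readonly SafeFileHandle _handle;

    public DeleteOnCloseFile(string path)
    {
        // Hold the file open with full sharing; the OS deletes it when the last handle closes.
        _handle = CreateFile(path, GENERIC_READ | DELETE,
            FILE_SHARE_READ | FILE_SHARE_WRITE | FILE_SHARE_DELETE,
            IntPtr.Zero, OPEN_EXISTING, FILE_FLAG_DELETE_ON_CLOSE, IntPtr.Zero);
    }

    public void Dispose()
    {
        if (_handle != null && !_handle.IsInvalid)
            _handle.Dispose();   // closing the handle deletes the file
    }
}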

And so, that gives us two implementations. I currently use the first in BASeBlock for deleting the temporary files and folders that are sometimes created during start-up, particularly if it finds a Zip file (which may have additional content that it checks for). Since those extracted files may be used during the run, I use the class to make sure they are deleted when the Application exits; or at least make an effort to do so.

Posted By: BC_Programming
Last Edit: 21 May 2013 @ 02:37 AM

Comments Off on DeletionHelper: Queue Files for deletion when your app exits.
Categories: .NET, C#
 16 May 2013 @ 1:42 AM 

With the runaway success of Visual Basic 1.0, it made sense for future versions to be released improving incrementally on the existing version. Such was how we ended up with Visual Basic 2.0. In some ways, Visual Basic 2.0 changed Visual Basic from a backwater project like Microsoft Multiplan into something that would become a development focus for years to come- like Microsoft Excel.

Visual Basic 2.0 was quite a leap forward; it improved language features, the IDE, and numerous other things. One of the biggest changes was the introduction of the “Variant” data type. Here is a full list of new features, from the Visual Basic 2.0 Programmer’s Guide, with annotations on each by me:

  • Improved Form design tools, including a toolbar and a Properties Window

    Visual Basic 2.0 adds the ability to select multiple controls by dragging a box around them. It also adds a Toolbar, which replaces the area used by the Property modification controls in Visual Basic 1.0. The new Properties Window moves the Property Editing to a separate Window, which is a massive improvement since you can more easily inspect properties.

  • Multiple-Document interface Support

    Another rather big feature. MDI was, and is, the capability that allows a Window to have its own child windows. This has started to fall out of vogue and is all but forgotten; earlier Office versions provided an MDI interface. The core of MDI was basically set by Program Manager itself, which was an MDI application. Visual Basic 2.0 allows you to create MDI Forms, and MDI applications, through a few properties. I will cover the MDI capabilities that VB 2.0 adds later in this post.

  • New Properties, Events, and Methods

    Visual Basic 2.0 added several Properties, Events, and Methods to the available controls. It changed the “CtlName” property of all controls to the less stupid “Name”, and added multiple new events, particularly surrounding drag-and-drop capabilities.

  • Object Variables and Multiple Form instances

    This is a pretty major shift. For one thing, it established Forms not as their own distinct objects (as was the case in VB 1.0) but rather as their own class of object. You were also able to create new form instances, inspect object types, and use various other object-oriented capabilities. It was still relatively limited, but it was certainly a step forward, and it added a wealth of capability to the language.

  • Variant Data Type

    This is another big one. Visual Basic 1.0 had a number of data types, as you would expect: Integer, a 16-bit integer value; Long, a 32-bit integer value; Single, a 32-bit floating point value; Double, a 64-bit floating point value; Currency, a scaled integer value; and String. Visual Basic 2.0 shakes things up by not only adding Forms as their own ‘data type’ of sorts, but also adding Variant, which is basically a value that can represent anything.

    Variants are an interesting topic, because while they originally appeared in Visual Basic 2.0, they would eventually seep into the OLE libraries. As we move through the Visual Basic versions, we will see a rather distinct change in the language, as well as in the IDE software itself, to reflect the changing underpinnings of the feature set. One of the additional changes in Visual Basic 2.0 was “implicit” declaration: variables that hadn’t been referred to previously would be automatically declared. This had good and bad points, of course, since a misspelling could suddenly become extremely difficult to track down. It also added the ability to specify “Option Explicit” at the top of Modules and Forms, which required the use of explicit declarations. Visual Basic 1.0 also allowed for implicit declarations, but you needed to use some of the ancient BASIC incantations (DefInt, DefLng, DefSng, DefDbl, and DefCur) to set default data types for variables starting with a given range of letters. It was confusing and weird, to say the least.

  • Shape,Line, and Image controls

    The Shape, Line, and Image controls added to VB2 are a new feature known as “windowless” controls, in that they do not actually use a Window handle. One of the larger benefits from this was that the controls were lightweight; the second was that they could be used for simple graphics on a VB2 Form.

  • Grid Custom Control

    Visual Basic 2.0 comes with a Grid Custom Control. I swear this thing has what feels like an entire chapter devoted to it in the Programmers guide. I’m not even Joking- “Chapter 13: Using the Grid Control”. The Grid control is rather awkward to use for those more accustomed to modern programmatic approaches and better designed control interfaces.
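
    To give a taste of that awkwardness (property names from memory, so approximate), putting a value into a cell means pointing the control's "current cell" at it first:

        ' Fill in a single cell of the Grid control.
        Grid1.Row = 1           ' select the target row...
        Grid1.Col = 2           ' ...and the target column
        Grid1.Text = "Total"    ' then assign to the now-current cell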

  • Object Linking & Embedding Custom Control

    OLE- pronounced "O-Lay"; I was pronouncing it as Oh-Ell-Eee for the longest time and don't think I'll ever live down the embarrassment- is short for Object Linking and Embedding. The basic idea was to allow one application to be "embedded" inside another. Functionally- to the user- it would simply look like inserting a document, which was part of the purpose. For example, you could insert an Excel spreadsheet inside a Word document and then edit that Excel Spreadsheet- from within Word- as if it was Excel. What happened? Well, it was bloody confusing. While it was (and still is) a very powerful feature, it was far from intuitive and was something far more likely to be used by power users.

  • Added Debugging Features, including Watch variables and a Calls Window.

    It’s amazing the stuff we did without in older Programming environments, isn’t it? Visual Basic 1.0 provided very simplistic Debugging support, which was not uncommon among the IDE tools of the time. Visual Basic 2.0 added some debugging helpers and, in some ways, added a new “mode” to the Program: Immediate Mode. Visual Basic 1.0 had similar capabilities, in that it did have something of an “immediate” mode, particularly shown by the Immediate Window. However, Visual Basic 1.0’s implementation was far simpler, and it didn’t support Watch Variables, which is one of the primary new features added in VB 2.0. This, paired with the Toolbar controls that almost emulate “playback” of the application, gave rise to the idea of Three Modes: in the first, you write code and design forms; in the second, you run the application; and the third, Immediate Mode, is when you are debugging- e.g. your application is stopped, but you can inspect its running state.
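
    As a small illustration of the Immediate Window side of this (the procedure below is a made-up example), Debug.Print statements write their output to the Immediate Window while the program runs, and the same window lets you poke at values while the program is stopped:

        Sub Command1_Click ()
            Static Total As Integer
            Total = Total + 1
            ' Appears in the Immediate Window at run time.
            Debug.Print "Click count: "; Total
        End Sub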

  • ASCII representation of Forms

    As far as I’m concerned, this is the single best feature added to Visual Basic 2.0. Historically, many applications- including things like Visual Basic as well as other Language interpreters or editors- saved their source in a proprietary, binary format. This was done not so much to protect it, but for space-saving reasons. When you only have a 160K disk, a difference of a single Kilobyte can be important. Additionally, text formats take longer to load and save (at least with the paltry memory and processing power of the time in comparison to today). Visual Basic 1.0, as well as its QuickBASIC predecessor, allowed for saving code as text, but this was not the default option. Visual Basic 2.0 adds the ability to save not only source code- as the Visual Basic 1.0 Code->Save Text Option did- but also the Form design in a text format. This was a massively useful feature, since it allowed external tools to manipulate the form design as well as the code, and made your software development less dependent on an undocumented format.
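
    To give a rough idea of what that text format looks like (reconstructed from memory, so the exact header, properties, and values are approximate):

        VERSION 2.00
        Begin Form Form1
           Caption         =   "Form1"
           Height          =   4425
           Width           =   7485
           Begin CommandButton Command1
              Caption         =   "OK"
              Height          =   495
              Width           =   1215
           End
        End

        Sub Command1_Click ()
            MsgBox "Hello from a text-format .FRM"
        End Sub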

  • 256-Color support for bitmaps and color palettes.

    Back in those days, Colour was a trade-off. Video Adapters usually had limited Video Memory, so you usually had a trade-off between either higher resolution and fewer colours, or lower resolution and more colours. Today, this isn’t an issue at all- 32-bit and 24-bit Colour has been the standard for nearly two decades. As this was developing, however, we had the curious instance of 256-colour formats.

    256-colour modes use a single byte to index each pixel’s colour, and the palette entries are stored separately; the index then becomes a lookup into that table. This had some interesting effects: Applications could swap about the colours in their palette and create animations without really redrawing anything- the Video Adapter would simply change the mappings itself. This was a very useful trick for DOS applications and Games.
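
    In other words, conceptually (this is just the idea expressed in VB syntax- it is not how one actually manipulates hardware palettes from VB):

        ' An 8-bit pixel value is an index into a 256-entry table of colours.
        Dim Palette(0 To 255) As Long   ' each entry holds an actual RGB colour
        Dim PixelValue As Integer       ' the byte stored for one pixel
        Dim ActualColour As Long

        Palette(37) = RGB(255, 128, 0)  ' entry 37 happens to be orange
        PixelValue = 37
        ActualColour = Palette(PixelValue)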

    Windows, however, complicated things. Because Windows could run and display the images from several Applications simultaneously, 256-color support was something of a tricky subject. Windows itself reserved its own special colours for things like the various UI element colours, but aside from that, 8-bit colour modes depended on realized palettes. What this means is that Applications would tell Windows what colours they wanted, and Windows would do what it could to accommodate them. The Foreground application naturally took precedence, and in general, when an application that supported 8-bit colour got the focus, it would say “OK, cool… realize my palette now so this owl doesn’t look like somebody vomited on an Oil Painting”. With Visual Basic 1.0, this feature was not available for numerous reasons, the most reasonable among them being a combination of it just not being very important, paired with the fact that VB was designed primarily as a “front end” glue application for other pieces of code. Visual Basic 2.0, however, adds 256-color support, and with it quite a few new properties and methods. VB itself manages the Palette-relevant Windows Messages, which was one of the reasons VB 1.0 couldn’t even be forced to support it.

Visual Basic 2.0 Manuals

The Three Visual Basic 2.0 Manuals in their native environment of a random Wooden Table. The “Professional Features” guide is about twice as thick as the other two. Note the use of the old cover style that was typical of MS Documentation of the time.

Visual Basic 2.0 Editing a Command Button’s Click Procedure

As we can see above, Visual Basic 2.0 adds Syntax highlighting over VB1; an additional side effect of this is that the colours can also be customized. I recall I was a fan of using a green background and yellow text for comments to make them stand out, myself.

On a personal Note, Visual Basic 2.0 is dear to me (well, as dear as a software product can be), since it was the first Programming Language I learned and became competent with, to the point where I realized that I might have a future in software development. Arguably, that future is now, though it hasn’t actually become sustainable (I may have to relocate). But more so than this is the fact that I was given a full, legal copy of the software. That in itself isn’t exceptional, but what is exceptional is the fact that it had all the manuals:

Dog-eared, ragged, and well-used, these books- primarily the Programmers Guide and Language Reference- became the subject of meticulous study for me.

Posted By: BC_Programming
Last Edit: 13 May 2016 @ 06:58 PM
