19 Oct 2019 @ 3:22 PM 

Over the last few years – more than a decade, really – it seems that, somehow, *nix, and Linux in particular, has been tagged as some sort of OS ideal. It’s often been cited as a "programmer's OS", and I’ve seen claims that Win32 is terrible and that people would take the "Linux API" over it any day. That last is a verbatim quote, too.

However, I’ve thought about it some, and after some rather mixed experiences trying to develop software for Linux, I feel something of the opposite. Perhaps, in some ways, it really depends on what you learned first.

One of the advantages of a Linux-based (or, to some extent, UNIX-based) system is that there is a lot of compartmentalization. The user has a number of choices, and those choices affect what is available to applications.

I’d say that, generally, the closest thing to a "Linux API" that applications utilize would be the Linux kernel userspace API.

Beyond that, though, as a developer, you have to start making choices.

Since the user can swap out different parts, outside of the Linux kernel userspace API pretty much nothing is really "standardized". Truth be told, most of that kernel userspace API isn’t even used directly by applications- it’s usually reached through function calls into the C standard library.
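To illustrate that layering (a sketch- the point is only the division of labour, not any particular API): even a simple file write usually goes through the C library's buffered I/O rather than the raw syscall. The same split is visible from Python, where os.write is a thin wrapper over the kernel's write(2) while ordinary file objects buffer in userspace the way the C standard library does:

```python
import os
import tempfile

# Low level: os.write() wraps the kernel's write(2) syscall directly -
# roughly, one call here is one syscall.
fd, path = tempfile.mkstemp()
os.write(fd, b"straight to the kernel\n")
os.close(fd)

# High level: a file object buffers in userspace (like the C standard
# library's fwrite()) and only issues write(2) when flushed or closed.
with open(path, "a") as f:
    f.write("buffered, flushed on close\n")

with open(path) as f:
    contents = f.read()
```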

The Win32 API has much more breadth but tends to be more primitive, which can make using it more involved. Since it’s a C API, you aren’t going to be passing around OO instances or interfaces; typically, the more complicated functions accept a struct. It's hard to do much better than that with a C API.

However, with Windows, every window is created with CreateWindowEx() or CreateWindow(). No exceptions. None. Even UWP windows use CreateWindow() and have registered window classes. Even if it’s perhaps not the most pleasant base to look at, at least there is some certainty that, on Windows, everything is dealt with at that level and with those functions.

With Linux, because of the choices, things get more complicated. Since so many parts are interchangeable, you can’t strictly call most of what is made available to and used by applications a "Linux API", since it isn’t going to exist on many distributions. X11, for example, is available most of the time, but there are Linux distributions that use Wayland or Mir instead. And even sticking with X11, it only defines the very basic functions- it’s a rare piece of software that interacts directly with X11 via its server protocol; usually software uses a programming interface instead. But which one? For X11 you’ve got Xlib or xcb. Which is better? I dunno. Which is standard? Neither. And once you get down to it, you find that it still only provides the very basics- what you really need are X11 extensions. X11 is really only designed to be built on top of, with a Desktop Environment.

Each desktop environment provides its own programming interface. Gnome, as I recall, goes through D-Bus; KDE used- what was it? DCOP?- and later D-Bus as well. Strictly speaking these are IPC buses rather than CLI software, but each ships CLI front-ends (dbus-send, qdbus) that other programs can call to interact with the desktop environment. I’m actually not 100% on this, but all the docs I’ve found seem to suggest that, at the lowest level, aspects of the desktop environment are handled through calls to those tools.

So at this point our UI API consists of calling a CLI application which interacts with a desktop environment which utilizes X11 (or other supported GUI endpoints) to show a window on screen.

How many software applications are built by directly interacting with and calling these CLI application endpoints? Not many- they are really only useful for one-off tasks.

Now you get to the real UI "API": the UI toolkits- things like GTK+ or Qt. These abstract yet again; GTK+ provides a function-based, C-style API for UI interaction (Qt is C++, but the shape is similar). Which, yes- accepts pointers to structs which themselves often have pointers to other structs, making the Win32 API criticism that some make rather ironic. I think the criticism may arise because those raising it are using GTK+ through specific language bindings, which typically make those C bindings more pleasant- usually with some sort of OO layer. Now, you have to choose your toolkit carefully. Code written against GTK+ can’t simply be recompiled to work with Qt, for example. And different UI toolkits have different available language bindings, as well as different supported UI endpoints. Many of them actually support Windows as well, which is a nice bonus, and usually they can be made to look rather platform-native- also a great benefit.

It seems, however, that a lot of people who raise grievances with Win32 aren’t comparing it to the direct equivalents on Linux. Instead, they are perhaps looking at the Python GTK+ bindings and comparing them to interacting directly with the Win32 API. It should really be no surprise that the Python GTK+ bindings come out ahead; they sit several layers higher than the Win32 API. It’s like comparing Windows Forms to X11’s server protocol and claiming Windows is better.

Interestingly, over the years I’ve come to have a slight distaste for Linux for one of the same reasons everybody seems to love it: how heavily it was modelled on UNIX.

Just in the last few years, the number of people who seem to be flocking to OS X or Linux and holding up their UNIX origins (more so OS X than Linux, strictly speaking) as if that somehow stands on its own absolutely boggles my mind. I can’t stand much about the UNIX design or philosophy, and I don’t know why it is so constantly held up as some superior OS design.

And don’t think I’m comparing it to Windows- or, heaven forbid, MS-DOS- here. Those don’t even enter the consideration at this point. If anything can be said, it’s that Windows wasn’t even a proper competitor until Windows NT, and even then, Windows NT’s kernel had a lot of hardware capability and design experience to build on that UNIX never had in the ’70s- specifically, a lot of concepts were adopted from the contemporaries that UNIX competed against.

IMO, ITS and MULTICS were both far better designed, engineered, and constructed than any UNIX was. And yet they faded into obscurity. People often point at Windows and say "the worst seems to get the most popular!"- but if anything, UNIX is the best example of that. So now we’re stuck with people who think the best OS design is one where the shell is responsible for wildcard expansion and the kernel itself is non-preemptive. And the UNIX interrupt-during-syscall issue is, in fact, still with us: an interrupted syscall can return an error code (EINTR) instead of being re-entered, making it the application’s responsibility to check for the error and restart the call.
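For anyone unfamiliar with the wildcard point: on Unix it is the shell, not the program, that turns an unquoted *.txt into a list of filenames before exec(), so every program receives an already-expanded argv (cmd.exe on Windows, by contrast, hands the raw pattern to the program). A rough Python model of that shell-side step- the function name is mine, not any shell's:

```python
import glob
import os
import tempfile

def shell_expand(pattern, cwd):
    """Roughly what a POSIX shell does to an unquoted word before exec():
    replace it with the sorted matching names, or leave the word alone
    if nothing matches."""
    matches = sorted(
        os.path.basename(p) for p in glob.glob(os.path.join(cwd, pattern))
    )
    return matches or [pattern]

# Set up a directory with a couple of files, then build the argv that a
# program like cat would actually receive from the shell.
workdir = tempfile.mkdtemp()
for name in ("a.txt", "b.txt", "notes.log"):
    open(os.path.join(workdir, name), "w").close()

argv = ["cat"] + shell_expand("*.txt", workdir)
```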

It seems to me that an axiom behind many of the proclamations that "*nix is better designed" is a definition of "better designed" that corresponds to how *nix does things- conclusion before the reason, basically.

Posted By: BC_Programming
Last Edit: 19 Oct 2019 @ 03:22 PM

Comments (0)
 23 Mar 2019 @ 2:15 PM 

There are a lot of components of Windows 10 that we, as users, are not “allowed” to modify. It isn’t even enough to find a way to do so, such as disabling services or scheduled tasks from a command prompt running under the SYSTEM account, because the next time you install updates, those settings are often reset. There are also background tasks and services intended specifically for “healing” tasks- which is a pretty friendly way to describe a trojan downloader.

One common way to “assert” control is using the registry and the Image File Execution Options key, found as:

HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Image File Execution Options

By adding a key here with the name of an executable, one can set additional execution options for it. The one of importance here is a string value called Debugger. When you add a Debugger value, Windows will not start the executable; instead it launches the executable listed in the “Debugger” value, with the program that was being run passed as a parameter.

We can use this for two purposes. The most obvious is that we can simply swap in an executable that does nothing at all, and thereby prevent a given executable from running. For example, if we set “C:\Windows\System32\systray.exe” as the debugger value for an executable, then when that executable is run, the systray.exe stub will run, do nothing, and exit- and the executable that was being launched will not run. As a quick aside: systray.exe is a stub that doesn’t actually do anything. It used to provide the built-in notification icons for Windows 9x, and it remains because some software would check whether that file existed to determine if it was running on Windows 95 or later.
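As a concrete sketch (the target name here is made up), the redirect described above looks like this as a .reg file- note the doubled backslashes that .reg syntax requires:

```
Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Image File Execution Options\target.exe]
"Debugger"="C:\\Windows\\System32\\systray.exe"
```

With this merged, any attempt to launch target.exe starts the systray.exe stub instead.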

The second way we can use it is to insert our own executable as the debugger value. Then we can log and record each invocation of any redirected program. I wanted to record the invocations of some built-in Windows executables I had disabled, so I created a simple stub program for this purpose.


I decided to separate the settings out for future editing. For my usage, I just have it hard-coded to C:\IMEO_Logs right now and create the folder beforehand. The bulk of the program, of course, is the entry point class.

I’ve used this for a few weeks, manually altering the Image File Execution Options entries that had redirected some executables (compattelrunner.exe, wsqmcons.exe, and a number of others) to systray.exe so that they redirect to this program instead. It then logs every attempt to invoke those executables, alongside details like the arguments that were passed in.
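The post's own listing didn't survive here, so purely as illustration, here is a minimal sketch of what such a logger does- in Python rather than the original program's language, with the log folder and per-target file naming being my own stand-ins:

```python
import os
import sys
import time

LOG_DIR = r"C:\IMEO_Logs"  # stand-in for the hard-coded folder

def log_invocation(log_dir, argv):
    """Append one line per intercepted launch: a timestamp, then the full
    command line Windows handed to the "debugger". Returns the log path."""
    os.makedirs(log_dir, exist_ok=True)
    target = os.path.basename(argv[0]) if argv else "unknown"
    path = os.path.join(log_dir, target + ".log")
    with open(path, "a", encoding="utf-8") as f:
        f.write(time.strftime("%Y-%m-%d %H:%M:%S") + "\t" + " ".join(argv) + "\n")
    return path

if __name__ == "__main__":
    # argv[1:] is the redirected executable plus its original arguments.
    # Log it and exit without launching anything, keeping the target disabled.
    log_invocation(LOG_DIR, sys.argv[1:])
```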

Posted By: BC_Programming
Last Edit: 23 Mar 2019 @ 02:15 PM

Comments Off on Taking Control of Windows 10 with Image File Execution Options
 15 Aug 2015 @ 4:18 PM 

Windows 10 has been out for a few weeks now. I’ve upgraded one of my backup computers from 8.1 to Windows 10 (the Build I discussed a while back).

I’ve tried to like it. Really, I have. But I don’t, and I don’t see myself switching my main systems any time soon. Most of the new features I don’t like, and many of those cannot be shut off very easily. Others are quality-of-life regressions: not being able to customize my title bar colour, for example- and the severe trimming of customization options in general- I cannot get behind. I am not a fan of the new Start menu, nor do I like how the Start screen was changed to mimic it. I understand why these changes were made- primarily because of all the Windows 8 naysayers- but that doesn’t mean I need to like them.

Windows 10 also introduces the new Universal App Framework, designed to allow the creation of programs that run across Windows platforms- “Universal Windows Application” referring to the application being able to run on any system that is running Windows 10.

If I said “I really love the Windows 8 Modern UI design and application design,” I would be lying, because I don’t. This is likely because I dislike mobile apps in general, and that style of application was not only brought to the desktop but brought along the same type of limitations I find distasteful. I tried to create a quick Windows 8-style program based on one of our existing WinForms programs, but I hit a brick wall: I would have had to extract all of our libraries into a web service and have it running in the background of the program itself. I wasn’t able to find a way to say “I want a Windows 8 style with XAML, but I want the same access as a desktop program.” It initially appeared this might have been rectified with the Windows 10 framework, since it looked possible to target a Universal app and make it, erm, not universal by setting it to be a desktop application- but the more I dig, the more it looks like it isn’t. That makes my use case- providing a Modern UI ‘app’ that makes use of my company’s established .NET class libraries- impossible, because for security reasons you cannot reference standard .NET assemblies that are not in the GAC. I thought they might work if they were signed in some fashion, but I wasn’t able to find anything indicating that is the case.

The basic model, as I understand it, mimics how typical smartphone “apps” work: they have restricted local access and call out to remote web services for more advanced features. This is fairly sensible, since most smartphone apps are built around web services. The issue is that it means porting any libraries that use those sorts of features to portable libraries which access a web service for the required task. (For a desktop program, I imagine you could have the service running locally.)

I’m more partial to desktop development. Heck, right now my work involves Windows Forms (it beats the mainframe terminal systems the software replaces!), and even moving to WPF would be a significant engineering effort, so I keep my work with WPF and the new Modern UI applications ‘off the clock’.

Regardless of my feelings about smartphone ‘apps’, or how the desktop seems to be taking a back seat or even being replaced (it’s not; it’s just not headline-worthy), Microsoft has continued to provide excellent SDKs, developer tools, and documentation, and is always working to improve them. And even with the focus on the Universal Framework and Universal applications, the current development tools still provide for the creation of Windows Forms applications, allowing the use of the latest C# features for those who might not have the luxury of jumping to new frameworks and technologies willy-nilly.

For those interested in keeping up to date who also have the luxury of time to do so (sigh!), the new Windows development tools are available for free. One can also read about what’s new in the Windows development ecosystem with Windows 10, and there are online courses on Windows 10 at the Microsoft Virtual Academy, as well as videos and tutorials on Channel9.

Posted By: BC_Programming
Last Edit: 30 Dec 2015 @ 08:01 PM

Comments Off on Windows 10
 04 Feb 2013 @ 9:24 PM 

Is XNA Going Away?

The following consists of my opinion and does not constitute the passing on of an official statement from Microsoft. All thoughts and logic are purely my own, and I do not have any more ‘insider’ information on this particular topic than anybody else.

I’ve been hearing a bit of noise from the community about Microsoft’s XNA Framework- a programming library and suite of applications designed to ease the creation of games- being cut. A Google search reveals a lot of information, but much of it is just plain old rumour. The only piece I could find that was based on actual information still makes a lot of assumptions. It is based on this e-mail:

Our goal is to provide you the best experience during your award year and when engaging with our product groups. The purpose of the communication is to share information regarding the retirement of XNA/DirectX as a Technical Expertise.

The XNA/DirectX expertise was created to recognize community leaders who focused on XNA Game Studio and/or DirectX development. Presently the XNA Game Studio is not in active development and DirectX is no longer evolving as a technology. Given the status within each technology, further value and engagement cannot be offered to the MVP community. As a result, effective April 1, 2014 XNA/DirectX will be fully retired from the MVP Award Program.

Because we continue to value the high level of technical contributions you continue to make to your technical community, we want to work with you to try to find a more alternate expertise area. You may remain in this award expertise until your award end date or request to change your expertise to the most appropriate alternative providing current contributions match to the desired expertise criteria. Please let me know what other products or technologies you feel your contributions align to and I will review those contributions for consideration in that new expertise area prior to the XNA/DirectX retirement date.

Please note: If an expertise change is made prior to your award end date, review for renewal of the MVP Award will be based on contributions in your new expertise.

Please contact me if you have any questions regarding this change.

This is an e-mail that was sent out- presumably- to XNA/DirectX MVPs. I say presumably because, for all we know, it was made up to create a news story. If it was sent out, I never received it, so I assume it went to those who received an MVP Award with that expertise; it might have been posted to an XNA newsgroup as well. Anyway, the article that presented this e-mail as “proof” that MS was abandoning XNA seemed to miss the ever-important point that it says nothing about XNA itself- it refers only to the dropping of XNA/DirectX as a technical expertise. What this means is that there will no longer be MVP Awards given for XNA/DirectX development. It says nothing beyond that. Now, it could mean they plan to phase the technology out entirely- but to come to that conclusion based on this is premature, because most such expertise drops have actually involved a merge. In many ways, an XNA/DirectX expertise is a bit redundant: XNA works through a .NET language such as VB.NET or C#, and very few XNA/DirectX MVPs work with XNA to the exclusion of everything else, so it might make sense to just clump them in with us lowly Visual C# and Visual Basic MVPs.

To assume XNA is being dropped based on this e-mail is premature. In my opinion, the choice was made for several reasons, and I suspect some of the confusion stems from misconceptions about just what a Microsoft MVP is. First, as I mentioned before, expertise in XNA/DirectX involves understanding- and expertise- in some other area: Visual C#, Visual Basic, Visual C++, and so on. So in some ways a separate XNA/DirectX expertise may have been considered redundant. Another reason has to do with the purpose of an MVP. MVP Awards are given to recognize those who make exceptional community contributions in the communities that form around their expertise. For example, my own blog typically centers on C#, solving problems with C# and Visual Studio, and presenting those code solutions and analyses to the greater community by way of the internet, alongside sharing my knowledge of C# and .NET in the forums in which I participate. MVP awardees don’t generally receive much extra inside information, and what they do get is typically covered by an NDA. The purpose of the award is also to establish good community members through which Microsoft can provide information to the community. MVPs are encouraged to attend events where they can, quite literally, talk directly to the developers of the products with which they are acquainted. In some ways you could consider MVPs “representatives” of the community, chosen because their contributions mean they likely have a good understanding of any prevalent problems with the technologies in question; interacting with MVPs can give the product teams insight into the community for which their product is created. Back to the particulars, however: as the e-mail states, XNA Game Studio is not under active development.
Following that, it seems reasonable to assume that either the product has no product team, or those on it are currently occupied with other endeavours, or with other products for which their specific talents are required.

It’s not so much that they are “pulling the plug on XNA”- the product is in stasis. As a direct result, it makes sense that without an active product team, having specific MVP awardees for that expertise isn’t particularly useful for either side: MVPs gain from personal interactions with the appropriate Microsoft product team as well as fellow MVPs, and Microsoft gains from the aggregate “pulse of the community” that those MVPs can provide. Without a product team for an expertise, that expertise is redundant, because there is nobody to get direct feedback. This doesn’t mean the XNA community is going away- just that, for the moment, there is no reason for Microsoft to watch its pulse, because the product is “in stasis” while the OS and other concerns surrounding the technology (the Windows 8 UI style, the Windows Store, and so on) metamorphose and stabilize. Once the details and current problems with those technologies are sussed out, I feel they will most certainly look back and see how they can bring the wealth of game software written in XNA to the new platform. Even if that doesn’t happen, XNA is still heavily used for Xbox development- which is also its own expertise.

I hope this helps clear up some of the confusion that has been surrounding XNA. It doesn’t exactly eliminate uncertainty- this could, in fact, be a precursor to cutting the technology altogether. But there is nothing to point to that being the direction, either.

Posted By: BC_Programming
Last Edit: 04 Feb 2013 @ 09:24 PM

Comments Off on Is XNA going away?
 03 Jul 2012 @ 3:23 AM 

I don’t know how, but somehow I’ve been awarded the Microsoft MVP Award for my contributions to C# technical communities (C# MVP). Of course I am very surprised at this, but I guess I have a short memory: I do have a number of posts and blog entries regarding C#, as well as a lot of forum posts across my various profiles that assist with it. My initial response was actually self-deprecating- “I guess they give them to anybody these days”- which is, of course, not true.

I cannot help but feel like I got it “by accident”. Most MVPs really are industry professionals with professional expertise, a college education, and a myriad of other qualifications. I feel like an imposter, since I don’t have any post-secondary education and certainly no formal education in any of the domains that I am essentially being awarded for, nor have I actually worked in the industry (well, arguably, that’s not true, if my failing attempt to start a company counts).

That isn’t necessarily to say I don’t deserve the award- I imagine the people responsible for the MVP program are a lot more qualified to make that decision than me.

At this point I’m forced to wonder how it helps me. It makes a very nice thing to put on a resume, but the thing is, I have no place to submit that resume where the award is going to matter. At my last job, I think the most my skills were used was when I told the manager that, “yes, the monitor needs to be plugged in to work”, or something to that effect. I quit that job nearly a year ago (last October) because I wanted to find something working with computers. The closest things to that around here are still retail (places like Staples, Best Buy (*shudder*), and so forth). I applied at every single one I could find, and even got a few interviews, but nothing came of it. In fairness, shortly after the day I had all those interviews my phone got cut off, which made follow-ups impossible, so I have absolutely no clue if they ever tried to call me after that. They did have my e-mail addresses and I’ve not received anything, though it’s more likely they tried to phone and then just went to the next applicant.

Regardless, let’s be honest: even those jobs are well beneath my qualifications. I wrote about “getting one’s foot in the door” previously, and this just goes to show how damned impossible it seems to be. The idea of a person who received an MVP Award for sharing C# technical expertise working a minimum-wage retail job- or even those above it- is almost laughable, but there is absolutely nothing else around here, with one exception.

There is, however, one place I haven’t tried: Pelican Software (which is actually owned by Northwest Forest Products, if memory serves). Well, that’s not quite true- I did try them back when I was a spunky kid whose expertise was pretty much just VB6 and feeling smugly superior. More recently, I had some dealings with them regarding a freelance program I had written, “BCJobClock”, since it is very similar in many ways to their product, “Tallys”. Things were looking up in that regard, but the eventual decision they reached was that BCJobClock was too similar to it (with the exception that its UI is not confusing and it doesn’t cost several thousand dollars). I never actually applied there, since to my understanding they really aren’t doing too well, and I doubt they’d take the business risk of hiring more staff in their situation. But I may try that anyway. It’s a known statistic that companies that employ at least one MVP Award winner are more successful.

At this point I sort of have two options. The first is to pursue this BASeCamp thing and try to market BCJobClock (which currently has not appeared on my site at all) for a nominal price, by integrating the existing product-key code that I already wrote and used for BASeBlock. But the BASeBlock situation really tells me everything I need to know: it’s pointless. Nobody has actually bought a registered copy, and there are very few downloads. It’s online, but in many ways it may as well not be. It represents three years of my spare time that I’ve essentially wasted on a bloody game. It’s still “my product” and I’m proud of it and all that, but pride doesn’t pay bills. And I don’t want to lock the editor behind registration, because the editor is perhaps the part I like most about the entire thing. Honestly, when I was dealing with NWFP regarding the program, I just wanted to sell the entire thing and get rid of it. I was sick of it, and in some ways I still am. Come to think of it, I’d be more than happy to sign something that gives the complete IP to BCJobClock to NWFP as a condition of working there. It probably wouldn’t get used, but it really would be the only guarantee that I won’t at some point be in direct competition with them- which could very well happen, so the guarantee might be worth it. (I would say so- my program is a heck of a lot easier to use, and if I do release it in some manner it’s going to be a lot cheaper, too; though despite their notions it won’t be cutting into their market anyway- in that case it will be my market share, and not theirs.)

Of course, BCJobClock is aimed at a different market. In some ways it’s a time-management application. I suppose I haven’t discussed the program much, since I hadn’t decided what I was going to do with it (well, actually, there was a page on the main landing site that was a little exuberant about the entire thing at some point, but I removed it when reality punched me in the face with BASeBlock). To summarize: it manages workers and orders for a repair shop or similar operation. This could be automotive, like the client I originally wrote it for (somewhere in Iowa, to my understanding), or it could easily be used by other shops that need a worker-to-task management system. The client program allows employees to clock into and out of orders using a touch-screen interface (naturally I don’t provide the hardware, just the software), implemented as a WPF C# application. This program interfaces with a remote MySQL server using the MySQL Connector, which allows the use of ADO.NET Connection and similar objects to work with the remote database, which manages all the… data… involved. The administrator program allows the addition and removal of users, inspection of all orders and users and the time taken on each order, as well as by each user in total, and all sorts of other information. There is also a little “Watcher” program designed for people tasked to supervise work orders and assign tasks to other employees, but who aren’t supposed to have full access to the administrator panel for adding and removing users, getting reports, and all that. Because it is designed for watching users, it also shows notifications when users become available for work or when users or tasks are being “ignored”, with little coloured indicators to show when users and orders are working or being worked on.

It still needs a bit of work to iron out some speed problems encountered by the sole user of the program (which we worked around with a few INI file changes for their immediate use case). The problem is that the admin program tries to keep its view up to date by refreshing from the database on a timer- and it picks up a lot of data in the process. Ideally, it would only carry out the refresh when it actually knew there was a change, but I’m not really sure how to implement that. Working with databases is frustrating in that these seemingly basic capabilities seem impossible. (Q: How do I detect when the results of a query changed? A: You perform the query and look through the entire result set.) Of course, if you find no changes at that point, you’ve wasted the whole round trip, which defeats the purpose.

Actually, with some thought, there is another solution: relocation. There is simply nothing around here for the type of person who has skills and abilities relevant to a C# MVP Award, so in many ways having it as a bullet point echoes as hollow as the sepia-toned mention of my high-school awards from almost ten years ago. So maybe it’s time to leave Nanaimo. There simply aren’t any tech jobs here (or I’ve become blind to them)- not even some sort of general IT job dealing with servers or the network of an office building or what-have-you.

As I noted, however, I never actually inquired with NWFP about a career or job, since that wasn’t really my intention at the time- in fact, it never even occurred to me. The MVP Award, I think, helps me here; those aren’t exactly given away freely. There are only two recipients in Nanaimo: me, and a fellow whose expertise lies in SQL Server. I think there are a dozen on Vancouver Island (though I cannot check).

And if that doesn’t work- well, I guess I’ll have to relocate. On the bright side, My website will still be in the same place 😛

Posted By: BC_Programming
Last Edit: 03 Jul 2012 @ 03:25 AM

Comments (3)
 21 Jun 2012 @ 11:50 AM 

Call me old-fashioned, or possibly slow, but for some reason I never seem to be using the latest version of a piece of software. Until recently I was doing all my .NET work with Visual Studio 2008; this was because VS2010, bless its heart, felt sluggish to me.

With the pending release of Visual Studio 2012- which, as I write this, is available for free download as a Release Candidate- I decided I’d bite the bullet and start switching. This was also because I wanted to dip into XNA, and as near as I could tell, the latest version only worked in conjunction with VS2010. I had to reinstall ReSharper to get proper VS2010 support, since I had installed ReSharper before I installed VS2010, and after applying my own preferences to both Visual Studio and ReSharper, I was able to get back into coding. (Am I the only person who hates the preference IDEs have for automatically completing parentheses and braces and such? I always find myself typing the closing parenthesis, ending up with doubles, so I delete the extras and then forget where I was in the nesting; and if you get used to that behaviour, suddenly you find yourself not typing closing parentheses in plain-text editors. You can’t win! I’m not a big fan of that sort of autocomplete- actually, I don’t really like any form of autocomplete, but that sounds like material for another post altogether.)

The end result is BCDodgerX, which is available on my main downloads page. It is essentially a rewrite of BCDodger, with an unimaginative X added onto the end that means pretty much nothing.

Overall, VS2010 is actually quite good. Call it a belated review; I almost purposely fall several versions behind for some reason. I can’t say I’m overly fond of the use of 3-D acceleration within a desktop application, but at the same time all the controls still have the Windows look and feel (which is my main beef with Java’s Swing libraries, which have a look and feel all their own), and the desktop itself is accelerated with Aero anyway, so I suppose it’s only a natural progression. (Besides, I don’t play games very often, and this 9800GT should get some use…)

The tricky question now is when I should start migrating my VS2008 projects to 2010, and whether I should switch to the latest framework. I can switch to VS2010 without using the latest framework, of course, but I wonder what benefits I will see. One day I’m sure I’ll just say “screw it”, open, say, BASeBlock in VS2010, and jump in; I’m falling behind, after all (what with the aforementioned release of 2012 on the horizon). And VS2010 is definitely an improvement, both tool- and functionality-wise, over 2008, so there isn’t really a good reason not to switch now. No doubt I’ll keep making excuses for myself. Oh well.


At first, I thought I hated XNA; but now I know that what I actually hate is 3-D programming. I imagine this is mostly because I got extremely rusty at it; additionally, I had never truly done 3-D programming in the context of a game. My experience at that point was pretty much limited to adding 3-D graphing capabilities to a graphing application I wrote (and never posted on my site, because it hasn’t worked in ages, is old, and uses libraries/classes/modules I have since updated in ways that are no longer source-compatible). Of course, that didn’t have constantly changing meshes, it used DirectX 7, and it was shortly after I finished that feature that I abandoned the project, for whatever reason. I had never dealt with 3-D in a gaming capacity.

The purpose of XNA is to simplify the task of creating games – both 3-D and 2-D – for Windows as well as the Xbox 360. And it definitely does this; however, you can really only simplify it so much, particularly when dealing with 3-D programming. My first quick XNA program was basically just to create a bunch of cubes stacked on one another. This is a very common theme, given the popularity of games like Minecraft, but my goal was to eventually create a sorta 3-D version of Breakout (or, rather, BASeBlock 3D).

I was able to get the blocks visible, after a lot of cajoling and doing the work on paper (visualizing 3-D space and coordinates is not my forte). But it ran at 10fps! This was because I was adding every single block’s vertices to the VertexBuffer; for a set of blocks in a “standard” arrangement of around 1920 blocks (which is probably a number that would make the 2-D version run at around 10fps, to be absolutely fair here), that is over 11520 faces, each of which actually consists of a triangle list of 6 vertices (I tried a triangle fan, but it didn’t seem to even exist (?), oh well), meaning that I was loading the VertexBuffer with over 69120 texture-mapped vertices. That’s a lot to process. The big issue here is hidden surface removal: obviously, if we have a cube of blocks like this, we don’t need to add the vertices of blocks that aren’t visible. I’ll admit this is the point where I sort of gave up on that project for the time being; it would involve quite a bit of matrix math to determine which faces were visible on each block, which ones needed to be added, and so on, based on the camera position, and I like to understand what I’m doing. I, quite honestly, don’t have a good grasp of how exactly matrices are used in 3-D math, or dot products (at least in 3-D), and I prefer not to fly blind. So I’ve been reading a few 3-D programming books that cover the basics; one of them, I believe, goes through the creation of a full 3-D rasterization engine and has a lot of in-depth material on the mathematics required. This, paired with concepts from Michael Abrash’s “Graphics Programming Black Book”, should give me the tools to properly determine which blocks and faces should be added or omitted.
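For what it’s worth, the simplest part of that culling doesn’t need the camera or any matrix math at all: a face shared between two solid blocks can never be seen, so a plain neighbour check eliminates it before view-dependent culling even enters the picture. A minimal sketch of that idea (in Python for brevity; all names here are illustrative, not XNA API):

```python
# Hypothetical sketch: neighbour-based face culling for a grid of unit cubes.
# A face shared by two solid cubes can never be visible, so only emit faces
# whose neighbouring cell is empty.

FACES = {
    "left":  (-1, 0, 0), "right": (1, 0, 0),
    "down":  (0, -1, 0), "up":    (0, 1, 0),
    "back":  (0, 0, -1), "front": (0, 0, 1),
}

def visible_faces(solid):
    """solid: set of (x, y, z) cells that contain a block.
    Yields (cell, face_name) for every face worth putting in the vertex buffer."""
    for (x, y, z) in solid:
        for name, (dx, dy, dz) in FACES.items():
            if (x + dx, y + dy, z + dz) not in solid:
                yield (x, y, z), name

# A 2x2x2 cube of blocks: 8 blocks * 6 faces = 48 faces naively,
# but only the 24 outer faces survive the neighbour check.
cube = {(x, y, z) for x in range(2) for y in range(2) for z in range(2)}
faces = list(visible_faces(cube))
```

At 6 vertices per face, that kind of reduction is exactly what a 69120-vertex buffer needs; back-face culling against the camera (the dot-product part) can then discard roughly half of what remains.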

Anyway, scrapping that project for the time being, I decided to make something 2-D; but since I was more or less trying to learn some of the XNA basics, I didn’t want too much of the concepts of the game itself getting in the way, so I chose something simple- I just re-implemented BCDodger. I added features, and it runs much better this way, but the core concept is the same.

Some minor peeves

XNA is quite powerful – I have no doubt about that. Most of my issues with it are minor. One example is that XACT doesn’t seem to support anything other than WAV files, which is a bit of a pain (this is why BCDodgerX’s installer is over twice the size of BASeBlock’s, despite having far less content). Another minor peeve is that there is no real way to draw lines or geometry; everything has to be a sprite. You can fake lines by stretching a 1×1 pixel as needed, but that just feels hacky to me. On the other hand, it’s probably pretty easy to wrap some of that up into a class or set of classes to handle “vector” drawing, so it’s probably just me being used to GDI+’s lush set of 2-D graphics capabilities. Another big problem I had was with keyboard input – that is, getting text entry working “properly”, without constant repeats and so forth. Normally, you would check whether a key was down in Update() and act accordingly. This didn’t work for text input for whatever reason, and when it did, it was constrained to certain characters. I ended up overriding the Window Procedure and handling the key events myself to get at key input data as needed, then hooked those up to actual events and managed the input that way.
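As for faking lines by stretching a 1×1 pixel: the trick boils down to a rotation and a scale, which feels less hacky once it’s wrapped in a helper. A sketch of the math (Python for brevity; the function name is hypothetical, but in XNA the result would feed one of the SpriteBatch.Draw overloads that take a rotation and a scale):

```python
import math

def line_as_sprite(x1, y1, x2, y2, thickness=1.0):
    """Given line endpoints, return the (position, rotation, scale) needed
    to draw a 1x1 white texel as that line: stretch it to the line's length
    along X, to the desired thickness along Y, and rotate it into place."""
    dx, dy = x2 - x1, y2 - y1
    length = math.hypot(dx, dy)      # how far to stretch the texel
    rotation = math.atan2(dy, dx)    # angle of the line, in radians
    return (x1, y1), rotation, (length, thickness)

# A line from (0, 0) to (3, 4) needs the texel stretched to length 5.
pos, rot, scale = line_as_sprite(0, 0, 3, 4)
```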


Overall, I have to conclude that XNA is actually quite good. There are some intrinsic drawbacks – for example, it isn’t cross-platform (to Linux or OS X) – plus the aforementioned basic problems I had, which were probably just me adjusting my mindset. It’s obviously easier than using Managed DirectX yourself, or DirectX directly (if you’ll pardon the alliteration), and it is designed for rapid creation of games. With the exception of the high-score entry (which took a bit for me to get implemented properly), BCDodgerX was a single evening of work.

Posted By: BC_Programming
Last Edit: 21 Jun 2012 @ 11:50 AM

Comments Off on VS2010, XNA, and BCDodgerX potpourri
 13 Dec 2011 @ 10:04 AM 

I don’t know how helpful this will be, but it sort of surprised me.

Basically, my brother has managed to go through three PS3 consoles. Each time, being the hardware expert he is – the type who, when my 486 wasn’t booting up, would open it up and make sure every connection was plugged into something – he decided he could fix it himself. I think the issue was that it wasn’t reading discs, or something. Of course, my advice was to send the bloody thing to Sony, but hey, it was his warranty to void. What ended up happening, of course, was that he ripped the entire thing apart, had absolutely no idea what he was doing, and ended up having to buy a new one, since that one was no longer eligible for service. Anyway, I stumbled on the picked-apart carcass of his old PS3, and I remembered that they have hard drives. So I opened up the HD access panel, took out the drive, and to my surprise found it was just a 2.5″ SATA drive. To confirm this, I plopped it into my laptop and installed Mint 12 on it. It’s mine now, heh. I’m not sure where his other picked-apart carcasses are, though. It’s a shame this laptop only allows for the installation of one hard drive, too.

Anyway, I didn’t know that they were so interchangeable with PC parts in this manner, so maybe others might not be aware of it either. And I know quite a few people with dead consoles (PS3, Xbox 360, etc.) that they have basically shelved and forgotten about, so if somebody needs an emergency hard drive, this could be a useful nugget of info.

On a related note, Mint 12 is extremely impressive… although it primarily reminded me just how heavily I customized the Mint 10 installation I was used to on my laptop. The changes were mostly UI, and I couldn’t figure out how to get my beloved Emerald working with a few quick Google searches, so I swapped the drives back over. Now, I could have messed about with Mint 12 by simply using the Live CD, but the Live CD is always somewhat slow and hardly shows the OS at its true potential. And of course you can’t really add anything or make many changes to it, since it’s booting from a read-only medium.

Regarding console systems, though: is it just me, or are they basically just re-purposed PCs? The Xbox and Xbox 360 are quite literally PC hardware specially built for handling gaming tasks, with specific software and firmware “locks” to try to keep nosey people from finding out it’s really just a PC. This isn’t so bad, but it’s sort of stupid – I mean, really, the original Xbox is essentially a Pentium 3 PC. The controller ports are just freakazoid USB connectors that they purposely changed just so they wouldn’t be USB connections, and possibly to make them stay in better; USB ZIF slots aren’t what I would call the greatest for controllers. On the other hand, why change the entire pinout configuration – why couldn’t they have simply added some sort of additional mechanical connection that made them stay in better? And all the fancy crap about locking the hard drives from being changed by the user, and so forth, is sort of silly. It doesn’t make a whole lot of sense to artificially limit what the device is capable of simply because you charge less for it than an equivalently configured PC.

And with all the add-ons for Console machines, such as keyboards, support for USB controllers, Hard Drives, Ethernet; the only real difference between consoles and PCs is that consoles always have the exact same hardware (things like GPU and CPU) that software developers can expect, whereas PCs have widely varying hardware; also, the Consoles are purposely locked down for reasons I can only guess.

This is all well and good, but as I noted, my brother has gone through at least three PlayStation 3 consoles. He wasn’t throwing them around the room or anything; I doubt he was abusive to them at all. And yet they stopped working, in one way or another. The failures of Xbox machines are no less of a problem. Meanwhile, my Super Nintendo is 20 years old and still works perfectly fine. A commonly cited “excuse” is that the machines are more complicated. Well, these people need to take a good hard look at the schematics for the various SNES ASIC chips and perhaps re-evaluate their definition of complicated. The real change is that newer consoles have more mechanical parts, generate more heat, and are squashed into as small a form factor as possible. It has nothing to do with them being “more complicated” and everything to do with them being built out of cheaper components than a PC (to justify the lower price point), which makes all hardware issues “non-user-serviceable”, unlike, say, a PC. This was an acceptable policy for things like the SNES or the Sega Genesis or earlier consoles of that nature; most of the issues those consoles have are the result of loose connections that typically require soldering knowledge to fix properly. But now that sort of policy is sort of silly, since a lot of the problems with modern consoles are relatively simple in comparison, and many enthusiasts who know what the issue is could fix it themselves, if the machines weren’t put together in a way that dissuades attempts to disassemble them – things like special screws (Torx). Again, that was warranted when the device innards were generally not serviceable by the typical enthusiast, but now it’s just an artificial barrier to make the machines seem less user-serviceable than they are.
And, more to the point, the fact is that they simply fail more often now, and it seems like it would be in the companies’ best interest to make them more user-serviceable, since that would mean fewer warranty repairs. (Obviously they can keep their old “take it apart and void the warranty” policy.)

Posted By: BC_Programming
Last Edit: 13 Dec 2011 @ 10:04 AM

Comments Off on Broken Consoles Means “free” Hard Drives :D
 27 Nov 2010 @ 1:21 AM 


It’s unheard of to find a person who hasn’t at least used a Microsoft product; it’s even less likely to find somebody who hasn’t been exposed to one. As it stands now, there are essentially three “camps”:

1. People who think MS is successful not by chance or by “copying” anything, but by coming up with good ideas as well as creating good implementations of other ideas;

2. Open Source zealots, who spend much of their time criticizing Microsoft for copying Apple and then turn around and copy both MS and Apple in creating their desktop environments; additionally, the Open Source zealots who can’t write a line of code and push the “Open Source” concept because it basically means “free software”;

3. Generation-2 Apple users; the type who think the Mac Classic sucks and apparently don’t realize that OS X is pretty much just a desktop environment for BSD. I cannot think of a single reason to ever buy a Mac today, personally. The original Macintosh, versus the PC with DOS, had a clear advantage in that it possessed a GUI, whereas DOS was a command-line interface; this justified the higher price tag for the product. Today, OS X offers no features that cannot be found easily on either Windows or a free Linux desktop environment; the claim is that you are paying for “quality hardware” that “just works”, but really you’re simply paying a tax to become a member of an exclusive club. It’s not the machine or the functionality Mac users are after anymore; it’s the symbol of success it essentially provides. “Hey, I have lots of disposable income to spend on overpriced toys” is the message it sends.

The common argument is that Microsoft got to its dominant market position via “strong-arm” tactics and by “copying” ideas. First, when you run a company and an opportunity arises, you don’t think, “golly gee, I sure hope this doesn’t hurt my competitors.” The word “competition”, especially with regard to software, has somehow lost all meaning; people like to think that there is no competition, and there certainly is less of it today. But it’s not Microsoft’s fault that nobody is coming out with products that can compete with theirs, just as it wouldn’t have been Apple’s fault if MS had not been able to launch Windows to compete with the Macintosh on the PC. It’s called business.

“Copying” is an interesting word that people like to use to describe Microsoft’s business strategy; however, there are two flaws with this characterization.

First, it implies that they “stole” something, when in fact they saw a good idea and implemented it themselves. One could posit the question: “if we weren’t supposed to copy, merge, and combine features, what the hell are we working towards?” In fact, the bitter irony here is that this line is often uttered by Linux users, who seem to forget that their OS of choice has lagged behind both Apple and Microsoft and has “copied” features from both. Indeed, one could say that building upon each other’s work is the very concept that Open Source software pushes; so hearing Linux users say this is ironic, in that they are implying their Open Source philosophy is somehow only a good one when applied to Open Source.

Did Windows “copy” a lot of features of Apple’s Macintosh? Of course it did. When you are building a car to compete with other cars, you use the same shape for wheels; you don’t redesign the wheel. Second, when somebody says that Microsoft steals “ideas”, the term is really useless: despite the aura around intellectual property, just thinking about something doesn’t mean that somebody else creating an implementation of your idea is stealing. An idea takes an armchair and a few minutes, and absolutely no physical effort; implementing an idea is the hurdle that any technologist, in any era of computing, had to get across. An idea is useless without an implementation. If I were to think up some new type of program, but did fuck all to create any prototypes or anything to that degree, I can’t in all fairness say that somebody “copied my idea” when they come up with an implementation; there was nothing to copy. Ideas are not physical objects. Some may say, “but the Apple was an implementation of an idea.” And yes, of course it was. But consider this: Windows runs on the IBM PC; the Mac OS environment runs on the Macintosh. If Apple had won its litigation against Microsoft, the IBM PC’s potential for showing a graphical environment might never have been realized. One could break down into any number of alternate-history theories that go in all sorts of directions, but the truth is, it’s impossible to say what would have happened, simply because it didn’t. And now, the concept of a GUI that uses the same metaphorical approach is essentially the common denominator. What Microsoft naysayers are implying is that this is a bad thing; they are implicitly supporting the older paradigm where every single machine was managed in some completely separate way, and that doesn’t help anybody.

Another thing that MS is criticized for is lack of innovation. To be perfectly frank, this is absolute bullshit. First off, if this were the case, I don’t see how other companies aren’t equally guilty; and the fact is that it’s not the case.

Take, for example, the Windows 95 Start menu; no other GUI had implemented anything of this sort. The taskbar was an innovation because it made it possible to manage all the various running tasks in an always-visible location; this came from observation of their customer base, who would complain that their programs would “go away” because there was no longer a visual indication of them running (another window covered them, and they were essentially gone). Or take the Windows Vista Start menu: the search bar is not something I had seen established in any major competing graphical user interface before that. It addresses the previous criticism of the Start menu, whereby the various folders and icons would often fill the screen as you installed/uninstalled applications. However, nobody saw it like that; instead they decided to focus on the negatives, such as the higher system requirements. Err, HELLO, each version of Windows has higher system requirements than the last. This is hardly surprising, and the fact that Vista implemented a new desktop composition system (“stolen” from Apple, despite the fact that this was a natural extension of the desktop given the ubiquitous availability of 3-D hardware on even the most value-oriented computers), as well as the larger gap between the XP and Vista releases, pretty well explains that.

Another example: take the Office Ribbon. Despite its detractors, it has become hugely successful, and people have in fact found themselves more productive with it. This is because rather than thinking about the problem for a few seconds and then dismissing the current solution with “we shouldn’t change it because I don’t like change”, they actually looked at what they had and realized, “holy shit, we have too many menus/toolbars and crap here”, and they came up with a solution. The thing is, the Ribbon made users and developers alike rethink the common user-interface paradigms we have become accustomed to, such as menus, buttons, and so forth. The hierarchical pull-down menu system was an extension of the “basic” pull-down menu, where each menu title had only a single set of options; there was no concept of submenus within those menus. However, at some point that model stopped working; the menus had way too many options. The natural method was, of course, to group those options hierarchically: here are the options for inserting an object, here are the options for how to format cells, and so on. The Ribbon is a testament to the fact that there is no magic-bullet method that works well in all situations; a program with three options can work well with just three buttons in a window, but if you have 10 options you had better use a menu, and with 50 or so options, you’ll need to arrange them hierarchically.

It’s important to realize that Microsoft is not pulling the industry along on its coat-tails by mistake; the fact is that even their competitors are playing catch-up with their technologies, and before they can release a product that even attempts to compete, MS has already released another version. It’s not a lack of innovation on Microsoft’s part that is causing this; it’s a lack of innovation on the competition’s part.

Much of this is different when you look away from desktop applications and operating systems and instead look to the world-wide web. Instead, we find Google has essentially cornered almost every facet of the internet; however, they carefully crafted their approach so despite them essentially doing the exact same thing to the web as Microsoft did to the OS and desktop applications markets, they are still regarded as “good guys” which is a particularly intriguing revelation.

This brings me to another point: Internet Explorer.

Web developers – including myself – hate trying to work with Internet Explorer; it doesn’t work like the other browsers. People like to blame MS for this. But a lot of it is actually the W3C’s doing.

Take, for example, some of the early draft specs for HTML4, CSS, and the DOM. The W3C said, “alright, we might make it like this, but no promises.”

And all the browsers ran out and implemented it. Then the W3C went to ratify the specification and decided, “hey, you know what? All the stuff we have in that spec that only IE has implemented so far… let’s rip those out.” And they did. So IE suddenly had “non-standard” features that were in fact originally in the spec and simply not implemented by Netscape or whatever the other browsers were at the time, because only IE bothered to implement those particular portions according to the specification. Which brings me to another point: the specifications are about as vague as possible. If your specifications are open to any sort of interpretation, they aren’t specifications, they’re handwavey suggestions. IE was the first browser to implement the CSS box model according to the specification; then the W3C ripped out that entire page of the spec. What is most interesting is that almost every single thing they took out of the spec was implemented only by IE, and almost every single thing they added that wasn’t there before was non-spec stuff added by the other browsers. Seems a bit unfair.
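For context, the box-model fights mentioned here come down to what the CSS width property covers. Under the W3C specification, width is the content only, with padding and border drawn outside it; the competing interpretation (which survives today as box-sizing: border-box) makes width cover content plus padding plus border. A sketch of the arithmetic difference (Python for brevity; the function names are illustrative):

```python
# The two historical interpretations of the CSS 'width' property.
# W3C spec: width = content only; padding and border sit outside it.
# "Border-box" interpretation: width = content + padding + border.

def rendered_width_w3c(width, padding, border):
    # border-to-border size a spec-compliant browser draws
    return width + 2 * padding + 2 * border

def content_width_border_box(width, padding, border):
    # content space left over under the border-box interpretation
    return width - 2 * padding - 2 * border

# width: 100px; padding: 10px; border: 5px
spec_box = rendered_width_w3c(100, 10, 5)          # 130px on screen
quirks_content = content_width_border_box(100, 10, 5)  # 70px of content
```

Hand two browsers a box styled `width: 100px; padding: 10px; border: 5px` and one draws it 130px wide while the other draws it 100px; that is the kind of divergence the spec churn produced.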

Now, it’s gotten better in recent years, but it’s also gotten worse. MS refuses to implement any feature that is non-standard or not in the spec, because they know the W3C is some sort of demon spawn that purposely messes around with the spec as much as possible just to fuck with IE’s implementation. Meanwhile, the W3C is all friendly with Firefox and Opera and all the other implementations. It’s like a goddamned love circle.

And then you have that anti-trust nonsense. I’ve never really understood it. I mean, OK… we’ve got Netscape (with, err… Netscape) and Microsoft with Internet Explorer. When IE was being charged for, it was all cool.

But then they started giving it away free with the operating system! HORRORS OF HORRORS! Obviously they were TRYING to suffocate Netscape! I mean, that might have been a secondary reason, but for fuck’s sake, why the hell was Netscape their only goddamned product to begin with? How many years were they in business with a single product? And many people say, “well, golly, why would they spend money to make IE and then release it for free?” I don’t know. Why the hell did they spend money to redesign Paint in Windows 7? The way I see it, Microsoft looked at the internet and thought: hmm, this is becoming as ubiquitous as simple text editing, word processing, basic bitmap editing, and recording short sound clips; we should distribute a way to do this with the OS. And that’s what they did. But suddenly it’s a big no-no, because the slow company whose single product did the same thing, and charged for it, was all “hey, no fair, we don’t know how to sell more than one product, so that’s anti-trust!” It would be like a company that sold a basic text editor claiming anti-trust when Microsoft *GASP* included a text editor with MS-DOS 5! The NERVE of the company! How dare they include basic tools that increase the usability of the operating system! DAMN THEM!

I mean, anti-trust law is supposed to protect the *public* from a monopoly, not protect slow-to-change companies that don’t know how to create more than one product from other companies that happen to be able to create that same, relatively simple (browsers were hardly that complex) applet and include it with the OS.

And nowadays the hubbub is all “OMG! they should let you choose your browser when you install windows!”

What the FUCK is that? Should they let you choose from a set of other free text editors to use instead of Notepad? No, because if you want another editor, you download another editor. Should they offer other free alternatives to Paint or WordPad or Sound Recorder (which actually became useless with the latest version in Windows Vista/7)? No; that would be stupid. But apparently they are supposed to quite literally present a choice amongst their competitors in the browser market. Why only browsers, though?

Posted By: BC_Programming
Last Edit: 24 Dec 2010 @ 12:44 PM

Comments Off on Microsoft and why the mob-thinking is wrong.
