24 Aug 2016 @ 10:32 PM 

User Account Control, or UAC, was a feature introduced to Windows in Windows Vista. With earlier versions of Windows, the default user accounts had full administrative privileges, which meant that any program you launched also had full administrator privileges. UAC was an attempt to solve the various issues with running Windows under a Limited User Account, and to make the more advanced security features of Windows far more accessible to the average user. The idea was that when you logged in, your security token- which was effectively “given” to any software you launched- would be stripped of admin privileges. For a process to get the full token, it would require consent; that consent was implemented via the UAC dialog, allowing users to decide whether to grant or deny the full security token.

It was a feature that was not well received; users complained that Vista was restricting them and making them ask for permission for everything- something of a misinterpretation of the feature and how it works, but an understandable one. Nowadays, it is practically a staple of Windows, being present in the default user accounts through 7, 8, and now 10. Even so, it has had some design changes over the years.

One interesting aspect of the UAC consent dialog is that it differentiates between a “Verified”, or signed, executable and an unsigned one, displaying slightly different designs based on the evaluation of the executable. A signed executable includes a digital signature which verifies that the program has not been altered by a third party- so if you trust the certificate authority as well as the publisher, it should be safe.

Windows Vista

We start our tour, perhaps unsurprisingly, with Vista.

Vista UAC Dialog, shown for an executable with a verified signature.

Vista UAC Dialog, shown for an executable with a verified signature, after expanding the Details option.

When the executable is verified, we see a relatively straightforward request. Expanding the dialog, as shown in the second image, provides access to the application path; there is no way, within the UAC dialog, to inspect the publisher’s certificate- that needs to be checked via other means.

Once we start looking at unverified executables, however, we see quite a different presentation:

Windows Vista UAC Dialog displayed for an unverified executable.

Windows Vista UAC Dialog shown for an unverified executable, after expanding the details option.

Rather than the more subdued appearance seen when the application is verified, the dialog displayed for an unverified application is bolder; the options are presented as TaskDialog buttons, and the entire dialog has a very “Task Dialog” feel; additionally, the colour scheme uses a bolder yellow. Interestingly, expanding the “Details” really only adds the file location to the upper information region. Kind of an odd choice, particularly since the UAC dialog will usually be on its own secure desktop, and thus screen real estate is not as valuable as it might otherwise be.

Windows 7

On Vista, elevation tended to be required frequently, and thus UAC dialogs were rather common; users needed to give consent for many standard Windows tasks, such as adjusting Windows settings. Windows 7 adjusted some of the default behaviour, and it does not, by default, present consent dialogs for many built-in Windows operations. The design of the UAC dialog was also adjusted slightly:

Windows 7 UAC dialog on a verified/signed executable.

Windows 7 UAC dialog on a verified executable, expanded.

For verified executables, the dialog is rather unchanged; the biggest change we see is in the title copy: “Windows needs your permission to continue” becomes a question asking whether the user wants to give a particular program permission. The dialog now includes a hyperlink in the lower-right that takes you right to the UAC settings, and publisher certificate information is now available when the details are expanded.

Windows 7 UAC Dialog for an unverified Program.

Windows 7 UAC dialog for an unverified program, expanded.

The unverified dialog is quite a departure from the Vista version. It takes its design largely from the “Signed” version of the same dialog, perhaps for consistency. It dumps the “TaskDialog” style presentation of the options, instead using standard dialog buttons, as with the “Signed” appearance.

 

Windows 8

UAC dialog on Windows 8 for an unverified executable.

UAC Dialog on Windows 8 for an unverified executable, expanded.

UAC Dialog on Windows 8 for a Verified executable.

UAC Dialog on Windows 8 for a Verified executable, Expanded.

 

 

For the sake of completeness, I’ve presented the same dialogs as seen on Windows 8. There have been no changes that I can see since Windows 7, excepting of course that the Windows 8 window decorations are different.

Windows 10

UAC Dialog from the Windows 10 November Update, running an Unverified executable.

UAC Dialog from the Windows 10 November Update, running an unverified executable, showing details.

UAC Dialog running a Verified executable on the Windows 10 November Update.

UAC Dialog from the Windows 10 November Update, running a Verified executable, showing Details.

 

Yet again, included for completeness, these are the UAC dialogs shown by Windows 10 in the November Update. They are again identical to the Windows 7 and Windows 8 versions, providing the same information.

 

This all leads into the reason I made this post- the Anniversary Update to Windows 10 modified the appearance of the User Account Control dialogs to better fit with UWP standards:

 

Windows 10 Anniversary Update UAC dialog for an Unverified Executable.

Windows 10 Anniversary Update UAC dialog for an unverified Executable, after pressing “Show Details”.

Windows 10 Anniversary Update UAC Dialog for a Verified application.

Windows 10 Anniversary Update UAC Dialog for a Verified Application, after pressing Show Details.

 

As we can see, the Windows 10 Anniversary Update significantly revised the UAC dialog. It appears that the intent was to better integrate the “Modern” user interface aesthetic present in Windows 10. However, the result is a bit of a mess; the hyperlink to display certificate information appears for unverified executables, but in that case, clicking it literally does nothing. The information is presented as a jumble with no text alignment, whereas previously the fields were well defined and laid out. I’m of the mind that updating the dialog to UWP should have brought forward more elements from the original, particularly the information layout; the “Details” hyperlink in particular should be more clearly designated as an expander, since as it is it violates both Win32 and UWP platform UI guidelines regarding link label controls. I find it unfortunate that parsing the information presented in the dialog has been made more difficult than it was previously, and I hope that future updates can iterate on this design to not only meet the usability of the previous version, but exceed it.

 

 

 

 

Posted By: BC_Programming
Last Edit: 24 Aug 2016 @ 10:35 PM

 13 Nov 2015 @ 8:27 PM 

A few days ago, Microsoft released an update for Windows 10, the “Windows 10 Threshold 2” update. In some ways, this update is practically a new OS version; in others, it is, well, an update. This is definitely Microsoft’s approach moving forward- more frequent releases of new versions, mirroring in some ways Android and iOS updates. How well that applies to desktop PCs, however, is arguably another question; I’m certainly no fan of the approach. It is this new approach which leads to the telemetry and diagnostics tracking- which cannot be shut off- that has been so ill-received by Windows users considering upgrading their systems, myself included.

Threshold 2 does address some of my own “holdbacks” on upgrading my primary systems.

Colored Title bars are back

One of my cosmetic issues with Windows 10 was that customization options were removed. To be fair, the trend of removing options arguably started with Vista- the move away from the Luna interface and the scrapping of the Classic theme in particular eventually phased out some of the more powerful customization options. Windows 10, rather curiously, took it a step further, and effectively forced all titlebars to be white. I’m not certain of the logic or design considerations behind that decision. It was possible to override this and use your own colour, but the workarounds- which involved hacking a theme file- were less-than-stellar and had their own side effects. With Threshold 2, we are given the ability to customize the colour again.

Windows 10 Threshold 2 now provides the capability to customize titlebar colours.

This “New” feature (which is actually an old feature present in earlier Windows 10 builds) is accessed by adjusting the Accent color.

Perhaps even more interesting is that one can directly access the colour selection options from Windows 8, which allow customizing the colour directly rather than choosing from a pre-selected palette. This can be accessed by using Start->Run and running “control color” (without the quotes). Quite an interesting, if oddly hidden, capability.

A lot of the other features are far more “meh”; I wasn’t really able to find much concrete information. I expect quite a few internals have been revised and other things adjusted based on feedback, though. (Oh, and, of course, I’m sure a lot of changes were made based on the wonderful telemetry they received?)

Posted By: BC_Programming
Last Edit: 13 Nov 2015 @ 08:27 PM

Categories: Microsoft, Windows
 12 Nov 2015 @ 1:26 AM 

Recently, I overheard a very curious statement. It was made in regards to Visual Studio versions, during a discussion of the fact that many developers still use Visual Studio 2012 or 2013. The quote went something like this:

… those people likely have a shitty job; they are maintaining legacy code of some kind. There are still some people who have to use Visual Studio 2010. Feel bad for those people, they are probably dealing with real, serious, every-day frustrations beyond just code. Like their company is run by cheap bastards who won’t let them upgrade to newer tech

The perspective being shared here is that, essentially, if somebody is not using the latest-and-greatest version of an IDE, they must work for a shitty company with shitty people, and everybody else should feel bad for them.

Notably, however, no actual reason was ever given. It is basically accepted as a given: if you aren’t using the latest version of a piece of software, you are clearly working with “legacy software”.

As with any new version of a program, you shouldn’t be awed by the marketing rhinestones. Ask yourself:

  • Does this new version have features or capabilities that make my work easier?
  • Does this new version have shortcomings that make my work harder?
  • Is the net effect of the two listed considerations worth the cost of switching?

This applies to any software version. Presuming that the latest version is going to be objectively better for every task than the previous version is simply ignorance. In the case of Visual Studio 2015, it works fine for my own projects, but I’ve found it simply doesn’t work for work projects. And even in the former case, it’s a situation of “Well, I may as well use it while I can” rather than “oh wow I am so happy to use this latest and greatest version because of X, Y, and Z!”. It doesn’t work for work projects not because the work projects use ancient code that relies on, say, .NET 1.1 or something. The main issue is that we simply don’t gain anything by using Visual Studio 2015- so there is no point in doing so, since switching to VS2015 would represent a non-zero effort by every developer.

It is frustrating trying to relate to modern software development. I’m 28 and I already feel that I’ve lost touch because I’m no longer interested in using the latest whiz-bang feature Microsoft has put out. Aside from the fact that they cannot seem to make up their damned minds about WHAT they are going to support and for how long, each new technology is heralded as completely changing how we will develop applications, but realistically it just means you are making all the same applications, you are just doing it differently. Or, depending on the tech, it might simply not make sense for all application types. It would be foolish to create a game using WPF, for example.

In WPF’s case, it replaced Windows Forms. It is very strange to hear developers talk about Windows Forms as if it is ancient history, and reflect on how they are so glad they no longer have to deal with it. Personally, I’m fine with Windows Forms, and it is quite a different scenario from my situation with VB6. With VB6, I stuck with it because I was ignorant of new alternatives. With WPF, I’ve used it and it’s nice, but it has enough of its own idiosyncrasies to make me question any developer who considers it objectively superior. It avoids a few issues WinForms has by design (the DPI handling in Windows Forms, where it saves design-time DPI settings, is one such instance). The main appeal of WPF seems to be that it helps you write less C#/VB.NET code. And that is true- you will have less code-behind if you do it properly. But you’ll just have a bunch of weird XAML instead, and instead of writing code to load and display Customers, you’ll be writing the same amount of code, but you’ll be writing it in the implementation of new classes and new IValueConverter implementations to allow XAML to understand your data in the context of bindings.
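To illustrate the kind of plumbing I mean, here is a minimal sketch of an IValueConverter. The class name, the “last order date” scenario, and the display format are all invented for the example- this is not code from any real project:

    using System;
    using System.Globalization;
    using System.Windows.Data;

    // Hypothetical converter a XAML binding might use to turn a customer's
    // last-order date into display text. Everything here is illustrative.
    public class LastOrderDateConverter : IValueConverter
    {
        // Called when data flows from the bound object to the UI.
        public object Convert(object value, Type targetType, object parameter, CultureInfo culture)
        {
            if (value is DateTime)
                return ((DateTime)value).ToString("d MMM yyyy", culture);
            return "Never";
        }

        // Called when data flows from the UI back to the object; not needed here.
        public object ConvertBack(object value, Type targetType, object parameter, CultureInfo culture)
        {
            throw new NotSupportedException();
        }
    }

In the XAML, such a converter gets declared as a resource and attached to a binding’s Converter property; the logic hasn’t disappeared, it has just moved.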

Is this bad? No, not at all. The issue, from where I’m standing, is that it really only works best if you are able to design the database and the software simultaneously. You aren’t going to be able to directly use XAML against an ISAM database converted straight to tables without rather significant code-behind to construct a reasonable representation of instances of your data “objects” based on that data.

Another- arguably more onerous- issue is that writing against WPF forces your software to run on Windows. With Windows Forms, it is possible to run your software on other platforms using technology such as Mono, but WPF relies on Windows-only technologies such as DirectX, which simply are not available on those other platforms. The result is that you cannot really re-use what you wrote for any sort of cross-platform application, as the UI layer is coupled very strongly with WPF.

Realistically, moving to a new technology- Windows Forms to WPF, WPF to whatever-the-hell Microsoft wants us to use for Windows 10, etc.- comes with a significant opportunity cost. Additionally, the real issue is that in order to “do it correctly” you tend to need to completely obliterate what has already been built. This introduces significant problems. If you have a customer that is using the old database and has created hundreds of thousands of invoices for an equally large number of customers, with massive gobs of data that they want to keep, you aren’t going to be able to sell them your rewritten product if it means they will have to rekey all of that information. It doesn’t matter what features the software has at that point, because customers simply aren’t going to spend the time to retrain their employees, rekey all the data into your new system, and pay you for the chance to do so. They will continue to use the existing, functional system. This puts the onus on you- the developer- to be able to bring that old data into the new system in at least a semi-automatic way, which means that, at least on some level, the data will need to be compatible. Just having that necessity limits your ability to redesign how things are done, since you are effectively forced to continue to do them the same way.

Fundamentally, making that sort of switch only makes sense for designing a new product, which you will need to sell. If you have customers using your existing software, you may very well *drive them away* just by suggesting that you’ve moved to a new system- they’ll find another software package that works like the one you previously provided and supported. Only technical sycophants are wowed and awed when you say “it uses the latest and greatest technology”, because everybody else recognizes that for the marketing garbage that it is. A piece of software being written in WPF doesn’t magically make it a “better product” than a similar program written against Windows Forms, and it sure as hell doesn’t mean that the WPF program wins any points for simply being newer. If a product doesn’t meet a need that another product does, it doesn’t matter if the former is made out of cyborg amazo quantum computing library deluxe edition- because they don’t need that.

A significant issue with switching is that there is a huge cost involved, and in the long run there tends to be very little benefit. Switching to WPF, for example, for a project that contains decades of effort, is not an undertaking that can be done over a weekend. It requires a significant amount of effort, and all of that effort is being put towards something that simply isn’t going to provide any revenue for years. And when it does- it will be providing the exact same amount of revenue as the current product. Meanwhile, that current product no longer receives enhancements and stagnates, and all your customers move to something else, because those other companies have continued to improve their product while you were screwing around with IValueConverter implementations.

And then, when you’re done- you’re just in time to move the product to some new whizbang UI framework that all the Microsoft sycophants swear is the second coming of Christ in framework form.

It’s actually somewhat interesting to consider that as time has gone on, even the simplest applications have gotten more and more complicated, and then we get frameworks and other garbage to wrap around that complexity, but we never think about ways to actually reduce that complexity- instead we merely try to hide it behind more and more abstractions. So now you’re building software by stacking metaphors, not dealing with computer code or computer science concepts or algorithms. You’re stapling a few ready-made components into a DataBinding, and maybe writing a bit of glue code to put it all together, and calling yourself a software developer while patting yourself on the back with some sort of weird chimney brush which has no place in this overextended simile.

Posted By: BC_Programming
Last Edit: 12 Nov 2015 @ 01:26 AM

Categories: .NET, API, C#, Microsoft, Programming
 07 Nov 2015 @ 9:27 PM 

Windows 8 introduced the concept of a Windows “App”. This has moved forward through Windows 8.1 and Windows 10.

Effectively, these “Apps” are what was formerly referred to as “Metro” and is now called the Modern UI. They use something of a different interface paradigm, with different controls and with elements typically sized for easier touch-screen use. That’s all well and good.

With Windows 8, 8.1, and 10, using these Apps tends to be optional. For the most part, there are equivalents you can use. A good example is Control Panel; there is a “Settings” App which has some options, but for the most part there is an overlap with the “old style” Control Panel.

Recently, however, I needed to open an App for whatever reason. Or maybe I opened it by accident. Rather than the app opening, me being annoyed, and then closing it, it instead said “This app can’t open” and suggested that I perform a Refresh to fix it. This sent me down something of a rabbit hole- searching online for fixes, trying them, getting weird results, etc.

Actually, I’ve jumped in the ring to wrestle these issues a few times- I’ve had the problem on at least one of my systems for ages, and it recently appeared on another. Being unable to make some changes to the system was annoying enough that I decided to fix the issue- which, again, sent me down the rabbit hole. Try this command. Try this other one. Didn’t work? Use this troubleshooter that doesn’t do anything useful. Didn’t work? I don’t know. Maybe try refreshing your PC after all?

Eventually, I stumbled, almost by accident, on the solution. Many of the attempts were encountering an error about “The package repository is corrupted”. I found nothing addressing that except some statements about registry key permissions, which I checked and were fine. So I decided to find where this package repository was- C:\ProgramData\Microsoft\Windows\AppRepository- and nuke it completely. I deleted the entire contents of the folder, then ran the command again. I expected a different error or something, but that seems to have done the trick, and now those Apps all work again.

Effectively, the Windows Store/App infrastructure is something of a “package manager”, and it stores the package information in that folder. However, it also keeps an index of the package information in a smaller repository file, and it seems that file can become corrupted. I tried deleting just that file as well, but that never fixed anything, so I ended up going with the nuke-it-from-orbit option.

My full list of steps was:

  1. Delete contents of C:\ProgramData\Microsoft\Windows\AppRepository
    Deleted all the files inside this folder. Quite satisfying.

  2. Ran an arbitrary non-obvious command from an administrator command prompt

    This effectively “re-registers” the Windows Store itself.

  3. Ran an arbitrary non-obvious command from an administrator command prompt

    Like the above, but this re-registers the “Settings” App.

  4. Ran a final non-obvious program from the command prompt
    After all this, other apps were still causing problems, like the useless Music app or the useless Mail app or the various other useless apps that are provided and available. I’m not one to leave a hippo in vinegar, so I ran one more thing- I opened Windows Search and typed “wsreset”, which brought up wsreset; I right-clicked it and selected to run as administrator. After doing so, all the apps started working properly again.

I’d like to pause for a moment, however, to really admire how poorly engineered something has to be for the suggested fix to almost any problem with it to be nuking everything and starting over. Microsoft calls it a “Windows Refresh”, but it is a reinstall, and suggesting users reinstall an OS to fix these issues is absolutely ridiculous. Another very comical aspect to this is that in the “Windows versus Linux” argument, Windows diehards will complain that Linux requires arcane terminal commands to fix issues. It’s hard to argue that- some issues in Linux distributions do require dropping to the terminal and running particular commands. But given the above, it doesn’t look like Windows is a stranger to that anymore.

Posted By: BC_Programming
Last Edit: 07 Nov 2015 @ 09:39 PM

 11 May 2013 @ 6:29 PM 

In what will hopefully be a recurring series on older Development Tools, Languages, and Platforms, I will be covering some information on my old flame, Visual Basic.

Visual Basic has a relatively long history, going back to around 1991. BASIC itself, of course, has an even longer history, starting at Dartmouth College in the 60s.

The question is- what made Visual Basic special? One of the big things going for it at the time was that it was the easiest way to develop applications for the booming Windows desktop. It made many development tasks easy, provided form designers where you would otherwise need to write layout code, and in general made the entire process of Windows application development a lot more accessible.

Visual Basic was one of the earliest examples of a “RAD” tool. RAD- or “Rapid Application Development”- essentially allowed a company or other entity to get a feel for how an idealized application might look and feel with minimal effort. By making UI design something that requires little to no code, the task requires less expertise- that is, the company doesn’t need to use its development team resources for that part of the design process. Another huge boon of the technology was that it made application development on Windows far more accessible- at least to those with big wallets, though considering the cost of a computer and software in those days, anybody who owned a computer was likely in a financially secure situation anyway. A full synopsis of the history of Visual Basic and its origins isn’t really appropriate here, and has been covered far better by those actually involved in the process. You can read one such overview here.

In this series, I will first cover Visual Basic versions 1.0 through 6.0, and possibly go through the various .NET implementations as well. As we move from one version to the next, we will see the language and its surrounding toolset (IDE) evolve.

The First Split

Visual Basic 1.0 for Windows was released in 1991. It was part of a move by many companies to get powerful development tools onto the increasingly popular Windows desktop environment. Visual Basic itself grew out of its spiritual predecessor, QuickBASIC. It made some cosmetic changes to the language, and changed it to be event-driven. Visual Basic also introduced two modes of operation, design time and run time. At design time you would edit your code and your forms; at run time, you could stop the application (using either the Stop statement or a breakpoint).

Visual Basic 1.0 for Windows running under Windows 3.11

As we see above, Visual Basic 1.0, running on Windows 3.11, showing the Form View. I’m not sure why I went with BRICKS.BMP; usually I’m more a MARBLE.BMP guy, but hey, no accounting for taste. One thing that may surprise those used to later versions of Visual Basic is that you can still see the Desktop: the MDI interface wouldn’t be around until Visual Basic 5, at which point it was optional for VB5 as well as VB6 (you can invoke both with the /sdi command line argument to switch to SDI and /mdi to switch to MDI, in addition to configuration options). The “Project Explorer”, if we can call it that, is shown on the right. Right now it’s showing Global.bas- a module inserted into every application, and also the only module in VB1 allowed to have Globals, which may be responsible for the name- and FRMLSTVI.FRM, which is the form file I created. That form is shown to the left, and to its left is the Toolbox, from which you can choose a variety of Visual Basic controls.

One of the interesting features of Visual Basic was its support for Custom Controls. Custom Controls were specially coded DLL files with the .VBX extension, which stands for “Visual Basic eXtension”- because I guess using X’s in extensions was the cool thing to do back then; we didn’t know any better. These had to be written in C or another language capable of using the appropriate Visual Basic Control Development Kit, or CDK. There were a lot of Custom Controls available for VB 1.0 that added a lot to VB applications.

Along the Main Window (shown at the top) we notice two things: first, it lacks the Toolbar we would normally come to expect, and the Properties Editor is rather spartan, and takes up that area of the window.

Visual Basic 1.0, however, did not forget menus. A Window->Menu Design Window item exists which gives you this dialog for editing the menu layout of the active form:
Visual Basic 1 Menu Design Window

We’ll watch this dialog evolve through each version of Visual Basic (and eventually disappear- be sure to stay tuned for that exciting conclusion). One interesting thing about this first version of Visual Basic is that it was actually released primarily for Windows 3.0. That isn’t particularly amazing in and of itself, but it meant that it had to hand-roll facilities that Windows 3.1 would give applications for free, because it might be running on Windows 3.0. Behold:

Visual Basic 1 File Open Dialog

A hand-built, Custom File Open Dialog. And, a fairly terrible one, at that. Of course the one included with Windows 3.1 wasn’t exactly the pinnacle of UI ease of use either. We will see this dialog evolve over time as well with new versions.

Visual Basic 1 showing the Code of a Command Button

Here we see the code editor. I’ll be frank: it’s terrible, even by the standards of the time. It’s little more than a glorified Notepad, really. It also forces you to edit your source files one procedure at a time. Unpleasant, really. It does auto-correct your casing, which I suppose is something. Breakpoints are shown in bold. The Form Designer doesn’t let you select multiple controls by dragging a box around them, but you can use Control+Click to do that.

One other curious limitation of Visual Basic 1.0 is the lack of the ability to save in text format; all the files are saved in a proprietary, undocumented binary format. You can export and import to and from text files, but allow me to be the first to note that this is a gigantic pain in the ass. Using the Code->Export Text menu option, this is the result:

To be fair this isn’t much different than the same Program would probably look in VB6; with a few exceptions that we will obviously touch on as we move up through the versions.

Visual Basic 1.0 was a runaway success. This popularity apparently impacted the DOS toolset teams, since the successor to QuickBASIC PDS 7.1 was essentially a new version of QuickBASIC that did not carry the name- Visual BASIC 1.0 for DOS.

Visual BASIC 1.0 for DOS in Form Design View.

Visual BASIC for MS-DOS in Code View

What makes Visual BASIC 1.0 for MS-DOS interesting is that it’s only kinda sorta a version of Visual Basic. For one, all the literature writes BASIC in capitals for whatever reason, and working with it feels very much the same as working with QuickBASIC 7.1 PDS, but with a special character-set GUI layered on top. In many ways it was more of a stopgap version for transitioning QuickBASIC users to Visual Basic for Windows.

In the next entry in this series, we’ll take a look at Visual Basic 2.0; what does it improve over 1.0, and how does it move RAD forward? Stay tuned and find out!

Posted By: BC_Programming
Last Edit: 13 May 2013 @ 05:29 PM

 03 Jul 2012 @ 3:23 AM 

I don’t know how, but somehow I’ve been awarded the Microsoft MVP award for my contributions to C# technical communities (C# MVP). Of course I am very surprised at this, but I guess I have a short memory. I do have a number of posts and blog entries regarding C#, as well as a lot of forum posts across my various profiles. My initial response was actually self-deprecating- “I guess they give them to anybody these days”- which is of course not true.

I cannot help but feel like I got it “by accident”. Most MVPs really are industry professionals with professional expertise, a college education, and a myriad of other qualifications. I feel like an imposter, since I don’t have any post-secondary education and certainly no formal education in any of the domains that I am essentially being awarded for, nor have I actually worked in the industry (well, arguably, that’s not true, if my failing attempt to start a company counts).

That isn’t necessarily to say I don’t deserve the award- I imagine the people responsible for the MVP program are a lot more qualified to make that decision than me.

At this point I’m forced to wonder how it helps me. It makes a very nice thing to put on a resume, but the thing is, I have no place to submit that resume where the award is going to matter. At my last job, I think the most my skills were actually used was when I told the manager that “yes, the monitor needs to be plugged in to work”, or something to that effect. I quit that job nearly a year ago (last October) because I wanted to find something working with computers. The closest things to that around here are still retail- places like Staples, Best Buy (*shudder*), and so forth. I applied at every single one I could find, and even got a few interviews, but nothing came of it. Arguably, the fact that my phone got cut off shortly after the day I had all those interviews- making follow-ups impossible- is equally likely to blame, so I have absolutely no clue if they ever tried to call me after that. (In fairness, they did have my e-mail addresses and I’ve not received anything, though it’s more likely they tried to phone and then just went to the next applicant.)

Regardless, let’s be honest: even that is below my pay grade. I wrote about “getting one’s foot in the door” previously, and this just goes to show how damned impossible it seems to be. The idea of a person who received an MVP Award for sharing C# technical expertise working a minimum wage crap job- or even those a step above- is almost laughable, but there is absolutely nothing else around here, with one exception.

There is, however, one place I haven’t tried: Pelican Software (which is actually owned by Northwest Forest Products, if memory serves). Well, that’s not quite true- I did in fact try them back when I was a spunky kid whose expertise was pretty much just VB6 and feeling smugly superior. More recently, I did have some dealings with them regarding a freelance program I had written, “BCJobClock”, since it is very similar in many ways to their product, “Tallys”. Things were looking up in that regard, but the eventual decision they reached was that BCJobClock was too similar to it (with the exception that its UI is not confusing and it doesn’t cost several thousand dollars). I never actually applied there, since to my understanding they really aren’t doing too well, and I doubt they’d take the business risk of hiring more staff in their situation. But I may try that anyway. It’s a known statistic that companies that employ at least one MVP Award winner are more successful.

At this point I sort of have two options. I can pursue this BASeCamp thing and try to market BCJobClock (which currently has not appeared on my site at all) for a nominal price, by integrating the existing ProductKey code that I already wrote and used for BASeBlock. But the BASeBlock situation really tells me everything I need to know- it’s pointless. Nobody has actually bought a registered copy, and there are very few downloads. It’s online, but in many ways it may as well not be online at all. It just represents three years of my spare time that I’ve essentially wasted on a bloody game. It’s still “my product” and I’m proud of it and all that, but pride doesn’t pay bills. And I don’t want to lock the editor away behind the requirement for registration, because the editor is perhaps the part I like the most about the entire thing. Honestly, when I was dealing with NWFP regarding the program, I just wanted to sell the entire thing and be rid of it. I was sick of it, and in some ways I still am. Come to think of it, I’d be more than happy to sign something that assigns the complete IP of BCJobClock to NWFP as a condition of working there. It probably wouldn’t get used, but this really would be the only guarantee that I won’t at some point be in direct competition with them, which could very well happen- and that guarantee might be worth it. (I would say so- my program is a heck of a lot easier to use, and if I do release it in some manner it’s going to be a lot cheaper, too; though despite their notations it won’t be cutting into any of their market anyway- but in that case it will still be my market share, and not theirs.)

Of course, BCJobClock is aimed at a different market. In some ways it’s a time management application. I suppose I haven’t discussed the program much, since I hadn’t decided what I was going to do with it (well, actually, there was a page on the main landing site that was a little exuberant about the entire thing at some point, but I removed it when reality punched me in the face with BASeBlock). To summarize, it basically manages workers and orders for a repair shop or similar business. This can be automotive, like the client I originally wrote it for (somewhere in Iowa, to my understanding), or it could easily be used for other repair shops or locations that need a Worker<->Task management system. The Client program allows employees to clock into and out of orders using a touch-screen interface (naturally I don’t provide the hardware, just the software), which is done through a WPF C# application. This program interfaces with a remote MySQL server using the MySQL Connector, which allows the use of ADO.NET Connection and similar objects to work with the remote database, which manages all the… data… involved. The Administrator program allows the addition/removal of users, inspection of all orders and users and the time taken on each order as well as each user in total, and all sorts of other information. There is also another little “Watcher” program designed for use by people tasked to supervise work orders and assign tasks to other employees, but who aren’t able to have full access to the administrator panel for adding and removing users, getting reports, and all that. Because it is designed for watching users, it also shows notifications when users become available for work or when users or tasks are being “ignored”, and little coloured indicators to show when users/orders are working/being worked on.

It still needs a bit of work to iron out some speed problems that have been encountered by the sole user of the program (which we hacked around with a few INI file changes for their immediate use case). The slowness is related to the fact that the admin program tries to keep its view up to date by refreshing from the database on a given delay; unfortunately, it picks up a lot of data in the process. Ideally, it would only actually carry out the refresh when it knew there was a change, but I’m not really sure how to implement that. Working with databases is frustrating, in that these seemingly basic capabilities seem impossible. (Q. How do I detect when the results of a query changed? A. You perform the query and look through the entire result set.) Of course, at that point, if you find no changes you’ve just wasted all that time, so it rather defeats the purpose.

Actually, with some thought, there is another solution: relocation. There is simply nothing around here for the type of person who has skills and abilities relevant to a C# MVP Award, so in many ways having it as a bullet point echoes as hollow as the sepia-toned, aged mention of my high-school awards from almost ten years ago. So, maybe it’s time to leave Nanaimo. There simply aren’t any tech jobs here (or I’ve become blind to them)- not even some sort of more general IT job dealing with servers or the network of an office building or what-have-you.

As I noted, however, I never actually inquired with NWFP about a career or job, since that wasn’t really my intention at the time. In fact, it never even occurred to me. The MVP Award, I think, helps me here; those aren’t exactly given away freely- there are only two recipients in Nanaimo: me, and a fellow whose expertise lies in SQL Server. I think there are a dozen on Vancouver Island (though I cannot check).

And if that doesn’t work- well, I guess I’ll have to relocate. On the bright side, my website will still be in the same place 😛

Posted By: BC_Programming
Last Edit: 03 Jul 2012 @ 03:25 AM

 12 Jan 2012 @ 3:11 PM 

In some of my recent posts, I’ve covered the topic of accessing and parsing an INI file for configuration data in a C# Application.

Some may wonder why. After all; the “norm” for C# and .NET applications is to use XML files for configuration information, isn’t it? Well, yes. But to be honest, XML files are a fucking pain in the ass. They aren’t human readable to your average person the same way an INI file is, and getting/setting values is tedious. Primarily, the reason I use INI files is that they are:

  1. Human Readable: Anybody can understand the basic structure of the sections and Name=Value syntax.
  2. Accessible: You don’t need a special editor
  3. Portable: since the entire thing is interpreted using Managed code, it will act the same on any platform (Mono or the MS CLR).

Mostly, I feel that the use of XML- and in many ways other configuration options- is more or less driven by fad. Another option for configuration settings on Windows is the Registry, which is in fact often the recommended method, but this is anything but accessible to the user. Would you rather guide a user through editing an INI file, or through fiddling with registry settings?

With that said, INI files do have their own issues. For example, their data is typically typeless; or, more precisely, the values are all strings. Whereas using a .NET XML serializer, for example, you could (relatively) easily serialize and deserialize a special configuration class to and from an XML file and preserve its structure, with my INI file class there will typically be some work to parse the values.

It was with the idea of turning my string-only INIFile configuration settings into something that can be used for nearly any type that I created the INItemValueExtensions class, which is nothing more than a static class that provides some extension methods for the INIDataItem class. I covered this in my previous post.

The prototypes for the two static functions are:
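(The original listing didn’t survive in this copy of the post; reconstructing from the description above and below, the signatures would have been roughly the following. The exact parameter list- including the default-value parameter- is a guess on my part.)

    // Reconstructed signatures; not the verbatim originals.
    public static T getValue<T>(this INIDataItem item, T defaultValue);
    public static void setValue<T>(this INIDataItem item, T value);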

How would one use these extension methods? Well, here’s an Example:
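(Again, the original example is missing; the following is a hedged reconstruction. It assumes an INIFile class with a getItem(section, name) accessor- those member names are guesses; the mechanism is what matters.)

    // Read a DateTime straight out of the INI data, falling back to DateTime.Now
    // if the value is missing or cannot be parsed.
    INIFile Settings = new INIFile("settings.ini");
    DateTime lastrun = Settings.getItem("General", "LastRun").getValue<DateTime>(DateTime.Now);

    // ...and write one back.
    Settings.getItem("General", "LastRun").setValue(DateTime.Now);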

Woah, hold the phone! What’s going on here? We’re loading DateTime values directly from the INI File? How does that work?

All the “magic” happens in the getValue generic extension method. The first thing the routine does is check whether the type parameter has a static TryParse() method. If it does not, but the type implements ISerializable, the routine will read the string from the INI file, decode it from Base64, throw it into a MemoryStream, and then try to deserialize an object graph of type T from that stream.

If the type does implement a TryParse() routine (like, for example, DateTime), it doesn’t try quite as hard: it takes the string from the INI file, hands it to the type’s TryParse() routine, and returns what that gives back. Naturally, the inverse function (setValue) does the opposite; it performs the same check, and for serializable types it sets the value of the item to the Base64-encoded form of the serialized object. Otherwise, it just uses ToString().

This typically works, particularly with DateTime, because usually ToString() is the inverse of TryParse(). In the case of DateTime, this has a few edge cases with regards to locale, but usually it works quite well. More importantly, the ability to throw any object that implements ISerializable into an INI value via a Base64-encoded string is useful too, although with large objects it’s probably not a good idea, for obvious reasons.

But… I still want to access other settings!

Of course, an INIFile is only one of any number of ways to store and retrieve configuration settings. And while the other storage mechanisms don’t typically lend themselves to the same syntax provided by the INIFile class, it would be useful to have some sort of common denominator that can handle them all. That was the original intent of the relatively unassuming ISettingsStorage interface:
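(The interface listing is another casualty of the layout; going by the description that follows- get and set only, with no way to enumerate categories or value names- it would have looked roughly like this. The member names are my own.)

    public interface ISettingsStorage
    {
        // Retrieve a value from the given category, or the default if it is absent.
        string GetValue(string Category, string ValueName, string Default);

        // Store a value into the given category.
        void SetValue(string Category, string ValueName, string Value);
    }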

This uses a concept known as a “category”, which is pretty much the same idea as an INI file section. What makes it different is that, for implementors that use other storage mechanisms, it could have additional meaning; for example, a fictitious XML implementation of ISettingsStorage could use the “Category” string as an XPath to an element, with the value stored and retrieved as an attribute; a Registry implementation might use it as a Registry path, and so on.

The problem is, even though the INIFile class implements this interface, the interface is too basic, and doesn’t provide nearly the syntactic cleanliness that just using the INIFile directly does. Stemming from that, and because I wanted a way to store settings directly in a database, I introduced two events to the INIFile class: one that fires when a value is retrieved, and one when a value is saved. This way, the events can be hooked and the value saved elsewhere, if desired. Now, to be fair, this is mostly a shortcoming of my interface definition; as you can see above, there is no way to, for example, enumerate category or value names. I toyed with the idea of adding a “pseudo” category/value combination that would return a delimited string of category names, but that felt extremely silly. The creation of a generic interface- or abstract class- that provides all the conveniences I currently enjoy using my INIFile class, while allowing me to also use XML, the Registry, or nearly any other persistent storage for settings, will be a long-term goal. For now, I’m content with accessing INI files and having an unclean event to hack in my own behaviour.

My first test of the above feature- whereby it allows values to be TryParse’d and ToString’d back and forth from a given type on the fly- was the creation of a FormPositionSaver class.

The proper way to save and restore a window’s position on Windows is using the GetWindowPlacement() and SetWindowPlacement() API functions. These use a structure named, quite aptly, WINDOWPLACEMENT to retrieve and set the window position and various attributes. Therefore, our first task is to create the proper P/Invoke declarations for these functions:
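(The declarations themselves are missing from the post as it appears now; the Win32 signatures are standard, though, so a typical set of P/Invoke declarations would look like the following. The WINDOWPLACEMENT and RECT types it references are sketched just below.)

    using System;
    using System.Runtime.InteropServices;

    static class NativeMethods
    {
        [DllImport("user32.dll", SetLastError = true)]
        public static extern bool GetWindowPlacement(IntPtr hWnd, ref WINDOWPLACEMENT lpwndpl);

        [DllImport("user32.dll", SetLastError = true)]
        public static extern bool SetWindowPlacement(IntPtr hWnd, ref WINDOWPLACEMENT lpwndpl);

        [DllImport("user32.dll")]
        public static extern bool OffsetRect(ref RECT lprc, int dx, int dy);
    }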

I also include OffsetRect(), but I’ll get to that in a bit. Now, the “big one” is the definition of the WINDOWPLACEMENT structure and its various aggregate structures. Why? Well, in the interest of leveraging the INIFile’s static extensions, why not define a static TryParse() and a ToString() method on the structure that can set and retrieve the member values:
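(The structure listing is gone too. The field layout below is the documented Win32 layout; the TryParse()/ToString() pair follows the idea described above, but the comma-delimited string format is my own invention, not necessarily what the original code used.)

    using System.Runtime.InteropServices;

    [StructLayout(LayoutKind.Sequential)]
    public struct POINT { public int X, Y; }

    [StructLayout(LayoutKind.Sequential)]
    public struct RECT { public int Left, Top, Right, Bottom; }

    [StructLayout(LayoutKind.Sequential)]
    public struct WINDOWPLACEMENT
    {
        public int length;
        public int flags;
        public int showCmd;
        public POINT ptMinPosition;
        public POINT ptMaxPosition;
        public RECT rcNormalPosition;

        // Flatten the structure to a comma-delimited string so it can sit in an INI value.
        public override string ToString()
        {
            return string.Join(",", new int[] { flags, showCmd,
                ptMinPosition.X, ptMinPosition.Y, ptMaxPosition.X, ptMaxPosition.Y,
                rcNormalPosition.Left, rcNormalPosition.Top,
                rcNormalPosition.Right, rcNormalPosition.Bottom });
        }

        // Static TryParse() so the INIFile extension methods can rebuild the structure.
        public static bool TryParse(string value, out WINDOWPLACEMENT result)
        {
            result = new WINDOWPLACEMENT();
            string[] parts = (value ?? "").Split(',');
            if (parts.Length != 10) return false;
            int[] n = new int[10];
            for (int i = 0; i < 10; i++)
                if (!int.TryParse(parts[i], out n[i])) return false;
            result.length = Marshal.SizeOf(typeof(WINDOWPLACEMENT));
            result.flags = n[0];
            result.showCmd = n[1];
            result.ptMinPosition = new POINT { X = n[2], Y = n[3] };
            result.ptMaxPosition = new POINT { X = n[4], Y = n[5] };
            result.rcNormalPosition = new RECT { Left = n[6], Top = n[7], Right = n[8], Bottom = n[9] };
            return true;
        }
    }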

WHEW! That’s quite a bit of code for a structure definition, but we’ll make up for it with the brevity of the actual FormPositionSaver class itself. My design goal with this class was to make it do all the heavy lifting: it hooks both the Load and Unload events, and saves to and from a given INIFile object in those events. Since the application I was working on at the time didn’t actually get a valid INI object until during its main form’s Load event, and since there is no way to say “invoke this event handler first no matter what”, I also added a way for it to be told that hooking the Load event would be pointless since it already occurred, at which point it will not hook the event and instead set the form position immediately. Values are stored to, and read from, the provided INIFile object.
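(The class listing itself is not preserved either; a skeleton consistent with that description- hook Load and FormClosing, pull the placement from the INI object on load and push it back on close- would be something like the following. The INIFile/getItem member names are the same guesses used earlier, and the “Positions” section name is invented.)

    using System;
    using System.Runtime.InteropServices;
    using System.Windows.Forms;

    public class FormPositionSaver
    {
        private readonly Form _form;
        private readonly INIFile _ini;

        // pLoadAlreadyOccurred: pass true if the form has already loaded, in which
        // case the saved position is applied immediately instead of hooking Load.
        public FormPositionSaver(Form pForm, INIFile pIni, bool pLoadAlreadyOccurred)
        {
            _form = pForm;
            _ini = pIni;
            if (pLoadAlreadyOccurred) RestorePosition();
            else _form.Load += (s, e) => RestorePosition();
            _form.FormClosing += (s, e) => SavePosition();
        }

        private void RestorePosition()
        {
            // getValue<T> goes through WINDOWPLACEMENT.TryParse(); the default is
            // returned when nothing has been stored yet.
            WINDOWPLACEMENT wp = _ini.getItem("Positions", _form.Name)
                                     .getValue<WINDOWPLACEMENT>(new WINDOWPLACEMENT());
            if (wp.length != 0)
                NativeMethods.SetWindowPlacement(_form.Handle, ref wp);
        }

        private void SavePosition()
        {
            var wp = new WINDOWPLACEMENT { length = Marshal.SizeOf(typeof(WINDOWPLACEMENT)) };
            if (NativeMethods.GetWindowPlacement(_form.Handle, ref wp))
                _ini.getItem("Positions", _form.Name).setValue(wp);
        }
    }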

Alright, so maybe I lied a bit. It’s not super short, although a lot of it is comments. Some might note that I only sporadically add doc comments, even though I ought to be adding them everywhere. Well, sue me. I just add them when I feel like it. When I’m concentrating on function, I’m not one to give credence to form.

This is where I explain OffsetRect(). Basically, if your application is run twice and you load the form position both times, the second form will open right over the first one, and the screen will look pretty much the same. So we detect previous instances and offset the rectangle by an amount that makes its position different from any previous instances, as necessary. That’s pretty much the only purpose of OffsetRect().

I have packaged the current versions of cINIFile.cs and the new FormPositionSaver.cs in a zip file, which can be downloaded here.

Posted By: BC_Programming
Last Edit: 12 Jan 2012 @ 03:11 PM

 25 Dec 2011 @ 2:05 PM 

As I posted previously here, Sorting a Listview can be something of a pain in the butt.

In that article, I covered some basics on providing a class that would essentially give you sorting capabilities for free, without all the messy code that would normally be required. A lot of the code required for sorting is mostly boilerplate with a few modifications for sorting various types. As a result, the generic implementation works rather well.

However, as with any class, adding features never hurts. In this case, I got to thinking- why not have right-clicking the column headers show a menu for sorting against that column? Seems simple enough. I quickly learned that apparent simplicity is often deceptive.

I faced several issues. My first thought was that I could hook a mouse event for right-clicking a column header. Unfortunately, I soon discovered two facts about the .NET ListView control: first, there is no event for right-clicking a header control; second, no event is fired at all by the ListView control when you right-click a header.

This left me stymied. How the heck do I implement this feature? I discovered something of a “hack”, however: when the ListView’s ContextMenuStrip property is set, that ContextMenuStrip will be shown regardless of where within the ListView you click. This at least gave me something to work with. Since a ContextMenuStrip’s “Opening” event can be easily hooked, we can use that as an entry point and perform the needed calculations to determine if we are indeed on a column header.

Which brings me to the next problem: determining when a column header was in fact the item that was clicked. This requires first determining the rectangle the header control occupies. The header control is a child control of the ListView; as such, a platform invoke using the EnumChildWindows() API was required, something like this:
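(The snippet is missing here as well; a sketch of the sort of declarations involved might look like the following. The helper class and method names are mine, and it leans on the fact that, in Details view, the header control is the ListView’s first child window.)

    using System;
    using System.Drawing;
    using System.Runtime.InteropServices;

    static class HeaderHelper
    {
        private delegate bool EnumWindowsProc(IntPtr hWnd, IntPtr lParam);

        [DllImport("user32.dll")]
        private static extern bool EnumChildWindows(IntPtr hWndParent, EnumWindowsProc lpEnumFunc, IntPtr lParam);

        [DllImport("user32.dll")]
        private static extern bool GetWindowRect(IntPtr hWnd, out RECT lpRect);

        [StructLayout(LayoutKind.Sequential)]
        private struct RECT { public int Left, Top, Right, Bottom; }

        // Returns the screen rectangle occupied by the ListView's header control.
        public static Rectangle GetHeaderRect(IntPtr listViewHandle)
        {
            Rectangle result = Rectangle.Empty;
            EnumChildWindows(listViewHandle, delegate(IntPtr child, IntPtr lParam)
            {
                RECT r;
                if (GetWindowRect(child, out r))
                    result = Rectangle.FromLTRB(r.Left, r.Top, r.Right, r.Bottom);
                return false; // stop after the first child: the header control
            }, IntPtr.Zero);
            return result;
        }
    }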

Quite a bit of boilerplate to add in. Basically, the idea is that we hook the ContextMenuStrip Opening event of the ListView in our constructor (adding a context menu to hook if the ListView doesn’t have one); then, when we receive the event, we determine whether the click occurred within the area of the header control of the ListView. If so, we cancel the event (which stops the default context menu from appearing) and show our own menu for the column header, which we can acquire using a bit of math and the static “GetOrderedHeaders” function, which retrieves the array of column headers of a ListView in order of appearance left to right (since the user could rearrange the columns).

So first, we need to add code to the GenericListViewSorter’s constructor. We also add a few private variables; in this case, we need a ContextMenuStrip variable called “_ghostStrip”, which we will use if we need to create a context menu for the control, since we don’t want that one to appear in the default case. Of course, we also create our own ContextMenuStrip, which we will show in the event instead of the default when appropriate. So we add this beneath the existing code in the constructor:
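(The exact code isn’t preserved, so treat this as an approximation of the wiring being described; the field and handler names are illustrative, not necessarily those of the real GenericListViewSorter.)

    using System;
    using System.ComponentModel;
    using System.Windows.Forms;

    public partial class GenericListViewSorter
    {
        private ListView _listView;           // set by the existing constructor code
        private ContextMenuStrip _ghostStrip;
        private ContextMenuStrip _columnMenu;

        // Added beneath the existing constructor code:
        private void HookHeaderMenu()
        {
            _columnMenu = new ContextMenuStrip();   // the menu shown for column headers

            if (_listView.ContextMenuStrip == null)
            {
                // No menu of its own: give it a "ghost" strip so that the Opening
                // event fires no matter where the control is right-clicked.
                _ghostStrip = new ContextMenuStrip();
                _listView.ContextMenuStrip = _ghostStrip;
            }
            _listView.ContextMenuStrip.Opening += ContextMenuStrip_Opening;
            _listView.ContextMenuStripChanged += ListView_ContextMenuStripChanged;
        }

        private void ListView_ContextMenuStripChanged(object sender, EventArgs e)
        {
            // Re-hook the new strip so we keep receiving Opening events.
            if (_listView.ContextMenuStrip != null)
                _listView.ContextMenuStrip.Opening += ContextMenuStrip_Opening;
        }

        private void ContextMenuStrip_Opening(object sender, CancelEventArgs e)
        {
            // The hit-test lives here; filled in a little further below.
        }
    }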

Of course, we need to add the two referenced event handlers, too. ContextMenuStripChanged is a rather simple implementation designed to keep changes to the ListView’s context menu from causing us to balls up and stop showing ours (since, in that case, we would be hooking an orphaned context menu no longer shown by the ListView).

Now the meat of the code is in the ContextMenuStrip_Opening() routine. This needs to determine whether it’s applicable to show the column menu, or the already-present menu (which it doesn’t show either if it happens to be the _ghostStrip). This is accomplished by use of the GetCursorPos() API routine paired with the already-present GetWindowRect() implementation, which we update by calling EnumChildWindows.
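(A compact sketch of what that hit-test boils down to, filling in the placeholder from the earlier sketch and reusing its invented names- again an approximation, not the original code.)

    using System.ComponentModel;
    using System.Drawing;
    using System.Runtime.InteropServices;
    using System.Windows.Forms;

    public partial class GenericListViewSorter
    {
        [DllImport("user32.dll")]
        private static extern bool GetCursorPos(out Point lpPoint);

        private void ContextMenuStrip_Opening(object sender, CancelEventArgs e)
        {
            Point cursor;
            GetCursorPos(out cursor);

            // Header rectangle in screen coordinates, via the EnumChildWindows helper.
            Rectangle headerRect = HeaderHelper.GetHeaderRect(_listView.Handle);

            if (headerRect.Contains(cursor))
            {
                // The click landed on a column header: suppress whatever menu was
                // about to appear and show the column sort menu instead.
                e.Cancel = true;
                _columnMenu.Show(cursor);
            }
            else if (ReferenceEquals(sender, _ghostStrip))
            {
                // Not on a header, and the only menu is our ghost strip: show nothing.
                e.Cancel = true;
            }
        }
    }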

The events for the two buttons basically sort based on the column header in their Tag; nothing particularly special there. The actual details can be seen in the source file itself.

It actually works quite well- I’m using it in a production application.

Some obvious enhancements, of course, include making it possible to customize the shown menu, to present other options; perhaps a delegate or event that can be hooked that is given the Strip and the clicked column, and any number of other parameters? This would essentially give the equivalent of a ColumnHeaderRightClicked type event, too.

Posted By: BC_Programming
Last Edit: 05 May 2012 @ 10:21 PM

 27 Nov 2010 @ 1:21 AM 

Microsoft.

It’s unheard of to find a person who hasn’t at least used a Microsoft product; it’s even less likely to find somebody who hasn’t been exposed to one. As it stands now, there are essentially three “camps”:

1. People who think MS is successful not by chance or by “copying” anything, but by coming up with good ideas as well as creating good implementations of other ideas;

2. Open Source zealots, who spend much of their time criticizing Microsoft for copying Apple and then turn around and copy both MS and Apple in creating their desktop environments; additionally, the Open Source zealots who can’t write a line of code and push the “Open Source” concept because it basically means “free software”;

3. Generation-2 Apple users; the type who think the Mac Classic sucks and apparently don’t realize that OSX is pretty much just a desktop environment for BSD. I cannot think of a single reason to ever buy a Mac today, personally. The original Macintosh, versus the PC running DOS, had clear advantages in that it possessed a GUI, whereas DOS was a command line interface; this justified the higher price tag for the product. Today, OSX offers no features that cannot be found easily on either Windows or a free Linux desktop environment; the claim is that you are paying for “quality hardware” that “just works”, but truly you’re simply paying a tax to become a member of an exclusive club. It’s not the machine or the functionality Mac users are after anymore, it’s the symbol of success that it essentially provides. “Hey, I have lots of disposable income to spend on overpriced toys” is the message it sends.

The common argument is that Microsoft got to its dominant market position via “strong-arm” tactics and by “copying” ideas. First, when you run a company and an opportunity arises, you don’t think “golly gee, I sure hope this doesn’t hurt my competitors”. The word “competition”, especially with regard to software, has somehow lost all meaning; people like to think that there is no competition, and there certainly is less of it today. But it’s not Microsoft’s fault that nobody is coming out with products that can compete with theirs, just as it wouldn’t have been Apple’s fault if MS had not been able to launch Windows to compete with the Macintosh on the PC; it’s called business.

“Copying” is an interesting word that people like to use to describe Microsoft’s business strategy; however, there are two flaws with this approach:

It implies that they “stole” something, when in fact they saw a good idea and implemented it themselves. One could posit the question: “if they weren’t supposed to copy, merge, and combine features, what the hell are we working towards?” In fact, the bitter irony here is that this line is often uttered by Linux users, who seem to forget that their OS of choice has lagged behind both Apple and Microsoft and has “copied” features from both. Indeed, one could say that the entire concept of building upon each other’s code is the very concept that Open Source software pushes, so hearing Linux users say this is sort of ironic, in that they are implying that their Open Source philosophy is somehow only a good one when applied to Open Source software.

Did Windows “copy” a lot of features of Apple’s Macintosh? Of course it did. When you are building a car to compete with other cars, you use the same shape for wheels; you don’t redesign the wheel. Additionally, when somebody says that Microsoft steals “ideas”, the term is really useless. Despite the aura around intellectual property, just thinking about something doesn’t suddenly mean that somebody else creating an implementation of your idea is stealing; an idea takes an armchair and a few minutes, and absolutely no physical effort. Implementing an idea is the hurdle that any technologist, during any era of computing, had to get across; an idea is useless without an implementation. If I were to think up some new type of program, but did fuck all to create any prototypes or anything to that degree, I can’t in all fairness say that somebody “copied my idea” when they come up with an implementation; there was nothing to copy. Ideas are not physical objects. Some may say “but the Apple was an implementation of an idea”- and yes, of course it was. But consider this: Windows runs on the IBM PC; the Mac OS environment runs on the Macintosh. Consider for a moment that if Apple had won the litigation against Microsoft, the IBM PC’s potential for showing a graphical environment would never have been realized. One could break down into a number of alternate-history theories about what could have happened that go in all sorts of directions, but the truth is, it’s impossible to truly say what would have happened, simply because it didn’t. And now, the concept of a GUI that uses the same metaphorical approach is essentially the common denominator; what Microsoft naysayers are implying is that this is a bad thing. They are implicitly supporting the older paradigm where every single machine was managed in some completely separate way; that doesn’t help anybody.

Another thing MS is criticized for is a lack of innovation. To be perfectly frank, this is absolute bullshit. First off, if it were the case, I don’t see how other companies aren’t equally guilty; and the fact is that it’s not the case.

Take, for example, the Windows 95 Start menu; no other GUI implemented anything of the sort. The taskbar was an innovation because it made it possible to manage all the various running tasks in an always-visible location; this came from observation of their customer base, who would complain that their programs would “go away” because there was no longer a visual indication of them running (another window covered them, and they were essentially gone). Take the Windows Vista Start menu: the search bar is not something I had seen established in any major competing graphical user interface before that. It addresses the previous criticism of the Start menu whereby the various folders and icons would often fill the screen as you install/uninstall applications. However, nobody saw it like that; instead they decided to focus on the negatives, such as the higher system requirements. Err, HELLO, each version of Windows has higher system requirements than the last. This is hardly surprising, and the fact that Vista implemented a new desktop composition system (“stolen” from Apple, despite the fact that this was a natural extension of the desktop given the ubiquitous availability of 3D hardware on even the most value-oriented computers), as well as the larger gap between the XP and Vista releases, pretty well explains that.

Another example: take the Office Ribbon. Despite its detractors, it has become hugely successful, and people have in fact found themselves more productive with it. This is because rather than thinking about the problem for a few seconds and then dismissing any change with “we shouldn’t change it because I don’t like change”, they actually looked at what they had and realized, “holy shit, we have too many menus and toolbars and crap here”, and they came up with a solution. The thing is, the Ribbon made users and developers alike rethink the common user-interface paradigms we have become accustomed to, such as menus, buttons, and so forth. The hierarchical pull-down menu system was itself an extension of the “basic” pull-down menu, where each menu title had only a single flat set of options; there was no concept of submenus within those menus (which is what makes a menu system “hierarchical”). At some point, that model stopped working: the menus had way too many options. The natural fix was to group those options hierarchically; here are the options for inserting an object, here are the options for formatting cells, and so on. The Ribbon is a testament to the fact that there is no magic-bullet method that works well in all situations; a program with three options can work well with just three buttons in a window, but if you have ten options you had better use a menu, and with fifty or so options, you’ll need to arrange them hierarchically.

It’s important to realize that Microsoft is not pulling the industry along on its coattails by mistake; the fact is that even their competitors are playing catch-up with their technologies, and before they can release a product that even attempts to compete, MS has already released another version. It’s not a lack of innovation on Microsoft’s part that is causing this; it’s a lack of innovation on the competition’s part.

Much of this is different when you look away from desktop applications and operating systems and instead look to the world-wide web. There, we find Google has essentially cornered almost every facet of the internet; however, they carefully crafted their approach so that, despite essentially doing the exact same thing to the web as Microsoft did to the OS and desktop application markets, they are still regarded as the “good guys”, which is particularly intriguing.

This brings me to another point: Internet Explorer.

Web developers – including myself – hate trying to work with Internet Explorer; it doesn’t work like the other browsers. People like to blame MS for this, but it’s actually the W3C.

Take, for example, some of the early draft specs for HTML 4, CSS, and the DOM. The W3C said, “alright, we might make it like this, but no promises.”

And all the browsers ran out and implemented it. Then the W3C went to ratify the specification and decided, “hey, you know what? All the stuff we have in that spec that only IE has implemented so far… let’s rip those out.” And they did. So now IE suddenly had “non-standard” features that were in fact originally in the spec and simply not implemented by Netscape or whatever the other browsers were at the time, because only IE bothered to implement those particular portions according to the specification. Which brings me to another point: the specifications are about as vague as possible. If your specifications are open to any sort of interpretation, they aren’t specifications, they’re handwavey suggestions. IE was the first browser to implement the CSS box model according to the specification; then the W3C ripped that entire page out of the spec. Now, that’s putting it bluntly, but what is most interesting is that almost every single thing they took out of the spec had only been implemented by IE, and every single thing they added to the spec was non-spec stuff that had been added by other browsers. Seems a bit unfair.
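
To make the box-model dispute concrete, here is a rough sketch of the arithmetic behind the two interpretations. This is purely illustrative – the numbers and function names are made up for the example, written out in TypeScript – and it doesn’t adjudicate which reading the early drafts actually blessed; it just shows why pages laid out under one model broke under the other:

    // Two readings of "width: 200px; padding: 20px; border: 5px"
    interface BoxStyle {
      width: number;    // declared CSS width, in px
      padding: number;  // padding on each side, in px
      border: number;   // border width on each side, in px
    }

    // W3C "content-box" reading: the declared width covers the content area only,
    // so padding and border are added on top of it to get the rendered width.
    function renderedWidthContentBox(s: BoxStyle): number {
      return s.width + 2 * s.padding + 2 * s.border;
    }

    // Legacy IE reading (what CSS later exposed as "box-sizing: border-box"):
    // the declared width already includes the padding and border.
    function renderedWidthBorderBox(s: BoxStyle): number {
      return s.width;
    }

    const box: BoxStyle = { width: 200, padding: 20, border: 5 };
    console.log(renderedWidthContentBox(box)); // 250 px on screen
    console.log(renderedWidthBorderBox(box));  // 200 px on screen

The same stylesheet therefore produces boxes 50px wider under one interpretation than the other, which is exactly the kind of discrepancy that made “works in this browser, broken in that one” such a common complaint.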

Now, it’s gotten better in recent years, but in some ways it’s also gotten worse. MS refuses to implement any feature that is non-standard or not in the spec, because they know the W3C is some sort of demon spawn that purposely messes around with the spec as much as possible just to fuck with IE’s implementation. Meanwhile, the W3C is all friendly with Firefox and Opera and all the other implementations. It’s like a god damned love circle.

And then you have that anti-trust nonsense. I’ve never really understood that. I mean, OK… we’ve got Netscape (with, err… Netscape) and Microsoft with Internet Explorer. When IE was being charged for, it was all cool.

But then they started giving it away free with the operating system! HORROR OF HORRORS! Obviously they were TRYING to suffocate Netscape! I mean, that might have been a secondary reason, but for fuck’s sake, why the hell was Netscape their only god damned product to begin with? How many years were they in business with a single product? And many people say, “well, golly, why would Microsoft spend money to make IE and then release it for free?” I don’t know. Why the hell did they spend money to redesign Paint in Windows 7? The way I see it, Microsoft looked at the internet and saw: hmm, this is becoming as ubiquitous as simple text editing, word processing, basic bitmap editing, and recording short sound clips; we should distribute a way to do this with the OS. And that’s what they did. But suddenly it’s a big no-no, because the slow company whose single product did the same thing, and charged for it, was all “hey, no fair, we don’t know how to sell more than one product, so that’s anti-trust!” It would be like a company that sold a basic text editor claiming anti-trust when Microsoft *GASP* included a text editor with MS-DOS 5! The NERVE of them! How dare they include basic tools that increase the usability of the operating system! DAMN THEM!

I mean, anti-trust law is supposed to protect the *public* from a monopoly, not protect slow-to-change companies that don’t know how to create more than one product from other companies that happen to be able to create that same, relatively simple applet (browsers were hardly that complex back then) and include it with the OS.

And nowadays the hubbub is all “OMG! They should let you choose your browser when you install Windows!”

What the FUCK is that? Should they let you choose from a set of other free text editors you can use instead of Notepad? No, because if you want another editor, you download another editor. Should they offer other free alternatives to Paint, or WordPad, or Sound Recorder (which actually became useless with the latest version in Windows Vista/7)? No, that would be stupid. But apparently they are supposed to quite literally present a choice amongst their competitors in the browser market. Why only browsers, though?

Posted By: BC_Programming
Last Edit: 24 Dec 2010 @ 12:44 PM
