05 Dec 2016 @ 10:29 PM 

I've never been particularly partial to Macintosh computers, having gravitated largely towards the somewhat more "open" environment of the typical PC. However, while browsing eBay I saw a reasonably priced PowerMac G5 and decided to jump on it. My experience with Macs is limited largely to the slot-loading iMac G3 and OS 8.6; I've also toyed with programs like Mini vMac to emulate System 7, and had PearPC running OS X after a fashion, though it didn't run very well.

The system itself was not working as-is; it didn't have a hard disk. I threw in a spare 1TB SATA SSHD and, after some fighting with it, got OS X 10.4 (Tiger) installed without issue. I would have started with OS 9 (just to fiddle with Classic Mac OS for a while), but the G5 doesn't support booting it. Oddly, I found I had to initialize the drive on my Windows system before the G5 could get past the "partitioning" stage.

I've been experimenting with the system since. It's a relatively base-model G5: single core, and no AirPort card for Wi-Fi, so I've been sneakernetting files over to it on a 128GB USB flash drive. I've actually been rather impressed with it. I have a Pentium 4-based system that would be its era-contemporary, and I'd argue that while both systems are getting long in the tooth in terms of software compatibility, the G5 certainly bears it better, and is incredibly responsive under most circumstances.

I found the two systems about equivalent at playing video: both handle DVDs just fine but struggle with 1080p MPEG-4s; reduce the resolution to 480p, however, and the videos play without any problems. I haven't put the G5 on the Internet yet; it lacks the connectivity, as it doesn't have an AirPort card. I got one off eBay for about 10 bucks, and I've got TenFourFox, a Firefox fork for PowerPC OS X, for browsing; how well that will work remains to be seen. I expect capabilities at least on par with the contemporary Pentium 4 system, which means it should be usable for browsing forums, e-mail, writing posts here, and so on.

Mac OS X Desktop

I've loaded the system up with a bunch of software sneakernetted across, for now. It has USB 2.0, so while the 128GB drives transfer more slowly than on my main desktop and laptop systems, which support USB 3, it's nowhere near the ordeal it would be over USB 1.1.

The biggest annoyance so far is finding the correct versions of software. NetBeans IDE, for example, had versions which ran on OS X 10.4 Tiger, but I've been unable to find the old install files, and their archive only goes back to 6.0.1, which requires Java 6 or 7- which I've also been unable to install- so I've hit a wall there. Mono installed, but doesn't work due to library issues; I found a slightly older version (one of the early .NET 2.0 releases of the CLR), but that one just claims there is nothing to install, which is odd.

Currently the system still has no Internet connection, as it lacked the necessary AirPort Extreme (or PCI-X) add-on, but with one on the way and TenFourFox installed, I think it will work at least as well as my Pentium 4 system for browsing. In fact, it may work even better; I've found it performs very well, with even Photoshop CS2 working only slightly slower on it than Adobe CS5 does on my more modern 4770K i7 system. I've loaded it up with useful software such as Office 2008 for Mac and older versions of tools like TextWrangler. I've also got svn and git on it, and presume I can connect to my work VPN, which could make things interesting: it would be possible to develop on it using a text editor and svn (obviously Visual Studio isn't going to run on a PowerPC Mac!). I've also found it outperforms the Pentium 4 system in games like Quake III: Team Arena, which perhaps isn't surprising- the Pentium 4 was a bit of a mess in terms of performance compared to both AMD and PowerPC in many ways.

I bought it as a "retro" system, as at this point it very nearly is one. And yet I'm finding myself impressed with the OS design, something I never really expected. Naturally there are still things I don't like about OS X (I'm not a fan of the traffic-light window management buttons, for example), but the responsiveness of the system has been quite stellar.

Posted By: BC_Programming
Last Edit: 05 Dec 2016 @ 10:29 PM

Categories: Macintosh

 24 Nov 2016 @ 10:03 PM 

There is a seemingly common affliction among some Windows users where desktop icons display old-style dotted focus rectangles. It seems to affect Windows Vista and later.

Dotted Focus Rectangle.

After some investigation, I found the cause to be an accessibility setting. Inside Ease of Access in Control Panel, there is a "Change how the keyboard works" option, which takes you to another page with "Underline keyboard shortcuts and access keys". When this option is checked, keyboard cues are enabled. This includes the underlined text of menus and buttons, but it also includes ListView focus rectangles, which means that with the option enabled, a focus rectangle is shown on the desktop rather frequently.
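As an aside, the underlying "keyboard cues" setting can also be inspected programmatically via SystemParametersInfo() with SPI_GETKEYBOARDCUES. A minimal C# sketch (helper name mine; the Control Panel route remains the supported way to change it):

```csharp
using System;
using System.Runtime.InteropServices;

static class KeyboardCues
{
    [DllImport("user32.dll", SetLastError = true)]
    private static extern bool SystemParametersInfo(
        uint uiAction, uint uiParam, ref bool pvParam, uint fWinIni);

    private const uint SPI_GETKEYBOARDCUES = 0x100A;

    // True when "Underline keyboard shortcuts and access keys" is enabled.
    public static bool Enabled
    {
        get
        {
            bool result = false;
            SystemParametersInfo(SPI_GETKEYBOARDCUES, 0, ref result, 0);
            return result;
        }
    }
}
```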

To change this setting, toggle it and reboot.

Posted By: BC_Programming
Last Edit: 24 Nov 2016 @ 10:03 PM


 09 Nov 2016 @ 8:30 AM 

I've previously written about adjusting the Windows master volume control programmatically. There, I alluded to possible additional features, such as being able to view the volume levels of other applications. I've now gone ahead and made those changes.

The first thing to reiterate is that this makes use of a low-level .NET Wrapper for the Windows Core Audio API. This can be found here.

The first thing I decided to define was a class representing a single application's volume session info/properties. It is also given a reference to the IAudioSessionControl interface representing that application's audio session, so the session can be manipulated directly by adjusting the properties of the class.
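A rough sketch of such a class, built against the Vannatech CoreAudio interfaces (the property set and names here are illustrative, not the post's original listing):

```csharp
using Vannatech.CoreAudio.Interfaces; // adjust to the wrapper's actual namespace

// Illustrative sketch: wraps one application's audio session.
public class ApplicationVolumeInformation
{
    private readonly IAudioSessionControl _session;

    public ApplicationVolumeInformation(IAudioSessionControl session)
    {
        _session = session;
    }

    // Display name the session reports (often empty; callers may need to
    // fall back to the owning process name).
    public string DisplayName
    {
        get
        {
            string result;
            _session.GetDisplayName(out result);
            return result;
        }
    }
}
```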

Next, we need to declare a COM import for the Multimedia Device Enumerator. Specifically, we need to import the class, as the Vannatech library only provides interfaces, which we cannot instantiate on their own:
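A sketch of that import; the GUID below is the standard CLSID_MMDeviceEnumerator:

```csharp
using System.Runtime.InteropServices;

// CoClass for the system Multimedia Device Enumerator. Creating an instance
// and casting it to IMMDeviceEnumerator (from the CoreAudio wrapper) yields
// a usable enumerator object.
[ComImport, Guid("BCDE0395-E52F-467C-8E3D-C4579291692E")]
internal class MMDeviceEnumerator
{
}
```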

Now that we have a starting point, we can create an enumerator method that retrieves all active audio sessions as “ApplicationVolumeInformation” instances:
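What such a method might look like, sketched loosely against the Vannatech interface definitions (exact signatures- particularly IMMDevice.Activate- should be checked against the wrapper):

```csharp
using System;
using System.Collections.Generic;
using Vannatech.CoreAudio.Enumerations;
using Vannatech.CoreAudio.Interfaces;

public static IEnumerable<ApplicationVolumeInformation> EnumerateSessions()
{
    // Instantiate the enumerator CoClass and ask for the default render device.
    var deviceEnumerator = (IMMDeviceEnumerator)new MMDeviceEnumerator();
    IMMDevice device;
    deviceEnumerator.GetDefaultAudioEndpoint(EDataFlow.eRender, ERole.eMultimedia, out device);

    // Activate the session manager on that device, then walk its sessions.
    Guid sessionManagerId = typeof(IAudioSessionManager2).GUID;
    object rawManager;
    device.Activate(sessionManagerId, 0, IntPtr.Zero, out rawManager);
    var manager = (IAudioSessionManager2)rawManager;

    IAudioSessionEnumerator sessionEnumerator;
    manager.GetSessionEnumerator(out sessionEnumerator);

    int count;
    sessionEnumerator.GetCount(out count);
    for (int i = 0; i < count; i++)
    {
        IAudioSessionControl session;
        sessionEnumerator.GetSession(i, out session);
        yield return new ApplicationVolumeInformation(session);
    }
}
```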

A github repository with a more… complete… implementation of a working Console program can be found here.

Posted By: BC_Programming
Last Edit: 11 Nov 2016 @ 12:29 PM

Categories: .NET, C#, Programming, Windows

 27 Oct 2016 @ 12:39 PM 

This is part of a series of posts covering new C# 6 features. Currently there are posts covering the following:
String Interpolation
Expression-bodied Members
Improved Overload resolution
The Null Conditional operator
Auto-Property Initializers

Yet another new feature introduced in C# 6 is Dictionary Initializers. These are another "syntax sugar" feature that shortens code and makes it more readable- or, arguably, less readable if you aren't familiar with the feature.

Let’s say we have a Dictionary of countries, indexed by an abbreviation. We might create it like so:
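For instance, with the collection-initializer syntax available since C# 3 (the entries here are just placeholders):

```csharp
using System.Collections.Generic;

var countries = new Dictionary<string, string>
{
    { "CA", "Canada" },
    { "US", "United States" },
    { "GB", "United Kingdom" }
};
```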

This is the standard approach to initializing dictionaries in previous versions, at least when you want to populate them at the point of declaration. C# 6 adds "dictionary initializers", which attempt to simplify this:
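The same dictionary using the C# 6 index-initializer syntax (same placeholder data as above):

```csharp
using System.Collections.Generic;

var countries = new Dictionary<string, string>
{
    ["CA"] = "Canada",
    ["US"] = "United States",
    ["GB"] = "United Kingdom"
};
```

Each `[key] = value` entry compiles to an assignment through the type's indexer, rather than a call to `Add()` as with the older collection-initializer syntax.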

Here we see what is effectively a series of assignments through the standard this[] indexer. It's usually called a Dictionary Initializer, but realistically it can be used to initialize any class that has an indexed property like this. For example, it can be used to construct "sparse" lists which have many empty entries, without a bunch of commas:
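One way to illustrate the sparse case- since List&lt;T&gt;'s indexer cannot grow the list, a Dictionary keyed by int stands in for a sparse list here (indices and values are arbitrary):

```csharp
using System.Collections.Generic;

// Only three "slots" are populated; everything between stays empty,
// with no filler commas required.
var sparse = new Dictionary<int, string>
{
    [0] = "first",
    [100] = "one-hundredth",
    [100000] = "hundred-thousandth"
};
```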

The "Dictionary Initializer", which seems more aptly referred to as an indexing initializer, is a very useful and delicious bit of syntax sugar that can help make code easier to understand and read.

Posted By: BC_Programming
Last Edit: 27 Oct 2016 @ 12:40 PM

Categories: .NET, C#, Programming

 19 Oct 2016 @ 12:00 PM 

Not strictly programming related, but sparked by some recent UX experiences I had with a few programs, including some I wrote. These are not hard and fast rules by any stretch of the imagination, but they provide a good checklist of basic things that are sometimes forgotten in application development and presentation. I'll make additions to these "rules" as I come across them through testing or frequent trouble spots.

Don’t Pin yourself when you are installed

To name and shame: Firefox does this, last I checked. Another program that pins itself when installed is PowerArchiver, an otherwise excellent archiving tool which only appears one other time in this post. "Pinned" programs have somewhat replaced the Quick Launch bar. Personally, I prefer the Quick Launch toolbar and don't pin any programs, but the general idea is that the taskbar icon for a pinned program is always present, so you can "switch" to the program and, if it's not running, it will start. (I'm oversimplifying, mind you.)

Pinning and unpinning has no programming API; there is no Win32 function to call to add or remove a pinned program. The idea is that the user decides what is pinned by explicitly pinning it. My understanding of Firefox's logic is that since Internet Explorer is pinned by default (Edge, in Windows 10), they should pin themselves by default too. I can sort of see where they are going with that- to be entirely fair, Internet Explorer shouldn't be pinned in a default Windows install either- but I don't think two wrongs make a right here. By the time I install Firefox, I've already unpinned Internet Explorer and the Windows Store and have no pinned programs; I don't want Firefox pinned either. And if somebody is a Firefox user, or sets up Firefox for somebody else (or Chrome, for that matter, though I don't think Chrome automatically pins itself on install), I'm sure they are capable of pinning it themselves if they want it that way.

The problem is that regardless of the circumstances, it presumes your application is special. Moving to PowerArchiver: I recently updated to a newer version, and after installing it I found PowerArchiver had automatically pinned itself to the taskbar. C'mon, guys! Very few people use an archiving tool in a manner where they want to launch it standalone; speaking for myself, I use it through Windows Explorer to extract- and very occasionally compress- zip or 7z or other archive formats. I seldom launch it on its own, and I imagine that extends to most people. But even if that were not the case- to bust out a rhyme, ahem- "pinning yourself is forgetting yourself." That was a terrible rhyme, which fits with the behaviour. Let the user decide what to pin, and whether your program is useful enough to them to pin. The fact that somebody had to grovel through internal Windows functions and structures to figure out how to force their program onto the taskbar as a pinned button just makes it worse- like nobody along the way said, "Hey guys, maybe our program isn't the awesomest, most useful program ever; perhaps not every single person will want it pinned?"

This extends to a lot of other annoying behaviours. Don't steal file associations willy-nilly: if your program wants file associations, present a prompt- or better yet, only associate unassociated file types, and provide an options dialog to let users associate file types already claimed by other programs. Setting yourself as the default program for things like browsing falls into the same category.

Verify it functions properly at various DPI settings

When you run a program that doesn't indicate High-DPI support in its manifest file on a high-DPI display, Windows will scale it itself. It effectively does this by letting the program think it is running at the standard 96 DPI (100%), then stretching the image of the client area to the "actual" scaled size. For example, here is Recoder being displayed in that manner:

BASeCamp Recoder running without having declared DPI support on a high-DPI display.

As we can see, this scaling feature allows programs that might not support higher DPI settings- or have issues with them- to remain compatible when run on high-DPI displays, at the cost of looking rather blurry. If we add a call to SetProcessDPIAware(), or declare DPI awareness in the manifest file, it looks much better.

Recoder running with High-DPI support on a high-DPI display.
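For reference, the runtime opt-in mentioned above can be sketched as follows; the manifest alternative is a `<dpiAware>true</dpiAware>` element in the application manifest, which is generally preferred since it takes effect before any windows exist (helper name mine):

```csharp
using System;
using System.Runtime.InteropServices;

static class DpiAwareness
{
    [DllImport("user32.dll")]
    private static extern bool SetProcessDPIAware();

    // Call before creating any windows, e.g. first thing in Main(),
    // to tell Windows not to virtualize/stretch this process's UI.
    public static void Enable()
    {
        SetProcessDPIAware();
    }
}
```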

The caveat, of course, is that your program needs to be- well, DPI aware. Windows isn't going to do any work for you, so you'll need to make sure your window layout displays properly regardless of the DPI of the user's monitor. This is particularly troublesome with Windows Forms: when you save in the Form Designer, it saves pixel data dependent on your development system's DPI. On target systems it attempts to scale based on that and the relative size on the system it is running on, but a bug means that if it attempts to scale to a DPI lower than that of the system where the designer file was saved, it completely borks a lot of the layout. The workaround is to save on a system set to 100% DPI; for my work I've had to do exactly that (we still use Windows Forms, as the product is older and far too large to consider moving to a newer tech anytime soon)- using a separate system set to 100% DPI, selecting an element and moving it back and forth (to register a change), then saving and committing the designer file.

If your program declares itself "DPI aware", then making sure it's not a liar- by verifying it works at non-standard DPI settings- should be part of any exhaustive testing regimen.

Store data in the appropriate location

Windows, like other operating systems, establishes conventions for where certain data should be stored- the Application Data folder, the Common Application Data folder, and so on- and data should go in those locations. If nothing else, storing additional data after installation in your program's installation directory is a strict no-no.

Uninstaller

One thing that might get neglected in testing is the behaviour of the program's uninstaller. It should be verified to remove the program, and application data only if an option to do so is provided and checked; ideally, an uninstallation should never delete program configuration data except under such an explicit option.

Verify multiple operating Modes

Sometimes software has different operating modes. For example, it may work differently- or even do a completely different task- when run with certain arguments; perhaps it displays different UI elements, or performs operations differently. Regardless, when testing software or making changes, it is a good idea to check that all of these operating modes continue to function as expected. Even if a feature is added in a way that should "magically" work with the other modes- such as in a central routine used by all of them- it should still, of course, be verified. This check is easy to skip with the justification that it probably works, but "probably" is neither knowledge nor verification!

Posted By: BC_Programming
Last Edit: 18 Oct 2016 @ 09:24 PM

Categories: Programming

 19 Oct 2016 @ 3:00 AM 

Previously I wrote about how the onward march of technology has slowed down, while the 'stigma' surrounding the use of older hardware has not diminished to match. Despite slowing down, technology has certainly improved, particularly as we look back further, and that can make for unique challenges when it comes to maintaining older systems.

In particular, the ThinkPad T41 that I wrote about in that previous post has a failing hard disk, which I believe I also mentioned. This presents a unique challenge, as it is a laptop EIDE drive. These are available on sites like Amazon and eBay, but the choice is between rather pricey new drives (a few dollars a GB) and used drives of unknown remaining lifespan (eBay). I ended up purchasing a cheap 40GB drive off eBay. However, I discovered that was not my only option; as it turns out, products have been released that almost entirely address this issue.

I speak of CompactFlash adapters: adapters which connect to a laptop 44-pin EIDE interface and accept a CompactFlash card on the other side. The device it is plugged into basically just sees a standard HDD. This is an interesting approach because it is, in some sense, an SSD for older systems- perhaps without the full speed benefit of an SSD, but still with the advantages of solid state.

Since I had already purchased a cheap 40GB drive off eBay, I decided to grab an adapter and a CompactFlash card as well, for benchmarking purposes. My expectation was that the CompactFlash card would run much faster.

The first step was deciding what to use for comparison. CrystalDiskMark was about as good an option as any, so I went with that. First I tested the 40GB drive I received, then the CompactFlash adapter. The HDD is a Toshiba MK4036GAX; the adapter is a "Syba Connectivity 2.5 Inch IDE 44-pin to Dual Compact-Flash Adapter SD-ADA5006", and the card I'm using with it is a 32GB Lexar Professional 800x.

Test                 MK4036GAX (MB/s)   CompactFlash Adapter (MB/s)
Sequential Read            29.543            88.263
Sequential Write           31.115            29.934
Random Read 4KiB            0.430            12.137
Random Write 4KiB           0.606             0.794
Sequential Read            24.116            87.230
Sequential Write           30.616            19.082
Random Read 4KiB            0.326             3.682
Random Write 4KiB           0.566             0.543

Looking at the table, we see that, unlike modern SSDs, the use of a CompactFlash drive has some trade-offs. They get much faster performance for typical read operations such as sequential reads and random reads, but they falter particularly for random write operations. Or, rather, this particular CF adapter and card had problems with that arrangement.

Another interesting issue I encountered was that neither Windows nor Linux was able to establish a pagefile/swap partition on the CompactFlash card. This is a bit of a problem, though with few exceptions most programs I use on this laptop don't tax the 2GB of total memory available. That said, a bigger issue- which may or may not be related- was that Windows XP could not install programs that use Windows Installer databases; they would endlessly prompt for a disc, even when they don't use one, or when the disc being installed from was in the drive. I wasn't able to discover the cause of this problem after investigating it, though I had no such issues installing when using the standard HDD.

For now, I've put the system back on its "normal" HDD, which as I noted in the linked post works just fine- so in that sense, my "upgrade" attempt has failed, which is unfortunate. The system runs well, for what can be expected of it; as mentioned, it is quite snappy. Considering it is regarded as "ancient" by many, it still works respectably for reading most Web content as well as writing blog posts, so the argument that it is out of date is hard to properly substantiate. I would certainly find it lacking, mind you, as a replacement for my everyday tasks, or for things like watching YouTube videos, but despite its age I've found it fits well into a niche of usefulness that keeps it from being completely obsolete, at least for me.

When it comes to computers in general, I think you can make use of systems from any era. You can still use older systems for largely the same tasks they were originally designed for; the main difference is that more recent systems add capabilities- for example, you won't be watching YouTube on a Pentium 133, but you wouldn't have been watching YouTube on such a system when it was top-of-the-line, either. I find there is something appealing about the simplicity of older systems, while at the same time the limitations of those systems (where present) can make for an interesting challenge to overcome; finding the right balance between software and hardware can be more nuanced than "throw the latest available version on".

Another consideration is security. For example, you might use an older IBM PC that boots from floppy diskettes as a central password manager, or to store other sensitive information (with backup copies, of course). This lets the old system go beyond mere fiddling about and fulfill a useful function. It would still be far less convenient than, say, KeePass or LastPass or software of that nature; on the other hand, nobody is going to hack into your non-Internet-connected PC without physical access.

Posted By: BC_Programming
Last Edit: 18 Oct 2016 @ 09:19 PM

Categories: Hardware, Programming

 18 Oct 2016 @ 9:13 PM 

My most recent acquisition on this front is a Tandy 102 Portable computer.

Tandy 102 Portable

I've actually had a spot of fun with the Tandy 102 Portable. Writing BASIC programs on it gave me both an appreciation for the capabilities of modern languages and a more cynical perspective on some of the changes to development ecosystems. With this system you start BASIC and that's it: you write the program then and there as line numbers, run it, save it, and so on. You don't build a project framework, or deal with generated boilerplate, or designers or inspectors or IDE software or test cases or mocking or factory interfaces or classes or any of that. When it comes to pure programming, the simplicity can be very refreshing. I've found it useful on occasion for short notes; usually I use EditPad or Notepad for this, but I've found the Tandy to be more "reliable" in that I won't lose a note or accidentally close a file without saving. (Power outages won't affect it either, though arguably those are rare enough not to matter.) The large text also makes it easy to read (with adequate light). Most interesting was plugging it into the "budget build" I blogged about previously and having the two systems communicate directly over the serial port. I was able to transfer files both to and from the system, though to say it was straightforward would be a bit of a fib.

Posted By: BC_Programming
Last Edit: 18 Oct 2016 @ 09:13 PM

Categories: Hardware

 08 Oct 2016 @ 4:38 PM 

One of the old standbys of software development is manipulating bits within bytes. It used to be that this was necessary: when you only have 8K of RAM, you have to make the most of it, which often meant packing information. Nowadays it's not quite as necessary, since typical systems have so much more RAM, and it's generally not worth the performance cost of packing and unpacking bits for the tiny memory savings that would be afforded.

There are, of course, exceptions. Sometimes trying to use conventional data types with functions or operations that work on compact data representations (generating a product key, for example) can be quite awkward. In those cases, it can be very useful to pack several Boolean true/false values into a single byte. But in order to do so, we need to do some bit bashing.

Bit Bashing

"Bit bashing" is a rather crude term for bit manipulation, which is effectively what it says on the tin: manipulating the bits that make up bytes. Even the oldest microcomputer CPUs worked with more than one bit at a time- the Intel 4004, for example, worked with 4 bits at once, and the original IBM PC with a full byte. This means ordinary operations work at a higher level even than the bit, so it takes some trickery to work at that level.

The core concepts are easy- you can use bitwise operators on a byte in order to set or retrieve individual bits of the byte.

Setting a Bit

Setting a bit is a different operation depending on whether the bit is being set or cleared (1 or 0). To set a bit, use a bitwise "or" against a value shifted by the desired bit position:
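A helper method along these lines (the name is mine):

```csharp
// Returns value with the bit at bitIndex (0 through 7) forced to 1.
public static byte SetBit(byte value, int bitIndex)
{
    return (byte)(value | (1 << bitIndex));
}
```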

Straightforward: create a byte with the specified bit index (0 through 7) set, then bitwise-or that against the original to force that bit to be set in the result.

The converse is similar, though a teensy bit more complicated. To forcibly set a bit to 0, we perform a bitwise "and"- not against the shifted value, but against the bitwise complement of the shifted value:
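The clearing counterpart (again, the name is mine):

```csharp
// Returns value with the bit at bitIndex (0 through 7) forced to 0,
// by and-ing against the complement of the single-bit mask.
public static byte ClearBit(byte value, int bitIndex)
{
    return (byte)(value & ~(1 << bitIndex));
}
```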

Retrieving a bit

Retrieving a bit is as simple as seeing if the bitwise and between the value and the shifted byte is non-zero:
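Which might look like the following (name mine):

```csharp
// True when the bit at bitIndex (0 through 7) is set in value.
public static bool GetBit(byte value, int bitIndex)
{
    return (value & (1 << bitIndex)) != 0;
}
```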

Being able to encode and decode bits within a byte can be a useful capability for certain tasks, even if its necessity due to memory constraints has long since passed.

Posted By: BC_Programming
Last Edit: 08 Oct 2016 @ 04:38 PM

Categories: .NET, C#, Programming

 08 Oct 2016 @ 12:15 AM 

Had a bit of a hiccup with the move to the VPS: it seems I had some WordPress plugins that weren't PHP 7 compatible and still accessed MySQL via the bad old mysql_ functions. I would have sorted it out sooner, but I didn't have FTP access, as credentials had changed in the move. I got credentials sorted through support; I ended up using the root account with a password I already knew, though I had never tried it on the root account, since the two hadn't previously been associated.

At any rate, The site is now officially running on a VPS!

Posted By: BC_Programming
Last Edit: 08 Oct 2016 @ 12:15 AM

Categories: News

 17 Sep 2016 @ 3:42 PM 

Previously I wrote about implementing an alpha-blended form in VB.NET. In that implementation, I had an abstract class derive from Form, with the actual forms deriving from that. This causes issues with using the form in the designer.

To work around that issue, I've redone parts of the implementation as its own separate class. Rather than relying on a CreateParams override to adjust the GWL_EXSTYLE when the form is first created, it simply uses SetWindowLong() to change it at runtime. Otherwise, the core of what it does is largely the same- just refactored into a package that doesn't break the designer.

It should be noted, however, that adding controls to the form will not function as intended- this is inherent to the alpha-blending feature, as it effectively just draws a bitmap, which is why it works well for splash screens. Controls will still respond to events and clicks, but they will be invisible; making them visible would require drawing them onto the bitmap and setting it as the new layered-window bitmap each time the controls change.

Here is the changed code:
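The heart of the change- toggling WS_EX_LAYERED at runtime instead of via CreateParams- amounts to something like the following sketch (shown in C# rather than the original VB.NET; the helper name is mine):

```csharp
using System;
using System.Runtime.InteropServices;
using System.Windows.Forms;

public static class LayeredWindowHelper
{
    private const int GWL_EXSTYLE = -20;
    private const int WS_EX_LAYERED = 0x00080000;

    [DllImport("user32.dll", SetLastError = true)]
    private static extern int GetWindowLong(IntPtr hWnd, int nIndex);

    [DllImport("user32.dll", SetLastError = true)]
    private static extern int SetWindowLong(IntPtr hWnd, int nIndex, int dwNewLong);

    // Adds WS_EX_LAYERED to an existing form at runtime; after this,
    // UpdateLayeredWindow() can supply the per-pixel alpha bitmap.
    public static void MakeLayered(Form target)
    {
        int exStyle = GetWindowLong(target.Handle, GWL_EXSTYLE);
        SetWindowLong(target.Handle, GWL_EXSTYLE, exStyle | WS_EX_LAYERED);
    }
}
```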

In usage it is no more complex than before, really:

The end result is largely the same:

alpha_example

Posted By: BC_Programming
Last Edit: 15 Oct 2016 @ 01:51 PM
