02 Feb 2018 @ 5:50 PM 

CPU architectures- and how we refer to them- sit in a sort of middle ground. The terminology needs to be technically accurate, but it can change over time. I thought it would be interesting to look into the name origins of the two most widely used CPU architectures for desktop systems today. Admittedly, this is fresh in my mind from some research I was doing.


Nowadays, most 32-bit software for desktops and laptops is referred to as being built for “x86”. What does this mean, exactly? Well, as expected, we have to go back quite a ways. After the 8086, Intel released the 80186, 80286, and 80386. The common architecture and instructions behind these CPUs came to be known, understandably, as “80x86 instructions”. The 486 that followed the 80386 officially dropped the 80 from the name- inspection tools would “imply” its existence, but Intel never truly called their 486 CPUs “80486”. It’s possible this is how the 80 got dropped. Another theory is that it was simply dropped for convenience- “x86” was enough to identify what was being referenced, after all.

The term survives to this day, even though, starting with the Pentium, the processors themselves never truly bore the “mark” of an x86 processor.


x64 is slightly more interesting in its origins. 64-bit computing had existed on other architectures before, but “x64” now refers to the “typical” x86-compatible 64-bit operating mode. Intel’s first foray into this field was the Itanium processor- a 64-bit processor whose instruction set is called “IA-64” (as in “Intel Architecture 64”). This did not work well, as it was not directly compatible with x86 and therefore required software emulation.

It was AMD who extended the existing x86 instruction set to add 64-bit support through a new operating mode. Much as 32-bit instructions were added to the 80386 with compatibility preserved by adding a new operating mode to the CPU, the same was done here: 64-bit operations are exclusive to the new 64-bit “long mode”, 32-bit software still runs in 32-bit protected mode, and the CPU remains compatible with real mode.
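As a small illustrative sketch (mine, not from the original post): on Linux systems, support for this 64-bit long mode shows up as the “lm” flag in a CPU’s /proc/cpuinfo flags line, which ultimately reflects CPUID leaf 0x80000001, EDX bit 29. A minimal helper to check for it, using made-up sample flag strings:

```python
# Sketch: detect AMD64/Intel64 long-mode support from a /proc/cpuinfo-style
# flags line. The "lm" (long mode) flag mirrors CPUID leaf 0x80000001,
# EDX bit 29. The sample strings below are illustrative, not real dumps.

def has_long_mode(flags_line: str) -> bool:
    """Return True if the 'lm' (64-bit long mode) flag is present."""
    return "lm" in flags_line.split()

flags_586 = "fpu vme de pse tsc msr mce cx8"          # Pentium-era: 32-bit only
flags_x64 = "fpu vme de pse tsc msr pae mce cx8 lm"   # x86-64 capable

print(has_long_mode(flags_586))  # False
print(has_long_mode(flags_x64))  # True
```

Splitting on whitespace avoids false positives from flags that merely contain “lm” as a substring.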

This AMD implementation was called “AMD64”, and the underlying architecture it implemented was “x86-64”.

Intel, as part of a series of settlements, licensed AMD’s new architecture and implemented x86-64. This implementation went through a few names- IA-32e, EM64T- but Intel eventually settled on Intel 64. Intel 64 and AMD64 aren’t identical, so software targets a common subset- and this subset is where we get the name x64.

Posted By: BC_Programming
Last Edit: 02 Feb 2018 @ 05:50 PM

Comments Off on x86 and x64 Name Origins
Categories: Hardware
 30 Oct 2017 @ 6:52 AM 

A couple of weeks ago, I thought it would be neat to get a computer similar to my first PC, which was a 286. I’d actually been considering the prospect for some time, but the prices on a DTK 286 (DTK was the brand I had) were a bit high. However, I stumbled on a rather cheap listing for a DTK 286 PC; it wasn’t identical to the one I had, but it was a similar model with a slightly reduced case size that seemed otherwise the same, so I snapped it up.

It arrived a little worse for wear from the journey- the front of the case, which was attached via plastic standoffs screwed into the metal case itself, had all of those plastic snaps come off. However, this shouldn’t be too much of a problem, as I’m sure I can get it to stay attached for presentation purposes.

When I opened it up to see if anything else had been damaged, I found the network card was out of its slot. So I pushed it in. Then I noticed the slot was PCI. 286 systems had 8-bit and 16-bit ISA slots, so already I knew something was up. That the processor had a heatsink and sat in a Socket 7 meant this was clearly not a 286 system.

Instead, the system is a Pentium 133 (non-MMX) on Socket 7, with 64MB of RAM, a 900MB hard drive, an ATI Mach 64, and 10/100 Ethernet. The floppy diskette drive wasn’t working correctly, so I swapped it for one of my other floppy drives. I also attached one of my CD-RW drives so I could burn data and install programs to the Windows 95 install that was running on the system.


Now, arguably this could be a claim to be made against the seller, but I think it was sold this way by accident. It seems to be using a specialized industrial motherboard intended for these sorts of Baby AT cases- I don’t think a standard consumer board of this era would pair Socket 7 with the large, older DIN keyboard connector. The motherboard is apparently quite uncommon, more so with Socket 7 rather than Socket 5. It also has a motherboard cache “card” installed, which doesn’t look particularly difficult to find but goes for about half what I paid for the entire unit. The board is unusual in that it is missing things such as shrouds around the IDE connectors, and has no serial number where one is specified in the center of the board.

My original intent was to fiddle with MS-DOS and Windows 3.1, so realistically this Pentium system could work for that purpose; I have a few older IDE hard drives I could swap in and set up a dual-boot between MS-DOS/Windows 3.1 and Windows 95. The Mach64 is an older card, but it is well supported on Windows 95, Windows 3.1, and MS-DOS, so it seems like a good fit. It only has 1MB of RAM, so higher resolutions drop the colour depth- 1024×768 is only doable in 256-colour modes, for example. I might want to get some DIP chips to install in its two empty sockets and upgrade the VRAM (though, ironically, it might be cheaper to just get another Mach64 with the chips already installed). I was also able to add a Creative AudioPCI card I had lying around without too much hassle, though there are better options for ideal MS-DOS and Windows 95 audio that I might explore later. My main limitation so far is the lack of a PS/2 connector for the mouse, and I don’t have a serial mouse- having a mouse would be nice, so I found an old InPort mouse with a serial adapter on eBay to serve that purpose.

One thing I was struck by- much as with the iMac G3 I wrote about previously- is that despite being quite old, it still performs rather well with things like Office 97. Basically, it proves my theory that if you fit your software choices to the hardware, old hardware is still quite capable. I could write up documents in Word or create spreadsheets in Excel without too much bother, and without really missing anything available on a newer system. It also works well with most older MS-DOS games. Older titles are helped by the Turbo Switch- which, oddly, doesn’t actually do anything via the button itself; instead, Control-Alt-Minus and Control-Alt-Plus change the speed, and the turbo light changes accordingly. It switches between 133MHz and 25MHz, the latter being about equivalent to a fast 386.

I might even experiment with connecting it to my network- perhaps even try to get Windows 95 working with shared directories from Windows 10, which would be rather funny. (Though I suspect I might need to open up security holes like SMBv1 to get that working…)

Posted By: BC_Programming
Last Edit: 30 Oct 2017 @ 06:52 AM

Comments Off on The 286 that isn’t
 19 Oct 2016 @ 3:00 AM 

Previously I wrote about how the onward march of technology has slowed down, but the ‘stigma’ that surrounds using older hardware has not diminished to match. Despite slowing down, technology has certainly improved, particularly as we look back further. This can make for unique challenges when it comes to maintaining older systems.

In particular, the Thinkpad T41 that I wrote about in that previous post has a failing hard disk, which I believe I also mentioned. This presents a unique challenge, as it is a laptop EIDE drive. These are available on sites like Amazon and eBay, but the choice is between rather pricey new drives (a few dollars per GB) or used drives of unknown remaining lifespan. I ended up purchasing a cheap 40GB drive off eBay. However, I discovered that was not my only option; as it turns out, products have been released that almost entirely address this issue.

I speak of CompactFlash adapters. These connect to a laptop’s 44-pin EIDE interface and allow you to plug a CompactFlash card into the other side. The device it is plugged into basically just sees a standard HDD. This is an interesting approach because it is, in some sense, an SSD for older systems- perhaps without quite the speed benefit of a modern SSD, but still with the advantages of solid state.

Since I had already purchased the cheap 40GB drive off eBay, I decided to grab an adapter and a CompactFlash card as well, for benchmark purposes. My expectation was that the CompactFlash card would run much faster.

The first step was deciding what to use for the comparison. CrystalDiskMark was about as good an option as any, so I went with that. First I tested the 40GB drive I received, then the CompactFlash adapter. The HDD is a Toshiba MK4036GAX. The adapter is a “Syba Connectivity 2.5 Inch IDE 44-pin to Dual Compact-Flash Adapter SD-ADA5006”, and the card I’m using with it is a 32GB Lexar Professional 800x.

Test                 MK4036GAX (MB/s)   CF Adapter (MB/s)
Sequential Read           29.543             88.263
Sequential Write          31.115             29.934
Random Read 4KiB           0.430             12.137
Random Write 4KiB          0.606              0.794
Sequential Read           24.116             87.230
Sequential Write          30.616             19.082
Random Read 4KiB           0.326              3.682
Random Write 4KiB          0.566              0.543

Looking at the table, we see that, unlike modern SSDs, the CompactFlash setup has some trade-offs. It gets much faster performance for typical read operations- sequential and random reads alike- but falters on random write operations. Or rather, this particular CF adapter and card falter there.
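To put rough numbers on those trade-offs, here is a quick calculation of mine (not part of the original benchmark) of the CF card’s speedup factor per test, using the first-run figures from the table:

```python
# Rough speedup of the CF card vs. the Toshiba HDD, computed from the
# first-run MB/s figures in the table above. Ratios > 1 favour the CF card.

first_run = {
    "Sequential Read":   (29.543, 88.263),
    "Sequential Write":  (31.115, 29.934),
    "Random Read 4KiB":  (0.430, 12.137),
    "Random Write 4KiB": (0.606, 0.794),
}

for test, (hdd, cf) in first_run.items():
    print(f"{test}: {cf / hdd:.2f}x")
# Sequential Read: 2.99x
# Sequential Write: 0.96x
# Random Read 4KiB: 28.23x
# Random Write 4KiB: 1.31x
```

The random-read number is the striking one- nearly 30x- while sequential writes actually regress slightly, matching the trade-off described above.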

Another interesting issue I encountered was that neither Windows nor Linux was able to establish a pagefile/swap partition on the CompactFlash card- possibly because the card identifies itself as removable media, though I can’t say for certain. This is a bit of a problem, though with few exceptions, most programs I use on this laptop wouldn’t tax the 2GB of total memory available. That said, a bigger issue- which may or may not be related- was that Windows XP could not install programs that use Windows Installer databases; they would endlessly prompt for a disc, even when they don’t use a disc or when the disc being installed from was in the drive. I wasn’t able to discover the cause of this problem after investigating it, though I had no such issues when using the standard HDD.

For now, I’ve got the system back on its “normal” HDD, which, as I noted in the linked post, works just fine- so in that sense my “upgrade” attempt has failed, which is unfortunate. The system runs well, for what can be expected of it. As mentioned, it is quite snappy; despite being considered “ancient” by many, it still works respectably for reading most web content as well as writing blog posts, so the argument that it is out-of-date is hard to properly substantiate. I would certainly find it lacking for my everyday tasks or for things like watching YouTube videos, but despite its age, I’ve found it fits well in a niche of usefulness that keeps it from being completely obsolete, at least for me.

When it comes to computers in general, I think you can make use of systems from any era. You can still use older systems for largely the same tasks they were originally designed for; the main difference is that more recent systems add additional capabilities. For example, you won’t be watching YouTube on a Pentium 133 PC- but you wouldn’t have been watching YouTube on such a system when it was top-of-the-line, either. I find there is something appealing about the simplicity of older systems, while at the same time their limitations (where present) can make for an interesting challenge to overcome; finding the right balance between software and hardware can be more nuanced than “throw the latest available version on”.

Another consideration is security. For example, you might make use of an older IBM PC that boots from floppy diskettes as a central password manager, or to store other sensitive information (with backup copies, of course). This allows the old system to be used beyond just fiddling about and to fulfill a useful function. It would still be far less convenient than, say, KeePass or LastPass or software of that nature- but on the other hand, nobody is going to hack into your non-Internet-connected PC without physical access.

Posted By: BC_Programming
Last Edit: 18 Oct 2016 @ 09:19 PM

Comments Off on Perceived Obsolescence Part II: Meeting up halfway
Categories: Hardware, Programming
 18 Oct 2016 @ 9:13 PM 

My most recent acquisition on this front is a Tandy 102 Portable computer.

I’ve actually had a spot of fun with the Tandy 102. Writing BASIC programs on it gave me both an appreciation for the capabilities of modern languages and a more cynical perspective on some of the changes to development ecosystems. With this system, you start BASIC and that’s it. You write the program then and there as numbered lines, run it, save it, and so on. You don’t build a project framework, or deal with generated boilerplate, or designers or inspectors or IDE software or test cases or mocking or factory interfaces or classes or any of that. When it comes to pure programming, the simplicity can be very refreshing.

I’ve also found it useful on occasion for short notes. Usually I use Editpad or Notepad for this, but I’ve found the Tandy 102 to be more “reliable”, in that I won’t lose the note or accidentally close the file without saving. (Power outages won’t affect it either, though arguably those are rare enough not to matter.) The large text also makes it easy to read, with adequate light. Most interesting was plugging it into the “budget build” I blogged about previously and having the two systems communicate directly through the serial port. I was able to transfer files both to and from the system, though to say it was straightforward would be a bit of a fib.

Posted By: BC_Programming
Last Edit: 18 Oct 2016 @ 09:13 PM

Comments Off on Vintage Hardware – The Tandy 102 Portable
Categories: Hardware
 26 Dec 2015 @ 1:43 PM 

When it comes to playing older game consoles, there are a lot of varying opinions. One of the common ones I see is that the only way to play old consoles like the NES/SNES/Genesis/etc. ‘authentically’ is to play them on a CRT. I’ve never bought into that, personally. The general claim seems to revolve around some very particular scenarios- which I will get to- used to support the idea that the games were designed specifically for CRT technology. Let’s look into the facts. Then, we can do some experiments.

We’ll start with a comparison image that I commonly see used to support this claim: a screenshot of a portion of a screen from FF6 (FF3 in the US) on the SNES. First, we have the image that is called the “Emulator” image:


FF6, Alleged Image from an Emulator

This is held up as an example of how ‘pure’ emulated game imagery is “gross and blocky”. Stemming from that, the claim is that this is not “authentic”- that the game imagery is supposed to be blurred, and that this blurring is a direct side effect of CRT technology. Then this image is typically provided:


FF6, Alleged image from a CRT Television.

This is often claimed to be “what the game looks like on a CRT TV” and, typically, what it was designed to look like. However, there are a few issues with the claim. The first is that it takes a relatively small portion of the game screen and blows it up immensely- you aren’t going to see any of the pixel detail of the first image unless you press your face right into your monitor. Another, perhaps more glaring issue is that the second image is taken from an emulator as well: the effect can be achieved by merely turning on bilinear interpolation in an emulator such as SNES9X. So the pair of images doesn’t actually tell us anything- it shows an image without an emulator feature, and the same image with it enabled. It asserts the latter is “accurate to what it looks like on a CRT”. But is it? The image itself is hardly proof.

Some short debates got me thinking about it. In particular, one common discussion is about Nintendo’s Wii U Virtual Console. For its NES library, I often argue that, for whatever reason, it applies a rather gross blur filter over everything. I am told something along the lines of this being intended to “mimic the original CRT TVs, which were always blurry”. I find this difficult to believe. So the desire to properly experiment with an actual CRT TV- and the fact that my ViewHD upscaler doesn’t support the ideal S-Video input for my SNES and N64 systems- led me to eBay to buy a CRT TV. They were expensive, so I said “nope” and decided not to. As it turns out, however, the previous tenants of my house- who had sort of run off a few years ago to avoid paying several months of back-rent- had left behind a CRT television. I had never noticed, because I had never actually gone out to the shed the entire time I’ve been here. Mine now, I guess. So I brought it inside. Once the spiders decided to leave, I was initially disappointed as it refused to turn on- then, an hour later, it seemed to work fine, but was blurry as heck. I was able to fix that as well by adjusting the focus knob on the rear, such that it now works quite well and has quite a sharp picture.

Before we get too far, though, let’s back up a bit. There are actually quite a few “claims” to look at, here. With the appropriate equipment it should be possible to do some high-level comparisons. But first, let’s get some of the technical gubbins out of the way here.

Send me a Signal

The first stumbling block, I feel, is the input method. With older game consoles, the signal accepted by televisions- and thus generated by most systems- was analog. Now, when we get right down into the guts, a CRT’s three electron guns- one for each colour- are driven by independent signals. Some high-end televisions and monitors, particularly PVM displays, have inputs that allow the signal to be passed pretty much straight through in this manner. This is the best signal possible with such a setup- the signal sent from the device goes straight to the CRT electron guns. No time for screwing about.

However, other video signal formats were used for both convenience and interoperability. Older black-and-white televisions had one electron gun, and thus one signal- Luma- which was effectively luminosity. This allowed for black-and-white images. When colour television was introduced, one issue was backwards compatibility: it was intended that colour signals should be receivable and viewable on black-and-white sets.

The trick was to expand the channel band slightly and add a new signal: the Chroma signal. This signal represented the colour properties of the image- a black-and-white TV only saw the Luma, while colour TVs knew about the Chroma and used it. (Conveniently, a colour TV not receiving a Chroma signal will still show black and white, so it worked both ways.) This worked fine as well.

Moving swiftly along, TVs started to accept a coaxial input. This provided a large number of “channels” of bandwidth, each channel carrying a signal with the Chroma information low-pass-filtered onto the Luma signal.

Composite worked similarly, but abandoned the channel carrier, effectively sending the combined Luma and Chroma signal without any channel adjustment.

S-Video sent the Luma and Chroma signals entirely separately, with no low-pass filtering or modulation at all.

In terms of fidelity, from least desirable to best, the order is RF, Composite, then S-Video.

Now, this is North American-centric- Europe and the UK had a slightly different progression. Over there, a somewhat universal connector, the SCART connector, became the de-facto standard. SCART could carry a composite signal, separated Luma/Chroma (S-Video) signals, or an RGB signal- effectively three separate signals, one for each of the red, green, and blue electron guns in the television. This is effectively the best possible signal: it goes straight to the electron guns with very minimal processing, as opposed to Chroma and Luma, which require demodulation and other processing to turn into an RGB signal for the guns. RGB was available in North America, but the equivalent connection method- Component Video- wasn’t common until fairly late, around the time CRT displays were being replaced with flat-panel LCD and Plasma displays.

So with that out of the way, one of the factors in how good an image looks is how much information is lost. In the case of older game consoles, the choices- without modding- tend to be RF, Composite, or S-Video.

For the NES, the ideal output, without modifying the system, was Composite:

CRT displaying an image from a Composite signal from an NES.

It is notable that we can still make out individual pixels here; the dithered background doesn’t “mix” together. There is blurring, particularly along the horizontal scanlines, as well as dot crawl along Megaman’s sprite- but those are not inherent properties of the CRT itself; they are properties of the composite signal. This is shown by running the same game via the Megaman Anniversary Collection on the Gamecube, using S-Video output:


A CRT Television displaying the S-Video output from a Gamecube.

This is a much clearer image. However, there is still some noticeable blurring around Megaman. Could this be added by the Gamecube’s emulation? I don’t know; we’ll have to do more experiments to find out.

As I mentioned, Composite is inferior to S-Video; this is because Composite is the result of applying a low-pass filter to the Chroma signal and “mixing” it with the Luma signal. The low-pass filter is meant to keep it from interfering with the Luma signal- but the effective result is only that it interferes less. The primary problem is that with both signals carried as one, demodulation will still pick up bits of the other signal as crosstalk. Another possibility is that the signal could be generated in a less-than-optimal way- in the case of the NES, for example, its PPU generates the composite signal itself, building it from square waves rather than the smooth waveforms the format assumes, which introduces additional artifacts.

Now, since I have no immediate plans to mod any sort of higher video output onto my NES, the best solution for comparisons would be to use something that can be compared directly. I decided to go with Super Mario All-Stars and the SMB3 World 1 map screen. First, we can see it with Composite:

CRT displaying Mario All Stars via a Composite input.


Next, we can switch it over to S-Video:



CRT displaying SNES Mario All Stars SMB3 via S-Video

Just compare those two- the S-Video is much better. The difference is entirely due to the separation of Luma and Chroma into two signals; one can see a bit of “noise” in the composite version, whereas the S-Video output is very well defined. It is almost night-and-day. However, these differences are not due to the use of a CRT- S-Video signals can be accepted by any number of devices.


Design Intention Declarations

One common statement made regarding older consoles is that their art, sprites, and design are intended for a CRT, and that a CRT is therefore necessary for an “authentic” experience. This seems reasonable on its surface. However, it really is not possible to design for a CRT in a general fashion. CRT televisions accept varying signal inputs; they use widely different technologies- aperture grille, shadow mask, and so on- and have widely different convergence, moire, dot pitch, and other characteristics. While it would be possible to tailor an effect to the side-effects of one particular television set, that effect would be lost on pretty much any other set- and even on the same set, if adjustments are made.

However, one thing that does have well-defined aspects and side effects that can be utilized is the signal. In particular, for systems that use a composite signal (either via composite itself or through a carrier-wave RF), the artifacts can result in certain image characteristics. These characteristics, however, have no relevance to CRT technology at all, and are not innate features that present themselves on CRT television sets.

The most common example is Sonic the Hedgehog. The game has waterfalls in the foreground; in order to let you see your character- and because the Genesis hardware doesn’t support translucency- the game dithers the waterfall, drawing it with vertical stripes. When this is paired with a composite signal, it looks sort of translucent:


Sonic the Hedgehog-Composite

Well, OK, it doesn’t work great, since you can still see the lines- but the characteristics of composite lend themselves to some horizontal blending, which helps make it look translucent. At any rate, the argument is that because of this, the game is designed for a CRT and designed for composite- and therefore, not using a CRT or not using composite isn’t “playing it properly”.
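As a toy illustration (my own sketch- not an NTSC-accurate model), that horizontal blending can be thought of as a low-pass filter run along each scanline. An alternating bright/dark dither pattern passes through S-Video essentially intact, while the bandwidth-limited composite path smears adjacent columns toward their average:

```python
# Toy model: why composite "blends" dithered columns while S-Video does not.
# A dithered waterfall alternates bright and dark columns; composite's limited
# bandwidth acts roughly like a horizontal low-pass filter on the scanline.
# (Hypothetical 3-tap moving average; real NTSC filtering is more complex.)

def lowpass(row, taps=3):
    """Simple moving-average low-pass over one scanline of pixel values."""
    half = taps // 2
    out = []
    for i in range(len(row)):
        window = row[max(0, i - half):i + half + 1]
        out.append(sum(window) / len(window))
    return out

dithered  = [255, 0] * 8           # alternating bright/dark columns
svideo    = dithered               # high bandwidth: columns stay distinct
composite = lowpass(dithered)      # columns smear toward the average

print(svideo[4:8])                          # [255, 0, 255, 0]
print([round(v) for v in composite[4:8]])   # [85, 170, 85, 170]
```

The filtered samples sit much closer to the mid-grey average than the original 0/255 extremes, which is exactly the pseudo-translucency effect seen in the waterfall.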


I challenge this claim, however. First, the effect is irrelevant to the CRT, as I stated, so we can throw that part right out. Second, the fact that the technique has a useful side-effect with the most common video signal format doesn’t mean it was designed that way. The problem is that there realistically wasn’t any other way to implement it- dithering had been a very common method of simulating semi-transparency for some time.

Another issue is that composite was not the only signal format available. The system also output S-Video and, in supported regions, full RGB signals. With an S-Video connection, that same waterfall effect looks like this:

Sonic the Hedgehog- S-Video


If the system was designed for composite, why does it support signal formats with higher fidelity? There is simply no merit to the claim that games were designed to exploit composite blending. In every instance where the blending has an effect, there wasn’t any other option for implementing what the developers wanted. Dithering is the most common situation, and it is merely a result of writing game software for a device that doesn’t support translucency. That the typical connection signal blended dithered portions of the image together a bit more wasn’t an intended result; it was, in the words of Bob Ross, a “happy accident”.


Moving forward from that- and taking a step back to the Wii U Virtual Console. We’ve already established that CRT displays do not have inherent blurring characteristics, and that the blurring effect of composite itself is relatively slight. The best way to compare is simply to compare the two images directly. For example, I have Kirby’s Adventure on the Wii U VC. I also have it on my Everdrive N8, allowing it to run on the NES as it would from the original cartridge. Let’s compare the two.

First, the composite image captured on a CRT, using the NES’s Composite connection:


Kirby’s Adventure, running on an NES and being displayed via Composite to a CRT Television.

There is a bit of a moire pattern from how the phosphors line up in the photograph, but that isn’t normally visible. There is some slight blurring, but it is mostly in the horizontal direction. Now here is the image from the Wii U VC, running on an LCD:


Kirby’s Adventure, running on the Wii U Virtual Console and being displayed on an LCD Screen through HDMI.

Here we see that they have merely blurred the output. For what purpose, I don’t know. Perhaps they are scaling the emulator output with the default bilinear scaling when they intended nearest-neighbour. In these closeups it actually looks like a reasonable approximation, but even here the image on the CRT is still clearer (particularly vertically). The main problem is that the CRT output appears very clear and crisp from a distance, whereas the Wii U VC output on an LCD looks blurry at any distance. Stranger still, the Virtual Console on the Nintendo 3DS doesn’t exhibit any of these visual effects.

To conclude, I think a lot of the attachment to CRT displays is rooted in confirmation bias, supported primarily by nostalgia. While there are benefits to the native analog capability of a CRT display- in particular, resolution switches are way faster- those benefits don’t really line up with a lot of the claimed advantages. And those that seem reasonable, such as CRTs having less input latency, have only been measured as time delays that are otherwise imperceptible. The bigger concern is less that the CRT is ideal and more that LCD panels tend to use very poor digitizers to turn the analog signal into a digital one for use by the display panel itself. These issues can be addressed with a breakout box, such as a Framemeister or a ViewHD, which accepts various inputs and outputs HDMI.

Posted By: BC_Programming
Last Edit: 26 Dec 2015 @ 01:43 PM

Comments Off on Old Game Consoles and CRTs
Categories: Games, Hardware
 17 Jul 2015 @ 10:09 PM 

Previously, I wrote about something of an ‘experiment’ I was trying: seeing what sort of performance and ability I would get out of a relatively low-cost computer build. The parts finally arrived the other day (nearly a month after I ordered them- nicely done, TigerDirect…) and I built the system.

The system cost around $400 by my recollection- certainly less than $500. So how well does it function?

Quite well. The first game I ran on it was Minecraft, expecting it to be quite jerky. However, I found the framerates quite playable, and I even played for a good hour or so. I’m now letting Steam download a few games to see how well it handles those. Given the price, the system performs quite admirably.

The case is a small-form-factor case, in the sense that it is a mini tower. I was flummoxed about where I was supposed to install the HDD and SSD, until I did the unthinkable and looked at the case’s small foldout manual, which showed where they go: screwed into a holding bracket. It was interesting putting the system together. I was also disappointed that I didn’t look closely enough at the motherboard option; it states USB3 support, but that only covers the ports on the motherboard itself, so I have no place to connect the case’s front-panel USB3 header.

The system is also incredibly quiet- near silent, in fact. Quite impressive. In terms of performance, it feels snappier than my older desktop system, which cost twice as much (though that was in 2008), even though that system uses a dedicated 9800GT card.

I’m trying to decide whether to put Windows 10 on the system or stick with Windows 8.1, and whether to use it as a sort of test system, effectively replacing my old desktop in that capacity.

Posted By: BC_Programming
Last Edit: 17 Jul 2015 @ 10:09 PM

Comments Off on Budget Computer: Results
Categories: Hardware
 01 Jul 2015 @ 9:01 AM 

A while ago, I noted in my post about remapping keys that I got a new laptop. At the time, I had not used the system enough to feel it fair to provide any sort of review, but I’ve now been using it for a month, which should be enough to offer my thoughts.


It is worth noting that the T550, like Lenovo’s other Thinkpad models, offers a lot of customization options. In my case, I configured it with a 2.6GHz i7-5600U processor, 16GB of RAM, a 2880×1620 multi-touch display, a fingerprint reader, a 16GB SSD cache, and a 500GB HDD. Since then I have replaced the hard disk with a 480GB SanDisk Ultra II SSD. It is somewhat notable that the system does not feature any sort of discrete graphics capability. My purpose for the machine was primarily work tasks: Visual Studio, text editors, pgAdmin, browsers, Excel, Skype, and so forth. “Gaming” is pretty much off the table- though I imagine some games would run admirably, the lack of dedicated graphics means that desktop applications are the main benefactor.

I am quite impressed with the system and how well it holds up. It has amazing battery life- over twice the battery life of my previous laptop, which now serves as a clock on my nightstand. The high resolution of the screen makes it easy to have a lot of different applications open, and while I found I needed to increase the DPI scaling to be able to read anything, the added definition is amazing to see on a laptop. It has a higher resolution than my desktop screen (which is 2560×1440) in about a quarter of the area, so the pixel density is remarkable.
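That pixel-density claim is easy to sanity-check. Here is a quick illustrative sketch (Python rather than anything from the post); note the panel sizes are my assumptions- the T550 has a 15.5″ display, and 27″ is a guess for the 2560×1440 desktop monitor, which the post doesn’t specify:

```python
import math

def ppi(w_px, h_px, diagonal_in):
    """Pixels per inch, from pixel dimensions and diagonal size in inches."""
    return math.hypot(w_px, h_px) / diagonal_in

# Assumed sizes: 15.5" laptop panel, 27" desktop monitor.
print(round(ppi(2880, 1620, 15.5)))  # laptop: ~213 PPI
print(round(ppi(2560, 1440, 27.0)))  # desktop: ~109 PPI
```

Roughly double the pixel density, which lines up with the “amazing” impression.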

I’ve taken to trying to use the system as my primary development system. This allows me to segregate some of my personal stuff from my work stuff. Realistically I’ve ended up using both my desktop and my laptop for development tasks- simply because it is faster to do so. I’ve also installed some prerelease VS versions for testing purposes, which I haven’t done on my desktop, mostly due to disk space considerations (a 480GB SSD is only large if you don’t install a lot of stuff on it, it turns out).

Arguably the one complaint I can think of is how difficult it is to access the system’s innards. With my older Thinkpad 755CDV, getting access to things like the hard disk was incredibly straightforward- the keyboard tray basically lifted up and you could remove and replace components toollessly. With this new T550, I had to release several captive screws, pry the bottom panel apart with a spudger, and then apply quite a bit of force to remove it and get to the insides. Not a massive dealbreaker- I don’t exactly intend to be constantly replacing components- but it was something of a surprise to see that accessibility has actually decreased with more recent models!

Of note, perhaps, is the expandability that requires said disassembly. Internally it can support up to 16GB of RAM, and has three M.2 slots. In my case, one holds the wireless card and another the 16GB cache SSD, with the third remaining empty. This leaves some room for expansion, with the option of replacing or upgrading one of the existing M.2 cards or even adding a whole new one. It should be noted that things are tightly packed, though, and larger M.2 cards may not fit.

All in all I’ve found the Thinkpad T550 to be an excellent machine: while it lacks a bit of oomph compared to “gaming” PCs, it has excellent build quality and (most important to me) a TrackPoint. The TrackPoint has actually “ruined” me, in the sense that the AccuPoint on my old Toshiba now feels odd simply because its nub is far smaller and has to be operated slightly differently. With this more recent system I hold my finger over top and gently push down and in the direction I want to move the cursor; with the AccuPoint this sort of works, but it lacks grip, so you would typically push it from the side, or at an angle from one side, depending on the direction you want to send the cursor.

Posted By: BC_Programming
Last Edit: 01 Jul 2015 @ 09:01 AM

Comments Off on Thinkpad T550 Review
 16 Jun 2015 @ 1:52 PM 

My current PC is nice and fast and responsive and speedy-quick, so I really do not see a new computer build in the foreseeable future (to do so would be like wasting the money invested in this one!).

Despite this, I really do like messing with hardware, and I have a hankering to build a new computer. I realized that this is entirely possible and not entirely unreasonable: if I create a build designed to be lower-cost but robust, rather than going for top-rung components, not only will I be able to afford to build it, but it could also be a great gift for people who are struggling with older systems that they use lightly.

With that I set about creating a “budget” build, designed to be cheaper but also reliable. I came up with the following build; I’ll note the components in my current system for comparison. For obvious reasons I have no intention of using this new build as my main system. I’m uncertain what I will use it for, though a NAS server seems as reasonable as any, once I get more hard disk drives for the purpose.


Motherboard

For the motherboard, I chose a Gigabyte GA-AM1M-S2P. This is a Micro-ATX board which provides on-board graphics and sound capabilities, which of course means we won’t need a dedicated graphics card (which on its own can be quite pricy).

The AM1M-S2P motherboard showing off its undocumented levitation feature. Spooky.

I chose Gigabyte as a brand here mostly because I have found their motherboards to be quite trustworthy. I’ve gotten a lot of mileage out of my EP43-UD3L motherboard, which I used for my 2008 build and which is still going strong in my backup desktop system. Furthermore, the AM1M-S2P has a number of newer features, such as USB3, which are nice to have on a new system and add value to a build designed down to a price.


For comparison, the motherboard in my current system is a GA-Z87X-UD3H, an LGA1150 board for Intel processors. It has comparable or (IMO) superior features, as you would expect from the higher price point.


CPU

Sorry, I mean “APU”- it’s odd that AMD wants to call their processors by a different name when they are still really CPUs. For the CPU I was originally going to get a cheap Sempron, but I ended up going a bit better and getting a 2.05GHz quad-core Athlon 5350. The CPU/APU can definitely slurp up a lot of a system’s total budget, so the aim here was to keep it affordable. This is part of the reason I chose to go with AMD- that, and I’ve not had one since my K6-2, so I’m not familiar with their current architectures and may as well get reacquainted.


For comparison, the CPU in my current system is an Intel i7-4770K at 3.5GHz. Aside from the obviously faster clock speed, I’m not entirely clear how much better it is than the Athlon, though as I understand it the Athlon uses an older architecture as well, so I think the i7 is quite a bit better. Of course, how my current components from yesteryear, which cost a pretty penny, compare to components from today that cost a small fraction of the price is part of the purpose of this experiment.


RAM

As a budget build, 4GB was what I was aiming for. The motherboard supports 32GB, but- again, budget build. For this “experiment” I decided to go for two 2GB Corsair XMS3 sticks. I have four 8GB XMS3 sticks in my current system and several XMS2 sticks in my older machines, and they have proven reliable. Nowadays 4GB is sort of the “bare minimum” for a usable system, and a budget build means saving money wherever possible.


For comparison, my current system actually has the same type of RAM, but in larger module sizes: four 8GB XMS3 sticks rather than two 2GB sticks.


Case

A computer case- or as they used to be called, “cabinet”- really has two purposes. The main purpose is of course to hold all your computer bits and protect them from the outside world. The second purpose is to not look awful, and a lot of cases fail at that second aspect. The Fractal Design Core 1100 doesn’t try to be a hero. It’s a very standard case.

The Fractal Design Core 1100 is a nice, basic case that doesn’t try to be anything special and provides only what you need. No fancy windows, no goofy flourishes- just a case for a PC. It is also quite affordable, which again was the entire point of this experiment. I went with Micro ATX here because I felt it reasonable to try to keep the volume footprint down.


My current case is a Thermaltake Commander G42, if my memory serves. It is actually quite an annoying case, because the drive cages leave so little clearance for the SATA connectors- there are still two SATA cables inside that I simply cannot unplug. If I were forced to review the case, I could not think of many positive points compared to even my older Cooler Master Centurion.

Power Supply

Corsair CX Series CP-9020058-NA 430W Modular Power Supply – 80+ Bronze, ATX, Modular Cabling, Active PFC, Single +12V Rail, Low Noise, Trouble-free Installation

For the power supply I went with a Corsair CX430. I figure this system will not need a lot of power, as I don’t intend to install a graphics card, and the modular supply will let me attempt something approximating not-crap cable management. I chose a Corsair unit mostly for the same reason I chose Corsair RAM: I’ve had good experiences with Corsair power supplies so far, so I decided to continue that success.

What is the purpose of this build, one might ask? I am actually unsure. Currently I expect that I will set it up as a Linux system, or possibly as a dual-boot, but the main purpose is the building itself, and less so what I will do with the finished product. Furthermore, I find I can never have enough backup systems; I’ll have my current desktop, my relatively recent T550 laptop, my older Satellite L300 laptop, my older desktop (Quad Core Q8200, etc.), and this new build. I’ve toyed with the idea of a sort of NAS system, though I ought to have thought that through more and gotten a motherboard with more SATA ports (nothing a SATA card couldn’t solve, though). Finally, since I aimed primarily for a low budget (rather than performance or capability, as I did with my builds so far), it is a good experiment to see just how much value you get from a system that is nearly 20 times cheaper than a standard IBM PC cost when it was first made available (accounting for inflation, of course).
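As a rough sanity check on that “nearly 20 times” figure, here is an illustrative sketch. The numbers are my assumptions, not from the post: a 1981 IBM PC in a usable configuration ran roughly $3,000, and cumulative US inflation from 1981 to 2015 is roughly a factor of 2.6.

```python
def price_ratio(old_price, cpi_factor, new_price):
    """How many times cheaper the new system is, in constant dollars."""
    return (old_price * cpi_factor) / new_price

# Assumed: ~$3,000 for a usable 1981 IBM PC, ~2.6x cumulative inflation to 2015.
print(round(price_ratio(3000, 2.6, 400), 1))  # ~19.5x
```

With those (approximate) inputs, “nearly 20 times cheaper” checks out.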

Posted By: BC_Programming
Last Edit: 16 Jun 2015 @ 01:52 PM

Comments Off on Budget Computer Building
 03 Apr 2015 @ 9:56 PM 

I have a habit of occasionally making exorbitant purchases that I cannot, under normal circumstances, justify. I had been considering a sound card as just such an exorbitant purchase for some time, and each time I managed to reason myself out of it- until a few weeks ago, when I took the plunge. The choice was between the Sound Blaster ZXR and a card in the ASUS Xonar range. Reading reviews online, the Xonar seems to be the go-to sound card, so, naturally, I went with the ZXR. I chose the ZXR over the ZX or Z cards in the series because I had more money than reasoning skills. If I had reasoning skills I likely wouldn’t have purchased it in the first place, but I decided to make an investment for this blog… yeah, that is what happened.

Sound Blaster ZXR safe in its box. This photo was taken before it realized that it wasn’t going to be used by an independent professional, but rather by a bumbling pseudo-enthusiast.

My experience with sound cards somehow manages to cover almost all the technologies: starting with an ISA 16-bit Sound Blaster card, then an ISA SB AWE32, moving on to a Sound Blaster 16 PCI (which, as I learned recently, is really a rebranded Ensoniq). Then I bought an Audigy SE, because I thought it was an Audigy, when in reality it featured no Audigy processor and was just a host-based processing card. Eventually I upgraded to an X-Fi XtremeGamer (before they renamed it- it has the X-Fi processor chip and actually performed functions in hardware), and stuck with that until recently, when it caused a BSOD (somehow it managed to bork the system despite the WASAPI rearchitecture; oh well). I’ve been using the motherboard audio since then, and it has functioned fine. I had literally no reason to buy a ZXR except to have a new thing, and of course so I could take pictures and share them on this blog for, again, no reason whatsoever.

Another perspective on the pristine box. Notice the box-like shape. This will become important at no point in this post.

The packaging was about what you would expect. The transparent windows on the front of the box let you see the card and the “Audio Control Module”. The standard smorgasbord of marketing guff covers the box like GOTOs in a BASIC program. Inside I found the sound card, a daughterboard, a few cables and connectors, the Audio Control Module, a software disc, and a small foldout manual.





The contents of the box. The key to unboxing is to slice lengthwise starting from the box’s anus.

Now I had to install the bloody thing. My approach to building my PC is effectively just making everything plug in, which lends itself to rather messy cabling- and even if I try to be neat, I usually end up undoing it at some other point. Generally speaking I don’t exactly sit and admire my build job, so it’s not a big issue. I am always paranoid that in the process of installing something I’ll somehow destroy the machine, which is a rather unrealistic fear.

Installation of the card, as for any other PCI Express card, was rather straightforward: find a PCI Express slot to use, find a slot for the daughterboard (in this case), and install the card. I ended up installing the DBPro daughterboard right beside it, though I could also have used an unconnected slot cover that was mounted horizontally, if I had wanted to.





The card successfully installed. I realized later I could have installed the daughterboard horizontally in the weird slot that in this image is above the header block.

With the card installed, a piece of my computer’s rump was suddenly festooned with brass- and gold-coloured plating. This sound card in particular uses the big, fat TRS audio jacks, but the package provides an adapter, so I was able to use my current speakers without issue. In the image showing my computer’s rear, you can see I have only two slots left now. Also, if I were to purchase a second GTX 770, I would have to do some rearranging to get it installed, since it would require the x16 slot currently housing the sound card.

The new connectors the card exposes on my computer’s rump. King Midas himself would be indifferent, because he lived in a different time and wouldn’t understand computers without an exhaustive educational program.


As for the card itself, I’ve found it to be an upgrade. I did a direct comparison between my motherboard audio and the Creative card with my JBL headphones, and I feel the card is a bit better. Was it worth it? Not really, at least not so far. However, it has got me thinking about sound/music and C# again; in particular, I’ve waxed a bit about whether I should try my hand at using WASAPI directly, which seems doable. I’m quite annoyed with depending on third-party libraries- commercial, non-free, and fairly expensive libraries to boot- but have always found the algorithms and data formats of compressed audio rather intimidating. Being able to play WAV streams is one thing; writing a stream-based decompressor is another. On the bright side, since I don’t plan to go commercial with anything I’ve used BASS.NET for, I probably won’t have any issues anyway. But I ramble.
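The WAV-versus-decompressor point is worth illustrating: a WAV file is essentially a small header plus raw PCM frames, so “playing” it is mostly parsing, with no decoding step. A minimal sketch using Python’s standard library (the post’s context is C#, but the idea carries over directly):

```python
import io
import struct
import wave

# Build a tiny in-memory WAV: 80 samples of a ramp at 8kHz, 16-bit mono.
buf = io.BytesIO()
with wave.open(buf, "wb") as w:
    w.setnchannels(1)
    w.setsampwidth(2)          # 2 bytes per sample = 16-bit
    w.setframerate(8000)
    w.writeframes(b"".join(struct.pack("<h", i * 100) for i in range(80)))

# Reading it back is just header parsing plus raw PCM frames-
# no decompression, which is why WAV is the "easy" case.
buf.seek(0)
with wave.open(buf, "rb") as r:
    channels, rate, frames = r.getnchannels(), r.getframerate(), r.getnframes()
print(channels, rate, frames)  # 1 8000 80
```

A compressed format like MP3 or Vorbis replaces that raw-frame read with a full decoding pipeline, which is exactly the intimidating part.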

Posted By: BC_Programming
Last Edit: 04 Apr 2015 @ 07:45 AM

Comments Off on Creative Sound Blaster ZXR
Categories: Hardware
 03 Apr 2015 @ 5:49 AM 

In the early days of computers, “sound reproduction” on a computer was typically limited to a few beeps and boops. A few early PCs had digital audio capabilities, but they were rudimentary. The Macintosh was possibly one of the first computers with good market and mind share to have rather advanced sound capabilities. The lowly IBM PC’s “sound capabilities” lagged behind, with its single basic piezo-electric speaker designed entirely for beep-booping error messages at you like some kind of demented blues singer. The “sound card” can trace its history to devices like the Creative Music System, the AdLib, and, later, the Creative Game Blaster cards built for the PC. These used the expansion bus to add new capabilities to the system, in the form of less beeping and booping and more recognizable music and sound effects.

For quite a number of years, sound cards were considered “high-end” gaming equipment. Most game titles supported the PC speaker because it could be assumed present; but many games also supported the sound cards of the day, offering better-fidelity sound and even music if a compatible card was installed.

There is an interesting history among the various sound companies; Creative bought Ensoniq, which put Creative in a position to have their products pre-installed on PCs. In terms of sound capabilities on the PC, the most interesting change came in the mid-to-late ’90s, when sound card circuitry started to be integrated onto the system motherboard. Discrete sound cards were still better in terms of capabilities, but the built-in audio included on most systems- even up to today- provides pretty much any sound capability a typical user may want.

In the late 1990s and early-to-mid 2000s, however, sound cards did provide features atop what you could find on motherboards. Fundamentally, such sound cards served one of a few distinct markets/purposes:

  1. Gaming

    Games benefited from features such as 3D positional audio, hardware streams and mixing, and on-board sound RAM, used to store audio samples for playback either directly or as part of a wavetable synthesizer for music.

  2. Professional

    Professional sound creation and mixing is a different beast entirely. These cards focused on high-quality components that provide a high signal-to-noise ratio at very high effective sample rates, typically with strong hardware support to speed up sound processing and reproduction. They also offer connectivity for professional audio devices, or include high-grade headphone amplifiers that can drive high-impedance headphones.

  3. Software Emulation

    Though motherboard audio is fairly sophisticated today, back in the day many motherboards either integrated a fairly basic sound card or lacked one altogether. Some “value” sound cards filled this gap by providing many of the features of professional audio cards and, more often, gaming cards, primarily via software drivers that emulate the features typically provided in hardware on pricier cards, using the card itself as little more than a place for the audio data to go.

Windows Vista threw the world of hardware-based audio processing a bit of a curveball: it introduced a user-mode sound mixer built into the OS, known as “WASAPI”, the Windows Audio Session API. The claim is that this redesign took place because a large portion of STOP errors on XP and earlier were traceable to sound card drivers, which, like other drivers, ran in kernel mode. The redesign effectively created a new audio stack. Unfortunately, it relegated sound cards and audio devices to mere “endpoints”: all processing is effectively done by the built-in audio stack, with the sound card driver basically letting WASAPI send the results to the hardware. This means that many features, such as EAX, are no longer possible to implement via hardware support on Vista or later. However, these capabilities are available through software emulation.

The hardware advantages of sound cards therefore dwindled for a large portion of users; even the “gaming” sound cards on the market today do very little in hardware to actually improve the sound for games or provide game-related features.

Professional audio, however, still retains some hardware capabilities. Like any hardware device, drivers can do what they want; the reason WASAPI throws a monkey wrench into the works is that it effectively prevents many features from being processed by the hardware- Windows-based sound has to be provided a certain way, and without kernel mode involved anywhere in the stack, the hardware cannot be invoked. Professional audio systems, however, typically have their own particular APIs and interfaces, and these have continued to stick around, so hardware capabilities can be exploited fully via interfaces like ASIO.

The software emulation market, meanwhile, is gone: all motherboards in production today include audio capabilities.

Posted By: BC_Programming
Last Edit: 03 Apr 2015 @ 05:49 AM

Comments Off on Sound Hardware
Categories: Hardware
