Previously I wrote about how the onward march of technology has slowed down, but how the ‘stigma’ that surrounds using older hardware has not diminished to match. Despite slowing down, technology has certainly improved, particularly as we look further back. This can make for unique challenges when it comes to maintaining older systems.
In particular, the Thinkpad T41 that I wrote about in that previous post has a failing hard disk, which I believe I also mentioned. This presents a unique challenge, as it is a laptop EIDE drive. These are available on sites like Amazon and eBay, but the choice is between rather pricey new drives (a few dollars per GB) or used drives of unknown remaining lifespan. I ended up purchasing a cheap 40GB drive off eBay. However, I discovered that was not my only option; as it turns out, products have been released that almost entirely address this issue.
I speak of CompactFlash adapters. These are adapters which connect to a laptop 44-pin EIDE interface and let you plug a CompactFlash card into the other side. The device it is plugged into basically just sees a standard HDD. This is an interesting approach because it is, in some sense, an SSD for older systems- perhaps without quite the speed benefit of a modern SSD, but still with the advantages of solid state.
Since I had already purchased a cheap 40GB drive off eBay, I decided to grab an adapter and a CompactFlash card as well, for benchmarking purposes. My expectation was that the CompactFlash card would run much faster.
The first step was deciding what to use for the comparison. CrystalDiskMark was about as good an option as any, so I went with that. First I tested the 40GB drive I received, then I tested the CompactFlash adapter. The HDD is a Toshiba MK4036GAX. The adapter is a “Syba Connectivity 2.5 Inch IDE 44-pin to Dual Compact-Flash Adapter SD-ADA5006” and the card I’m using with it is a 32GB Lexar Professional 800x.
|Test|MK4036GAX (MB/s)|CompactFlash Adapter (MB/s)|
|---|---|---|
|Random Read 4KiB|0.430|12.137|
|Random Write 4KiB|0.606|0.794|
|Random Read 4KiB|0.326|3.682|
|Random Write 4KiB|0.566|0.543|
Looking at the table, we see that, unlike with modern SSDs, the use of a CompactFlash card has some trade-offs. It is much faster for typical read operations such as random reads, but it falters on random writes. Or rather, this particular CF adapter and card had problems with that workload.
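For the curious, the kind of test CrystalDiskMark runs is simple in principle: read or write small blocks at random offsets and time it. Here is a minimal sketch of a 4KiB random-read measurement- not CrystalDiskMark itself, and far less rigorous (the OS file cache will make results optimistic unless you bypass it), but it illustrates what the numbers in the table represent:

```python
import os
import random
import time

def random_read_mbps(path, block=4096, iterations=200):
    """Rough 4KiB random-read benchmark: seek to random offsets and read.

    Note: results will be inflated by OS caching; a real benchmark
    opens the file with buffering/caching disabled.
    """
    size = os.path.getsize(path)
    with open(path, "rb") as f:
        start = time.perf_counter()
        for _ in range(iterations):
            f.seek(random.randrange(0, size - block))
            f.read(block)
        elapsed = time.perf_counter() - start
    # Total bytes transferred divided by elapsed time, in MB/s.
    return (block * iterations) / elapsed / 1_000_000

# Create a small scratch file so the example is self-contained.
with open("scratch.bin", "wb") as f:
    f.write(os.urandom(1 << 20))  # 1 MiB of random data

print(f"{random_read_mbps('scratch.bin'):.3f} MB/s")
```

The same loop with `f.write()` instead of `f.read()` gives the random-write figure, which is where the CF card fell over.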
Another interesting issue I encountered was that neither Windows nor Linux was able to establish a pagefile/swap partition on the CompactFlash card. This is a bit of a problem, though with few exceptions most programs I use on this laptop would tend not to tax the 2GB of total memory available. That said, a bigger issue that may or may not be related was that Windows XP could not seem to install programs that use Windows Installer databases- instead they would endlessly prompt for a disc, even when no disc was involved or the disc being installed from was in the drive. I wasn’t able to discover the cause of this problem after investigating it, though I had no issues installing when using the standard HDD.
For now, I’ve got the system back on its “normal” HDD, which, as I noted in the linked post, works just fine- so in that sense, my “upgrade” attempt has failed, which is unfortunate. The system runs well, for what can be expected of it. As mentioned, it is quite snappy; despite being considered “ancient” by many, it still works respectably for reading most Web content as well as writing blog posts, so the argument that it is out-of-date is hard to properly substantiate. I would certainly find it lacking, mind you, for replacing my everyday tasks, or doing things like watching YouTube videos, but despite its age I’ve found it fits well in a niche of usefulness that keeps it from being completely obsolete, at least for me.
When it comes to computers in general, I think you can make use of systems from any era. You can still use older systems largely for the same tasks they were originally designed for; the main difference is that more recent systems add additional capabilities. For example, you won’t be watching YouTube on a Pentium 133 PC- but you wouldn’t have been watching YouTube on such a system when it was top-of-the-line, either. I find there is something appealing about the simplicity of older systems, while at the same time the limitations of those older systems (where present) can make for an interesting challenge to overcome, and finding the right balance between the software and hardware can be more nuanced than “throw the latest available version on”.
Another consideration is security. For example, you might make use of an older IBM PC that boots from floppy diskettes as a central password manager, or to store other sensitive information (with backup copies, of course). This allows the old system to be used beyond just fiddling about, and to fulfill a useful function. It would still be far less convenient than, say, KeePass or LastPass or software of that nature. On the other hand, nobody is going to hack into your non-Internet-connected PC without physical access.
My most recent acquisition on this is a Tandy 102 Portable computer.
I’ve actually had a spot of fun with the Tandy 102. Writing BASIC programs on it gave me both an appreciation for the capabilities of modern languages and a more cynical perspective on some of the changes to development ecosystems. With this system you start BASIC and that’s it. You write the program then and there as line numbers, run it, save it, and so on. You don’t build a project framework, or deal with generated boilerplate, or designers or inspectors or IDE software or test cases or mocking or factory interfaces or classes or any of that. When it comes to pure programming, the simplicity can be very refreshing. I’ve found it useful on occasion for short notes. Usually I use Editpad or Notepad for this, but I have found the Tandy 102 to be more “reliable” in that I won’t lose it or accidentally close the file without saving. (Power outages won’t affect it either, though arguably those are rare enough not to matter.) The large text also makes it easy to read (with adequate light). Most interesting was plugging it into the “budget build” I blogged about previously and having the two systems communicate directly through the serial port. I was able to transfer files both to and from the system, though to say it was straightforward would be a bit of a fib.
When it comes to playing older game consoles, there are a lot of varying opinions. One of the common ones I see is that the only way to play old game consoles like the NES/SNES/Genesis/etc. ‘authentically’ is to play them on a CRT. I’ve never bought into that, personally. The general claim seems to revolve around some very particular scenarios, which I will mention- which are used to support an idea that the games were designed specifically for CRT technology. Let’s look into the facts. Then, we can do some experiments.
First, we’ll start with a comparison image that I commonly see used to support this. The image is a screenshot of a portion of a screen in FF6 (FF3 for the US) from the SNES. First, we have the image which is called the “Emulator” image:
This is held up as an example of how ‘pure’ emulated game imagery is “gross and blocky”. Stemming from that, the claim is that this is not “authentic”- that the game imagery is supposed to be blurred, and that this blur is a direct side effect of CRT technology. Then this image is typically provided:
This is often claimed to be “what the game looks like on a CRT TV” and, typically, claimed as what it was designed to look like. However, there are a few issues with the claim. The first is that this is taking a relatively small portion of the game screen and blowing it up immensely. The fact is that you aren’t going to see any of the pixel detail of the first image unless you press your face right into your monitor. Another, perhaps more glaring, issue is that the second image is taken from an emulator as well. The effect can be achieved by merely turning on bilinear interpolation in an emulator such as SNES9X. So the image doesn’t actually tell us anything- it shows us an image without an emulator feature, and an image with it enabled. It asserts the latter image is “accurate to what it looks like on a CRT”. But is it? The image itself is hardly proof of this.
Some short debates got me thinking about it. In particular, one common discussion is about Nintendo’s Wii U Virtual Console. For their NES library, I will often argue that, for whatever reason, it applies a rather gross blur filter over everything. I am told something along the lines of this being intended to “mimic the original CRT TVs, which were always blurry”. I find this difficult to believe. So the desire to properly experiment with an actual CRT TV- and the fact that my ViewHD upscaler doesn’t support the ideal S-Video for my SNES and N64 systems- led me to eBay to buy a CRT TV. They were expensive, so I said “Nope” and decided not to. As it turns out, however, the previous tenants of my house, who had sort of run off a few years ago to avoid paying several months of back-rent, had also left behind a CRT television. I had never noticed because I had never actually gone out to the shed the entire time I’ve been here. Mine now, I guess. So I brought it inside. Once the spiders decided to leave, I was initially disappointed as it refused to turn on- then an hour later it seemed to work fine, but was blurry as heck. I was able to fix that as well by adjusting the focus knob on the rear, such that it now works quite well and has quite a sharp picture.
Before we get too far, though, let’s back up a bit. There are actually quite a few “claims” to look at, here. With the appropriate equipment it should be possible to do some high-level comparisons. But first, let’s get some of the technical gubbins out of the way here.
The first stumbling block, I feel, is input method. With older game consoles, the signal accepted by televisions- and thus generated by most systems- was analog. Now, when we get right down into the guts, a CRT’s three electron guns- one for each color- are driven through independent signals. Some high-end televisions and monitors, particularly PVM displays, have inputs that allow the signal to be passed pretty much straight through in this manner. This is the best signal possible with such a setup- the signal sent from the device goes straight to the CRT electron guns. No time for screwing about.
However, other video signal formats were used for both convenience and interoperability. Older black-and-white televisions had one electron gun, and thus one signal, Luma, which was effectively luminosity. This allowed for black-and-white images. When color television was introduced, one issue was backwards compatibility- it was intended that colour signals should be receivable and viewable on black-and-white sets.
The trick was to expand the channel band slightly and add a new signal, the Chroma signal. This signal represented the colour properties of the image- a black-and-white TV only saw the Luma, while color TVs knew about the Chroma and used it. (Conveniently, a color TV not receiving a Chroma signal will still show black and white, so it worked both ways.)
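Incidentally, the Luma signal is just a weighted sum of the red, green, and blue components- the standard-definition weights come from ITU-R BT.601. A quick sketch of what a black-and-white set effectively "sees":

```python
def luma(r, g, b):
    """ITU-R BT.601 luma: the weighted RGB sum a B&W set displays.

    Green dominates because the eye is most sensitive to it.
    """
    return 0.299 * r + 0.587 * g + 0.114 * b

# A fully saturated green pixel still reads as a fairly bright grey:
print(round(luma(0, 255, 0)))  # → 150
```

The Chroma signal carries the remaining colour information; a colour set recombines the two, while a black-and-white set simply ignores Chroma.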
Moving swiftly along, TVs started to accept a coaxial RF input. This provided a large number of “channels” of bandwidth. Each channel was a signal with the Chroma information lowpass-filtered onto the Luma signal.
Composite worked similarly, but abandoned the channel carrier, effectively just sending the combined Luma & Chroma signal without any channel adjustment.
S-Video sent the Luma and Chroma signals entirely separately- with no low-pass filtering or modulation at all.
In terms of fidelity, the order of these, from worst to best, is RF, Composite, and then S-Video.
Now, this is North American-centric- Europe and the UK had a slightly different progression. Over there, a somewhat universal connector, the SCART connector, became effectively the de facto standard. SCART could carry a composite signal, separated Luma/Chroma (S-Video) signals, or an RGB signal. An RGB signal is effectively three separate signals, one for each of the red, green, and blue electron guns in the television. This is effectively the best possible signal- it goes straight to the electron guns with very minimal processing, as opposed to Chroma and Luma, which require demodulation and other processing to turn into an RGB signal for the electron guns. RGB was available in North America, but the equivalent connection method used here- Component Video- wasn’t common until fairly late, around the time that CRT displays were being replaced with flat-panel LCD and Plasma displays.
So with that out of the way, one of the factors in how good an image looks is how much information is lost. In the case of older game consoles, the choices- without modding- tend to be RF, Composite, or S-Video.
For the NES, the ideal output, without modifying the system, was Composite:
It is notable that we can still make out individual pixels here; the dithered background doesn’t “mix” together. There is blurring, particularly along the horizontal scanlines, as well as dot skew along Megaman’s sprite, but those are not inherent properties of the CRT itself- rather, they are properties of the composite signal. This is shown by running the same game via the Megaman Anniversary Collection on the Gamecube and using S-Video output:
This is a much clearer image. However, there is still some noticeable blurring around Megaman. Could this be added by the Gamecube’s emulation? I don’t know. We’ll have to do more experiments to find out.
As I mentioned, Composite is inferior to S-Video; this is because Composite is the result of applying a low-pass filter to the Chroma signal and “mixing” it with the Luma signal. The low-pass filter is meant to keep it from interfering with the Luma signal- but the effective result is merely that it interferes less. The primary problem is that with both signals combined into one, demodulation will still pick up bits of the other signal as crosstalk. Another possibility is that the signal being generated could be generated in a less-than-optimal way- in the case of the NES, for example, its PPU generates the composite signal directly, but the chroma portion is created from square waves rather than the sine waves the broadcast standard assumes.
Now, since I have no immediate plans to try modding any sort of higher video output from my NES, the best solution for comparisons would be to actually use something that can be compared directly. I decided to go with Super Mario All Stars and the SMB3 World 1 Map screen. First, we can see it with Composite:
Next, we can switch it over to S-Video:
Just compare those two- the S-Video is much better. This difference is entirely due to the separation of the Luma and Chroma into two signals; one can see a bit of “noise” in the composite version, whereas the S-Video output is very well defined. It is almost night-and-day. However, these differences are not due to the use of a CRT at all- S-Video signals can be accepted by any number of devices.
One common statement made regarding older consoles is that their art, sprites, and design are intended for a CRT, and that a CRT is therefore necessary for an “authentic” experience. This seems reasonable on its surface. However, it really is not possible to design for a CRT in a general fashion. CRT televisions accept varying signal inputs, they use widely different technologies- Aperture Grille, Shadow Mask, etc.- and have widely different convergence, moire, dot pitch, and other characteristics. While it would be possible to tailor or use the side-effects of a particular television set to achieve a specific effect, that effect would be lost on pretty much any other set- and even on the same set if adjustments are made.
However, one thing that does have well-defined aspects and side effects that can be utilized is the signal. In particular, for systems that use a composite signal (either via composite itself or through a carrier-wave RF), the artifacts can result in certain image characteristics. These characteristics, however, have no relevance to CRT technology at all, and are not innate features that present themselves on CRT television sets.
The most common example is in Sonic the Hedgehog. The game has waterfalls in the foreground- in order to allow you to see your character, and because the Genesis hardware doesn’t support translucency, the game dithers the waterfall by having it drawn with vertical stripes. When this is paired with a composite signal, it looks sort of translucent:
Well, OK, it doesn’t work great, since you can still see the lines- but the characteristics of composite lend themselves to some horizontal blending, which helps make it look translucent. At any rate, the argument is that the game is designed for a CRT and designed for composite, based on this- and that therefore, not using a CRT or not using composite isn’t “playing it properly”.
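The blending mechanism itself is easy to sketch. This is not a model of the Genesis's actual video output- just an illustration of why alternating opaque/background stripes plus composite's horizontal smearing approximates a 50% blend:

```python
def blend_scanline(pixels, radius=1):
    """Crude stand-in for composite's horizontal blur: average each
    pixel with its neighbours along the scanline."""
    out = []
    for i in range(len(pixels)):
        window = pixels[max(0, i - radius): i + radius + 1]
        out.append(sum(window) / len(window))
    return out

# Waterfall dithered as alternating vertical stripes: waterfall
# brightness 1.0 over a background of brightness 0.2.
background, waterfall = 0.2, 1.0
scanline = [waterfall if x % 2 == 0 else background for x in range(12)]
blurred = blend_scanline(scanline)
# The blur pulls the alternating 1.0/0.2 stripes toward their
# midpoint of 0.6 -- i.e. roughly a 50/50 blend of the two layers --
# though some stripe contrast survives, just as on the real thing.
```

A sharper signal (S-Video or RGB) applies much less of this smearing, which is why the stripes stay visible there.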
I challenge this claim, however. First, the effect is irrelevant to CRT, as I stated, so we can throw that one right out. Second, the fact that this has a useful side-effect with the most common video signal format doesn’t mean it was designed that way. The problem is that there realistically wasn’t any other way to implement it. Dithering is a very common method of attempting to simulate semi-transparency, and had been for some time.
Another issue is that Composite was not the only signal format available. The system also output S-Video, and, in supported regions, full RGB signals. With an S-Video connection, that same waterfall effect looks like this:
If the system was designed for Composite- why does it support signal formats with higher fidelity? There is simply no merit to the claim that games were designed to exploit composite blending. The fact of the matter is that in all instances where it has an effect, there wasn’t any other option for implementing what they wanted to do. Dithering is the most common situation and it is merely a result of writing game software on a device that doesn’t support translucency. That the typical connection signal blended dithered portions of an image together a bit more wasn’t an intended result, it was, in the words of Bob Ross, a “Happy Accident”.
Moving forward from that, however, let’s take a step back to the Wii U Virtual Console. We’ve already established that CRT displays do not have inherent blurring characteristics. Furthermore, the blurring effect of composite itself is relatively slight. The best way to compare is to simply compare the two images directly. For example, I have Kirby’s Adventure on the Wii U VC. I also have it on my Everdrive N8, allowing it to run on the NES as it would with the original cartridge. Let’s compare the two.
First, the composite image captured on a CRT, using the NES’s Composite connection:
There is a bit of a moire pattern from when I took the picture and how the phosphors are lining up, but that isn’t normally visible. There is some slight blurring, but it is mostly in the horizontal direction. Now here is the image from the Wii U VC, running on an LCD:
Here we see that they have merely blurred the output. For what purpose, I don’t know. Perhaps they are scaling the emulator output and it is using the default bilinear scaling when they intended nearest-neighbour. In the closeups here it actually looks like a reasonable approximation, but even within these images the CRT is still clearer (particularly vertically). The main problem is that the CRT output appears very clear and crisp from a distance, whereas at any distance the Wii U VC output on an LCD looks blurry. Stranger still, the Virtual Console on the Nintendo 3DS doesn’t exhibit any of these visual effects.
To conclude, I think that a lot of the attachment to CRT displays is rooted in confirmation bias, supported primarily by nostalgia. While there are benefits to the native analog capability of a CRT display- in particular, resolution switches are way faster- those benefits don’t really line up with a lot of the claimed advantages. And those that seem reasonable, such as CRTs having less input latency, have only been measured as time delays that are otherwise imperceptible. The bigger concern is less that CRT is ideal, and more that LCD panels tend to use very poor digitizers to turn the analog signal into a digital one for use by the display panel itself. These issues can be eliminated by using a breakout box, such as a Framemeister or a ViewHD, which accepts various inputs and outputs HDMI.
Previously, I wrote about something of an ‘experiment’ I was trying, which involved seeing what sort of performance and ability I would get out of a relatively low-cost computer build. The parts finally arrived the other day (nearly a month after I ordered them- nicely done, TigerDirect…) and I built the system.
The system cost around $400, by my recollection- certainly less than $500. So how well does it function?
Quite well. The first game I ran on it was Minecraft, expecting it to be quite jerky. However, I’ve found that the framerates are quite playable, and I even played it for a good hour or so on the system. I’m currently letting Steam download a few games to see how well it works with those. Given the price, though, the system performs quite admirably.
The case is a small-form-factor case, in the sense that it is a mini tower. I was flummoxed about where I was supposed to install the HDD and SSD, until I did the unthinkable and looked at the case’s small foldout manual, which showed where they go- they are screwed into a holding bracket. It was interesting putting the system together. I was also disappointed that I didn’t look into the motherboard closely enough; it states USB3 support, but that only covers the ports on the motherboard itself, so I have no internal header to connect the case’s USB3 front panel to.
The system is also incredibly quiet- near silent, in fact. Quite impressive. In terms of performance it feels snappier than my older desktop system, which cost twice as much (though that was in 2008), even though that system uses a dedicated 9800GT card.
I’m trying to decide whether to put Windows 10 onto the system or stick with Windows 8.1, as well as whether to use it as a sort of test system, effectively replacing my old desktop in that capacity.
A while ago, I noted in my post about remapping keys how I got a new laptop. Though at the time I had not used the system enough to feel it fair to provide any sort of review on the product, I’ve been using it for a month now and feel that should be enough to offer my thoughts on the product.
It is worth noting that the T550, like Lenovo’s other Thinkpad models, offers a lot of customization options. In my case, I configured it with a 2.6GHz i7 5600U processor, 16GB of RAM, a 2880×1620 multi-touch display, a fingerprint reader, a 16GB SSD cache, and a 500GB HDD. Since then I have replaced the hard disk with a 480GB Sandisk Ultra II SSD. It is somewhat notable that the system does not feature any sort of discrete graphics capability. My purpose for the machine was primarily work tasks- Visual Studio, text editors, pgAdmin, browsers, Excel, Skype, and so forth. “Gaming” is pretty much off the table- though I imagine some games would run admirably, the lack of dedicated graphics means that desktop applications are the main benefactor.
I am quite impressed with the system and how well it holds up. It has amazing battery life- over twice that of my previous laptop, which now serves as a clock on my nightstand. The high resolution of the screen makes it easy to have a lot of different applications open, and while I’ve found I needed to increase the DPI scaling to be able to read anything, the added definition is amazing to see on a laptop. It has a higher resolution than my desktop screen (which is 2560×1440) in about a quarter of the area, so the pixel density is amazing.
I’ve taken to trying to use the system as my primary development system. This allows me to segregate some of my personal stuff and my work stuff. Realistically I’ve ended up using both my desktop and my laptop for development tasks- simply because it is faster to do so. I’ve also installed some prerelease VS versions for testing purposes, which I haven’t done on my desktop mostly due to disk space considerations (a 480GB SSD is only large if you don’t install a lot of stuff on it, it turns out).
Arguably, one complaint I can think of is how difficult it is to access the system’s innards. With my older Thinkpad 755CDV, getting access to things like the hard disk was incredibly straightforward- the keyboard tray basically lifted up and you could remove and replace components toollessly. With this new T550, I had to release several captive screws, spudger apart the bottom panel, and then it took quite a bit of force to remove it and get to the insides. Not a massive dealbreaker- I don’t exactly intend to be constantly replacing components- but it was something of a surprise to see that accessibility has actually decreased with more recent models!
Of note, perhaps, is the expandability that requires said disassembly. Internally it can support up to 16GB of RAM, and has three M.2 slots. In my case, one has the wireless card and another has the 16GB cache SSD, with the third remaining empty. This leaves some room for expansion, with the option of replacing or upgrading one of the existing M.2 cards or even adding a whole new one. It should be noted that things are tightly packed, though, and larger M.2 cards may not fit.
All in all, I’ve found the Thinkpad T550 to be an excellent machine that, while lacking a bit of oomph compared to “gaming” PCs, has excellent build quality and (most important to me) a Trackpoint. The Trackpoint has actually “ruined” me, in the sense that using the AccuPoint on my old Toshiba now feels odd- the nub on the Toshiba is far smaller and has to be operated slightly differently. With this more recent system I hold my finger over top and gently push down and in the direction I want to move the cursor; with the AccuPoint this sort of works, but it lacks grip, and typically you would push it from the side, or at an angle from one side depending on the direction you want to send the cursor.
My current PC is nice and fast and responsive and speedy-quick, so I really do not see a new computer build in the foreseeable future (to do so would feel like wasting the money invested in this one!).
Despite this, I really do like messing with hardware and I have a hankering to build a new computer. I realized that this is entirely possible and not entirely unreasonable- if I create a build designed to be lower-cost but robust, rather than go for top-rung components, not only will I be able to build it, but it could also be a great gift for people who are struggling with older systems that they use lightly.
With that, I set about creating a “Budget” build, designed to be cheaper but also reliable. I came up with the following build; I’ll note the components in my current system for comparison. For obvious reasons I have no intention of using this new build as my main system. I’m uncertain what function I’ll put it to, though NAS server seems as reasonable as any, once I get more hard disk drives for the purpose.
For the motherboard, I chose a Gigabyte GA-AM1M-S2P. This is a FlexATX board which provides on-board graphics and sound, which of course means we won’t need a dedicated graphics card (which on its own can be quite pricey).
I chose Gigabyte as a brand here mostly because I have found their motherboards to be quite trustworthy. I’ve gotten a lot of mileage out of my EP43-UD3L motherboard, which I used for the build I made in 2008 and which is still going strong as my backup desktop system. Furthermore, the GA-AM1M-S2P appears to have a number of newer features, such as USB3, which are nice to have on a new system and add value to a build designed down to a price.
For comparison, the motherboard in this system is a GA-Z87X-UD3H, which is an LGA1150 board for Intel processors. It has comparable- or, IMO, superior- features, as you would expect from the higher price point.
Sorry, I mean “APU”- it’s odd that AMD wants to call their processors by a different name when they are still really CPUs. For the CPU I was originally going to get a cheap Sempron, but I ended up going a bit better and getting a 2.05GHz Athlon 5350 quad core. The CPU/APU can definitely slurp up a lot of the total budget for a system, so the aim here was to keep it affordable. This is part of the reason I chose to go with AMD- that, and I’ve not had an AMD processor since my K6-2, so I’m not familiar with their current architectures and figured I may as well.
For comparison, my current CPU in this system is an Intel i7 4770K running at 3.5GHz. Aside from the obviously faster clock speed, I’m not entirely clear how much better it is than the Athlon- though, as I understand it, the Athlon uses an older architecture as well, so I think the i7 is quite a bit better. Of course, how my pricey components from yesteryear compare to components from today that cost a small fraction of the price is part of the purpose of this experiment.
As a budget build, 4GB was what I was aiming for. The motherboard supports 32GB, but- again, budget build. For this “experiment” I decided to go for two 2GB Corsair XMS3 sticks. I have four 8GB XMS3 sticks in this system and several XMS2 sticks in my older machines, and they have proven reliable. Nowadays 4GB is sort of the “bare minimum” for a usable system, and a budget build means saving money wherever possible.
For comparison, I actually have the same type of RAM in this system, but in larger module sizes. Rather than 2 2GB XMS3 Sticks, this system has 4 8GB XMS3 sticks.
A computer case- or as they used to be called, “cabinet”- really has two purposes. The main purpose is of course to hold all your computer bits and protect them from the outside world. The second purpose is to not look awful. A lot of cases fail in the second aspect. The Fractal Design Core 1100 doesn’t try to be a hero; it’s a very standard case.
My current case is a Thermaltake G42 Commander, if my memory serves. It is actually quite an annoying case, because the drive cages leave so little clearance for the SATA connectors- there are still two SATA cables that I simply cannot unplug that I’ve left inside the case. If I were forced to review it, I cannot think of many positive points compared to even my older Cooler Master Centurion.
Corsair CX Series CP-9020058-NA 430W Modular Power Supply – 80+ Bronze, ATX, Modular Cabling, Active PFC, Single +12V Rail, Low Noise, Trouble-free Installation
For the power supply I went with a Corsair 430CX Power supply. I figure this system will not need a lot of power as I don’t intend to install a graphics card, and the modular supply will allow me to actually try to do something approximating not-crap cable management. I chose a Corsair unit mostly for the same reason I chose Corsair RAM; I’ve had good experiences so far with Corsair Power Supplies, so decided to continue that success.
What is the purpose of this build, one might ask? I am honestly unsure. Currently I expect that I will set it up as a Linux system, or possibly as a dual-boot, but the main purpose is the building itself, less so what I will do with the finished product. Furthermore, I find I can never have enough backup systems; I’ll have my current desktop, my relatively recent T550 laptop, my older Satellite L300 laptop, my older desktop system (quad-core Q8200, etc.), and this new build. I’ve toyed with the idea of a sort of NAS system, though I ought to have thought that through more and gotten a motherboard with more SATA ports (nothing a SATA card couldn’t solve, though). Furthermore, since I aimed primarily for a low budget (rather than performance or capability, as I did with my builds so far), it is a good experiment to see just how much value you get from a system that is nearly 20 times cheaper than what a standard IBM PC cost when it was first made available (accounting for inflation, of course).
I have a habit of occasionally making exorbitant purchases that I cannot, under normal circumstances, justify. I have been considering a Sound Card as just such an exorbitant purchase for some time. Each time, I managed to reason myself out of it. Well, until a few weeks ago, when I took the plunge. The choice was between the Sound Blaster ZXR and a card in the ASUS Xonar Range. Xonar seems to be the go-to Sound Card, reading reviews online, so, naturally, I went with the ZXR. I chose the ZXR over the ZX or the Z card in the series because I had more money than reasoning skills. If I had reasoning skills I likely wouldn’t have purchased it in the first place, but I decided to make an investment for this Blog… Yeah, that is what happened.
My experience with sound cards manages to somehow cover almost all the technologies: starting with an ISA 16-bit Sound Blaster card, then an ISA SB AWE32, moving on to a Sound Blaster 16 PCI (which, as I learned recently, is really a rebranded Ensoniq); then I bought an Audigy SE, because I thought it was an Audigy, when in reality it featured no Audigy processor and was just a host-based processing card. Eventually I upgraded to an X-Fi XtremeGamer (before they renamed it- it has the X-Fi Processor chip and actually performed functions in hardware), and stuck with that until recently, when it caused a BSOD (somehow it managed to bork the system despite the WASAPI rearchitecture, oh well). I’ve been using the Motherboard Audio for this system since, and it has functioned fine. I had literally no reason to buy a ZXR except to have a new thing, and of course so I could take pictures and share it on this blog for, again, no reason whatsoever.
The Packaging was about what you would expect. The transparent windows on the front of the box let you see the card and “Audio Control Module”. The standard smorgasbord of marketing guff covers the box like goto’s in a BASIC program. Inside I found the Sound Card, a daughterboard, a few cables and connectors, the Audio Control Module, a Software disc, and a small foldout manual.
Now I had to install the bloody thing. My approach when it comes to building my PC is effectively just plugging everything in, which lends itself to rather messy cabling. And even if I try to be neat, I usually end up undoing it at some other point. Generally speaking I don’t exactly sit and admire my build job, so it’s not a big issue. I am always paranoid that somehow, in the process of installing something, I’ll destroy the machine- a rather unrealistic fear.
Installation of the card, like that of any other PCI Express card, was rather straightforward- find a PCI Express slot to use, find a slot for the Daughterboard (in this case), and install the card. I ended up installing the DBPro daughterboard right beside it, though I could also have used an unconnected slot cover that was mounted horizontally if I wanted to.
With the card installed, my Computer now had a piece of its rump suddenly festooned with brass and gold-coloured plating. This Sound card in particular uses the big fat TRS audio jacks, but the package provides an adapter, so I was able to use my current speakers without issue. In the image showing my computer’s rear, we see I have only two slots left now. Also, if I were to purchase a second GTX 770 I would have to do some rearranging to get it installed, since it would require the x16 slot currently housing the sound card.
As for the card itself, I’ve found it to be an upgrade. I did a direct comparison between my motherboard Audio and the Creative card with my JBL headphones, and I feel like it is a bit better. Was it worth it? Not really, at least not so far. However, it has got me thinking about Sound/Music and C# again; in particular, I’ve mused a bit about whether I should try my hand at using WASAPI directly, which seems doable. I’m quite annoyed with depending on third-party libraries- commercial, non-free and fairly expensive libraries to boot- but have always found the algorithms and data formats of compressed audio rather intimidating. Being able to play WAV streams is one thing, but writing a stream-based decompressor is another. On the bright side, since I don’t plan to go commercial with anything that I’ve used BASS.NET for, I probably won’t have any issues anyway. But I ramble.
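On that note, WAV really is the approachable end of the spectrum: it is just a RIFF header in front of raw PCM frames, and even a stock standard library can handle it. Here is a minimal sketch- in Python rather than C#, purely to keep the example self-contained- that writes a tiny WAV in memory and reads its format fields back:

```python
import io
import wave

# Build a tiny 16-bit mono WAV entirely in memory:
# 441 frames of silence at 44.1 kHz (0.01 seconds).
buf = io.BytesIO()
with wave.open(buf, "wb") as w:
    w.setnchannels(1)        # mono
    w.setsampwidth(2)        # 2 bytes per sample = 16-bit
    w.setframerate(44100)
    w.writeframes(b"\x00\x00" * 441)

# Reading it back: the header alone tells you everything needed
# to hand the raw PCM frames off to an audio API.
buf.seek(0)
with wave.open(buf, "rb") as w:
    print(w.getnchannels(), w.getsampwidth() * 8, w.getframerate(), w.getnframes())
    # prints: 1 16 44100 441
```

A stream-based MP3 or AAC decoder is an entirely different order of problem, which is why libraries like BASS.NET exist in the first place.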
In the early days of computers, “Sound reproduction” on a computer was typically limited to a few beeps and boops. A few early PCs had digital audio capabilities, but these were quite limited. The Macintosh was possibly one of the first computers with good market and mind share to have rather advanced sound capabilities. The lowly IBM PC’s “sound capabilities” lagged behind with its single basic piezo-electric speaker, designed entirely for beep-booping error messages at you like some kind of demented blues singer. The “Sound Card” can trace its history to devices like the Creative Music System, the AdLib, and, later, the Creative Game Blaster cards built for the PC. These utilized the Expansion bus to add new capabilities to the system in the form of less beeping and booping and more recognizable music and sound effects.
For quite a number of years, Sound Cards were considered “high-end” gaming equipment. Most game titles supported the PC Speaker because it could be assumed present, but many games also supported the sound cards of the day, providing better-fidelity sound and even music if a compatible sound card was present.
There is an interesting history among the various sound companies; Creative bought Ensoniq, which put Creative in the position to provide their products pre-installed on PCs. In terms of Sound capabilities on the PC, the most interesting change came in the mid-to-late 90’s, when Sound card circuitry started to be integrated onto the system motherboard. Discrete sound cards were still better in terms of capabilities, but the built-in audio included on most systems- even up to today- provides pretty much any sort of Sound capability a typical user may want.
In the late 1990’s and early-to-mid 2000’s, however, Sound Cards did provide features beyond what you could find on-board. Fundamentally, such sound cards had one of a few distinct markets/purposes:
Games benefited from features such as 3-D Positional audio, hardware streams and mixing, and on-board Sound RAM, used to store audio samples for playback either directly or as part of a Wavetable synthesizer for music.
Professional sound creation and mixing is a different beast entirely. These Cards focused on high-quality components providing a high Signal to Noise Ratio at a very high effective sample rate, typically with strong hardware support to speed up sound processing and reproduction. These sound cards also have connectivity that allows the use of Professional Audio devices, or include high-grade headphone drivers that support high-impedance headphones.
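To give a rough sense of what those high-quality components buy you: the theoretical signal-to-noise ratio of ideal N-bit PCM quantization is approximately 6.02·N + 1.76 dB (measured against a full-scale sine wave), which is why the jump from consumer 16-bit converters to the 24-bit converters on professional cards gets advertised so heavily. A quick back-of-the-envelope sketch:

```python
def pcm_snr_db(bits: int) -> float:
    """Theoretical SNR of ideal N-bit PCM quantization,
    relative to a full-scale sine wave."""
    return 6.02 * bits + 1.76

# Consumer 16-bit converters vs. the 24-bit converters
# typically found on professional cards:
for bits in (16, 24):
    print(f"{bits}-bit: ~{pcm_snr_db(bits):.1f} dB")
# prints:
# 16-bit: ~98.1 dB
# 24-bit: ~146.2 dB
```

Real hardware never reaches the theoretical figure- analog noise dominates well before then- but it makes the spec-sheet numbers easier to read.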
Though motherboard audio is fairly sophisticated today, back in the day many motherboards either had fairly basic Sound Cards integrated into the motherboard or lacked one altogether. Some “value” sound cards filled this gap by providing many of the features of Professional Audio cards and, more often, Gaming cards, primarily via Software drivers that emulate features typically provided via Hardware on those cards, using the provided card as effectively just a place for the audio data to go.
Windows Vista threw the world of Hardware-based Audio processing a bit of a curveball; Windows Vista introduced a User-mode sound mixer built into the OS known as “WASAPI”, the Windows Audio Session API. The claim is that this redesign took place because a large portion of STOP Errors on XP and earlier were traceable to Sound Card Drivers, which were, like other drivers, running in Kernel Mode. This redesign effectively created a new Audio Stack. Unfortunately, this relegated Sound cards and audio devices to merely “endpoints”; all processing was effectively done by the built-in Audio stack, with the Sound card driver basically allowing WASAPI to send the results to it. What this means is that many features such as EAX can no longer be implemented via hardware support on Vista or later. However, these capabilities are available via the use of emulation software.
Thus the hardware advantages of Sound cards dwindled for a large portion of users; even the “Gaming” Sound cards on the market today do very little to actually improve the sound for games or provide game-related features via the hardware.
Professional Audio, however, still has some hardware capabilities. This is because drivers, like any hardware drivers, can do what they want; the reason WASAPI throws a monkey wrench in the works is that it effectively prevents many features from being processed by the hardware- Windows-based sound capabilities need to be provided a certain way, and without kernel-mode involvement anywhere in the stack, hardware cannot be invoked. Professional Audio systems, however, typically have their own particular APIs and interfaces, and these have stuck around, so hardware capabilities can be exploited fully via interfaces like ASIO.
The Software-emulation market is gone as well- all Motherboards in production today contain Audio capabilities.
I found this in my draft backlog, dating from November 2013, as can be seen in the permalink. I’ve updated some of the details in the meantime.
When it comes to interfacing with a computer, we have in general three interface devices- the keyboard, the mouse, and the monitor. Given their importance in providing information to or from the computer, these components could be considered some of the more important parts of a complete system; a cheap, awful keyboard, mouse, or monitor can significantly reduce the quality of your experience using an otherwise excellent PC system.
Some time ago, I purchased a Unicomp Ultra Classic Buckling Spring 101-key keyboard. It had been a while since I owned a mechanical keyboard, let alone used one regularly, and my experience told me it was worth the $79 price tag.
It is very solid and well-built mechanically. The Alt Key keycap had fallen off in transit but I was able to locate it. I connected it up and started to use it… Then it disappeared from the system. Then it reappeared. Then it disappeared again. WTF?
I traced the cause of the issue to the cord going into the keyboard itself. After a while the keyboard became entirely unresponsive- the only way to get it to even be recognized required me to jam the cord into the keyboard’s case with all my strength and hold it upside-down, which is hardly the ideal way to then type on it. At least I knew it wasn’t completely fried, though. Annoying to me is the use of headless nut screws, for which I have no bit to open them with. Since it seems it might just be a loose connector inside, it would be nice to fix it at the source. That said, I don’t think I’m being picky when I say that having paid around $80 for a keyboard carries the implicit idea that the keyboard will actually work. (Ironically, while typing that sentence the keyboard disconnected AGAIN.) I’m probably going to end up trying to get the keyboard apart myself, assuming for the moment that a loose connector is in fact the problem.
Ignoring the rather glaring issue that I occasionally cannot even use it, it is actually a quite nice keyboard. I’m disappointed I have to employ some fix-it magic on my own, but given the keyboards are manufactured to order, I can understand that sometimes they will have issues, and at least this one is something user-serviceable (hopefully just a loose cord). Shame that I ideally need to order and wait for a specific bit to get access to the case- or try my luck with a pair of needle-nose pliers.
I’ve managed so far by trying the “jam the cord and use the cord run-through indent to hold it in place” again, and it’s been fine for a few hours. Still want to fix it more permanently by ripping it open and fixing it properly, and have to admit I am a bit disappointed this is a consideration, given my old keyboard is over seven years old and still holding up just fine. Still highly recommend Unicomp keyboards, especially if you type a lot.
I eventually had to scrap that particular keyboard, and went ahead and purchased the same keyboard again- I got the black version this time. I was also able to get a replacement cable for the old one, but it seems the issue was with some other component, as even after a full disassembly of the keyboard I wasn’t able to get it to work.
In fact, that leads into one of the biggest disadvantages of the Unicomp, which is its one-way assembly. Aside from the case being held together with headless nuts, the innards are held together with melted plastic, which means that disassembly is not very reversible. Many other keyboards and keyboard brands use a more serviceable construction, so that is worth considering. However, those other mechanical keyboards are often more expensive, and further, only Unicomp keyboards have the buckling spring; other “mechanical” keyboards emulate the buckling spring with plastic latches.
As I mentioned, I considered a few alternatives first. I considered the Code keyboard as well as the offerings by wasdkeyboards.com. They are heavily customizable, but I couldn’t figure out how to customize the model I wanted properly; apparently it used Version 2 of their designer- and they proudly said they had a Version 2, but didn’t say where it was. I tried a few things in the URL to see if I could find it manually, but then realized I shouldn’t have to fight to give them money, and explored alternatives. Perhaps I will try to get a keyboard from them in the future, though I surmise this one will last a while. The “CODE” keyboard- even if we ignore for the moment that they were sold out- didn’t really appeal to me. I liked the backlighting, but overall I felt more like I would be paying for the Atwood endorsement than anything. Then I remembered that IBM’s original Model M was highly regarded, and that its keyboard manufacturing eventually ended up with Unicomp. A quick Google search took me to pckeyboard.com (Unicomp’s sales front), where I found the keyboards offered and made my purchase.
One large consideration when it comes to Unicomp is that they are really only convenient for those within North America; for those outside of the United States, the exchange rate as well as the cost of shipping can bring up the cost of the keyboard such that keyboards offered by other companies can actually end up being cheaper. I have no intention of replacing this keyboard, but I may build and set up additional computers which themselves could use a good keyboard, which would allow me to perform comparisons between this Unicomp and any new keyboard. I probably won’t buy another Unicomp- their support was excellent and their product OK, but I’ve been slightly soured by the first keyboard failing (arguably something I could have pursued warranty replacement for), and furthermore the U.S.-centric nature of their business cost me an excessive amount in shipping (which is also why I didn’t go the warranty route, as it would likely cost me as much to have it warranty-replaced as to just buy a new keyboard).
It has been about a month since I built a new PC. Maybe 2. The days sort of blend together. As a result I’ve been able to use the new system and can perhaps offer my thoughts on my component selection. This is hardly a ‘budget’ build- it is, for all intents and purposes, ridiculously extravagant. Total cost comes to around $2600. For contrast, however, here is the system it replaced:
This system served me well. When I originally built it, I bought a 750GB Seagate hard disk drive, then later I purchased an additional 1.5TB Hard Drive for it as well. This build took place in 2009. Components were from 2006 or so (a step behind the latest) for money-saving reasons (I was replacing a 1.6Ghz Single-core P4, so it would be easy to improve anyway). Since I used this system for a long time, I’ll offer my review of each component above:
When purchasing a motherboard, my main concern at the time was gigabit ethernet. In the price range this was one of the few that offered it. Also, since I intended to purchase a separate Graphics card, I aimed for a motherboard without on-board graphics, since that would be cheaper than one that included it. It was a motherboard and served as a motherboard- what can I say? The one thing I have noticed is that it is very finicky about the situations in which it will boot. For example, I once diagnosed a problem for nearly 30 minutes where it simply refused to boot at all- the system would start, then shut off immediately, suspiciously like an overheating CPU. But it turned out- after all the effort of repasting the CPU, remounting the fan, etc.- that it was because I had one of the case fans disconnected. As I recall this was a BIOS option I simply never changed, though it seems a bit silly to prevent booting in that case, but oh well. The non-existence of SLI capabilities was a non-issue since that was never something I intended to pursue at the time.
There isn’t a lot one can say about Memory. The long-term plan was to eventually upgrade to 16GB, which would max out that board. Unfortunately, what I didn’t fully consider was that I would have to not only discard that RAM but buy 4 new 4GB DDR2 sticks, which as I write this runs something like 400 dollars. Far from ideal and hardly a worthy upgrade, IMO. The 8GB was, however, 8 times more than the 1GB in the P4 system I was using when I built the machine, so it was a lot of RAM at the time.
Aside from reliability, there wasn’t much I was looking for in a Power Supply. The Model I chose was chosen over a Modular Power Supply to save money. It did the job so I can’t complain- that’s all you can really expect from a Power Supply.
For the Processor, I was aiming at a CPU that was, like the rest of the system, just behind the leading edge. This typically gives a good price/value ratio. One disadvantage I found with this particular CPU was that it lacked support for certain virtualization Extensions. In the long run this really only prevented me from installing and running OS/2 within most Virtual Machines.
This proved to be an… interesting… choice in the longer term. For one thing- BFGTech went out of business. For another, the card runs incredibly hot. I’m not sure if this is because the card typically runs really hot or due to changes BFGTech may have made (eg Overclocking), but idling at around 78 Degrees seems a bit on the high side. At first the system was causing bugchecks when the Video Card overheated (beyond 100 degrees). I repasted the Graphics Adapter with fresh thermal paste, which helped bring the temperatures down around 10 degrees, and that seems to have worked until I replaced the system.
For the system case, what I was going for was basically something simple and ‘plain’. This case worked great for that. It looks plain but also has a nice aesthetic. The only downsides I can think of are that its rails are easy to lose, and that the fan mounted on the side panel needs to be disconnected and reconnected when working within the case.
At the time I had a crapton of Lightscribe discs, so it made sense to try to use them. On the other hand, I’ve ended up seldom using lightscribe capabilities. I find the printed output looks OK, but it fades easily and quickly, becoming more difficult to read within a short time. The labels do look better than a felt marker, though, and they let me burn MSDN ISO files and give them useful labels that approximate, or at least provide the visage of, the originals.
The on-board sound was alright, but I had this card in my previous system so I brought it forward. This is an older card (in computer terms), and I believe Creative has reshuffled their naming scheme since, to avoid people figuring out which models don’t actually have the DSP chip in them. This one does. You really do need to be fairly careful with some manufacturers- their cheaper offerings might have the same brand or label, but sometimes those budget models are not what they appear. The Audigy SE, for example, doesn’t have an Audigy Sound processor; there is an X-Fi card that has the same problem, but I have no idea what it’s called now- because Creative shuffled their names again.
This system served me quite well- I clocked countless hours using it with great success. However I started bumping into limitations and problems so decided that investing in a new system would probably be a good idea. Besides, it would give me new fun stuff to fiddle with- what’s not to like about that?
Additionally, unlike that previous build, I could afford a heavier investment- particularly since I will be using it for my work- whereas previously it was something I used entirely in my spare time. The new system components:
Since Gigabyte had proven itself to me with the earlier build, I decided they would provide the motherboard in my new system as well. I couldn’t find a board with no on-board Video capabilities, probably because those are really just motherboard features supporting the CPU’s on-board Graphics capabilities. That works fine- on-board video can be useful for diagnostic purposes if I encounter problems with the Graphics card. I’ve since also added an extra USB 3 front panel which adds two USB3 plugs and some card readers- now both my USB3 Front Panel Headers are used, which I like- I somehow use up a lot of USB ports with Keyboard/mouse/controller/Hub/Printer/Network Adapter/headset/External hard drives. More is always useful, especially on the front panel. This board also happens to support SLI, though I’m not sure if I’ll ever use it. I barely even use the Graphics card I already have, speaking of which…
I think it’s fully possible I went rather overboard with this- I hardly ever play any games, let alone any that would use the full capabilities of this graphics card. I did test it out with some newer games when I first set the system up, but the novelty faded pretty quickly and I went back to the games I usually play, which do not require that sort of power. It’s a strange case where I can’t seem to play games; I just end up playing a few minutes, then come up with some idea for something to try to add to a program, and then 5 hours later it’s 4 AM. Of course, if you believed the hours I actually file, I work the bare minimum, so it may be the case that this approach will not end very well either way. I hope Paula doesn’t read this.
Again- Corsair’s memory didn’t give me any problems so I saw no reason to try another brand. In this case I went with 2x8GB sticks, giving me a total of 16GB with only 2 sticks; the motherboard maxes out at 32GB so I have room for expansion in case it turns out that 640K is not, in fact, enough for everybody. 16GB seems like a lot of memory but I’ve already had Out of Memory errors. I guess with more memory I just expand the set of programs I launch all the time and don’t bother to close.
Remember when I said this system would be overkill? Well, here’s a testament to that. In particular, I have no specific desire to Overclock, so I’m not sure why I got a 4770K. I guess the same reason people buy Corvettes. 4 cores with hyperthreading at 3.5Ghz works ridiculously well. Further still, it’s a newer generation than the CPU I was using, so I get that “boost” as well, plus new instruction sets, including the fabled extension required for virtualizing OSes like OS/2. At which point I promptly installed OS/2 and never touched it since. Worth it.
“With great power comes great responsibility”.
While I have to say advice on purchasing power supplies was definitely unexpected in a Spiderman film, particularly completely out of context, I can’t help but agree. And in this way the RM650 helps you cable responsibly by using a modular design. Of course, that only helps people who use it; considering I just sorta plugged stuff in and hoped it worked, rather than fussing about with routing cables behind the motherboard or something, I’m not sure I got the full benefit. But at least Uncle Ben can die knowing that I sort of followed his advice on Power supply purchasing decisions. Though for somebody who makes his living devising various ways of cooking and presenting rice, I’m not sure if he is a reliable source for this sort of information.
Optical Drives are a commodity now. I needed a SATA Drive because my Lightscribe drive was IDE and my new motherboard didn’t provide an IDE Host Adapter. Thankfully they are cheap. I was considering a Blu-Ray drive, but then considered that might not be the wisest option, particularly considering that I had no intention of ever using a Blu-Ray disc. I’m sure one day I will eat my words, wondering what logic caused me to make the foolish decision not to purchase a Blu-Ray Drive, and perhaps if I had purchased a Blu-Ray drive the computers would not have risen, taken over humanity and enslaved us all. Which I suppose would itself raise additional questions, such as what purpose computers would have with human slaves. Perhaps they would act as sentinels for pressing any key.
For some reason this time I decided on a case with a Window, even though it is never in a position to really be viewed at all, and is in many ways pointless. On the other hand, it’s very easy to see the LED Boot-up lights, so that awards some points. Unfortunately, despite my great expectations, it did not in fact make me attractive to the opposite sex. I think maybe they tag me as a real playa when I start talking about my windowed PC case. I can always see them roll their eyes, no doubt thinking “ugh, the old ‘My PC Case has a Window’ Pick-up line”. My main concern is that if I end up getting married now, I will never know if she liked me for who I am or if she was just after my windowed PC case. The uncertainty would eat away at me until finally she reveals her agenda, taking out my heart and giving it to a pigeon with a bad sense of direction, who then gets lost in a nearby bank while trying to find the bathroom. A common problem with pigeons.
But back to the PC Case itself. It’s not as roomy as I would like- in fact the SATA Ports have so little clearance between them and the Case drive mounting chassis that it practically requires toddler hands. Unfortunately, none of the hands in my toddler hand collection seemed to do the trick. It uses a front-mesh design which coincidentally (and quite by accident) ended up also being the design of the enclosure I bought, though I purchased that after, so maybe I did it subconsciously.
I actually purchased these two drives and used them in the previous system, but I had no intention of keeping them there, explicitly making them something of a “first pass” at the sort of things the finished build might end up being used for. I had to get an SSD if only for the claims to speed (which have proven quite true), as well as a ridiculously large 4TB drive.
In summary, the system is excellent and runs everything I’ve thrown at it so far at maximum settings, which is more than I could have asked for.