Ever since the release, and subsequent adoption, of 64-bit operating systems by the general public, there has been quite a bit of confusion over exactly what you can and cannot run on them, as well as what happens when you do certain things. I touched on the topic of x64 some time ago with my post "Upgrading to x64: is it really as problem-filled as many claim?", which was more or less directed at some of the FUD being spread that x64 is limited in some fashion.
First, it's important to understand exactly what the "bits" being discussed actually are. This is something many people ignore, relying instead on over-simplified analogies that help perpetuate the rumours and disinformation on the subject. In order to fully understand what the distinction entails, one needs to examine the history of processors, or, more precisely, the history of x86 processors.
When people think about processor speed, the term that comes up quickly is clock frequency. Today's processors often break 3 GHz, which roughly equates to 3 billion operations per second. Many people end the analysis right there. However, it's important to understand exactly what constitutes an "operation". One of the most common operations is moving or copying data, which is done using the "MOV" assembly instruction. The amount of data moved per instruction depends entirely on the width of the processor's data bus. The first "x86"-compatible processor (or at least the first we can reasonably call x86-compatible) was Intel's 8086. It had a 16-bit bus width and could move 16 bits of data every clock cycle. It was released in 8 MHz and 10 MHz variants, so it could move 16 bits of data from one 16-bit register to another either 8 or 10 million times a second. (Of course, because memory is so much slower than register accesses, this would never be the actual throughput.) The point is that if it had been an 8-bit processor, its effectiveness would have been nearly halved: for every clock cycle it could transfer only 8 bits of data rather than 16. The 8088, which IBM chose as the processor for what would become the first cobblestone in a very long walkway of personal computers using a similar architecture, the IBM PC, was essentially an 8086 with an 8-bit external data bus. It wasn't chosen because it outperformed the 8086 in any way; the deciding factor was economics, namely the cost of using the 8086 and its 16-bit support chips, despite its greater speed and bus width.
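To make the idea concrete, here's a rough C sketch. The byte_copy and word_copy helpers are hypothetical and this isn't literally how MOV works, but copying a buffer one machine word at a time needs far fewer steps than copying it one byte at a time, for the same reason a wider data bus moves more data per clock cycle.

```c
#include <stddef.h>
#include <stdint.h>

/* Hypothetical illustration: copy n bytes one byte at a time. */
void byte_copy(uint8_t *dst, const uint8_t *src, size_t n)
{
    for (size_t i = 0; i < n; i++)
        dst[i] = src[i];            /* one 8-bit transfer per step */
}

/* Hypothetical illustration: copy nwords machine words at a time.
 * uintptr_t is pointer-sized, which on these targets is typically the
 * register width: 2 bytes on a 16-bit build, 4 on 32-bit, 8 on 64-bit,
 * so each step moves 2x, 4x or 8x as much data as byte_copy does. */
void word_copy(uintptr_t *dst, const uintptr_t *src, size_t nwords)
{
    for (size_t i = 0; i < nwords; i++)
        dst[i] = src[i];            /* one register-width transfer per step */
}
```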
Once the personal computer took off, Intel quickly established its position as the dominant "role model" for all other manufacturers for quite some time. Following the 8086 and 8088, Intel released the 80186 (which to my understanding never met us common folk, and was instead reserved for industrial and embedded machinery), the 80286, which was used in IBM's "AT" (Advanced Technology) computer, the 80386, the 80486, and the Pentium. The Pentium was a hallmark because it apparently took Intel this long to realize that you couldn't trademark a number (AMD, Cyrix, VIA, and a few other "clone" CPU manufacturers were using the same labelling, such as AMD 80486, and Intel wasn't pleased), so they came up with the name "Pentium", which until recently appeared on most of their consumer processors (the "i" series being, I believe, the first line not to bear the Pentium name, aside from a few exceptions).
The 80386 had a 32-bit bus width, meaning it could potentially transfer 32 bits of data per cycle. It came in speeds of 20, 25, 33, and 40 MHz.
What makes this rather interesting is that although the 80386 was a fully 32-bit processor, as were all its successors, no true consumer-oriented 32-bit operating system was released until Windows 95 (I'm not counting OS/2 because, although it was great, it never achieved much market penetration).
With the introduction of Windows 95, many of the API functions that programmers had been using to interface with Windows needed to change. Handles, or, to be more precise, pointers, were now 32 bits rather than 16, both to match the architecture itself and to support the changes in the OS built around it. This changed the size and declarations of a large number of functions, which meant that any application that wanted to use the new 32-bit API had to be compiled specifically for it.
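You can see the same effect today: pointer (and handle) size is decided entirely by what the compiler targets. A minimal sketch in C, nothing Windows-specific assumed:

```c
#include <stdio.h>

/* The same source prints a different answer depending on the target it is
 * compiled for: 2 bytes on a 16-bit build, 4 on a 32-bit build, 8 on a
 * 64-bit build. This is why code written against pointer-sized handles had
 * to be recompiled when Windows moved from 16-bit to 32-bit (and again for
 * 64-bit). */
int main(void)
{
    printf("pointer size in this build: %zu bytes\n", sizeof(void *));
    return 0;
}
```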
Thankfully, Microsoft realized that forcing users into 32-bit without letting them wade in gradually would be a marketing mistake. So they also built in a small compatibility layer called "Windows on Windows" (WOW), which allowed any 16-bit application to interface with 16-bit DLLs in a mocked-up 16-bit environment. DOS programs could still run as well; one of the many goals of Windows 95 was to run the vast number of popular DOS games of the time.
At the time, the change wasn't hugely publicized. There was no real choice: if you ran Windows 3.1, you were 16-bit; if you ran Windows 95, 32-bit. There weren't 32-bit editions or 16-bit editions. In fact, come to think of it, there weren't editions at all.
So now we have Windows Vista and Windows 7, both released in 32-bit and 64-bit versions, as well as Linux distributions with the same selection, and people are in an uproar handing out completely baseless advice, such as "only use a 64-bit OS if you have 4GB or more of RAM", and other ludicrous claims.
Equally, there are claims that 64-bit "requires" more memory. It doesn't, in any way that matters. The fact that handles are 64 bits instead of 32 is practically irrelevant. Will it consume more RAM? Hell yeah, a whole 4 extra bytes per handle, which works out to roughly a megabyte even if you somehow have several hundred thousand handles open. Since you'd run into other limits long before you had that many handles open, it's premature to blame the size of the handle.
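Back-of-the-envelope, as a sketch (the 250,000 figure is just an illustrative stand-in for "several hundred thousand"):

```c
#include <stdio.h>

/* Back-of-the-envelope for the "64-bit handles waste memory" claim: a handle
 * that grows from 32 bits to 64 bits costs 4 extra bytes, so even an absurd
 * number of open handles only adds on the order of a megabyte. */
int main(void)
{
    unsigned long handles = 250000UL;               /* "several hundred thousand" */
    unsigned long extra_bytes = handles * (8UL - 4UL);
    printf("extra memory for %lu handles: %lu KB\n",
           handles, extra_bytes / 1024UL);
    return 0;
}
```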
A 64-bit processor can move 64 bits of data in a single clock cycle. Since most 64-bit processors run at 2 GHz or higher, and are often dual- or quad-core on top of that, there is a lot more potential throughput. It won't double the speed of everything, but if you run your 64-bit machine with a 64-bit OS and mostly 64-bit applications, you can see the difference, just as a 32-bit operating system is the natural fit for a 32-bit machine.
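One concrete place this shows up is 64-bit arithmetic. The function below is just a sketch: compiled for a 64-bit target it is a single-register addition, while a 32-bit build has to split each value across two registers and emit extra instructions.

```c
#include <stdint.h>

/* On a 64-bit target this compiles to one add on 64-bit registers; on a
 * 32-bit target the compiler must use two registers per operand and carry
 * between the halves, so the same source costs more instructions. */
uint64_t add64(uint64_t a, uint64_t b)
{
    return a + b;
}
```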
There is an additional case where running 32-bit programs is practically suicide: the Intel Itanium processor. This was one of Intel's first 64-bit processors. Unlike almost all of Intel's other efforts, the Itanium was not x86-compatible: it could run x86 code, but only very slowly compared to equivalent native 64-bit code, because they are entirely different instruction sets.
AMD was the "saviour" of sorts that got 64-bit back on track, by designing the AMD64 (x86-64) extensions, which make it possible for a 64-bit processor to run 32-bit x86 code natively. Of course, just because it's possible to run 32-bit code doesn't make it a good idea.
The main problem with installing a 32-bit operating system on one of today's 64-bit processors is that it locks you in. No matter what you do within that 32-bit operating system, as far as the OS is concerned you are on a 32-bit processor. If you try to run a 64-bit application, you just get an "invalid executable image" style of error. This is why I don't understand why people are sticking by their otherwise obsolete 32-bit operating systems. Everything people say is "bad" about 64-bit, from the claim that it "requires" more memory (based on Microsoft's recommendations, which are never accurate anyway) to all sorts of clearly fabricated horror stories about what happened last summer when they tried a 64-bit OS, is utter nonsense. I'm running a 64-bit operating system on two of my machines, and I hardly even notice it. The only difference is that I can run 64-bit programs and have access to 64-bit features.
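If you're unsure which situation a 32-bit program is actually in, it can ask Windows directly. Here's a minimal sketch using the Win32 IsWow64Process call (available on any reasonably recent Windows): on a genuine 32-bit OS the answer is always "no", which is exactly the lock-in described above.

```c
#include <windows.h>
#include <stdio.h>

/* A 32-bit process can ask whether it is running under the WOW64 layer of a
 * 64-bit Windows. On a true 32-bit OS this always reports FALSE; a native
 * 64-bit build of the program would also report FALSE because it isn't
 * running under WOW64 at all. */
int main(void)
{
    BOOL wow64 = FALSE;
    if (IsWow64Process(GetCurrentProcess(), &wow64))
        puts(wow64 ? "32-bit process on a 64-bit OS (WOW64)"
                   : "running natively (32-bit OS, or a native 64-bit build)");
    return 0;
}
```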
The additional features of 64-bit are particularly important when it comes to "virtual machine" type environments, specifically the .NET CLR. With a 64-bit processor and a 64-bit operating system, running a .NET application causes the Just-In-Time compiler to generate 64-bit code with 64-bit optimizations, whereas running the same code on a 64-bit processor under a 32-bit operating system yields no such benefit at all. Since a large number of other programming environments are now moving toward compiling on the spot rather than when the program is distributed, machine-specific optimization is more relevant than ever. While both a 32-bit and a 64-bit Windows OS can run 32-bit Windows programs, only a 64-bit OS can run a 64-bit program and thereby see the performance benefits. The problem is that people supposedly measuring the performance differences between 32-bit and 64-bit are in fact running benchmark programs and testing tools that are themselves 32-bit, in what they call "an interest in fairness", which is nonsense. In order to see the true benefits of 64-bit, everything needs to be 64-bit: the OS itself needs to be 64-bit, and the programs used need to be 64-bit. If either of these is not, and you base some performance statistic on that combination, you are simply lying to yourself. All you're doing by running a 32-bit benchmark program on a 64-bit system is testing the 64-bit processor's ability to run 32-bit x86 code; you aren't taxing the limits of its capabilities.
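A simple sanity check any benchmark should perform before claiming to compare 32-bit against 64-bit is to report what it was actually compiled as. A sketch in C: a 32-bit build prints 32 no matter which OS it runs on, so its numbers tell you nothing about 64-bit code.

```c
#include <stdio.h>

/* Report the pointer width this benchmark was compiled for. If it says 32,
 * running it on a 64-bit OS only measures the processor's ability to run
 * 32-bit x86 code, not its 64-bit performance. */
int main(void)
{
    printf("this benchmark is a %u-bit build\n",
           (unsigned)(sizeof(void *) * 8));
    return 0;
}
```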