22 Oct 2018 @ 7:26 PM 

Nowadays, we’ve got two official ways of measuring memory and storage. They can be measured with the standard metric prefixes, such as Megabytes (MB) or Kilobytes (KB), or with the binary prefixes, such as Mebibytes (MiB) and Kibibytes (KiB). And yet, a lot of software uses the former when it really means the latter. Why is that, exactly?

Well, part of the reason is that the official binary prefixes are comparatively recent: the IEC introduced them in 1998, and they were only folded into the ISO/IEC 80000 standard in 2008. They were created to address an ambiguity that had been growing for decades.

From the outset, when computer memory and storage were first being developed, it became clear that some sort of notation would be needed to express memory size and storage space beyond directly counting bytes.

Initially, storage was measured in bits. A bit, of course, is a single element of data: a 0 or a 1. To represent other data and numbers, multiple bits are used. While the sizes under common discussion were small, bits were commonly referenced. In fact, even early on something of an ambiguity arose; when discussing transfer rates and/or memory chip sizes, one would often hear "kilobit" or "megabit", which meant 1,000 bits and 1,000,000 bits respectively and were not base 2. However, when referring to either storage space or memory in terms of bytes, a kilobyte or a megabyte was 1,024 bytes or 1,024 kilobytes respectively.
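To make the difference concrete, here is a quick Python sketch (illustrative only, with names of my own choosing) of the decimal factors the SI prefixes denote versus the binary factors the industry convention actually meant:

```python
# Decimal (SI) factors vs. the binary factors used by industry convention.
SI = {"kilo": 10**3, "mega": 10**6, "giga": 10**9}
BINARY = {"kibi": 2**10, "mebi": 2**20, "gibi": 2**30}

# A "kilobit" on a datasheet meant 1,000 bits, but a "kilobyte" of memory,
# by the old convention, meant 1,024 bytes. The discrepancy grows with
# each prefix step:
for (si_name, si), (bin_name, b) in zip(SI.items(), BINARY.items()):
    print(f"{si_name}: {si:>13,}   {bin_name}: {b:>13,}   ratio: {b / si:.4f}")
```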

One of the simplest ways of organizing memory was in powers of two; this allowed a minimum of logic to access specific areas of the memory unit. Because the smallest addressable unit of storage was the byte, which is 8 bits, most memory was manufactured in multiples of 1,024 bytes, since 1,024 is the power of two nearest to 1,000. For the most part, rather than adhering strictly to the SI definitions of the prefixes, an industry convention emerged which effectively said that, within the context of computer storage, the SI prefixes were binary prefixes.
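As a toy illustration of why powers of two kept the logic minimal: with power-of-two dimensions, an address splits into row and column with nothing but shifts and masks, no division required. The 32x32 bank below is hypothetical, not any particular chip:

```python
# Decode an address within a hypothetical 1 KiB (32 x 32) memory bank.
# Power-of-two dimensions mean plain bit-slicing; no divider circuit needed.
ROW_BITS, COL_BITS = 5, 5

def decode(address):
    col = address & ((1 << COL_BITS) - 1)                # low 5 bits
    row = (address >> COL_BITS) & ((1 << ROW_BITS) - 1)  # next 5 bits
    return row, col

print(decode(0))      # (0, 0)   -> first byte of the bank
print(decode(1023))   # (31, 31) -> last byte of the 1 KiB bank
```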

For storage, for a time, the same conveniences applied, so total capacities were measured in the same units. For example, a single-sided 180K floppy diskette had 512 bytes per sector, 9 sectors per track, and 40 tracks per side. That comes to 184,320 bytes; in today’s terms, with the standardized binary prefixes, this would be 180KiB.
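The arithmetic is easy to check. Here is a quick Python sketch of the same computation; the variable names are mine, purely for illustration:

```python
# Capacity of a single-sided "180K" diskette, from its geometry.
bytes_per_sector = 512
sectors_per_track = 9
tracks_per_side = 40
sides = 1

total = bytes_per_sector * sectors_per_track * tracks_per_side * sides
print(total)          # 184320 bytes
print(total / 1024)   # 180.0 -> the advertised "180K" is really 180 KiB
```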

360K diskettes had a similar arrangement but were double-sided. They were 368,640 bytes, or 360KiB; again, a binary quantity was being advertised with an SI prefix.

Same with 720K 3½" diskettes: 512 bytes per sector, 9 sectors per track, 80 tracks per side, two sides. That’s 737,280 bytes, or 720KiB.

The IBM XT 5160 came with a drive advertised as 10MB in size. The disk had 512 bytes per sector, 306 cylinders, 4 heads, and 17 sectors per track. One cylinder was reserved for diagnostic purposes and unusable, which gives a CHS of 305/4/17. At 512 bytes per sector, that was 10,618,880 bytes of addressable space. (This was actually more than 10MiB, about 10.13MiB, as some defects were expected from the factory.) The 20MB drive had a similar story: 615(-1 diag) cylinders, 4 heads, 17 sectors per track at 512 bytes a sector, or roughly 20.39MiB. The later 62MB drive was 940(-1 diag) cylinders, 8 heads, 17 sectors per track at 512 bytes per sector, which gives ~62.36MiB…
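The same CHS arithmetic covers all three drives. A quick sketch; the chs_bytes helper is my own, purely for illustration, and the cylinder counts already exclude the diagnostic cylinder:

```python
# Addressable capacity from CHS (cylinders / heads / sectors) geometry.
def chs_bytes(cylinders, heads, sectors_per_track, bytes_per_sector=512):
    return cylinders * heads * sectors_per_track * bytes_per_sector

MIB = 2**20
for name, c, h, s in [("'10MB' XT drive", 305, 4, 17),
                      ("'20MB' drive",    614, 4, 17),
                      ("'62MB' drive",    939, 8, 17)]:
    total = chs_bytes(c, h, s)
    print(f"{name}: {total:>10,} bytes = {total / MIB:.2f} MiB")
```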

The "1.2MB" and "1.44MB" Floppy diskettes are when things started to get spitballed by marketing departments for ease of advertising and blazed an early trail for things to get even more misleading. The High density "1.2MB" diskettes were 512 bytes a sector, 15 sectors per track, 80 sectors per side, and double sided. That’s a total of 1,228,800 Bytes. or 1200 KiB, But they were then advertised as 1.2MB, Which is simply wrong altogether. It’s either ~1.7MiB, or it is ~1.23MB. it is NOT 1.2MB because that figure is determined by dividing the KiB by 1000 which doesn’t make sense. Same applies to "1.44MB" floppy diskettes, which are actually 1440KB due to having 18 sectors/track. (512 * 18 * 80 * 2=1474560 Bytes. That is either 1.47456MB, or 1.40625MiB, but was advertised as 1.44MB because it was 1440KiB (and presumably easier to write).

Hard drive manufacturers took it from there, first by rounding up a tiny bit. A 1987 Quantum LPS ProDrive advertised as 50MB was, for example, really about 49.94MiB (752 cylinders, 8 heads, 17 sectors per track: 52,363,264 bytes). I mean, OK, sure, 49.94 is a weird number to advertise, I suppose…

It’s unclear when the first intentional and gross misrepresentation of HDD size actually occurred, where the SI prefix definition was used to call a drive X MB. It was a gradual change: people started to accept the rounding, and HDD manufacturers got bolder. Eventually one of them released an X MB drive that they KNEW full well people would interpret as X MiB, and when called out on it claimed they were using the "official SI prefix", as if there wasn’t already a decades-old de facto standard in the industry for how storage was represented.

For the most part, it’s this confusion persisting forward that gave us the official binary prefixes.

And yet, somewhat ironically, most OS software doesn’t use them. Microsoft Windows still uses the standard SI prefixes for what are really binary quantities. As I recall, OSX provides for the binary prefixes as an option. Older operating systems and software will never use them, as they won’t be updated.

The way I see it, HDD manufacturers have won. They are now selling drives listed as "1TB" which are about 931GiB, but because it’s 1,000,000,000,000 bytes, or somewhere close, it’s totally cool, because they are using the SI prefix.
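For the record, the arithmetic behind that label, assuming the drive holds exactly 10^12 bytes:

```python
advertised = 10**12        # the "1TB" on the box
print(advertised / 2**30)  # ~931.32 -> the "GB" figure your OS reports
```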

Posted By: BC_Programming
Last Edit: 23 Oct 2018 @ 07:15 PM

Broken Consoles Means "free" Hard Drives :D

13 Dec 2011 @ 10:04 AM

I don’t know how helpful this will be, but it sort of surprised me.

Basically, my brother has managed to go through three PS3 consoles. Each time, being the hardware expert he is (the type that would, when my 486 wasn’t booting up, open it up and make sure every connection was plugged into something), he decided he could fix it himself. I think the issue was that it wasn’t reading discs or something. Of course my advice was to send the bloody thing to Sony, but hey, it was his warranty to void. What ended up happening, of course, was that he ripped the entire thing apart, had absolutely no idea what he was doing, and ended up having to buy a new one, since that one was no longer eligible for service. Anyway, I stumbled on the picked-apart carcass of his old PS3, and I remembered that they have hard drives. So I opened up the HD access panel, took out the drive, and to my surprise found it was just a 2.5″ SATA drive. To confirm this I plopped it into my laptop and installed Mint 12 on it. It’s mine now, heh. I’m not sure where his other picked-apart carcasses are, though. It’s a shame this laptop only allows for the installation of one hard drive, too.

Anyway, I didn’t know that they were so interchangeable with PC parts in this manner, so maybe others might not be aware of it either. And I know quite a few people with dead consoles (PS3/Xbox 360, etc.) that they have basically shelved and forgotten about, so if somebody needs an emergency hard drive, this could be a useful nugget of info.

On a related note, Mint 12 is extremely impressive… although it primarily reminded me just how heavily I had customized the Mint 10 installation I was used to using on my laptop. The changes were mostly UI, and I couldn’t figure out how to get my beloved Emerald working with a few quick googles, so I swapped the drives back over. Now, I could have messed about with Mint 12 by simply using the Live CD, but the Live CD is always somewhat slow and hardly shows the OS at its true potential. And of course you can’t really add anything or make many changes to it, since it’s booting from a read-only medium.

Regarding console systems, though: is it just me, or are they basically just re-purposed PCs? The Xbox and Xbox 360 are quite literally PC hardware specially built for handling gaming tasks, with specific software and firmware "locks" to try to keep nosey people from finding out it’s really just a PC. This isn’t so bad, but it’s sort of stupid. I mean, really, the original Xbox is essentially a Pentium 3 PC; the controller ports are just freakazoid USB connectors that they purposely changed just so they won’t be USB connections, and possibly to make them stay in better, since USB’s ZIF-style plugs aren’t what I would call the greatest for controllers. On the other hand, why change the entire pinout configuration? Why couldn’t they have simply added some sort of additional mechanical connection that made them stay in better? And all the fancy crap about locking the hard drives from being changed by the user, and so forth, is sort of silly. It doesn’t make a whole lot of sense to artificially limit what the device is capable of simply because you charge less for it than an equivalently configured PC.

And with all the add-ons for console machines, such as keyboards, support for USB controllers, hard drives, and Ethernet, the only real difference between consoles and PCs is that consoles always have the exact same hardware (things like GPU and CPU) that software developers can count on, whereas PCs have widely varying hardware; also, consoles are purposely locked down for reasons I can only guess at.

This is all well and good, but as I noted, my brother has gone through at least three PlayStation 3 consoles. He wasn’t throwing them around the room or anything; I doubt he was abusive to them at all. And yet they stopped working in one way or another. The failures of Xbox machines are no less of a problem. Meanwhile, my Super Nintendo is 20 years old and still works perfectly fine. A commonly cited "excuse" is that the machines are more complicated. Well, these people need to take a good hard look at the schematics for the various SNES ASIC chips and perhaps re-evaluate their definition of complicated. The real change is that newer consoles have more mechanical parts, generate more heat, and are squashed into as small a form factor as possible. It has nothing to do with them being "more complicated" and everything to do with them being built out of cheaper components than a PC (to justify the lower price point), which makes all hardware issues "non-user-serviceable", unlike, say, a PC.

That was an acceptable policy for things like the SNES or the Sega Genesis or earlier consoles of that nature; most of the issues those consoles have are the result of loose connections that typically require soldering knowledge to fix properly. But now that sort of policy is sort of silly, since a lot of the problems with modern consoles are relatively simple in comparison, and many enthusiasts who know what the issue is could fix it themselves if the machines weren’t put together in a way that dissuades attempts at disassembly, with things like special screws (Torx). Again, that was warranted when the device innards were generally not user-serviceable by the typical enthusiast, but now it’s just an artificial barrier to make the machines seem less user-serviceable than they are. And, more to the point, the fact is that they simply fail more often now, and it seems like it would be in the companies’ best interest to make them more user-serviceable, since that would mean fewer warranty repairs. (Obviously they can keep their old "take it apart and void the warranty" policy.)

Posted By: BC_Programming
Last Edit: 13 Dec 2011 @ 10:04 AM
