17 Nov 2018 @ 2:23 PM 

“Rogue-like” games have existed for quite a while. These titles have some form of permadeath, or at least a heavy penalty when your character dies or loses, but the most important part is that many aspects of the game are randomized. The idea is that every playthrough is going to be different, in contrast to most titles, where each playthrough is the same.

However, while the idea dates back some ways, it has lately become popular to use “randomizer” software on those formerly static games, in order to effectively “create” new games from them. Typically, the randomization is constrained so that every item remains obtainable and the game can still be completed.

Over the past week or so I’ve been using my SD2SNES SNES Flash Cartridge to play a randomized Link to the Past. It’s been an interesting experience; a number of times it seemed like I was “stuck”, only to find a required item secluded in some random treasure chest. The randomizer shuffles the locations of items and also changes a number of other aspects of the game. It has certainly made the game interesting, to say the least.

One can find a list of Game Randomizers here. I personally have had a lot of fun previously with a tool called “ObHack”, a randomized Doom II WAD generator based on a modified version of Oblige. I also changed the Lua so that I could generate ridiculous quantities of monsters, because why not. Often in the “open” levels I would be immediately spotted by vast hordes of enemies, as well as bosses in the distance, so I would have to dodge attacks while attempting to battle through the enemies nearby. When this was the first level, staying alive became a trick of its own.

Paired with User-generated content such as ROM hacks and custom level sets, Game Randomizers can make a playthrough of your favourite games less samey than usual.

Posted By: BC_Programming
Last Edit: 17 Nov 2018 @ 02:23 PM

Categories: Games

 22 Oct 2018 @ 7:26 PM 

Nowadays, we’ve got two official ways of measuring memory and storage. They can be measured via the standard metric prefixes, such as Megabytes (MB) or Kilobytes (KB), or via the binary prefixes, such as Mebibytes (MiB) and Kibibytes (KiB). And yet, a lot of software uses the former when it is really referring to the latter. Why is that, exactly?

Well, part of the reason is that the official binary prefixes are comparatively recent- the IEC introduced them in 1998, and they were only folded into the ISO/IEC 80000 standard in 2008- and they were created specifically to address ambiguities that had been growing for decades.

From the outset, when memory and storage were first being developed for computers, it became clear that some sort of notation would be necessary to describe memory size and storage space beyond directly counting bytes.

Initially, storage was measured in bits. A bit, of course, is a single element of data- a 0 or a 1. To represent other data and numbers, multiple bits are used. While the sizes under discussion were small, bits were commonly referenced. In fact, even early on there arose something of an ambiguity: when discussing transfer rates and/or memory chip sizes, one would often hear “kilobit” or “megabit”; these were 1,000 bits and 1,000,000 bits respectively- not base 2. However, when referring to storage space or memory in terms of bytes, a kilobyte was 1,024 bytes and a megabyte was 1,024 kilobytes.

One of the simplest ways of organizing memory was using powers of two; this allowed a minimum of logic to access specific areas of the memory unit. Because the smallest addressable unit of storage was the byte, which is 8 bits, most memory was manufactured as a multiple of 1,024 bits, since 1,024 was the nearest power of 2 to 1,000 that was also divisible by 8. For the most part, rather than adhering strictly to the SI definitions of the prefixes, there was an industry convention that effectively indicated that, within the context of computer storage, the SI prefixes were binary prefixes.

For storage, for a time, the same conveniences applied, with total capacities measured in the same units. For example, a single-sided 180K floppy diskette had 512 bytes per sector, 9 sectors per track, and 40 tracks per side. That works out to 184,320 bytes- in today’s terms, with the standardized binary prefixes, 180KiB.

360K diskettes had a similar arrangement but were double-sided; they were 368,640 bytes- and again, the binary prefix was being used in advertising.

The same went for 720K 3-1/2" diskettes: 512 bytes/sector, 9 sectors per track, 80 tracks/side, two sides. That’s 737,280 bytes, or 720KiB.

The IBM XT 5160 came with a drive advertised as 10MB in size. The disk had 512 bytes per sector, 306 cylinders, 4 heads, and 17 sectors per track. One cylinder was reserved for diagnostic purposes and unusable, giving a CHS of 305/4/17. At 512 bytes/sector, that was 10,618,880 bytes of addressable space. (This was actually more than 10MiB, as some defects were expected from the factory.) The 20MB drive had a similar story: 615 (-1 diag) cylinders, 4 heads, 17 sectors per track at 512 bytes a sector- 20.38MiB. The later 62MB drive was 940 (-1 diag) cylinders, 8 heads, 17 sectors/track at 512 bytes/sector, which gives ~62.36MiB.
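
The arithmetic is nothing more than cylinders × heads × sectors × 512. A quick C# sketch of it, using the XT figures above (illustrative only, not from any real tool):

// Quick sketch of the CHS arithmetic used above.
using System;

class ChsCapacity
{
    static long Capacity(long cylinders, long heads, long sectorsPerTrack, long bytesPerSector = 512)
        => cylinders * heads * sectorsPerTrack * bytesPerSector;

    static void Main()
    {
        // IBM XT 5160 "10MB" drive: 305 usable cylinders, 4 heads, 17 sectors per track.
        long xt10 = Capacity(305, 4, 17);
        Console.WriteLine($"{xt10} bytes = {xt10 / 1_000_000.0:F2} MB = {xt10 / 1_048_576.0:F2} MiB");
        // Output: 10618880 bytes = 10.62 MB = 10.13 MiB
    }
}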

The "1.2MB" and "1.44MB" floppy diskettes are where things started to get spitballed by marketing departments for ease of advertising, and they blazed an early trail for things to get even more misleading. The high density "1.2MB" diskettes were 512 bytes a sector, 15 sectors per track, 80 tracks per side, and double sided. That’s a total of 1,228,800 bytes, or 1200KiB- but they were advertised as 1.2MB, which is simply wrong altogether. It’s either ~1.17MiB or ~1.23MB; it is NOT 1.2MB, because that figure is arrived at by dividing the KiB figure by 1,000, which doesn’t make sense. The same applies to "1.44MB" floppy diskettes, which are actually 1440KiB due to having 18 sectors/track (512 * 18 * 80 * 2 = 1,474,560 bytes). That is either ~1.47MB or ~1.41MiB, but it was advertised as 1.44MB because it was 1440KiB (and presumably easier to write).
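
To make the mixed unit obvious, here is the same byte count expressed three different ways- a throwaway C# snippet, nothing more:

// The "1.44MB" diskette's capacity expressed three ways- decimal MB, MiB, and the
// mixed KiB-divided-by-1000 unit the marketing figure actually comes from.
using System;

class FloppyUnits
{
    static void Main()
    {
        long bytes = 512L * 18 * 80 * 2;               // 1,474,560 bytes
        Console.WriteLine(bytes / 1_000_000.0);        // 1.47456  (decimal MB)
        Console.WriteLine(bytes / (1024.0 * 1024.0));  // 1.40625  (MiB)
        Console.WriteLine(bytes / 1024.0 / 1000.0);    // 1.44     ("MB" as advertised)
    }
}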

Hard drive manufacturers took it from there- first by rounding up a tiny bit. A 1987 Quantum LPS Prodrive advertised as 50MB was, for example, 49.87MB (752 cylinders, 8 heads, 17 sectors per track). I mean, OK- sure, 49.87 is a weird number to advertise, I suppose…

It’s unclear when the first intentional and gross misrepresentation of HDD size actually occurred, where the SI prefix definition was used to call a drive X MB. It was a gradual change: people started to accept the rounding, and HDD manufacturers got bolder- eventually one of them released an X MB drive that they knew full well people would interpret as X MiB, and when called out on it claimed they were using the "official SI prefix", as if there wasn’t already a decades-old de facto standard in the industry for how storage was represented.

For the most part, this persistent confusion is how we ended up with the official binary prefixes.

And yet- somewhat ironically- most OS software doesn’t use them. Microsoft Windows still uses the standard prefixes. As I recall, OSX provides for the binary prefixes as an option. Older operating systems and software will never use them, as they won’t be updated.

The way I see it, the HDD manufacturers have won. They are now selling drives listed as "1TB" which are ~931GiB, but because that’s 1,000,000,000,000 bytes or somewhere close, it’s totally cool- they are using the SI prefix.

Posted By: BC_Programming
Last Edit: 23 Oct 2018 @ 07:15 PM


 26 Sep 2018 @ 1:38 PM 

I have a feeling this will be a topic I will cover at length repeatedly, and each time I will have learned things since my previous installments. The Topic? Programming Languages.

I find it quite astonishing just how much polarization and fanaticism we can find over what is essentially a syntax for describing operations to a computer. A quick Google search can reveal any number of arguments about languages: people telling you why Java sucks, people telling you why C# is crap, people telling you why Haskell is useless for real-world applications, people telling you that Delphi has no future, people telling you that there is no need for value semantics on variables, people telling you mutable state is evil, people telling you that garbage collection is bad, people telling you that manual memory management is bad, and so on. It’s an astonishing, never-ending trend. And it’s really quite fascinating.

Why?

I suppose the big question is- why? Why do people argue about languages, language semantics, capabilities, and paradigms? This is a very difficult question to answer. I’ve always felt that polarization and fanaticism are far more likely to occur when you only know and understand one programming language. Of course, I cannot speak for everybody, only from experience. When I only knew one language “fluently”, I was quick to leap to its defense. It had massive issues that I can see now, looking back, but which I didn’t see at the time. I justified omissions as being things you didn’t need or could create yourself. I called features in newer languages “unnecessary” and “weird”. So the question really is, who was I trying to prove this to? Was I arguing against those I was replying to- or was it all for my own benefit? I’m adamant that the reason for my own behaviour- and, to jump to a premature and biased conclusion, possibly for those in which I see similar behaviour over other languages- was that I felt trivialized by the attacks on the language I was using. Basically, it’s the result of programmers rating themselves based on the languages they know and use every day. This is a natural- if erroneous- way of measuring one’s capabilities.

I’ve always been a strong proponent of the idea that it isn’t the programming language that matters, but rather your understanding of programming concepts and how you apply them, as well as not subscribing to the religious dogmas that generally surround a specific language design. (I’m trying very hard not to cite specific languages here.) Programming languages generally have set design goals. As a result, they typically encourage a style of programming- or even enforce it through artificial limitations. Additionally, those limitations that do exist (generally for design reasons) are worked around by competent programmers in the language. So when the topic turns to their favourite language not supporting Feature X, they can quickly retort that “you don’t need Feature X, because you can use Features Q, P and R to create something that functions the same”. But that rather misses the point, I feel.

I’ve been careful not to mention specific languages, but here I go. Take Visual Basic 6- that is, pre-.NET. As a confession, for a very long time I was trapped knowing only Visual Basic 6 well enough to do anything particularly useful with it. Looking back- and having to support my legacy applications, such as BCSearch- I’m astonished by two things that are almost polar opposites. The first is simply how limited the language is. For example, if you had an object type CSomeList and wanted to ‘cast’ it to an IList interface, you would have to do this:
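
' (Illustrative reconstruction- CSomeList and IList stand in for whatever classes were involved)
Dim MyList As CSomeList
Dim ListInterface As IList
Set MyList = New CSomeList
' No cast operator; you "cast" by Set-assigning to a variable declared as the interface type:
Set ListInterface = MyList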

Basically, you ‘cast’ by setting the object directly to a variable of the desired type you want to cast to. These types of little issues and limitations really add up. The other thing that astonished me was the ingenuity of how I dealt with the limitations. At the time, I didn’t really consider some of these things limitations, and I didn’t think of how I dealt with them as workarounds. For example, I found the above casting requirement annoying, so I ended up creating a GlobalMultiUse Class (which means all the Procedures within are public); in this case the Function might be called “ToIList()” and would attempt to cast the parameter to an IList and return it. Additionally, at some point I must have learned about Exception handling in other languages, and I actually created a full-on implementation of Exception handling for Visual Basic 6. Visual Basic 6’s Error Handling was, for those that aren’t aware, rather simple: you could basically say “On Error Goto…” and redirect program flow to a specific label when an error occurred. All you would know about the error, though, was the error number. My “Exception” implementation built upon this. To Throw an exception, you would create it (usually with an aforementioned public helper) and then throw it. In the Exception’s “Throw()” method, it would save itself as the active Unwind Exception (a global variable) and then raise an Application-defined error. Handlers were required to recognize that error number and grab the active exception (using GetException(), if memory serves). GetException would also recognize many Error codes and construct instances of the appropriate Exception type to represent them, so in many cases you didn’t need to check for that error code at all. The result? Code like this:
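
' (Illustrative only- not the original BCSearch code; classic VB6 error handling looked like this)
Private Sub CopySomeFile(ByVal Source As String, ByVal Dest As String)
    On Error GoTo ErrHandler
    FileCopy Source, Dest
    Exit Sub
ErrHandler:
    Select Case Err.Number
        Case 53 ' File not found
            MsgBox "Could not find " & Source
        Case 70 ' Permission denied
            MsgBox "Access denied copying to " & Dest
        Case Else
            MsgBox "Error " & Err.Number & ": " & Err.Description
    End Select
End Sub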

would become:
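
' (Illustrative only- the type and helper names follow the description above and are not exact)
Private Sub CopySomeFile(ByVal Source As String, ByVal Dest As String)
    On Error GoTo ErrHandler
    FileCopy Source, Dest
    Exit Sub
ErrHandler:
    Dim Ex As CException
    Set Ex = GetException() ' builds a typed Exception object from the active error
    If TypeOf Ex Is CFileNotFoundException Then
        MsgBox "Could not find " & Source
    Else
        Ex.Throw ' re-raise anything we don't handle here
    End If
End Sub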

There was also a facility to throw inner exceptions, by using ThrowInner() with the retrieved Exception Type.

So what is wrong with it? Well, everything. The language doesn’t provide these capabilities, so I basically had to nip and tuck it to provide them, and the result is some freakish plastic surgery where I’ve grafted Exceptions onto somebody who didn’t want Exceptions. The fact is that, once I moved to other languages, I could see just how freakish some of the stuff I implemented in VB was. That implementation was obviously not thread safe- but that didn’t matter, because there was no threading support, for example.

Looking forward

With that in mind, it can be valuable to consider one’s current perspectives and how they may be misguided by that same sort of devotion. This is particularly important when dealing with things you only have a passing knowledge of. It’s one thing to recognize flaws after gaining real experience with something; it’s another to find flaws- or repaint features as flaws- just to make yourself feel wiser for not having used it sooner.

Posted By: BC_Programming
Last Edit: 26 Sep 2018 @ 01:38 PM


 31 Aug 2018 @ 7:45 PM 

When I was implementing BASeTris, my Tetris clone, I thought it would be nifty to have controller support, so I could use the Xbox One Controller that I have attached to my PC. My last adventure with game controllers ended poorly- BASeBlock has incredibly poor support for them overall. In revisiting the problem, with consideration towards XInput rather than DirectInput this time, I eventually found XInput.Wrapper, a rather simple, single-class approach to handling XInput keys.

The way BASeTris handles input reflects my attempt at separating different input methods from the start. The game state interface has a single HandleGameKey routine which effectively handles a single press. That itself gets called by the actual input routines, which also include some additional management for features like DAS repeat for certain game keys. The XInput wrapper, of course, was not like this; it is not particularly event-driven and works differently.

I did mess about with its “Polling” feature for some time before eventually creating my own implementation of the same. The biggest thing I needed was a “translation” layer that could see when buttons were pressed and released, track that information, and translate it to the appropriate GameKey presses. I settled on a rather small class for this purpose, which is currently implemented in BASeTris.
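
In spirit, it amounts to something like this- a simplified sketch, where the button enum, the mapping, and the names are illustrative stand-ins rather than the actual BASeTris code:

// Minimal sketch of the press/release tracking described above; not the actual BASeTris class.
using System;
using System.Collections.Generic;
using System.Linq;

public enum GamepadButton { DPadLeft, DPadRight, DPadDown, A, B, Start }
public enum GameKey { MoveLeft, MoveRight, SoftDrop, RotateCW, RotateCCW, Pause }

public class GamepadKeyTranslator
{
    private HashSet<GamepadButton> _previous = new HashSet<GamepadButton>();
    private static readonly Dictionary<GamepadButton, GameKey> _map = new Dictionary<GamepadButton, GameKey>
    {
        { GamepadButton.DPadLeft, GameKey.MoveLeft },
        { GamepadButton.DPadRight, GameKey.MoveRight },
        { GamepadButton.DPadDown, GameKey.SoftDrop },
        { GamepadButton.A, GameKey.RotateCW },
        { GamepadButton.B, GameKey.RotateCCW },
        { GamepadButton.Start, GameKey.Pause }
    };

    public event Action<GameKey> KeyPressed;
    public event Action<GameKey> KeyReleased;

    // Call once per poll with the buttons currently held down; transitions become events.
    public void Update(IEnumerable<GamepadButton> currentlyDown)
    {
        var current = new HashSet<GamepadButton>(currentlyDown);
        foreach (var pressed in current.Except(_previous))
            if (_map.TryGetValue(pressed, out var key)) KeyPressed?.Invoke(key);
        foreach (var released in _previous.Except(current))
            if (_map.TryGetValue(released, out var key)) KeyReleased?.Invoke(key);
        _previous = current;
    }
}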

It is a bit strange that I needed to create a wrapper for what is itself a wrapper, but it wasn’t as if I was going to find a ready-made solution that integrated into how I had designed input in BASeTris anyway- some massaging was always going to be necessary.

Posted By: BC_Programming
Last Edit: 31 Aug 2018 @ 07:45 PM

Categories: C#, Programming

 10 Jun 2018 @ 10:23 AM 

It suddenly occurred to me in the last week that I don’t really have a proper system in place for software downloads here on my website, nor for build integration with source control so that projects are built as needed when commits are made. Having set up a Jenkins build environment for the software I work on for my job, I thought it reasonable to make the same demands of myself.

One big reason to do this, IMO, is that it can actually encourage me to create new projects. The work of packaging up the result and making it easily accessible or usable is often, I find, a demotivator for starting new projects. Having an established “system” in place whereby I can make changes on GitHub and have, say, installer files “appear” properly on my website as needed can be a motivator- I don’t have to build the program, copy files, run installation scripts, and so on manually every time. I just need to configure it all once and it all “works” by itself.

To that end, I’ve set up Jenkins on one of my “backup” computers. It’s rather tame in its capabilities, but it should get the job done, I think- only 4GB of RAM and an AMD 5350. I would use my QX6700-based system, but the AMD system uses far less power. I also considered putting Jenkins straight onto my main system, but thought that could get in the way and just be annoying. Besides- this gives that system a job to do.

With the implementation for work, there were so many interdependent projects- and we pretty much always want “everything”- that I just made it a single job which builds everything at once. That way everything is properly up to date. The alternative was fiddling with 50+ different projects and figuring out the appropriate dependencies to build when other projects were updated- something of a mess. Not to mention it’s all in one repository anyway, which goes against that idea as well.

In the case of my personal projects on GitHub, they are already separate repositories, so I will simply have them built as separate jobs; since Jenkins itself understands upstream/downstream relationships, I can use that as needed.

I’ve successfully configured the new Jenkins setup and it is now building BASeTris, a Tetris clone game I decided to write a while ago. It depends on BASeScores and Elementizer, so those two projects are in Jenkins as well.

BASeTris’s final artifact is an installer.

But of course, that installer isn’t much good just sitting on my CI server! However, I also don’t want to expose the CI server as a “public” page- there are security considerations, even if I disregard upload bandwidth issues. To that end, I constructed a small program which uploads files to my website over SSH. It runs once a day and is given a directory; it looks in all the immediate subdirectories of that directory, finds the most recent file in each, and uploads it to a corresponding remote directory if it hasn’t already been uploaded. I configured BASeTris to copy its final artifact into an appropriate folder there.
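
For what it’s worth, the core of such an uploader is only a few dozen lines with a library like SSH.NET- roughly along these lines (the host, credentials, and paths below are placeholders, not my actual setup):

// Sketch of a once-a-day artifact uploader using the SSH.NET (Renci.SshNet) library.
// Host, credentials and paths are placeholders.
using System;
using System.IO;
using System.Linq;
using Renci.SshNet;

class ArtifactUploader
{
    static void Main()
    {
        string localRoot = @"C:\CI\artifacts";
        string remoteRoot = "/var/www/downloads/CI";

        using (var sftp = new SftpClient("example.com", "deployuser", "password"))
        {
            sftp.Connect();
            foreach (var dir in Directory.GetDirectories(localRoot))
            {
                // Most recent artifact in each immediate subdirectory.
                var newest = new DirectoryInfo(dir).GetFiles()
                    .OrderByDescending(f => f.LastWriteTimeUtc).FirstOrDefault();
                if (newest == null) continue;

                string remoteDir = remoteRoot + "/" + Path.GetFileName(dir);
                string remotePath = remoteDir + "/" + newest.Name;
                if (sftp.Exists(remotePath)) continue; // already uploaded

                if (!sftp.Exists(remoteDir)) sftp.CreateDirectory(remoteDir);
                using (var stream = newest.OpenRead())
                    sftp.UploadFile(stream, remotePath);
            }
            sftp.Disconnect();
        }
    }
}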

Alternatively, it would be possible to have each project upload its artifacts via SSH as a post-build step. I opted not to do that, because I would rather not have a series of changes throughout the day result in a bunch of new uploads- those would consume space and not be particularly useful. Instead, all projects that I want to publish are uploaded once a day, and only if there have been changes. This should help reduce redundancy (and the space usage) of those uploads.

My “plan” is to have a proper PHP script or something that can enumerate the folders and provide a better interface for downloads. If nothing else, I would like each CI project’s folder to have a “project_current.php” file which automatically sends the latest build- then I can simply link to that on the blog download pages for each project and only update the page itself to describe new features or content.

As an example, http://bc-programming.com/downloads/CI/basetris/ is the location that will contain BASeTris version downloads.

There is still much work to do, however. The program(s) do have git hash metadata added at build time, so they have access to their git commit hash, but currently they do not actually present that information. I think it should, for example, be displayed in the title bar alongside other build information such as the build date, if possible. I’ve tried to come up with a good way to have the version auto-increment, but I think I’ll just tweak that as the project(s) change.

Heck- the SSH uploader utility seems like a good candidate for yet another project to add to GitHub, if I can genericize it so it isn’t hard-coded for my site and purpose.

Posted By: BC_Programming
Last Edit: 10 Jun 2018 @ 10:23 AM

Title: About Time I had a CI Server, Methinks

 06 Jun 2018 @ 7:46 PM 

Flash memory, like anything, is no stranger to illegitimate products. You can find "2TB" flash drives on eBay for 40 bucks, for example. These claim to be 2TB and show up as 2TB- but attempt to write data beyond a much smaller real size and the data is corrupted, because the drive actually writes to an earlier location. My first experience with this was actually with my younger brother’s Gamecube system; when he got it, he also got two "16MB" memory cards (16 megabit, so 2 Megabytes). However, they would rather frequently corrupt data. Looking back, I suspect it was much the same mechanism- the memory card was "reporting" itself as larger than it was, and writing beyond the end corrupted the information on it.
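
The usual way tools detect this, incidentally, is to write a recognizable marker at intervals across the claimed capacity and then read everything back; when the writes wrap around, the earlier markers come back wrong. A rough sketch of the idea- the test path and sizes are placeholders, and a real test also has to defeat OS write caching:

// Rough sketch of a fake-capacity check: write offset markers across the claimed size,
// then read them back; wrapped-around writes show up as markers that no longer match.
// The path and sizes are placeholders; a real tool must also bypass/flush OS caching.
using System;
using System.IO;

class CapacityCheck
{
    const string TestFile = @"F:\capacitytest.bin"; // a file on the device under test (placeholder)
    const long Step = 16L * 1024 * 1024;            // one marker every 16MB
    const long Claimed = 2L * 1024 * 1024 * 1024;   // the size the device claims to be

    static void Main()
    {
        using (var fs = new FileStream(TestFile, FileMode.Create, FileAccess.Write))
        {
            for (long pos = 0; pos < Claimed; pos += Step)
            {
                fs.Position = pos;
                fs.Write(BitConverter.GetBytes(pos), 0, 8); // marker is its own offset
            }
            fs.Flush(true);
        }
        using (var fs = new FileStream(TestFile, FileMode.Open, FileAccess.Read))
        {
            var buf = new byte[8];
            for (long pos = 0; pos < Claimed; pos += Step)
            {
                fs.Position = pos;
                fs.Read(buf, 0, 8);
                if (BitConverter.ToInt64(buf, 0) != pos)
                {
                    Console.WriteLine($"Marker at {pos} is wrong- real capacity is roughly {pos / (1024 * 1024)}MB");
                    return;
                }
            }
        }
        Console.WriteLine("All markers intact up to the claimed size.");
    }
}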

This brings me to today. You can still find cheap memory cards for those systems claiming sizes such as 128MB. Even at the "real" 128 megabit size- 16MB- that’s still quite substantial. I’ve recently done some experiments with four cheap "128MB" Gamecube memory cards that I picked up, and some of the results are quite interesting.

First, I should note that my "main" memory cards for the system are similar cheap cards I picked up online 12 years ago or thereabouts: one is a black card that simply says "Wii/NGC 128 MEGA" on it, the other is a KMD-brand 128MB. The cheap ones I picked up recently have the same case as the KMD and, internally, look much the same, though they feel cheaper; they are branded "HDE". Now, I’m fairly sure the ones I have are legitimate, but not 100%- the flash chips inside are 128 megabit, and one is even 256 megabit. (Of course, "128 Mega" and "128 MB" here actually mean 128 megabits, or 16MB, but whatever.)

Since the four cards were blank, I decided to do a bit of experimenting with a little program called GCMM, or Gamecube Memory Manager. This is a piece of homebrew that lets you do pretty much whatever you want with the data on memory cards, including making backups to an SD card, restoring from an SD card, copying any file between memory cards, and so on. The first simple test was easy- just do a backup and a restore; it shouldn’t matter too much that the card is blank. I backed up the new card no problem. However, when I tried to restore it, it gave a write error at block 1024- right at the halfway point. No matter what, I couldn’t get past that point on any of the "new" cards. This indicates to me that the cards are actually 8MB cards, with approximately 1024 blocks of storage. What a weird "counterfeit" approach- 8MB is already a rather substantial amount of space, so why "ruin" the device by having it report the wrong size and allow data corruption? I found that I could get raw restores to "succeed" if I took the card out during the restore process right before it reached block 1024.

This discovery is consistent with what I understand of counterfeit flash- the controller will basically write to earlier areas of the memory when instructed to write beyond the "real" size, and will usually overwrite, say, file system structures, requiring the card to be formatted. Interestingly, if I pull the card out before the restore gets there, everything backed up to that point is intact. Something else interesting turned up when I looked inside the raw dump I originally created from one of the "new" cards. The file system itself was clean, but old data remained in the flash and was still there for viewing. I could see that Wrestlemania 2002 was probably used for testing the card at some point, as there was "w_mania2002" in the raw data, as well as a number of other tidbits that referenced wrestlers who appeared in that game. What I found much more interesting, however, were a number of other strings. "V402021 2010-06-08" suggests a date the card might have been manufactured. "Linux-2.6.23.17_stm23_A18B-pdk71"… now this is interesting! Linux was involved in some way? That wouldn’t be surprising if the card was programmed with some sort of embedded system, but it doesn’t make a lot of sense for it to appear in the memory card data itself. Similarly, I found various configuration entries:

NetType=1
Language=0
Em10Mode=0
ConTimeout=30
ProductID=00100199007011400002D0154ADF986E
Licence=222
ServiceUser=0512200052225
ServicePwd=282026
PppoeUser=0512200052225@vod
PppoePwd=282026
DHCPPUser=0512200052225@vod
DHCPPPw=282026
IpAddr=192.168.1.12
NetMask=255.255.0.0
GateWay=
DNS=
MacAddr=D0:15:4A:DF:98:6E
Volume=100
TimeZone=8
ProxyFlag=0
AcceptCookie=99

Given the amount of network information- IDs, PPPoE credentials, and so on- my suspicion is that these flash chips are not actually new, but were pulled from some sort of networking device, such as a router or switch. This is supported by the fact that googling a few of the configuration settings seems to always lead to some sort of Chinese ADSL provider, so I suspect these flash chips were re-used from old networking equipment. That, in itself, adds another concern about these memory cards: if the chips were used before they found themselves in these memory cards, how much were they used, and how? Were they used to hold firmware, for example, or a small file system for the networking device?

Overall, for something so seemingly mundane, I found this to be a very interesting distraction, and perhaps this information will prove useful- or at least interesting- to others.

Posted By: BC_Programming
Last Edit: 06 Jun 2018 @ 07:46 PM

Title: Memory Card Adventures
Categories: Programming

 08 Apr 2018 @ 9:45 PM 

I’ve seen, unusually, a few discussions revolving around Apple products which seem to proceed from the assumption that the original 128K Macintosh was a failure. I found that intriguing. Specifically, I’ve seen it said to have failed because it was expensive, underpowered, incompatible with the IBM PC, and didn’t have much memory, and that Apple would have had more success if they had released the Macintosh OS as a desktop environment on the IBM PC. I think this argument comes from a lack of understanding of the early computer ecosystem- not to mention that many of the points are simply incorrect.

One of the bigger draws of the Macintosh was that it was actually fairly affordable for what it provided. At $2,495, the original Macintosh 128K was cheaper than an IBM PC equipped with 128K of memory by nearly $1,000- and that is compared to a base model with an MDA adapter and no monitor. Realistically, the only systems more affordable than the Macintosh at the time were machines like the Apple II, TRS-80, and Commodore 64.

The system was quite cutting-edge for the time period. The 512×342 display was black and white, and didn’t have the resolution or colours of the EGA adapter that was available for the IBM PC, but it didn’t cost anything extra either. Additionally, since it was in every Macintosh, it was, naturally, something that all Macintosh software- at least for a time- was designed for. Unlike MS-DOS applications, there was no need for special BGI drivers or display card selections and options. Another advantage was that the graphics operated through DMA, meaning some processing was off-loaded from the otherwise anemic CPU. The IBM PC had no equivalent, which was one of the factors that kept desktop environments on the PC slow until video accelerator cards appeared- it is no coincidence, I think, that Windows only truly started to take off after video accelerator cards arrived on the market.

Compatibility is something we take for granted today, even between otherwise disparate systems. You can plug your smartphone into your PC or laptop and transfer files without much fuss, for example, or share flash drives or burned optical media between different systems with ease. In 1984, this simply wasn’t the case; there was very little standardization. While many systems used the same physical floppy diskettes, they seldom used compatible filing systems. The TRS-80, Commodore 64, Apple II, and IBM PC, for example, could all accept the same form factor of 5-1/4" diskettes, but you couldn’t share data between them directly because they formatted the disks differently and used different file systems. Even as late as 1984 the IBM PC hadn’t truly established itself as a "standard" of its own, and there were still innumerable standards vying for the attention of the typical user. One could just as easily argue that the IBM PC would have been more successful if it had been compatible with Apple II software.

Heck- whether an MS-DOS program even ran on a given computer that ran MS-DOS was not really a given. Software often had to be ported between IBM clone systems to work properly. Lotus 1-2-3 often couldn’t be run on many systems that ran MS-DOS any more than it could run on a Mac 128K… except that you could run it on a Mac 128K with add-ons like the MacCharlie- making a suitably equipped Mac 128K more IBM-compatible than many clone systems!

Remember that this was a time frame in which the idea of a GUI was, in and of itself, a "killer app". On MS-DOS, for example, you couldn’t show charts or graphs in a spreadsheet at the same time as the spreadsheet itself; you had to shift to a graphics mode where you saw only the graph or chart. At best there might be a way to preview the printed output, but it was separate from actual editing, all of which took place in a system-standard fixed-width font- so you got no feedback while changing the document about things like spacing if you wanted proportional print output. Programs did what they could within those limitations: "bold" text or headers might be indicated by surrounding them with smiley-face characters, for example (PC-Write).

The fact that you could manipulate text on-screen and it would reasonably accurately show you what it would look like on the printed page was HUGE. It was the big reason the Macintosh jump-started the idea of desktop publishing and then dominated that space for years. This might all seem redundant, since a Mac OS desktop environment on the PC would presumably have worked just as well- except it wouldn’t have. One of the things that made it truly possible was the video DMA capability of the system, which made the fast GUI possible and therefore made that entire experience possible. If the Mac system software had been a PC OS, it would have simply blended in with the countless other slow and clunky graphical environments that were available.

Posted By: BC_Programming
Last Edit: 08 Apr 2018 @ 09:45 PM

Title: Would Apple have done better with the Mac OS on the IBM PC?
Categories: Macintosh

 11 Mar 2018 @ 4:39 PM 

Just a sort of sidebar note: I’ve been running this blog/website since around 2009. A few times, I tried to “monetize” it with advertisements. However, I found they were either annoying or simply not worthwhile. I even fiddled with ad-blocker detectors and had a little banner for them. For a few years now, though, I’ve removed all advertisements and things like Google Analytics from the site wherever I could find them.

I have no plans to change this approach. This blog is my megaphone for presenting information to the world, not a funnel for making money. And with all the concerns surrounding advertisements- their potential for tracking, for infecting systems, and websites complaining when you use an ad blocker- I’ve decided that simply not having any advertisements at all is one way to set my blog apart.

One interesting aspect is that a lot of things don’t really declare that they include tracking. Including something like the Facebook “Like” button can be enough for Facebook to track users across your entire website, which means that even though I’m not tracking users (I’m neither interested nor equipped!), other entities could be tracking traffic to my website and using it for traffic shaping or targeted advertisements towards my visitors- and finding and eliminating all of those sources is not entirely obvious.

For added security, as well as to meet some upcoming changes in the most popular browsers, I switched the website’s default protocol to HTTPS a while ago.

I may not be posting as frequently as I used to, but I want my posts and information to be provided in the interest of sharing, not through some implicit moral contract that visitors will support me by allowing advertisements and/or tracking.

Posted By: BC_Programming
Last Edit: 11 Mar 2018 @ 04:39 PM

Title: My website and Advertisements
Categories: Site News

 28 Feb 2018 @ 9:59 PM 

Nowadays, game music is all digitized. For the most part it sounds identical between different systems, with only small variations, and the speakers are typically the deciding factor when it comes to sound.

But this was not always the case. There was a time when computers were simply not performant enough- and disk space was at too high a premium- to use digital audio tracks directly as game music.

Instead, if games had music, they would typically use sequenced music. Early on, there were a number of standards, but eventually General MIDI was settled on as a standard. The idea was that the software would instruct the hardware what notes to play and how to play them, and the synthesizer would handle the nitty-gritty details of turning that into audio you could hear.

The result of this implementation was that the same music could sound quite different because of the way the MIDI sequence was synthesized.

FM Synth

The lowest-end implementations used FM synthesis. This was typically found in lower-cost sound cards and devices. The instrument sounds were simulated via math functions, and oftentimes the approximation was poor. However, this also contributed a “unique” feel to the music. Nowadays FM synth has become popular among enthusiasts of old hardware. Cards built around the Yamaha OPL3, for example, are particularly popular as a “good” sound card for DOS. In fact, the OPL3 has something of a cult following, to the point that source ports of some older games which use MIDI music will often incorporate “emulators” that mimic the output of the OPL3. It’s also possible to find “SoundFonts” for more recent audio cards that mimic the audio output of an OPL3, too.
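
For the curious, the core idea is tiny- one sine wave modulating the phase of another. A little sketch of it (nothing OPL3-accurate, and the parameters are arbitrary):

// Minimal sketch of two-operator FM synthesis: a modulator sine wave varies the phase
// of a carrier sine wave, which is what gives FM synth its character.
using System;

class FmSynthSketch
{
    static float[] FmNote(double carrierHz, double modRatio, double modIndex, double seconds, int sampleRate = 44100)
    {
        int count = (int)(seconds * sampleRate);
        var samples = new float[count];
        double modHz = carrierHz * modRatio;
        for (int i = 0; i < count; i++)
        {
            double t = (double)i / sampleRate;
            double modulator = Math.Sin(2 * Math.PI * modHz * t);
            samples[i] = (float)Math.Sin(2 * Math.PI * carrierHz * t + modIndex * modulator);
        }
        return samples;
    }

    static void Main()
    {
        // A 440Hz "instrument"; the ratio and index decide whether it sounds bell-like,
        // brassy, and so on- that rough approximation is where FM's distinctive sound comes from.
        float[] note = FmNote(440.0, modRatio: 2.0, modIndex: 3.0, seconds: 1.0);
        Console.WriteLine($"Generated {note.Length} samples");
    }
}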

Sample-Based Synth

Sample-based synthesis is the most common form of MIDI synthesis. Creative Labs referred to their implementation as “wavetable synthesis”, but that is not an accurate description of what their synthesizer actually does. A sample-based synthesizer has a sampled piece of audio from each instrument and adjusts its pitch and other qualities based on playback parameters. For example, it might have a sampled piece of audio from a tuba and then adjust the pitch as needed to generate other notes. This typically produces a “better”, more realistic sound than FM synth.
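
The mechanism itself is simple enough to sketch- take one recording and resample it to change its pitch (real synthesizers also loop, filter, and envelope the sample, none of which is shown here):

// Naive sketch of sample-based pitch shifting: re-read one recorded sample at a different
// rate. Ratio 2.0 plays it an octave up, 0.5 an octave down. Purely illustrative.
using System;

class SamplePitcher
{
    static float[] Repitch(float[] source, double ratio)
    {
        int count = (int)(source.Length / ratio);
        var output = new float[count];
        for (int i = 0; i < count; i++)
            output[i] = source[Math.Min(source.Length - 1, (int)(i * ratio))];
        return output;
    }

    static void Main()
    {
        var recordedTuba = new float[44100];  // placeholder for one second of a recorded instrument
        var upAFifth = Repitch(recordedTuba, Math.Pow(2, 7.0 / 12.0)); // seven semitones up
        Console.WriteLine(upAFifth.Length);
    }
}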

Wavetable Synthesis

Wavetable synthesis is a much more involved form of synthesis- like FM synth on steroids. Where FM synth tended to use simple waveforms, wavetable synthesis attempts to reproduce the sound of instruments by modelling them with a large number of complicated math functions and calculations, as well as mixing numerous pieces of synthesized audio together to create a believable instrument sound. I’m not personally aware of any hardware implementations- though, not being anything of a music expert, I’m sure there are some- but software implementations tend to be present as plugins or features of most music creation software.

Personally, I’m of the mind that the best sample-based synthesis is better than the FM synth that seems to be held up on a pedestal; FM cards were lower-end hardware built down to a price, which is why they used the much simpler FM synthesis approach in the first place. That unique audio is what a lot of people heard while playing games on that sort of low-end audio hardware, so to a lot of people, FM synth is how games like Doom or Monkey Island are “supposed” to sound. I think sample-based synth is better- but, on the other hand, that is how I first played most of those games, so I’m really just falling into the same trap.

Posted By: BC_Programming
Last Edit: 28 Feb 2018 @ 09:59 PM

Title: MIDI Madness

 02 Feb 2018 @ 5:50 PM 

CPU architectures- and how we refer to them- sit in a sort of middle ground: the terminology needs to be technically accurate, but over time it can change. I thought it would be interesting to look into the name origins of the two most widely used CPU architectures for desktop systems today. Admittedly, this is fresh on my mind from some research I was doing.

x86

Nowadays, most 32-bit software for desktops and laptops is referred to as being built for “x86”. What does this mean, exactly? Well, as expected, we have to go back quite a ways. After the 8086, Intel released the 80186, 80286, and 80386. The common architecture and instructions behind these CPUs came to be known as “80x86 instructions”, understandably. The 486 that followed the 80386 officially dropped the 80 from the name- inspection tools would “imply” its existence, but Intel never truly called their 486 CPUs “80486”. It’s possible this is how the 80 got dropped. Another theory is that it was simply dropped for convenience- “x86” was enough to identify what was being referenced, after all.

The term survived up to today, even though, starting with the Pentium, the processors themselves never truly bore the “mark” of an x86 processor.

x64

x64 is slightly more interesting in its origins. 64-bit computing had existed on other architectures before, but “x64” now refers to the typical x86-compatible 64-bit operating mode. Intel’s first foray into this field was the Itanium processor. This is a 64-bit processor whose instruction set is called “IA-64” (as in “Intel Architecture, 64-bit”). It did not work well in this space, as it was not directly compatible with x86 and therefore required software emulation to run x86 code.

It was AMD who extended the existing x86 instruction set to add 64-bit support through a new operating mode. Much as 32-bit instructions were added to the 80386 with compatibility preserved by adding a new “operating mode” to the CPU, the same was done here: 64-bit operations are exclusive to the 64-bit long mode, 32-bit code still runs in 32-bit protected mode, and the CPU remains compatible with real mode.

This AMD architecture was called “AMD64” and the underlying architecture that it implemented was “x86-64”.

Intel, as part of a series of settlements, licensed AMD’s new architecture and implemented x86-64. Intel’s implementation went through a few names- x86E, EM64T- but Intel eventually settled on Intel 64. Intel 64 and AMD64 aren’t identical, so software targets the common subset- and that subset is where we get the name x64.
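
These names surface directly in day-to-day tooling; as a small example, a .NET program can ask which of them it is running as:

// Where the x86/x64 names show up in practice: querying the runtime's view of the architecture.
using System;
using System.Runtime.InteropServices;

class ArchInfo
{
    static void Main()
    {
        Console.WriteLine(RuntimeInformation.ProcessArchitecture); // X86 or X64 (or Arm/Arm64)
        Console.WriteLine(RuntimeInformation.OSArchitecture);
        Console.WriteLine(Environment.Is64BitProcess ? "64-bit process" : "32-bit process");
    }
}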

Posted By: BC_Programming
Last Edit: 02 Feb 2018 @ 05:50 PM

Title: x86 and x64 Name Origins
Categories: Hardware




