28 Aug 2016 @ 1:08 AM 

As anybody knows, there is a lot of incorrect information on the internet. Internet “just so” stories can spread like wildfire if they are believable and explain something neatly. One of those “just so” stories involves older game consoles and computers: over time, the once-white and gray plastics of old systems like Apple IIs, NES consoles, and SNES consoles change colour, going from white or gray to yellow; over time that yellow darkens, sometimes even turning brown.

This phenomenon is “explained” here. Or is it? Does what is stated there about the process reflect reality? Does it make chemical sense? To the layman or casual observer, it makes sense: bromine IS brown, after all, and it’s added to the plastic. But is there a chemical basis and support for it? What reactions actually take place?

“RetroBright”- which is basically just hydrogen peroxide- is commonly recommended to “reverse” the effects. The reason I care about the actual chemical properties is that the yellowing itself going away isn’t an indication that everything is back to how it was. Colour changes can be the result of all sorts of things. More importantly, if we learn the actual chemical processes involved, perhaps we can come up with alternative approaches.

Basically, the story put forth in the article is a rather commonly repeated myth- a chemical “just-so” story of sorts. “Bromine is brown, so that must be it” is the extent of the intellectual discussion regarding the chemistry, more or less. Generally, though, there isn’t much drive to look further into it- it all makes sense on the surface to the layman, or even to someone with fairly standard chemistry knowledge. But when you look deeper than the surface of the concept, you see that the commonly held belief that Brominated Flame Retardants are responsible doesn’t hold up.

First we can start with the first inaccuracy in that link: bromine is not added as a flame retardant. That is flat out, categorically and completely wrong, and trivially easy to refute. Bromine compounds- specifically, chemicals like Tetrabromobisphenol A (C15H12Br4O2)- are added as flame retardants, but as they are compounds, the colour of elemental bromine (brown) is irrelevant, because elemental bromine is not added to the plastic.

The article also says that “The problem is that bromine undergoes a reaction when exposed to ultraviolet (UV) radiation”. But bromine doesn’t photo-oxidize. It doesn’t even react with anything in the air on its own; creating bromine dioxide involves either exposing bromine to ozone at very low temperatures alongside trichlorofluoromethane, or, alternatively, passing an electric current through gaseous bromine and oxygen. Neither of these seems like something that takes place inside a Super Nintendo. Not to mention that elemental bromine is brown- so if it were in the plastic, oxidation would change it from the brown of elemental bromine to the yellow of bromine dioxide, not the other way around.

Back to what IS in the plastic, though: Tetrabromobisphenol A is not photosensitive; it won’t react with oxygen in the air due to UV light exposure, and the bromine cannot be “freed” from the compound and made elemental by some coincidence in a typical environment. It is simply not the cause of the yellowing. (ABS will yellow without BFRs as well, which rather indicates they’re not involved.)

The yellowing is inherent to ABS plastics, because it is the ABS plastic itself that is photo-oxidative. On exposure to UV light (or heat- which is why it can happen to systems stored in attics, for example), the butadiene portion of the polymer chain reacts with oxygen and forms carbonyl compounds, which are brown. There’s your culprit right there. RetroBright works because those carbonyls react with hydrogen peroxide, creating another compound which is colourless- but the butadiene portion of the polymer remains weakened. Oxalic acid is thought to be one possible way to reverse the original reaction.
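To sketch that out schematically (and only schematically- the actual photo-oxidation of polybutadiene yields a mix of hydroperoxide, hydroxyl, and carbonyl species, with the conjugated carbonyls acting as the visible chromophore):

$$\text{polybutadiene} \;\xrightarrow{\;h\nu,\ \mathrm{O_2}\;}\; \text{conjugated carbonyls (yellow/brown)} \;\xrightarrow{\;\mathrm{H_2O_2}\;}\; \text{colourless oxidation products}$$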

So why does it sometimes not affect certain parts of the plastic, or certain systems? Here the “just so” story is a bit closer to reality. The “story” is that the plastic formula has different amounts of brominated flame retardants. This is probably true, but as that compound isn’t photo-reactive or involved in the chemical process, it’s not what matters here. What causes the difference is a variance in a different part of the formula: the UV stabiliser.

UV stabilisers are added to pretty much all ABS plastic, intentionally, to try to offset the butadiene reaction and the yellowing effect the resulting carbonyls have. They absorb UV light and dissipate it as infrared-wavelength energy, which doesn’t catalyze a reaction in the butadiene. Less UV stabiliser means more UV gets to the butadiene, causing reactions, and the plastic yellows more quickly; more UV stabiliser means less UV catalyzes reactions, and the plastic takes longer to change colour.

As with anything related to this, the best approach is to experiment. I’ve decided to pick up some supplies and test both approaches on a single piece of plastic: a standard “RetroBright” mixture using hydrogen peroxide, and a variation using oxalic acid. I can apply both to the same piece of yellowed plastic and observe the results. Are both effective at removing the yellowing? What happens longer term? It should be an interesting experiment.

Posted By: BC_Programming
Last Edit: 28 Aug 2016 @ 01:08 AM

 16 May 2016 @ 11:59 AM 

Occasionally, I like to fire up gzDoom and play through some of the old Doom and Doom II games and megawads. I use a random level generator, Obhack, which I also hacked further to increase enemy and ammo counts. One alteration I like to make is to raise the ammunition limits. As it happens, the way I had it set up, this information was in a DeHackEd patch file within the WAD, so to make changes I had to use a tool called “Doom Wad Editor”.

Doom WAD Editor, or DWE for short, is about the most up-to-date tool I could find, and it is rather messy internally. It performs a lot of up-front processing to load the file and show previews, and it doesn’t support a lot of modern capabilities. I recently came to the realization that the WAD format is not some major secret- I could create my own tool.

So far, I’ve been able to construct the format handler, which is able to open and save the internal lumps. I’ll likely expand things to also support the KGROUP format (which is used by some Build Engine games like Duke Nukem 3D) and create a modern application for current Windows versions for modifying those older file formats.

The WAD File Format

The WAD (for “Where’s All the Data?”) format is used by Doom and Doom II, by other games using the same engine, and by modern source ports for those games, to store game data; this includes maps, textures, sprites, sounds, etc.

The format itself is rather straightforward. As with most files, we have a header. At the very start of the file, we find the four characters IWAD or PWAD. These characters determine the “type” of the WAD file. A PWAD is a “Patch” WAD, which patches another WAD file’s data by replacing its contents; for example, a mod that changes all the sounds to silly animal noises would be a PWAD which uses the same names for different data. An IWAD can be thought of as an “Initial” WAD; these are the “core” WAD files needed to play the games in question. The four characters are followed by a signed 32-bit integer indicating the number of lumps in the file (a lump being, effectively, a piece of data), and then another signed 32-bit integer giving the offset, from the beginning of the file, where the lump directory begins. The lump directory is a sequence of entries giving each lump’s position in the file, its size, and its 8-character name.

This is all, so far, relatively straightforward. So let’s get to it! Now, this is just a code example of the basic implementation- my plan going forward is to flesh the tool out into a WPF application that provides full editing and manipulation capabilities for WAD files. There is still an active Doom community creating megawads, so it may prove useful to somebody, and it’s unique enough that creating such an application should be interesting. I’ve been able to load and then re-save the standard DOOM.WAD and have the newly saved version function correctly, so it would seem I did something correctly so far:
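Below is a minimal sketch of that load/save logic, assuming the header and directory layout described above. The WadFile and Lump names are just illustrative- they aren’t necessarily the actual tool’s API:

```csharp
using System.Collections.Generic;
using System.IO;
using System.Text;

public class Lump
{
    public string Name;   // up to 8 characters; NUL-padded on disk
    public byte[] Data;
}

public class WadFile
{
    public string Type;   // "IWAD" or "PWAD"
    public List<Lump> Lumps = new List<Lump>();

    public static WadFile Load(string path)
    {
        using (var br = new BinaryReader(File.OpenRead(path)))
        {
            var wad = new WadFile();
            wad.Type = Encoding.ASCII.GetString(br.ReadBytes(4)); // "IWAD"/"PWAD"
            int lumpCount = br.ReadInt32();        // number of lumps
            int directoryOffset = br.ReadInt32();  // where the lump directory starts

            // Read the directory: each entry is a 32-bit offset, a 32-bit size,
            // and an 8-byte NUL-padded name.
            br.BaseStream.Seek(directoryOffset, SeekOrigin.Begin);
            var entries = new List<(int Offset, int Size, string Name)>();
            for (int i = 0; i < lumpCount; i++)
            {
                int offset = br.ReadInt32();
                int size = br.ReadInt32();
                string name = Encoding.ASCII.GetString(br.ReadBytes(8)).TrimEnd('\0');
                entries.Add((offset, size, name));
            }

            // Pull each lump's data using its directory entry.
            foreach (var e in entries)
            {
                br.BaseStream.Seek(e.Offset, SeekOrigin.Begin);
                wad.Lumps.Add(new Lump { Name = e.Name, Data = br.ReadBytes(e.Size) });
            }
            return wad;
        }
    }

    public void Save(string path)
    {
        using (var bw = new BinaryWriter(File.Create(path)))
        {
            bw.Write(Encoding.ASCII.GetBytes(Type)); // 4-character type
            bw.Write(Lumps.Count);                   // lump count
            bw.Write(0);                             // directory offset; patched below

            // Write lump data first, remembering where each lump landed.
            var offsets = new int[Lumps.Count];
            for (int i = 0; i < Lumps.Count; i++)
            {
                offsets[i] = (int)bw.BaseStream.Position;
                bw.Write(Lumps[i].Data);
            }

            // Write the directory, then patch its offset back into the header.
            int directoryOffset = (int)bw.BaseStream.Position;
            for (int i = 0; i < Lumps.Count; i++)
            {
                bw.Write(offsets[i]);
                bw.Write(Lumps[i].Data.Length);
                byte[] name = Encoding.ASCII.GetBytes(Lumps[i].Name.PadRight(8, '\0'));
                bw.Write(name, 0, 8);
            }
            bw.BaseStream.Seek(8, SeekOrigin.Begin);
            bw.Write(directoryOffset);
        }
    }
}
```

Round-tripping DOOM.WAD through Load and Save is a good smoke test: the lump count, names, and data should all survive unchanged.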

Posted By: BC_Programming
Last Edit: 16 May 2016 @ 11:59 AM

Categories: .NET, C#, Games
 26 Dec 2015 @ 1:43 PM 

When it comes to playing older game consoles, there are a lot of varying opinions. One of the common ones I see is that the only way to play old game consoles like the NES/SNES/Genesis/etc. ‘authentically’ is to play them on a CRT. I’ve never bought into that, personally. The general claim seems to revolve around some very particular scenarios- which I will mention- used to support the idea that the games were designed specifically for CRT technology. Let’s look into the facts. Then, we can do some experiments.

We’ll start with a comparison image that I commonly see used to support this: a screenshot of a portion of a screen in FF6 (FF3 in the US) on the SNES. First, we have the image which is called the “Emulator” image:


FF6, Alleged Image from an Emulator

This is held up as an example of how ‘pure’ emulated game imagery is “gross and blocky”. Stemming from that, the claim is that this is not “authentic”- that the game imagery is supposed to be blurred, and that the blurring is a direct side effect of CRT technology. Then this image is typically provided:


FF6, Alleged image from a CRT Television.

This is often claimed to be “what the game looks like on a CRT TV” and, typically, claimed to be what it was designed to look like. However, there are a few issues with the claim. The first is that this takes a relatively small portion of the game screen and blows it up immensely; you aren’t going to see any of the pixel detail of the first image unless you press your face right into your monitor. Another, perhaps more glaring issue is that the second image is taken from an emulator as well- the effect can be achieved by merely turning on bilinear interpolation in an emulator such as SNES9X. So the comparison doesn’t actually tell us anything: it shows us an image without an emulator feature, and an image with it enabled. It asserts the latter image is “accurate to what it looks like on a CRT”- but is it? The image itself is hardly proof of this.

Some short debates got me thinking about it. In particular, one common discussion is about Nintendo’s Wii U Virtual Console. For their NES library, I have often argued that, for whatever reason, it applies a rather gross blur filter over everything. I am told that this is intended to “mimic the original CRT TVs, which were always blurry”. I find this difficult to believe. The desire to properly experiment with an actual CRT TV- plus the fact that my ViewHD upscaler doesn’t support the ideal S-Video for my SNES and N64 systems- led me to eBay to buy a CRT TV. They were expensive, so I said “Nope” and decided not to. As it turns out, however, the previous tenants of my house- who had sort of run off a few years ago to avoid paying several months of back-rent- had also left behind a CRT television. I had never noticed, because I had never actually gone out to the shed the entire time I’ve been here. Mine now, I guess. So I brought it inside. Once the spiders decided to leave, I was initially disappointed when it refused to turn on- then an hour later it seemed to work fine, but was blurry as heck. I was able to fix that by adjusting the focus knob on the rear, such that it now works quite well and has a sharp picture.

Before we get too far, though, let’s back up a bit. There are actually quite a few “claims” to look at here. With the appropriate equipment it should be possible to do some high-level comparisons. But first, let’s get some of the technical gubbins out of the way.

Send me a Signal

The first stumbling block, I feel, is the input method. With older game consoles, the signal accepted by televisions- and thus generated by most systems- was analog. Now, when we get right down into the guts, a CRT’s three electron guns- one for each colour- are driven by independent signals. Some high-end televisions and monitors, particularly PVM displays, have inputs that allow the signal to be passed through pretty much straight in this manner. This is the best signal possible with such a setup- the signal sent from the device gets sent straight to the CRT electron guns. No time for screwing about.

However, other video signal formats were used for both convenience and interoperability. Older black-and-white televisions had one electron gun, and thus one signal- Luma- which was effectively luminosity, allowing for black-and-white images. When colour television was introduced, one issue was backwards compatibility: it was intended that colour signals should be receivable and viewable on black-and-white sets.

The trick was to expand the channel band slightly and add a new signal, the Chroma signal, which represented the colour properties of the image. A black-and-white TV only saw the Luma, while colour TVs knew about the Chroma and used it. (Conveniently, a colour TV not receiving a Chroma signal will still show black and white, so it worked both ways.) This worked fine as well.

Moving swiftly along, TVs started to accept a coaxial RF input. This provided a large number of “channels” of bandwidth; each channel carried a signal with the Chroma information lowpass-filtered onto the Luma signal, modulated onto that channel’s carrier.

Composite worked similarly, but abandoned the channel carrier, effectively just sending the combined Luma & Chroma signal without any channel adjustment.
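To put a rough equation to that “combining” (this is the generic YUV quadrature form- NTSC proper uses slightly rotated I/Q axes, so take it as a sketch rather than the exact broadcast spec):

$$\mathrm{Composite}(t) \approx Y(t) + U(t)\,\sin(2\pi f_{sc} t) + V(t)\,\cos(2\pi f_{sc} t),\qquad f_{sc} \approx 3.579545\ \mathrm{MHz}$$

The receiver has to pry those terms back apart with filters centered on $f_{sc}$, which is exactly where the crosstalk discussed below comes from.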

S-Video sent the Luma and Chroma signals entirely separately- with no low-pass filtering or modulation at all.

In terms of fidelity, the order from least desirable to best is RF, then Composite, then S-Video.

Now, this is all North American-centric; Europe and the UK had a slightly different progression. Over there, a somewhat universal connector- the SCART connector- became the de-facto standard. SCART could carry a composite signal, separated Luma/Chroma (S-Video) signals, or an RGB signal. An RGB signal is effectively three separate signals, one for each of the red, green, and blue electron guns in the television. This is effectively the best possible signal- it goes straight to the electron guns with very minimal processing, as opposed to Chroma and Luma, which require demodulation and other steps to be turned into an RGB signal for the electron guns. RGB was available in North America, but the equivalent connection method used- Component Video- wasn’t common until fairly late, around the time that CRT displays were being replaced with flat-panel LCD and plasma displays.

So with that out of the way, one of the factors in how good an image looks is how much information is lost. In the case of older game consoles, the choices- without modding- tend to be RF, Composite, or S-Video.

For the NES, the ideal output, without modifying the system, was Composite:

CRT displaying an image from a Composite signal from an NES.

It is notable that we can still make out individual pixels here; the dithered background doesn’t “mix” together. There is blurring, particularly along the horizontal scanlines, as well as dot skew along Megaman’s sprite- but those are not inherent properties of the CRT itself; rather, they are properties of the composite signal, as shown by running the same game via the Megaman Anniversary Collection on the Gamecube and using S-Video output:

 

A CRT Television displaying the S-Video output from a Gamecube.

This is a much clearer image. However, there is still some noticeable blurring around Megaman. Could this be added by the Gamecube’s emulation? I don’t know; we’ll have to do more experiments to find out.

As I mentioned, Composite is inferior to S-Video; this is because Composite is the result of applying a low-pass filter to the Chroma signal and “mixing” it with the Luma signal. The low-pass filter is there so the Chroma doesn’t interfere with the Luma signal- but the effective result is only that it doesn’t interfere with the Luma signal as much. The primary problem is that with both signals carried as one, demodulation will still pick up bits of the other signal due to crosstalk. Another possibility is that the signal itself is generated in a less-than-optimal way- in the case of the NES, for example, its PPU generates a composite signal, but that composite signal is created from square waves rather than proper sine waves, which introduces artifacts of its own.

Now, since I have no immediate plans to try modding any sort of higher video output onto my NES, the best solution for comparisons would be to use something that can be compared directly. I decided to go with Super Mario All Stars and the SMB3 World 1 map screen. First, we can see it with Composite:

CRT displaying Mario All Stars via a Composite input.

 

Next, we can switch it over to S-Video:

 


CRT displaying SNES Mario All Stars SMB3 via S-Video

Just compare those two- the S-Video is much better. The difference is entirely because of the separation of the Luma and Chroma into two signals; one can see a bit of “noise” in the composite version, whereas the S-Video output is very well defined. It is almost night-and-day. However, these differences are not due to the use of a CRT at all- S-Video signals can be accepted by any number of devices.

 

Design Intention Declarations

One common statement made regarding older consoles is that their art, sprites, and design are intended for a CRT, and that a CRT is therefore necessary for an “authentic” experience. This seems reasonable on its surface. However, it really is not possible to design for a CRT in a general fashion. CRT televisions accept varying signal inputs; they use widely different technologies- Aperture Grille, Shadow Mask, etc.- and have widely different convergence, moire, dot pitch, and other characteristics. While it would be possible to tailor, or use the side effects of, a particular television set to achieve a specific effect, that effect would be lost on pretty much any other set- and even on the same set, if adjustments are made.

However, one thing that does have well-defined aspects and side effects that can be exploited is the signal. In particular, for systems that use a composite signal (either via composite itself or through carrier-wave RF), the artifacts can produce certain image characteristics. These characteristics, however, have no relevance to CRT technology at all, and are not innate features of CRT television sets.

The most common example is Sonic the Hedgehog. The game has waterfalls in the foreground; in order to let you see your character- and because the Genesis hardware doesn’t support translucency- the game dithers the waterfall, drawing it with vertical stripes. When this is paired with a composite signal, it looks sort of translucent:

 

Sonic the Hedgehog’s waterfall, displayed via a Composite signal.

Well, OK, it doesn’t work great, since you can still see the lines- but the characteristics of composite lend themselves to some horizontal blending, which helps make it look translucent. At any rate, the argument is that, based on this, the game is designed for a CRT and designed for composite- and that therefore, not using a CRT or not using composite isn’t “playing it properly”.

 

I challenge this claim, however. First, the effect is irrelevant to CRTs, as I stated, so we can throw that one right out. Second, the fact that the design has a useful side effect with the most common video signal format doesn’t mean it was designed that way. The problem is that there realistically wasn’t any other way to implement it: dithering is a very common method of simulating semi-transparency, and had been for some time.

Another issue is that composite was not the only signal format available. The system also output S-Video and, in supported regions, full RGB signals. With an S-Video connection, that same waterfall effect looks like this:

Sonic the Hedgehog’s waterfall, displayed via S-Video.

 

If the system was designed for composite, why does it support signal formats with higher fidelity? There is simply no merit to the claim that games were designed to exploit composite blending. The fact of the matter is that in all instances where it has an effect, there wasn’t any other option for implementing what the developers wanted to do. Dithering is the most common case, and it is merely a result of writing game software for a device that doesn’t support translucency. That the typical connection signal blended dithered portions of an image together a bit more wasn’t an intended result; it was, in the words of Bob Ross, a “Happy Accident”.

 

Moving forward from that, however- and taking a step back to the Wii U Virtual Console. We’ve already established that CRT displays do not have inherent blurring characteristics; furthermore, the blurring effect of composite itself is relatively slight. The best approach is to simply compare the two images directly. For example, I have Kirby’s Adventure on the Wii U VC. I also have it on my Everdrive N8, allowing it to run on the NES as it would with the original cartridge. Let’s compare the two.

First, the composite image captured on a CRT, using the NES’s Composite connection:


Kirby’s Adventure, running on an NES and being displayed via Composite to a CRT Television.

There is a bit of a moire pattern from how I took the picture and how the phosphors line up, but that isn’t normally visible. There is some slight blurring, but it is mostly in the horizontal direction. Now here is the image from the Wii U VC, running on an LCD:


Kirby’s Adventure, running on the Wii U Virtual Console and being displayed on an LCD Screen through HDMI.

Here we see that they have merely blurred the output. For what purpose, I don’t know. Perhaps they are scaling the emulator output using the default bilinear scaling when they intended nearest-neighbour. In the closeups here it actually looks like a reasonable approximation, but even within these images the CRT picture is still clearer (particularly vertically). The main problem is that the CRT output appears very clear and crisp from a distance, whereas at any distance the Wii U VC output on an LCD looks blurry. Stranger still, the Virtual Console on the Nintendo 3DS doesn’t exhibit any of these visual effects.
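The difference between those two scaling modes is easy to reproduce. Here’s a hedged sketch in C# using GDI+- purely an illustration of the two interpolation modes, since I have no idea what the Wii U VC actually does internally:

```csharp
using System.Drawing;
using System.Drawing.Drawing2D;

static class ScalingDemo
{
    // Upscales a low-res frame by an integer factor. NearestNeighbor keeps
    // hard pixel edges; Bilinear produces the soft look described above.
    public static Bitmap Upscale(Bitmap source, int scale, InterpolationMode mode)
    {
        var result = new Bitmap(source.Width * scale, source.Height * scale);
        using (var g = Graphics.FromImage(result))
        {
            g.InterpolationMode = mode;
            g.PixelOffsetMode = PixelOffsetMode.Half; // avoids half-pixel edge artifacts
            g.DrawImage(source, 0, 0, result.Width, result.Height);
        }
        return result;
    }
}
```

Calling Upscale(frame, 4, InterpolationMode.NearestNeighbor) versus InterpolationMode.Bilinear reproduces the crisp/soft pair shown in these screenshots.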

To conclude, I think a lot of the attachment to CRT displays is rooted in confirmation bias, supported primarily by nostalgia. While there are benefits to the native analog capability of a CRT display- in particular, resolution switches are way faster- those benefits don’t really line up with many of the claimed advantages. And those that seem reasonable, such as CRTs having less input latency, have only been measured as time delays that are otherwise imperceptible. The bigger concern is less that CRT is ideal, and more that LCD panels tend to use very poor digitizers to turn the analog signal into a digital one for the display panel itself. These issues can be eliminated by using a breakout box, such as a Framemeister or a ViewHD, which accepts various inputs and outputs HDMI.

Posted By: BC_Programming
Last Edit: 26 Dec 2015 @ 01:43 PM

Categories: Games, Hardware
 05 Sep 2015 @ 9:07 AM 

For some time I’ve been looking forward to a new “Game” that Nintendo announced a while ago: Mario Maker, now titled Super Mario Maker. I put “Game” in quotes because what it is, is perhaps not that straightforward. Effectively, it allows you to create your own levels- but further, it allows you to share your levels with the online community and play levels created by others. One could think of it almost like a Mario game that you can change, with crowd-sourced levels.

Opinions on the product are, of course, varied. Some consider it a “Little Big Planet” rip-off- an odd descriptor, since that game hardly trademarked the idea of an online community for sharing levels. Others say it is nothing more than a glorified ROM-hacking tool. That is an interesting argument, and one I rather disagree with.

What is ROM Hacking?

For those unfamiliar, “ROM hacking” is effectively taking the ROM data of a console video game and fiddling with the innards; this could involve changing graphics, code, or level information. In this context, most are comparing Super Mario Maker to level-editing tools such as Lunar Magic and Mario 3 Workshop; these programs provide a more graphical, easier approach to editing level information in Super Mario World and Super Mario Brothers 3 ROM files. I don’t think such comparisons are particularly valid. The primary issue is that neither of those tools is nearly as intuitive or obvious; both require some knowledge of the game engine, particularly how it deals with pointers and exits/entrances.

Another consideration is that when it comes to ROM hacking, the typical distribution method is patches, the purpose being to avoid copyright infringement. Effectively, the person who creates a hack distributes a patch file, which describes the changes to be made to the game’s ROM in order to create their hacked version; this way the distributed file contains only the changes, and doesn’t distribute content copyrighted by Nintendo. This means that while there are communities and websites covering, reviewing, and featuring these hacked ROMs, in order to try such a hack one needs to download the patch file, apply it to the appropriate ROM file, and load the result in an emulator (or run it on the original console using something like an Everdrive N8/Super Everdrive or an SD2SNES). The communities are also typically rather niche; while there is excellent help to be found for creating, editing, and working with ROM-hacking tools, the tools and methods are rather involved. It also requires that the user skirt the law: unless you dump a ROM file from a cartridge yourself, you have to download it from the internet, which means breaking copyright law. Of course, whether something is illegal and whether the laws prohibiting it will be enforced are different questions, but it is still something that may scare many away.
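As an aside, those patch files are most commonly in the IPS format, which is simple enough to show. Here is a hedged sketch of an applier in C#, assuming the standard “PATCH”…“EOF” record layout (the names are mine, and a real tool would also validate lengths and handle the rare truncation extension):

```csharp
using System;
using System.IO;

public static class IpsPatcher
{
    // Applies an IPS patch in place. Assumes the patch doesn't try to
    // grow the ROM beyond its current size.
    public static void Apply(byte[] rom, byte[] patch)
    {
        if (patch.Length < 8 || patch[0] != 'P' || patch[1] != 'A' || patch[2] != 'T'
            || patch[3] != 'C' || patch[4] != 'H')
            throw new InvalidDataException("Not an IPS patch.");

        int pos = 5;
        while (true)
        {
            // Each record starts with a 3-byte big-endian offset; the literal
            // bytes "EOF" (0x454F46) mark the end of the patch.
            int offset = (patch[pos] << 16) | (patch[pos + 1] << 8) | patch[pos + 2];
            if (offset == 0x454F46) break;
            pos += 3;

            int size = (patch[pos] << 8) | patch[pos + 1];
            pos += 2;

            if (size == 0)
            {
                // RLE record: a 2-byte run length, then a single byte to repeat.
                int runLength = (patch[pos] << 8) | patch[pos + 1];
                byte value = patch[pos + 2];
                pos += 3;
                for (int i = 0; i < runLength; i++) rom[offset + i] = value;
            }
            else
            {
                // Plain record: 'size' literal bytes copied into the ROM.
                Array.Copy(patch, pos, rom, offset, size);
                pos += size;
            }
        }
    }
}
```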

Super Mario Maker avoids all of these problems. That said, despite allowing you to play and create levels that look similar to Super Mario Brothers, Super Mario Brothers 3, Super Mario World, and New Super Mario Brothers U, it is very much a different game. Many elements from the originals are changed, new things are added, features are revised, limitations are removed, new limitations are added, etc. I like to think of it as a new game that merely provides skins approximating some of the original titles, and I look forward to experimenting with many of the new capabilities that the revised engine allows. For example, in the original titles, enemies didn’t bounce off of note blocks or springboards; only Mario could interact with them, and enemies just walked on them like normal blocks (or passed right through them). With Super Mario Maker, enemies and various other entities will interact with springboards and note blocks; Bob-ombs will explode and damage the level by destroying things like bricks; you can make large-sized Koopas whose shells destroy otherwise indestructible blocks; and Koopas and other enemies will interact with objects like platforms, where previously they simply fell through them as if they weren’t there. This provides a wealth of capabilities for designing unique levels which simply aren’t possible with ROM-hacking tools, through an easy-to-use interface that requires no technical knowledge of pointers or object data or anything like that.

Super Mario Maker is going to be released next Friday (September 11th), a date chosen to approximate the 30th anniversary of the original Super Mario Bros. game, which, in Japan, was released September 13th, 1985. Since in 2015 that falls on a Sunday, the Friday before was selected. Some have pointed out the unfortunate coincidence, and even referred to it as tasteless. However, I’m of the mind that the world cannot simply stop on the anniversary of such events, and any memoriam or sombre attitude associated with a date isn’t necessarily mutually exclusive with other activities anyway. Who among us can claim to have never played a video game on November 11th, for example?

Posted By: BC_Programming
Last Edit: 05 Sep 2015 @ 09:07 AM

Categories: Games
 11 May 2015 @ 11:50 AM 

A bit of a shorter post. Sort of a “progress” report on some of the personal stuff I’ve worked on recently.

BASeBlock

I’ve come to form a love/hate relationship with BASeBlock. This is mostly because there are a lot of things I like about its design, and a lot of things I hate- and the things I dislike tend to be difficult to change. The basic dislikes include a lack of DPI support and the fact that it won’t actually scale to larger sizes; on my monitor it is now tiny, which is annoying, and pretty much a deal-breaker for an action game. I’ve brainstormed a few solutions. The simplest would be to scale up the bitmap that gets drawn to the screen. That is still a pain, but doable. Another would be to go a step further and actually scale everything in the game itself to larger sizes. That would be a rather large undertaking- to the point that I’m not even sure it would be worth the effort. I made a few minor revisions to try to get it to scale using the first method, but ended up shelving that work for the moment. It’s rather disappointing to find such glaring problems with old projects you put so much time into, and almost painful to even consider shelving the project entirely and moving on. I certainly made a lot of mistakes with BASeBlock, but I think it does well for a game using such basic capabilities (GDI+ drawing for a game is rather unorthodox!).

Prehender

3-D programming is incredibly annoying and uses math that is far beyond my ability to truly comprehend. View matrices, rotation matrices, dot products. It’s basically a case of Googling to find out how to do things, then hoping I can figure out how to get the math to work with OpenTK. Nonetheless, I have managed to make a bit of progress.

As with BASeBlock- and realistically any game I make going forward- its primary purpose is learning. BASeBlock is at this point about “learning” how to refactor an old codebase to improve it, whereas originally it was for learning C#. Prehender is about applying the design techniques I’ve learned since creating BASeBlock, as well as being my first 3-D game. With that in mind, it is a rather simple concept.

Originally, I was going to just create some sort of 3-D block breaker. I have a rather unhealthy fetish with them or something. But I decided to change it up a bit. I decided to “steal” a bit of the design of the 2-D game “Spring-up Harmony”, which effectively uses a physics engine, where you shoot coloured balls at coloured blocks. If you hit a matching block, it “loosens” from the static background and will interact with other blocks; then you can catch the loosened blocks with your “bucket” at the bottom of the screen. I haven’t worked out all the details, but right now I have it so you shoot coloured balls at an arrangement of cubes, and when a coloured ball touches a matching coloured block, the block loosens and falls. I haven’t actually figured out the gameplay specifics, though. That does bring me to the title- 3-D programming is quite difficult. I haven’t used Unity before; I may give it a go at some point. However, my interest in creating games is typically in what I can learn about actually making them- Unity seems to be more for people interested in making games, as it abstracts away some of the parts I find interesting. In my case, I’m using C# and OpenTK. Unfortunately, this means I get the fun of dealing with concepts such as projection and view matrices, dot products, cross products, and the like. My math also fails me: I’m not able to determine the camera position from the projection and view matrices, which is a bit goofy when I want to shoot the balls from the position of the camera.
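For what it’s worth, one standard trick- noted here as a hedged sketch with OpenTK, not as code from Prehender- is that the view matrix maps world space into camera space, so inverting it and extracting the translation gives the camera’s world position:

```csharp
using OpenTK;

public static class CameraHelper
{
    // The view matrix transforms world space into camera space, so the
    // translation component of its inverse is the camera's world position.
    public static Vector3 PositionFromView(Matrix4 view)
    {
        Matrix4 inverse = Matrix4.Invert(view);
        return inverse.ExtractTranslation();
    }
}
```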

On the bright side, this does make it (IMO) a more useful learning experience. I find it rather strange that I’ve had to resort to third-party libraries (OpenTK and BASS.NET) to provide 3-D display and compressed-audio capabilities in my C# program. XNA has been rather left behind (though it still works), and it has a few omissions that I found frustrating when I was working on BCDodger. I would hope that .NET is given first-party support for creating games in the future- something that makes the task much easier but lets us use the full power of .NET and C#. Sort of an XNA successor that also lets us publish to Xbox One. (Heck, if such a library were made available even at cost, I think I could justify an Xbox One.)

BCSearch .NET

BCSearch, my VB6 program, works, but working on it is pretty much a non-starter these days. I am impressed with the patience I used to have working with Visual Basic 6 only seven short years ago. Some features of the program will simply never be brought to completion.

Instead, I would like to create a modern WPF Windows application that uses modern programming (async/await and such) for the same purpose. The goal is a rather straightforward on-demand search program. This differs from Start->Search and the search tool of Windows Explorer in that it is a full application oriented around searches and dealing with the results of those searches. I often find myself trying to find files based on rather specific criteria, and in that context I can see myself using an imagined BCSearch.NET that allows me to write a short C# method in a textbox for filtering each file. This would also allow me to rethink some of the weird architecture decisions I made with BCSearch, while also letting me work with WPF again (my work is almost exclusively Windows Forms, and the last time I really worked with WPF was BCJobClock).
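To illustrate the “C# method in a textbox” idea, here’s a hedged sketch using the Roslyn scripting API (Microsoft.CodeAnalysis.CSharp.Scripting). Everything here- names included- is hypothetical, not actual BCSearch.NET code:

```csharp
using System;
using System.IO;
using System.Threading.Tasks;
using Microsoft.CodeAnalysis.CSharp.Scripting;
using Microsoft.CodeAnalysis.Scripting;

public static class SearchFilter
{
    // Compiles the user's textbox contents into a file predicate, e.g.
    // "file => file.Extension == \".log\" && file.Length > 1024".
    public static async Task<Func<FileInfo, bool>> CompileAsync(string userCode)
    {
        var options = ScriptOptions.Default
            .AddReferences(typeof(FileInfo).Assembly)
            .AddImports("System", "System.IO");
        return await CSharpScript.EvaluateAsync<Func<FileInfo, bool>>(userCode, options);
    }
}

// Usage sketch (needs System.Linq for Where):
// var predicate = await SearchFilter.CompileAsync(filterTextBox.Text);
// var matches = new DirectoryInfo(root)
//     .EnumerateFiles("*", SearchOption.AllDirectories)
//     .Where(predicate);
```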

Posted By: BC_Programming
Last Edit: 11 May 2015 @ 11:50 AM

 22 Feb 2015 @ 11:30 AM 

I’ve been out of the whole Bukkit plugin business for some time, but I recently jumped back in because there was a tiny bit of demand for me to update “GP-EnderHoppers”. It got me thinking about possibly jumping back in more fully- not to GriefPrevention, mind you; that has moved on, rolling back pretty much everything I did to the plugin- but rather with my own “protection” plugin of sorts.

My thought is that I could apply what I’ve learned through work to a Minecraft plugin- that is, I work with software dealing primarily with marinas: customers, sites, sublets, reservations, invoices, etc. It might be interesting to try to architect, from scratch, a Minecraft plugin built for the same purpose. It’s only something I’ve been considering on and off for the last day or so, in a passing whimsy.

Furthermore- what about a marina plugin? Something which fixes boats (makes them unbreakable, allows them to be named, etc.) and brings real-world marina terms and operations into a Minecraft environment. Rather than a “general” protection plugin, it could be designed for managing harbours within the game- something which, at least as far as I can tell, no plugin has been built for. And since I could bring real-world experience to the table in that context, it might be interesting.

That said, the plugin landscape is pretty goofy. Between the whole Bukkit DMCA debacle and the Spigot workaround, it’s sort of like walking on eggshells. I’m not a big fan of the ecosystem, to be honest; I don’t even really like the sort of servers that would use plugins.

My main consideration is that I need a “hefty” side project of sorts, as currently I feel I’m not really doing as much as I could be in that regard, since almost everything I do is in some sense motivated by my work. To that end I did crack open “Prehender” (why did I call it that?) and even fixed some of the issues that had caused me to lose interest (HUD stuff). Once I work out some details about the camera, I think I’ll be well on the way to having a basic, but playable, 3-D game.

Posted By: BC_Programming
Last Edit: 22 Feb 2015 @ 11:30 AM

Categories: API, Games
 17 Feb 2015 @ 3:08 AM 

The “video game” industry is interesting. Its consumer base is often fickle, frugal, and judgmental. Often, reading about it, I’m glad I’m not trying to make a living writing games.

It’s not that games aren’t a great medium- they are. Personally, I suppose my “passion” is not so much games as it is programming. And, when you think about it- how many business consumers rely on games functioning properly? None. This is why game consumers can be frugal: the software is not a necessity; it is a luxury, an excess. In contrast, “business” software can often have quite weighty contracts and a high price point. When software is required to do business, the costs are justified because they translate to a more efficient business. When it comes to games, many consumers will pay a low price point- perhaps 15-20 dollars at most- and then effectively “demand” updates and new features years later. I cannot wrap my head around the logic involved.

Within any consumer base, there is going to be technical illiteracy, particularly regarding the details of software construction. In the domain of games, though, it seems that all the people who build PCs for the latest and greatest games become experts on how software works and how hard it is to implement new features. Typically, this leads to an inability to comprehend scope.

As a specific example, take Minecraft. One of the common criticisms of the game is that it runs “slower than it should”. This has the problem of being a bare assertion- on what basis do they know how it “should” run, after all? Furthermore, it is usually substantiated with “See, this mod is made by just one guy and makes the game faster!”. Again: scope comprehension. When you develop a mod, you can do whatever you want, and typically you create a modification by narrowing your scope; as a mod author, you can ignore everything else, including hardware compatibility. The number of users reporting issues with “performance” mods in Minecraft is in the single-digit percentages, but that would be an insane number of people if the same sort of additions were integrated into the vanilla game.

Speaking in those terms, lately a buzzword has been “Mod API” or “Plugin API”. It is odd, in a way, since games did not typically advertise that they had any sort of an API until recently. And, even more curious, most of the people who want one aren’t going to use it- they are effectively asking for it because they figure it means other people will create good content for the game. But the reality is that that is going to happen regardless of whether there is an API. If you look at any older game, you can find a lot of dedicated ROM hackers, modders, and expertise surrounding changing it or making modifications to it. In reality, a proper “API” surrounding a game would just lower the barrier to entry and mean there is more shovelware and poorly written software.

Another issue is that games are more interesting. This might seem odd, but what I mean is that games are often the “entry point” for new software developers. They will research programming and perhaps learn on their own- but too often one sees self-taught teenagers with no substantial experience or portfolio trying to criticize a game that has already earned unimaginable sums. This has a significant falloff as you move away from game development and enter the world of business software. A game developer has to deal with teenagers who complain that the algorithm used to interpolate the rock items in the game is not entirely accurate and should be changed; that sort of thing doesn’t happen in business software. If you have to deal with anybody in such a capacity, they are going to be an adult, and they are going to be working in the interests of their company in terms of the business purpose, not for their own ego.

Naturally, there are exceptions. Developing games, and even game modifications, can be rewarding. But as you expand your portfolio of projects, you find that you are, for lack of a better term, “stretching yourself thin”. How do you find the time to maintain all of these different projects in your spare time without going absolutely bonkers? You don’t. You end up leaving projects alone for months, or even years; new ideas slowly drift away as you realize that you simply won’t have time to bring them to bear.

To summarize, the consumers of games are more finicky, frugal, picky, and critical than consumers of business software. It’s easy to evaluate whether a piece of software suits a particular business purpose; it’s harder to evaluate whether it is entertaining. With a lot of business software, the consumer has to choose something, so you only need to show how you are better than your competitors. This isn’t the case for games- you have to prove that your title is amusing, which is significantly harder.

Posted By: BC_Programming
Last Edit: 17 Feb 2015 @ 03:08 AM

Categories: Games, Programming
 15 Sep 2014 @ 4:15 AM 

I noticed I’ve been a bit quiet with posts lately. I was very busy during most of September, but I’ve got some code-intensive stuff I’m expecting to share in a future blog post (it involves the Theme API), so look forward to that- or don’t, it’s not my choice.

Over the past week or so, a rumour- tracing its source to a “source close to the matter” in a Wall Street Journal article- has spread like wildfire, particularly amongst the Minecraft community. The rumour is that Microsoft is currently negotiating to purchase Mojang. As somebody who used to be rather involved in the Minecraft community, and one who actively uses and appreciates many Microsoft technologies and offerings, it seemed like a good topic for a post.

I will ignore the various aspects of “is this true or not?”- it is as easy to find reasons to doubt it as it is to find reasons to feel it is legitimate. Instead, I want to focus on the impact, assuming it is true. The reason is that I think Microsoft has been heavily misrepresented within many of the forums, blogs, and articles I’ve seen posted on this issue, in regards to what people seem to expect Microsoft to do with Mojang properties. I’ve heard claims that Microsoft would fire all Mojang employees and replace them with Microsoft employees who would rewrite Minecraft in C++; I’ve heard claims regarding Microsoft injecting Minecraft with DRM; and all sorts of other wild claims.

I find these claims particularly baseless. Assuming the acquisition goes through, it is not going to be the first one Microsoft has made, and in particular, most acquired game studios keep a good semblance of their individuality- the only real transfer is the ownership of the IP. It is unlikely that Microsoft would change anything; it is worth considering that changing the operations of a company you just bought rather diminishes the value you apparently placed on it when you purchased it. You don’t pay two billion dollars for a company that needs sweeping internal changes.

In those terms, I think this would not be the “Doomsday” that many seem to predict.

There is, however, a flip side to this coin. While I cannot fault Microsoft for wanting to be involved in the very active and popular Minecraft, I am surprised (again, assuming it is true) that Notch, the creator of Minecraft, would perform such a heel-face turn. It is perhaps disappointing, in a way. Earlier on, he was always standing beside his principles of software freedom and anti-DRM, and he even rallied against Windows 8, even though his understanding of it was about equal to a 5-year-old’s understanding of quantum fissures. If the rumours are true, then I think this speaks to a lack of character in some ways, since it suggests he is willing to throw out his principles simply because he sees enough dollar signs. If it goes through, I have no doubt he will try damage control with some nonsense about “doing what is best for the company”, but make no mistake: if the sale goes through, he is set to be a heavy beneficiary.

In many ways, this is why I find myself doubting the story. I disagree with Notch/Markus on many of his prior anti-Microsoft ramblings regarding Windows 8 and MS being evil and such, but I at least respected his character in holding those positions; if he changed his mind based on new information, I could respect that as well- strong opinions, weakly held.

But if all it took to suddenly change his strong opinions was enough zeroes, that I find disappointing. There is an old saying that “everybody has their price”. To provide a rather vulgar example, you would be hard-pressed to find a person who would not suck a dirty hobo to orgasm for a billion dollars. I mean, if you ask most people, they will say they’d stick by their principles- most people are definitely for helping the homeless, but that particular form of help is definitely far off the radar. The difference comes when somebody actually has a billion dollars- suitcases of real, verified money amounting to a billion dollars- and it’s all yours if you can give smelly Sid here a good time. The fact is, you can buy mouthwash later. It will be disgusting and unpleasant, but once it’s done, it’s done. And somehow I never expected this post to devolve into a thought experiment involving homeless-person fellatio, which I suppose is just a further illustration of how unpredictable our actions can be.

Supposedly the results/decision will be announced Monday. My personal prediction is that there may be a partnership deal between the two companies, possibly involving Xbox One exclusive content (this has happened for a few games on the Xbox 360 and Xbox One previously). An acquisition seems rather out of left field, and given the vagueness of the description of the sources provided, they may not be as privy to the details as claimed and may be filling in blanks themselves. So how accurate will I be? Well, we’ll find out tomorrow- or, rather, today!

 
Edit: Well, I was completely wrong here. Oh well!

Posted By: BC_Programming
Last Edit: 17 Oct 2014 @ 03:47 PM

Categories: Games
 24 Sep 2013 @ 5:53 PM 

Recently, Valve announced their upcoming ‘product’- a Free Operating System called SteamOS.

SteamOS is the culmination of a year or so of complete and utter cluelessness from Gabe Newell about software products like Windows 8. Remember how he said that Windows 8 was a “catastrophe” and would be a “launch failure”? You might have expected him to change his tune when his very own Steam showed that 8% of Steam users were on Windows 8, whereas every non-Windows OS combined barely broke a single percentage point. He still stands by his ignorance- an ignorance that includes a complete misunderstanding of pretty much every single thing about Windows 8. His claim is that it encourages a closed gaming marketplace. It doesn’t. Desktop applications still run. Desktop applications still install; in fact, you can even have installers install Modern UI applications. Most games aren’t going to be using the WinRT APIs, though, so any and all argument about a “walled garden” is entirely specious.

The problem is that every “argument” against it starts with the postulation that you [i]want[/i] your games on the Windows Store. Why? That’s stupid. You can still use the same traditional digital distribution you do today. The Windows Store is only useful to you if you happen to have a WinRT application that you would like to deploy to WinRT and desktop platforms. Some simple games may fit this, but most games do not- and as a result, the argument about the Windows Store being closed is completely tangential. They argue that they need to have their Windows Store links point to another retailer. Well, my first question is why they have a Windows Store link to begin with; Windows 7 doesn’t seem to suffer from the lack of a Windows Store. The complete ignoring of the fact that [i]the standard desktop still exists in the desktop versions of the OS[/i] is, I suspect, almost entirely done on purpose.

So, with the above out of the way, and based on Gabe Newell’s various quotations on the subject, I can safely say that he has practically no understanding of Windows 8, the Windows Store, or any of the related technologies, and that his “concern” over it in regards to the gaming industry is based entirely on a castle of FUD he has built himself.

But to bring this circus act together, we have SteamOS. Apparently, SteamOS is Gabe’s answer to Windows 8; it’s more or less a crappy HTPC OS that runs some Valve software, is based on Linux, and might be able to play almost 4% of Steam titles. Wow. Colour me impressed. I can totally get behind this game company working on an [i]Operating System[/i] instead of actually starting any sort of development on the most anticipated game sequel ever. For somebody who throws their weight around in game-development circles, they seem to be doing very little actual game development.

The fact that people are hailing SteamOS as some good thing that everybody in the gaming community needs makes me sick. Steam is still as awful a platform as it was ten years ago. The irony is that back then, the complaints about a new closed gaming marketplace were directed at Steam. How can they throw those exact accusations at Microsoft when they very clearly are [i]running their own closed gaming marketplace[/i]? Steam decides whether or not a game gets listed. That’s the very definition of a closed system.

With any luck, Valve will wisen up and get rid of the clearly incompetent Gabe Newell, who has used his position- almost either maliciously or stupidly- to spread idiotic FUD based on so little research that the fact he is still chiming the same old song and dance makes it difficult to consider him cognitively capable of being a circus clown, let alone running one of the biggest software-distribution empires in existence today.

Posted By: BC_Programming
Last Edit: 24 Sep 2013 @ 05:53 PM

Categories: Games, Linux, Windows
 04 Feb 2013 @ 9:24 PM 

Is XNA Going Away?

The following consists of my opinion and does not constitute the passing on of an official statement from Microsoft. All thoughts and logic are purely my own, and I do not have any more ‘insider’ information on this particular topic than anybody else.

I’ve been hearing a bit of noise from the community about Microsoft’s XNA Framework- a programming library and suite of applications designed to ease the creation of games- being cut. A Google search reveals a lot of information, but a lot of it is just plain old rumours. The only piece I could find that was based on actual information still makes a lot of assumptions. It is based on this e-mail:

Our goal is to provide you the best experience during your award year and when engaging with our product groups. The purpose of the communication is to share information regarding the retirement of XNA/DirectX as a Technical Expertise.

The XNA/DirectX expertise was created to recognize community leaders who focused on XNA Game Studio and/or DirectX development. Presently the XNA Game Studio is not in active development and DirectX is no longer evolving as a technology. Given the status within each technology, further value and engagement cannot be offered to the MVP community. As a result, effective April 1, 2014 XNA/DirectX will be fully retired from the MVP Award Program.

Because we continue to value the high level of technical contributions you continue to make to your technical community, we want to work with you to try to find a more alternate expertise area. You may remain in this award expertise until your award end date or request to change your expertise to the most appropriate alternative providing current contributions match to the desired expertise criteria. Please let me know what other products or technologies you feel your contributions align to and I will review those contributions for consideration in that new expertise area prior to the XNA/DirectX retirement date.

Please note: If an expertise change is made prior to your award end date, review for renewal of the MVP Award will be based on contributions in your new expertise.

Please contact me if you have any questions regarding this change.

This is an e-mail that was sent out- presumably- to XNA/DirectX MVPs. I say presumably because, for all we know, it was made up to create a news story. If it was sent out, I never received it, so I assume it went to those who had received an MVP Award with that expertise; it might have been posted to an XNA newsgroup as well. Anyway, the article that had this e-mail emblazoned as “proof” that MS was abandoning XNA seemed to miss the ever-important point that it actually says nothing about XNA itself- it refers to the retirement of XNA/DirectX as a technical expertise. What this means is that there will no longer be MVP Awards given for XNA/DirectX development. It says nothing beyond that. Now, it could mean they plan to phase the technology out entirely- but to come to that conclusion based on this is a bit premature, because most such expertise retirements have actually involved a merge. And in many ways, an XNA/DirectX expertise is a bit redundant anyway: XNA works through a .NET language such as VB.NET or C#, and an XNA/DirectX MVP’s skills overlap heavily with those languages, so it might make sense to just clump them in with us lowly Visual C# and Visual Basic MVPs.

To make the assumption that XNA is being dropped based on this e-mail is a bit premature. In my opinion, the choice was made for several reasons, and I guess some of the confusion might be the result of misconceptions about just what a Microsoft MVP is. First, as I mentioned before, a lot of the expertise of XNA/DirectX involves an understanding- and expertise- in some other area: again, Visual C#, Visual Basic, Visual C++, etc. So in some ways they might have considered a separate XNA/DirectX expertise redundant. Another reason might have to do with the purpose of an MVP. MVP Awards are given to recognize those who make exceptional community contributions in the communities that form around their expertise; for example, my own blog typically centers around C#, solving problems with C# and Visual Studio, presenting those code solutions and analyses to the greater community by way of the internet, and sharing my knowledge of C# and .NET in the forums in which I participate. MVP awardees don’t generally receive much extra inside information, and what they do get is typically covered by an NDA. The purpose of the award is also to establish good community members through whom Microsoft can provide information to the community. MVPs are encouraged to attend numerous events where they can, quite literally, talk directly to the developers of the products with which they are acquainted; in some ways you could consider MVPs “representatives” of the community, chosen because their contributions mean they likely have a good understanding of any prevalent problems with the technologies in question, and because interacting with MVPs can give the product teams insight into the community for which their product is created.

Back to the particulars here, however: as the e-mail states, XNA Game Studio is not under active development. Following that, it seems reasonable to assume that either the product has no product team, or that those on it are currently occupied with other endeavours or other products for which their specific talents are required.

It’s not so much that they are “pulling the plug on XNA”- the product is currently in stasis. As a direct result, it makes sense that without an active product team, having specific MVP awardees for that expertise isn’t particularly useful for either side: MVPs gain from personal interactions with the appropriate Microsoft product team as well as fellow MVPs, and Microsoft gains from the aggregate “pulse of the community” that those MVPs can provide. Without a product team for an expertise, that expertise is redundant, because there is nobody to receive the direct feedback. This doesn’t mean the XNA community is going away- just that, for the moment, there is no reason for Microsoft to watch its pulse, while the OS and other concerns surrounding the technology (the Windows 8 UI style, the Windows Store, and so on) metamorphose and stabilize. Once the details and current problems with those technologies are sussed out, I feel they will most certainly look back and see how they can bring the wealth of game software written in XNA to the new platform. Even if that doesn’t happen, XNA is still heavily used for Xbox development- which is also its own expertise.

I hope this helps clear up some of the confusion that has been surrounding XNA. It doesn’t exactly eliminate uncertainty- this could, in fact, be a precursor to cutting the technology altogether- but there is nothing pointing to that being the direction, either.

Posted By: BC_Programming
Last Edit: 04 Feb 2013 @ 09:24 PM
