12 Sep 2019 @ 11:33 AM 

C# 1.0 was something of a first pass at language design. The language received refinements and improvements, and started to carve out its own unique identity with 2.0. C# 2.0, like many C# versions, relied heavily on changes made to the CLR; that is, a C# 2.0 program typically could not run on the .NET Framework 1.1 runtime. This set it apart from Java, where language features are typically designed to run on any Java Virtual Machine.


One of the biggest features added to the language with C# 2.0 was generics. Generics allow you to define type parameters which can "stand in" for other types, based on how you use the generic class itself. Good examples of generics can be found in the .NET Base Class Library: with .NET 2.0 we got new strongly-typed classes such as List<T> and Dictionary<TKey,TValue>, which let you create strongly typed collections of any type, and dictionaries using almost any types for the key and value. This provides a wealth of flexibility, as well as additional type safety through compile-time type checking. Their non-generic counterparts, ArrayList and Hashtable, would let you add anything: even if your code expected all items in an ArrayList to be one type, you could introduce errors without realizing it by adding a string or a number, and the code would still compile and run; you would only find out via run-time errors.
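A small sketch of that difference, using a hypothetical sum over a collection of ints:

```csharp
using System;
using System.Collections;
using System.Collections.Generic;

public static class GenericsDemo
{
    // With the old ArrayList, a stray string compiles fine and only
    // fails at run time, when the foreach tries to unbox it as an int.
    public static bool SumBlowsUp(ArrayList items)
    {
        try
        {
            int sum = 0;
            foreach (int i in items) sum += i; // per-element cast at run time
            return false;
        }
        catch (InvalidCastException)
        {
            return true;
        }
    }

    // The generic List<int> makes the same mistake impossible:
    // list.Add("oops") would be rejected at compile time (error CS1503).
    public static int TypedSum(List<int> items)
    {
        int sum = 0;
        foreach (int i in items) sum += i; // no casts, no boxing
        return sum;
    }
}
```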

C#'s generics implementation is "first-class": the generic classes are preserved and are part of the compiled assembly. As a result, it is possible, through reflection, to construct instances of a generic class with any type parameters; this is a consequence of the feature being implemented in the run-time. This is in contrast to how the same feature was implemented in Java. In Java, in order to allow code that used generics to run on older Virtual Machine implementations, generics are implemented entirely in the compiler. This results in something known as "type erasure": effectively, the generic class ceases to exist, and Java compiles the class as if the strongly-typed generic type parameters were the base Object type, after performing, of course, the appropriate compile-time checks. In some instances the parameter is replaced with the first bound class when type constraints are used. In either case, the downside to this implementation is that reflection at run time cannot construct generic type instances the way it can in C#, resulting in much more complicated workarounds when that is desired.
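The run-time construction bit is worth seeing concretely. Because the open generic type definition survives compilation, it can be closed over an element type chosen at run time:

```csharp
using System;
using System.Collections.Generic;

public static class ReflectionGenerics
{
    // The open generic definition List<> exists as a real run-time type,
    // so we can close it over any element type and instantiate it.
    public static object MakeListOf(Type elementType)
    {
        Type open = typeof(List<>);                   // open generic type
        Type closed = open.MakeGenericType(elementType); // e.g. List<int>
        return Activator.CreateInstance(closed);
    }
}
```

Type-erased generics have no equivalent of `typeof(List<>)` to hand to reflection, which is exactly the limitation described above.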

Static Classes

C# 2.0 also introduced the idea of a static class: a class that cannot be instantiated and can only contain static members. Strictly speaking, a class that functions identically could be written before by simply ensuring it only contained static members, but with the static keyword supported on class definitions this became a compile-time check.
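A minimal sketch, using a hypothetical utility class:

```csharp
public static class TemperatureUtil
{
    // Because the class is marked 'static', the compiler rejects
    // 'new TemperatureUtil()' and any instance members outright,
    // rather than relying on convention or a hidden constructor.
    public static double CelsiusToFahrenheit(double celsius)
    {
        return celsius * 9.0 / 5.0 + 32.0;
    }
}
```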

Nullable Types

Nullable types also got their start in C# 2.0. A nullable type allows you to effectively treat a value type like a reference type, which lets a value carry an extra data point (e.g. a Nullable<int> field on a data class could accept null to trigger specific behaviour when interpreted). This is particularly useful when working with databases, where many fields map cleanly to value types within .NET except for the fact that they can also be null in the database.
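A tiny sketch of the database case, using a hypothetical age column that allows NULL:

```csharp
public static class NullableDemo
{
    // int? is shorthand for Nullable<int>. HasValue distinguishes an
    // age of 0 from "no value recorded", something a plain int cannot do.
    public static string DescribeAge(int? age)
    {
        if (!age.HasValue) return "unknown";
        return age.Value.ToString();
    }
}
```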

Anonymous Methods

Anonymous methods, added in C# 2.0, allow delegates to be defined inline, within the body of another routine, in contrast to defining a separate named method and referencing it to construct the delegate. The big advantage of an anonymous method is that it can eliminate the need for fields: the anonymous method body can use the local variables in scope where it is defined. There are a few caveats regarding exactly which variables are closed over, and there was some debate over whether anonymous methods truly provided closures, due to some of the specifics of how locals at the same scope as the anonymous method are handled.
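A short sketch of the C# 2.0 `delegate` syntax, closing over a local:

```csharp
using System;

public static class AnonMethodDemo
{
    // The inline delegate body reads the local 'threshold' directly;
    // before C# 2.0, passing it along would have required a field on
    // some helper class plus a separate named method.
    public static int CountAbove(int[] values, int threshold)
    {
        Predicate<int> above = delegate(int v) { return v > threshold; };
        return Array.FindAll(values, above).Length;
    }
}
```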

Partial Types

Partial types allow a single class definition to be split across multiple files, with each part marked with the partial keyword. This primarily benefits code generators: generated code can live in one file while hand-written code lives in another. For hand-written code alone, if a class is large enough that splitting it into multiple files seems sensible, it is usually more prudent to consider more extensive refactoring, such as extracting new classes to handle some of the behaviour.
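A minimal sketch with a hypothetical class; in practice one part would live in a generated file (say, Widget.Designer.cs) and the other in a hand-written one:

```csharp
// Part 1: what a code generator might emit.
public partial class Widget
{
    public string Name = "widget";
}

// Part 2: the hand-written half. The compiler merges both partial
// declarations into a single class, so each part sees the other's members.
public partial class Widget
{
    public string Describe()
    {
        return "I am a " + Name;
    }
}
```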

Property Access Modifiers

With C# 2.0, we got the ability to restrict the accessibility of the get or set accessor of a property beyond the access level of the property itself. A property could be marked public, but its setter could be protected or even private. This allowed for properties that were, for example, read-only from the perspective of public clients, but writable from derived or internal routines. Previously, the same effect required defining only the get accessor, making the backing field protected or private, and having internal code write to the backing field directly.
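A sketch with a hypothetical counter class:

```csharp
public class Counter
{
    private int _count;

    // Public read access, private write access: the accessor-level
    // modifier replaces the old trick of exposing only a getter and
    // writing to the backing field directly from internal code.
    public int Count
    {
        get { return _count; }
        private set { _count = value; }
    }

    public void Increment()
    {
        Count = Count + 1; // legal here; illegal from outside the class
    }
}
```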

In addition to the changes to the language itself, since C# 2.0 was paired with .NET Framework 2.0, it came with a raft of improvements, changes, and additions to the .NET Base Class Library. I won't cover those in depth here, but aside from features built on new CLR capabilities, such as the generic collection classes, .NET 2.0 also added features to XmlDocument, Remoting, ASP.NET, and ADO.NET.

Posted By: BC_Programming
Last Edit: 12 Sep 2019 @ 11:33 AM

Categories: C#, Programming
 22 Jun 2019 @ 8:34 AM 

For some time now, I've occasionally created relatively simple games, and typically I don't bother with fancy "game engines" or special rendering; usually I just have a Windows Forms application, a game loop, and paint routines working with the System.Drawing.Graphics canvas. "BASeTris", a Tetris clone, was my latest effort using this technique.

While much maligned, it is indeed possible to make that approach work and maintain fairly high framerates; one has to be careful about what gets drawn and when, and eliminate unnecessary operations. By way of example, within my Tetris implementation, the blocks that are "set" on the field are drawn onto a separate bitmap only when they change; the main paint routine then draws that bitmap in one go, instead of individually drawing each block, which would involve bitmap scaling and similar work each time. Effectively I attack the problem by using separate "layers" which get rendered to individually; those layers are then painted unscaled each "frame".
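The layer idea boils down to a dirty flag in front of an expensive render. This is not the actual BASeTris code, just a generic sketch of the pattern, with the surface type left as a type parameter:

```csharp
using System;

public class CachedLayer<TSurface>
{
    private readonly Func<TSurface> _render; // expensive: redraws every block
    private TSurface _cache;
    private bool _dirty = true;
    public int RenderCount; // how many times the expensive path actually ran

    public CachedLayer(Func<TSurface> render)
    {
        _render = render;
    }

    // Called when the layer's contents change, e.g. a piece "sets".
    public void Invalidate()
    {
        _dirty = true;
    }

    // The main paint routine calls this every frame; the expensive
    // render only runs when something actually changed.
    public TSurface GetSurface()
    {
        if (_dirty)
        {
            _cache = _render();
            _dirty = false;
            RenderCount++;
        }
        return _cache; // blitted unscaled, in one go
    }
}
```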

Nonetheless, it is a rather outdated approach, so I decided I'd give SkiaSharp a go. SkiaSharp is a cross-platform .NET wrapper around the Skia graphics library, which is used in many programs, such as Google Chrome. For the most part, the featureset is conceptually very similar to GDI+, though it tends to be more powerful, more reliable, and, of course, portable, since it runs across different systems as well as other languages. It's also hardware accelerated, which is a nice-to-have.

The first problem, of course, was that much of the project was tightly coupled to GDI+. For example, elements that appear within the game typically have a routine to perform a frame of animation and a routine that draws to a System.Drawing.Graphics. Now, it would be possible to amend the interface to add a Draw routine for each implementation, but this would clog up a lot of the internals of the logic classes.

Render Providers

I hit upon the idea, which is obviously not original, to separate the rendering logic into separate classes. I came up with this basic interface for those definitions:

The idea being that implementations would implement the appropriate generic interface for the class they can draw, the "canvas" object they are able to draw onto (the target), and additional information which can vary by implementation. I also expanded things to create an interface specific to "game states": the game, of course, is in one state at a time, represented by an abstract class implementation for the menu, the gameplay itself, the pause screen, the pause transitions, and so on.
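The actual interface isn't reproduced above, but based on that description (a handler keyed on the drawn type, the target canvas type, and extra per-implementation data) a sketch might look something like this; the names here are placeholders, not the real BASeTris identifiers:

```csharp
public interface IRenderingHandler<TRenderElement, TRenderTarget, TDataElement>
{
    // Draw 'element' onto 'target', with 'extra' carrying whatever
    // per-implementation information the handler needs.
    void Render(TRenderTarget target, TRenderElement element, TDataElement extra);
}

// A GDI+ handler and a SkiaSharp handler for the same element type would
// then live in separate classes, e.g. (hypothetical):
//   class BlockGDIRenderer  : IRenderingHandler<Block, System.Drawing.Graphics, BlockDrawData> { ... }
//   class BlockSkiaRenderer : IRenderingHandler<Block, SkiaSharp.SKCanvas, BlockDrawData> { ... }
```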

Even at this point I can already see several issues with the design. The biggest is that all the details of drawing each object and each state effectively need to be duplicated per target. The alternative, it seems, would be to construct a "wrapper" able to handle various operations in a generic but still powerful way and paint on both SKCanvas and System.Drawing.Graphics. I decided against this approach because, realistically, once a SkiaSharp implementation is working, GDI+ is pretty much just legacy that I could arguably remove altogether. Furthermore, that sort of abstraction would prevent, or at least complicate, use of features specific to one implementation or another within the client drawing code, and would just mean that the drawing logic is now coupled to whatever abstraction I created instead.

There is still the problem of game elements using data types such as PointF and RectangleF, and particularly Image and Bitmap, to represent positions, bounds, and loaded images, so I suspect things outside the game "engine" will require modification; but this has provided a scaffolding upon which I can build the new implementations. Seeing working code, I find, tends to motivate further changes. Sort of a tame form of Test Driven Development, I suppose.

I have managed to implement some basic handlers, so hopefully I can get a SkiaSharp implementation using an SKControl as the drawing surface sorted out. I decided to implement this before, for example, trying to create a title screen menu, because that would be yet another state with more drawing code I'd need to port over.

Some of the direct translations were interesting. They also gave peripheral exposure to what look like very powerful features available in SkiaSharp that would provide a lot of power for drawing special effects compared to GDI+. For example, using its filter support, it appears it would be fairly straightforward to apply a blur effect to the play field while the game is paused, which I think would look pretty cool.

Posted By: BC_Programming
Last Edit: 22 Jun 2019 @ 08:40 AM

Categories: .NET, C#, Programming
 23 Mar 2019 @ 2:15 PM 

There are a lot of components of Windows 10 that we, as users, are not "allowed" to modify. It isn't even enough when we find a way to do so, such as disabling services or scheduled tasks from a command prompt running under the SYSTEM account, because the next time updates are installed, those settings are often reset. There are also background tasks and services intended specifically for "healing" tasks, which is a pretty friendly way to describe a trojan downloader.

One common way to “assert” control is using the registry and the Image File Execution Options key, found as:

HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Image File Execution Options

By adding a key here with the name of an executable, one can set additional execution options for it. The one of importance here is a string value called Debugger. When a Debugger value is present, Windows will not start the executable itself; instead it launches the executable listed in the Debugger value, passing the path of the program that was being run as a parameter.

We can use this for two purposes. The most obvious is that we can swap in an executable that does nothing at all, and thereby prevent a given executable from running. For example, if we set "C:\Windows\System32\systray.exe" as the Debugger value for an executable, then when that executable is run, the systray.exe stub will run, do nothing, and exit, and the original executable will not run at all. As a quick aside, systray.exe is a stub that doesn't actually do anything: it used to provide built-in notification icons for Windows 9x, and it remains because some software would check whether that file existed to determine if it was running on Windows 95 or later.
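As a concrete illustration, a registry file for blocking a hypothetical "example.exe" this way might look like the following (example.exe is a placeholder, not one of the executables discussed in this post):

```
Windows Registry Editor Version 5.00

; Redirect example.exe to the do-nothing systray.exe stub
[HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Image File Execution Options\example.exe]
"Debugger"="C:\\Windows\\System32\\systray.exe"
```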

The second way we can use it is to instead insert our own executable as the debugger value. Then we can log and record each invocation of any redirected program. I wanted to record the invocations of some built-in Windows executables I had disabled, so I created a simple stub program for this purpose:


I decided to separate the settings out for future editing. For my usage, I just have it hard-coded to C:\IMEO_Logs right now, and I create the folder beforehand. The bulk of the program, of course, is the entry point class:

I've used this for a few weeks, manually altering the Image File Execution Options entries that previously redirected some executables (compattelrunner.exe, wsqmcons.exe, and a number of others) to systray.exe so that they redirect to this program instead. It then logs every attempt to invoke those executables, alongside details like the arguments that were passed in.
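The entry-point class itself isn't reproduced above; a minimal sketch of the idea, assuming the hard-coded log folder mentioned earlier, is simply: record when the redirected executable was invoked and with what arguments, then exit without launching anything.

```csharp
using System;
using System.IO;

public static class IfeoLogger
{
    // One log line: timestamp, then the arguments Windows passed us.
    public static string FormatEntry(DateTime when, string[] args)
    {
        return when.ToString("yyyy-MM-dd HH:mm:ss") + " | " + string.Join(" ", args);
    }

    // args[0] is the path of the executable Windows was asked to run;
    // any further elements are its original command-line arguments.
    public static void LogInvocation(string logDir, string[] args)
    {
        string target = args.Length > 0 ? Path.GetFileName(args[0]) : "unknown";
        string line = FormatEntry(DateTime.Now, args) + Environment.NewLine;
        File.AppendAllText(Path.Combine(logDir, target + ".log"), line);
        // Deliberately do not start the original executable.
    }
}
```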

Posted By: BC_Programming
Last Edit: 23 Mar 2019 @ 02:15 PM

Comments Off on Taking Control of Windows 10 with Image File Execution Options
 14 Mar 2019 @ 6:56 PM 

Over the last few years it's become apparent that many people building and using PCs are using physical media less and less. One thing I have noticed is that a lot of people who go "optical drive free" seem to evangelize it and assume everybody who uses DVDs or physical media is some kind of intellectually incognizant doofus: that optical media is unnecessary in general and nobody needs it, which is of course a very silly statement.

Of course it's "unnecessary"; a graphics card is "unnecessary" and a sound card is "unnecessary", and both have been for years, but people still buy and use them. Optical drives are probably more in the camp of the sound card than the graphics card, since a graphics card is arguably a necessity for "gaming" whereas optical drives are certainly not, at least not in general.

But as I said, different people have different needs; or, perhaps a better way to put it, different uses for them.

Just speaking personally, my main system has both a Blu-Ray burner and a DVD drive installed. I use the Blu-Ray burner for watching Blu-Rays, as I prefer physical media, and I've found BD-R discs great for making hard-copy backups. Why not use a USB drive? I have USB flash drives and external USB drives/enclosures, but I've found them incredibly uneconomical for long-term hard backups. With Blu-Ray discs, if I want a hard-copy backup it's something I want to burn, label, and basically file away. Flash drives and external drives don't work like that; they would constantly be changing alongside the data source being backed up, making them more a redundancy than an actual backup solution over the longer term. Another problem is that a good one isn't cheap. The external drives Seagate and WD sell are reasonably cheap for the capacity, mind, but those are dogshit; WD and Seagate both build their externals out of their shittiest drives. Sorry, but I don't trust a Seagate Sunfish (or whatever they call their low-end model) or a WD Green drive as a safe backup drive any more than I'd trust the safe deposit boxes of a bank that operates out of the back of a Toyota Tercel. Which makes a good backup drive less economical, because it means buying a good enclosure (eSATA and USB 3 are an obvious must here) as well as a good drive.

Physical storage space per GB is also better with BD-R (though perhaps less of an advantage against spindles of DVDs).

Another aspect is that I also have a number of older game titles on DVDs. Some are available on Steam, but I’m not about to buy them again. Fuck that noise.

Of course, I *could* do all this with an external drive. But because I actually utilize the physical media, and use it rather frequently, it's not economical time-wise. So it's sort of like somebody who, for some physiological reason, never shits saying that having a toilet inside is unnecessary. They are strictly right, but I'm not going to start shitting in a chamberpot or outhouse.

Conversely, it's not necessary: my laptop doesn't have an optical drive, for example, and its absence hasn't affected anything, as it's not a gaming machine and doesn't hold anything special that I need to back up in the first place.

Posted By: BC_Programming
Last Edit: 14 Mar 2019 @ 06:56 PM

Comments Off on "DVD Drives are unnecessary in modern PCs"
Categories: Programming
 28 Jan 2019 @ 4:53 PM 

Recently, a Microsoft engineer had this to say with regard to Mozilla and Firefox:

Thought: It’s time for @mozilla to get down from their philosophical ivory tower. The web is dominated by Chromium, if they really *cared* about the web they would be contributing instead of building a parallel universe that’s used by less than 5%?

As written, this naturally got a lot of less-than-optimistic responses. Here are some follow-up tweets wherein they explain their position:

I don’t neglect the important work Mozilla has contributed, but here’s a few observations shapes my perspective:

1) The modern web platform is incredible complex. Today it’s an application runtime comparable to the Java or .net framework.

2) This complexity it’s incredibly expensive to implement a web runtime. Even for Google/Microsoft it’s hard to justify such investment that would take thousands of engineers in multiple years. The web has become too capable for multi engines, just like many frameworks.

3) Contribution can happen on many levels, and why is it given that each browser vendor has to land their contributions in *their own* engine? What isn’t the question what drives most impact for the web as a holistic platform?

4) My problem with Mozilla’s current approach is that they are *preaching* their own technology instead of asking themselves how they can contribute most and deliver most impact for the web? Deliver value to 65% of the market or less than 5%?

5) This leads to my bigger point: In a world where the web platform has evolved into a complex .application runtime, maybe it’s time to revise the operation and contribution model. Does the web need a common project and an open governance model like fx Node Foundation?

6) What if browser vendors contributed to a "common webplat core" built together and each vendor did their platform specific optimizations instead of building their own reference implementations off a specification from a WG? That’s what I mean by "parallel universes".

7) I believe Mozilla can be much more impactful on the holistic web platform if they took a step back and revised their strategy instead of throwing rocks after Google/MS/etc.

8) I want the web to win, but we need collaboration not parallel universes. Writing specs together is no longer enough. The real threat to the web platform is not another browser engine, but native platforms, as they don’t give a damn about an open platform.

That's a lot to take in; my general "summary" would be "why have separate implementations of the same thing when there can be one", which is pretty much a case for promoting code reuse. However, that idea doesn't really hold up in this context, which may be why the statement was so widely criticized on Twitter.

In an ideal world, of course, the idea they describe, a single "common webplat core" that every vendor can freely contribute to and over which no one vendor has absolute or direct control or veto power, is a good one. But it is definitely not what we have, nor is it something that seems to be in development right now. That "common webplat core built together by every vendor" is most definitely NOT Chromium, or the Blink engine, so it's something of a red herring argument here. Chromium is heavily influenced by, and practically under the control of, Google, an advertising company. Microsoft, another company with a large advertising component, has now opted to use the same Blink rendering engine and Chromium underpinnings used in Chrome, via a re-engineering of the Microsoft Edge browser. That's two companies shoulder-deep in the advertising and marketing space, with a history of working in their own best interests rather than the best interests of end users, with a hand on the reins of Chromium. Not exactly the open and free "common webplat core" that was described!

Given this, Mozilla seems to be the only browser and rendering engine vendor committed to an open web. The idyllic scenario described only makes sense if we start from the assumption that all Open Source software is inherently free of any sort of corporate influence, which simply is not the case. Furthermore, the entire point of Open Source projects is to provide alternatives, not a single be-all, end-all implementation; the entire idea of Open Source is to provide choices, not take them away. There is no single desktop environment, shell, email server, web server, or text editor; think of any type of software and the Open Source community has numerous different implementations. This is because, realistically, there is no "be-all, end-all" implementation for any non-trivial software product, and implementations of an open web fall under that umbrella. Suggesting that there be only one standard implementation used for every single web browser is actually completely contrary to the way Open Source already works.

Posted By: BC_Programming
Last Edit: 28 Jan 2019 @ 04:53 PM

Comments Off on Why did the Microsoft Engineer Tweet about an Open Web? Because they are now on the other side.
Categories: Programming
 26 Sep 2018 @ 1:38 PM 

I have a feeling this will be a topic I will cover at length repeatedly, and each time I will have learned things since my previous installments. The Topic? Programming Languages.

I find it quite astonishing just how much polarization and fanaticism we can find over what is essentially a syntax for describing operations to a computer. A quick Google search reveals any number of arguments about languages: people telling you why Java sucks, why C# is crap, why Haskell is useless for real-world applications, that Delphi has no future, that there is no need for value semantics on variables, that mutable state is evil, that garbage collection is bad, that manual memory management is bad, and so on. It's an astonishing, never-ending trend. And it's really quite fascinating.


I suppose the big question is: why? Why do people argue about languages, language semantics, capabilities, and paradigms? This is a very difficult question to answer. I've always felt that polarization and fanaticism are far more likely to occur when you only know and understand one programming language. Of course, I cannot speak for everybody, only from experience. When I only knew one language "fluently", I was quick to leap to its defense. It had massive issues that I can see now, looking back, but which I didn't see at the time. I justified omissions as things you didn't need or could create yourself. I called features in newer languages "unnecessary" and "weird". So the question really is: who was I trying to prove this to? Was I arguing against those I was replying to, or was it all for my own benefit?

I'm adamant that the reason for my own behaviour (and, to jump to a premature and biased conclusion, possibly for the similar behaviour I see around other languages) was feeling trivialized by the attacks on the language I was using. Basically, it's the result of programmers rating themselves based on the languages they know and use every day. This is a natural, if erroneous, method of measuring one's capabilities. I've always been a strong proponent of the idea that it isn't the programming language that matters, but rather your understanding of programming concepts and how you apply them, as well as not subscribing to the religious dogmas that generally surround a specific language design. (I'm trying very hard not to cite specific languages here.) Programming languages generally have set design goals, and as a result they typically encourage a style of programming, or even enforce one through artificial limitations. Additionally, the limitations that do exist (generally for design reasons) get worked around by competent programmers in the language. So when the topic turns to their favourite language not supporting feature X, they can quickly retort that "you don't need feature X, because you can use features Q, P and R to create something that functions the same". But that rather misses the point, I feel.

I've been careful not to mention specific languages, but here I go: take Visual Basic 6; that is, pre-.NET. As a confession, for a very long time Visual Basic 6 was the only language I knew well enough to do anything particularly useful with. Looking back, while supporting my legacy applications such as BCSearch, I'm astonished by two things that are almost polar opposites. The first is simply how limited the language is. For example, if you had an object of type CSomeList and wanted to "cast" it to an IList interface, you would have to do this:

Basically, you "cast" by assigning the object directly to a variable of the desired target type. These little issues and limitations really add up. The other thing that astonished me was the ingenuity of how I dealt with those limitations. At the time, I didn't really consider some of these things limitations, and I didn't think of my solutions as workarounds. For example, I found the above casting requirement annoying, so I ended up creating a GlobalMultiUse class (which means all the procedures within are public); in this case the function might be called "ToIList()" and would attempt to cast its parameter to an IList and return it.

Additionally, at some point I must have learned about exception handling in other languages, because I actually created a full-on implementation of exception handling for Visual Basic 6. VB6's error handling was, for those who aren't aware, rather simple: you could say "On Error Goto..." and redirect program flow to a specific label when an error occurred, but all you would know about the error was the error number. My "Exception" implementation built upon this. To throw an exception, you would create it (usually with an aforementioned public helper) and then throw it. In the Exception's "Throw()" method, it would save itself as the active unwind exception (a global variable) and then raise an application-defined error. Handlers were required to recognize that error number and grab the active exception (using GetException(), if memory serves). GetException would also recognize many error codes and construct instances of the appropriate Exception type to represent them, so in many cases you didn't need to check for that error code at all. The result? Code like this:

would become:

There was also a facility to throw inner exceptions, by using ThrowInner() with the retrieved Exception Type.

So what is wrong with it? Well, everything. The language doesn't provide these capabilities, so I basically had to nip and tuck it to provide them, and the result is some freakish plastic surgery where I've grafted exceptions onto something that didn't want exceptions. The fact is that, once I moved to other languages, I could see just how freakish some of the stuff I implemented in VB was. That implementation was obviously not thread-safe, but that didn't matter, because there was no threading support either.

Looking forward

With that in mind, it can be valuable to consider one's current perspectives and how they may be misguided by that same sort of devotion. This is particularly important when dealing with things you have only a passing knowledge of. Recognizing genuine flaws is easier once you've gained experience with something, but it's also easy to find flaws, or repaint features as flaws, for the purpose of making yourself feel wiser for not having used it sooner.

Posted By: BC_Programming
Last Edit: 26 Sep 2018 @ 01:38 PM

Comments Off on Programming Languages (2)
 31 Aug 2018 @ 7:45 PM 

When I was implementing BASeTris, my Tetris clone, I thought it would be nifty to have controller support, so I could use the Xbox One controller attached to my PC. My last adventure with game controllers ended poorly: BASeBlock has incredibly poor support for them overall. Revisiting the problem with XInput in mind this time, rather than DirectInput, I eventually found XInput.Wrapper, a rather simple, single-class approach to handling XInput.

The way BASeTris handles input reflects my attempt at separating different input methods from the start. The game state interface has a single HandleGameKey routine which effectively handles a single press. That itself gets called by the actual input routines, which also include additional management for features like DAS repeat for certain game keys. The XInput wrapper, of course, was not like this: it is not particularly event driven and works differently.

I messed about with its "polling" feature for some time before eventually creating my own implementation of the same idea. The biggest thing I needed was a "translation" layer in which I could see when keys were pressed and released, track that information, and translate it into the appropriate GameKey presses. This is the rather small class that I settled on for this purpose and currently have implemented in BASeTris:
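The actual class isn't reproduced here, but the core of such a translation layer is edge detection between polls: compare the previous snapshot of held buttons against the current one, and report what was newly pressed or newly released. A generic sketch of that idea (not the real BASeTris code):

```csharp
using System.Collections.Generic;

public class ButtonEdgeTracker<TButton>
{
    private HashSet<TButton> _previous = new HashSet<TButton>();

    // 'down' is the set of buttons currently held, as reported by a poll.
    // Pressed = held now but not last poll; released = the reverse.
    public void Poll(IEnumerable<TButton> down,
                     out List<TButton> pressed, out List<TButton> released)
    {
        var current = new HashSet<TButton>(down);
        pressed = new List<TButton>();
        released = new List<TButton>();
        foreach (var b in current)
            if (!_previous.Contains(b)) pressed.Add(b);
        foreach (var b in _previous)
            if (!current.Contains(b)) released.Add(b);
        _previous = current; // snapshot for the next poll to compare against
    }
}
```

Each "pressed" edge would then translate into a HandleGameKey call, with DAS repeat handled separately on top.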

It is a bit strange that I needed to create a wrapper for what is itself a wrapper, but it wasn't as if I was going to find a ready-made solution that integrated with how I had designed input in BASeTris anyway; some massaging was expected to be necessary.

Posted By: BC_Programming
Last Edit: 31 Aug 2018 @ 07:45 PM

Comments Off on A Wrapper for… The XInput Wrapper (?)
Categories: C#, Programming
 10 Jun 2018 @ 10:23 AM 

It suddenly occurred to me in the last week that I don't really have a proper system in place for software downloads here on my website, nor appropriate build integration with source control to build the projects as needed when commits are made. Having set up a Jenkins build environment for the software I work on at my job, I thought it reasonable to make the same demands of myself.

One big reason to do this, IMO, is that it can actually encourage me to create new projects. The work of packaging up the result and making it easily accessible or usable is often a demotivator, I find, for starting new projects. Having an established "system" in place, whereby I can push changes to GitHub and have, say, installer files "appear" on my website as needed, can be a motivator: I don't have to build the program, copy files, and run installation scripts manually every time; I just configure it all once and it "works" by itself.

To that end, I've set up Jenkins on one of my "backup" computers. It's rather tame in its capabilities, with only 4GB of RAM and an AMD 5350, but it should get the job done, I think. I would use my QX6700-based system, but the AMD system uses far less power. I also considered running Jenkins straight-up on my main system, but thought that could get in the way and just be annoying. Besides, this gives that system a job to do.

With the implementation for work, there were so many interdependent projects- and we pretty much always want “everything”- that I just made it a single job which builds everything at once. This way everything is properly up to date. The alternative was fiddling with 50+ different projects and figuring out the appropriate dependencies to rebuild whenever other projects were updated- something of a mess. Not to mention it’s all in one repository anyway, which works against splitting it up as well.

In the case of my personal projects on Github, they are already separate repositories, so I will simply have them built as separate projects; Jenkins itself understands upstream/downstream relationships, so I can use those as needed.

I’ve successfully configured the new Jenkins setup and it is now building BASeTris, a Tetris clone game I decided to write a while ago. It depends on BASeScores and Elementizer, so those two projects are in Jenkins as well.

BASeTris’s final artifact is an installer.

But of course, that installer isn’t much good just sitting on my CI Server! However, I also don’t want to expose the CI Server as a “public” page- there are security considerations, even if I disregard upload bandwidth issues. To that end, I constructed a small program which uploads files to my website over SSH. It runs once a day and is given a directory; it looks in each immediate subdirectory of that directory, finds the most recent file, and uploads it to a corresponding remote directory if it hasn’t already been uploaded. I configured BASeTris to copy its final artifact into an appropriate folder there.
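The “newest file per subdirectory” part of that utility is easy to sketch. This is an assumed reconstruction, not the actual program; the class and method names are mine:

```csharp
using System;
using System.Collections.Generic;
using System.IO;
using System.Linq;

public static class ArtifactScanner
{
    // For each immediate subdirectory of the given root, yield the most
    // recently written file- the latest build artifact dropped there.
    public static IEnumerable<string> GetLatestFiles(string root)
    {
        foreach (var dir in Directory.GetDirectories(root))
        {
            var newest = Directory.GetFiles(dir)
                .OrderByDescending(File.GetLastWriteTimeUtc)
                .FirstOrDefault();
            if (newest != null)
                yield return newest;
        }
    }
}
```

Each path this yields would then be handed to the SSH layer; with a library such as SSH.NET, that step is roughly: connect with SftpClient, check Exists() on the remote path, and UploadFile() only if it isn’t already there.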

Alternatively, it is possible to configure each project to upload its artifacts via SSH as a post-build step. However, I opted not to do that, because I would rather not have a series of changes throughout the day result in a bunch of new uploads- those would consume space without being particularly useful. Instead, I’ve opted to upload all the projects I want published once a day, and only if there have been changes. This should help reduce the redundancy (and space usage) of those uploads.

My “plan” is to have a proper PHP script or something that can enumerate the folders and provide a better interface for downloads. If nothing else, I would like each CI project’s folder to have a “project_current.php” file which automatically sends the latest build- then I can simply link to that on the blog download page for each project, and only update the page itself to describe new features or content.

As an example, http://bc-programming.com/downloads/CI/basetris/ is the location that will contain BASeTris version downloads.

There is still much work to do, however- the program(s) do have git hash metadata added at build time, so they have access to their git commit hash, but currently they do not actually present that information. I think it should, for example, be displayed in the title bar alongside other build information such as the build date, if possible. I’ve tried to come up with a good way to have the version auto-increment, but I think I’ll just tweak that as the project(s) change.

Heck- the SSH Uploader utility seems like a good candidate for yet another project to add to github, if I can genericize it so it isn’t hard-coded for my site and purpose.

Posted By: BC_Programming
Last Edit: 10 Jun 2018 @ 10:23 AM

Comments Off on About Time I had a CI Server, Methinks
 06 Jun 2018 @ 7:46 PM 

Flash Memory, like anything, is no stranger to illegitimate products. You can find 2TB Flash drives on eBay for 40 bucks, for example. These claim to be 2TB and show up as 2TB- but attempt to write data beyond a much smaller real capacity, and the Flash data is corrupted, because the drive actually writes to an earlier location. My first experience with this was actually with my younger brother’s Gamecube system; when he got it, he also got two “16MB” Memory Cards (16 megabit, so 2 Megabytes). However, they would rather frequently corrupt data. I suspect, looking back, it was much the same mechanism- the Memory Card was “reporting” itself as larger than it was, and writing beyond the end was corrupting the information on it.

This brings me to today. You can still find cheap Memory Cards for those systems which claim sizes such as 128MB. Even at the “real” size of 128 Megabits, that’s still 16MB, which is quite substantial. I’ve recently done some experiments with 4 cheap “128MB” Gamecube Memory Cards that I picked up, and some of the results are quite interesting.

First, I should note that my “main” memory cards for the system are similar cheap cards I picked up online 12 years ago or thereabouts. One is a black card that simply says “Wii/NGC 128 MEGA” on it; the other is a KMD brand 128MB. The cheap ones I picked up recently have the same case as the KMD and, internally, look much the same, though they feel cheaper; they are branded “HDE”. Now, for the ones I have, I’m fairly sure they are legitimate, but not 100%- the Flash chips inside are 128 Megabit, and one is even 256 Megabit. (Of course this means “128 MEGA” and “128 MB” actually mean 128 Megabits, or 16MB, but whatever.)

Since the 4 cards were blank, I decided to do a bit of experimenting with a little program called GCMM, or Gamecube Memory Manager. This is a piece of homebrew that allows you to do pretty much whatever you want with the data on memory cards, including making backups to an SD Card, restoring from an SD Card, copying any file between memory cards, etc. The first simple test is easy- just do a backup and a restore; it shouldn’t matter too much that the card is blank. I backed up the new card no problem. However, when I tried to restore it, I got a write error at block 1024. This is right at the halfway point, and no matter what, I couldn’t get past that point on any of the “new” cards. This indicates to me that the cards are actually 8MB cards, with approximately 1024 blocks of storage. What a weird “counterfeit” approach- 8MB is already a rather substantial amount of space, so why “ruin” the device by having it report the wrong size and allow data corruption? I found that I could make raw restores succeed if I took the card out during the restore process right before it reached block 1024.

This discovery is consistent with what I understand of counterfeit flash- the controller will basically write to earlier areas of the memory when instructed to write beyond the “real” size, and will usually overwrite, say, file system structures, requiring the card to be formatted. Interestingly, if I rip it out before it gets there, everything backed up to that point is intact. Something else interesting I found was by looking inside the raw dump I originally created from one of the “new” cards. The file system itself was clean, but old data remains in the memory and was still there for viewing. I could see that Wrestlemania 2002 was probably used for testing the card at some point, as there was “w_mania2002” in the raw data, as well as a number of other tidbits that referenced wrestlers who appeared in that game. What I found much more interesting, however, were a number of other strings: “V402021 2010-06-08” suggests a date the card might have been manufactured. “Linux-”… Now this is interesting! Linux was involved in some way? This wouldn’t be surprising if the card was constructed with some sort of embedded system, but it doesn’t make a lot of sense for that to appear in the memory card data itself. Similarly, I found what appear to be various configuration files in the dump.


Due to the amount of network information, WLAN IDs, etc., my suspicion is that these flash chips are not actually new, but were taken from some sort of networking device, such as a router or switch. This is supported by the fact that googling a few of the configuration settings always seems to lead me to some sort of Chinese ADSL provider, so I suspect these Flash chips were re-used from old networking equipment. That, in itself, adds another concern about these Memory Cards- if the chips were used before they found themselves here, how much were they used, and how? Were they used to contain firmware, for example, or to hold a small file system for the networking device?

Overall, for something so seemingly mundane, I found this to be a very interesting distraction, and perhaps this information could prove useful, or at least interesting, to others.

Posted By: BC_Programming
Last Edit: 06 Jun 2018 @ 07:46 PM

Comments Off on Memory Card Adventures
Categories: Programming
 25 Nov 2017 @ 7:11 PM 

The code for this post can be found in this github project.

Occasionally you may present an interface which allows the user to select a subset of specific items. You may have a setting which allows the user to configure, for example, a set of plugins, turning certain plugins or features on or off.

At the same time, it may be desirable to present an abbreviated notation for those items. As an example, if you were presenting a selection of alphabetic characters, you may want to present them as a series of ranges; if you had A, B, C, D, E, and Q selected, you may want to show it as “A-E,Q”.

The first step, then, would be to define a range. We can then take appropriate inputs, generate a list of ranges, and then convert that list of ranges into a string expression to provide the output we are looking for.

For flexibility we would like many aspects to be adjustable, in particular, it would be nice to be able to adjust the formatting of each range based on other information, so rather than hard-coding an appropriate ToString() routine, we’ll have it call a custom function.

Pretty straightforward- a starting point, an ending point, and some string formatting. Now, one might wonder about the lack of an IComparable constraint on the type parameter. That would make sense for certain types of data being collated, but in some cases the “data” doesn’t have a type-specific succession.

Now, we need to write a routine that will return an enumeration of these ranges given a list of all the items and a list of the selected items. This, too, is relatively straightforward. Instead of a free-standing routine, this could also be encapsulated as a separate class with member variables to customize the formatted output. As with any programming problem, there are many ways to do things; the trick is finding the right balance, and in some cases a more structured approach can be suitable.
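Since the code blocks from the original post aren’t reproduced here, the following is a hedged sketch of the pieces described above- a range type with caller-supplied formatting, the grouping routine, and the joining helper- with names of my own invention rather than the actual ones from the github project:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

// A run of consecutive items; formatting is delegated to a supplied function.
public class ItemRange<T>
{
    public T Start { get; }
    public T End { get; }
    private readonly Func<ItemRange<T>, string> _formatter;

    public ItemRange(T start, T end, Func<ItemRange<T>, string> formatter)
    { Start = start; End = end; _formatter = formatter; }

    public override string ToString() => _formatter(this);

    // A reasonable default: "Q" for a single item, "A-E" for a longer run.
    public static string DefaultFormat(ItemRange<T> r) =>
        Equals(r.Start, r.End) ? $"{r.Start}" : $"{r.Start}-{r.End}";
}

public static class SelectionFormatter
{
    // Walk the full item list in order, grouping consecutive selected
    // items into ranges.
    public static IEnumerable<ItemRange<T>> GetRanges<T>(
        IList<T> allItems, ISet<T> selected, Func<ItemRange<T>, string> formatter)
    {
        int i = 0;
        while (i < allItems.Count)
        {
            if (!selected.Contains(allItems[i])) { i++; continue; }
            int start = i;
            while (i + 1 < allItems.Count && selected.Contains(allItems[i + 1])) i++;
            yield return new ItemRange<T>(allItems[start], allItems[i], formatter);
            i++;
        }
    }

    // The joining helper: collate the formatted ranges into one string.
    public static string FormatSelection<T>(IList<T> allItems, ISet<T> selected) =>
        string.Join(",", GetRanges(allItems, selected, ItemRange<T>.DefaultFormat));
}
```

With the alphabet as the full item list and A, B, C, D, E, and Q selected, FormatSelection produces the “A-E,Q” notation from the earlier example.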

Sometimes you might not have a full list of the items in question, but you might be able to indicate what an item is followed by. For this, I constructed a separate routine with a similar structure which instead uses a function to callback and determine the item that follows another. For integer types we can just add 1, for example.
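A sketch of that successor-based variant, again with assumed names rather than the actual code; here the ranges are plain tuples and the callback supplies “the item that follows”:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

public static class SuccessorRanges
{
    // Groups an ordered sequence of selected items into (Start, End) runs,
    // using a callback that answers "what item follows this one?".
    public static IEnumerable<(T Start, T End)> GetRanges<T>(
        IEnumerable<T> selected, Func<T, T> successor)
    {
        bool any = false;
        T start = default(T), end = default(T);
        foreach (var item in selected)
        {
            if (!any) { start = end = item; any = true; }
            else if (Equals(item, successor(end))) end = item;       // run continues
            else { yield return (start, end); start = end = item; }  // gap: emit run
        }
        if (any) yield return (start, end);
    }
}
```

For integer types the successor is just x => x + 1; for other types it can be a lookup into whatever defines the succession.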

This is expanded in the github project I linked above, which also features a number of other helper routines for a few primitive types, as well as example usage. In particular, the most useful “helper” is the routine that simply joins the results of these functions into a single resulting string.

Posted By: BC_Programming
Last Edit: 25 Nov 2017 @ 07:11 PM

Comments Off on List Selection Formatting
Categories: .NET, C#, Programming
