26 Sep 2018 @ 1:38 PM 

I have a feeling this will be a topic I cover at length repeatedly, and each time I will have learned things since the previous installment. The topic? Programming Languages.

I find it quite astonishing just how much polarization and fanaticism we can find over what is essentially a syntax for describing operations to a computer. A quick Google search can reveal any number of arguments about languages: people telling you why Java sucks, people telling you why C# is crap, people telling you why Haskell is useless for real-world applications, people telling you that Delphi has no future, people telling you that there is no need for value semantics on variables, people telling you mutable state is evil, people telling you that garbage collection is bad, people telling you that manual memory management is bad, and so on. It's an astonishing, never-ending trend. And it's really quite fascinating.

Why?

I suppose the big question is: why? Why do people argue about languages, language semantics, capabilities, and paradigms? This is a very difficult question to answer. I've always felt that polarization and fanaticism are far more likely to occur when you only know and understand one programming language. Of course, I cannot speak for everybody, only from experience. When I only knew one language "fluently", I was quick to leap to its defense. It had massive issues that I can see now, looking back, but which I didn't see at the time. I justified omissions as being things you didn't need or could create yourself. I called features in newer languages 'unnecessary' and 'weird'. So the question really is, who was I trying to prove this to? Was I arguing against those I was replying to, or was it all for my own benefit? I'm adamant that the reason for my own behaviour (and, to jump to a premature and biased conclusion, possibly that of those in whom I see similar behaviour over other languages) was that I felt trivialized by the attacks on the language I was using. Basically, it's the result of programmers rating themselves based on what languages they know and use every day. This is a natural, if erroneous, way of measuring one's capabilities. I've always been a strong proponent of the idea that it isn't the programming language that matters, but rather your understanding of programming concepts and how you apply them, as well as not succumbing to the religious dogmas that generally surround a specific language design. (I'm trying very hard not to cite specific languages here.)

Programming languages generally have set design goals. As a result, they typically encourage a style of programming, or even enforce it through artificial limitations. Additionally, those limitations that do exist (generally for design reasons) are worked around by competent programmers in the language. So when the topic turns to their favourite language not supporting Feature X, they can quickly retort that "you don't need Feature X, because you can use Features Q, P and R to create something that functions the same". But that rather misses the point, I feel.

I've been careful not to mention specific languages, but here I go. Take Visual Basic 6- that is, pre-.NET. As a confession: for a very long time, VB6 was the only language I knew well enough to do anything particularly useful with. Looking back, and having to support legacy applications such as BCSearch, I'm astonished by two things that are almost polar opposites. The first is simply how limited the language is. For example, if you had an object of type CSomeList and wanted to 'cast' it to an IList interface, you would have to do this:
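(A representative sketch- the variable names are illustrative, not the original snippet:)

    Dim Source As CSomeList
    Dim Target As IList
    Set Source = New CSomeList
    ' The "cast": assign the object to a variable declared as the interface type.
    Set Target = Source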

Basically, you 'cast' by assigning the object directly to a variable of the desired type. These types of little issues and limitations really add up. The other thing that astonished me was the ingenuity of how I dealt with the limitations. At the time, I didn't really consider some of these things limitations, and I didn't think of how I dealt with them as workarounds. For example, I found the above casting requirement annoying, so I ended up creating a GlobalMultiUse class (which means all the procedures within are public); in this case the function might be called "ToIList()" and would attempt to cast the parameter to an IList and return it. Additionally, at some point I must have learned about exception handling in other languages, and I actually created a full-on implementation of exception handling for Visual Basic 6. VB6's error handling was, for those that aren't aware, rather simple: you could basically say "On Error Goto..." and redirect program flow to a specific label when an error occurred. All you would know about the error is the error number, though. My "Exception" implementation built upon this. To throw an exception, you would create it (usually with an aforementioned public helper), and then throw it. In the Exception's "Throw()" method, it would save itself as the active unwind exception (a global variable) and then raise an application-defined error. Handlers were required to recognize that error number and grab the active exception (using GetException(), if memory serves). GetException would also recognize many error codes and construct instances of the appropriate Exception type to represent them, so in many cases you didn't need to check for that error code at all. The result? Code like this:
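(A representative sketch- the routine and error number are illustrative, not the original listing:)

    Public Sub LoadData(Filename As String)
        On Error GoTo Handler
        Open Filename For Input As #1
        ' ... read the file ...
        Close #1
        Exit Sub
    Handler:
        If Err.Number = 53 Then    ' 53 = "File not found"
            MsgBox "Could not find " & Filename
        Else
            Err.Raise Err.Number   ' re-raise anything we don't recognize
        End If
    End Sub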

would become:
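(Again as an illustrative sketch- CException, CFileNotFoundException and GetException are the sorts of helpers described above:)

    Public Sub LoadData(Filename As String)
        On Error GoTo Handler
        Open Filename For Input As #1
        ' ... read the file ...
        Close #1
        Exit Sub
    Handler:
        Dim Ex As CException
        Set Ex = GetException()    ' retrieves (or constructs) the active exception
        If TypeOf Ex Is CFileNotFoundException Then
            MsgBox "Could not find " & Filename
        Else
            Ex.Throw               ' not handled here; rethrow it
        End If
    End Sub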

There was also a facility to throw inner exceptions, by using ThrowInner() with the retrieved Exception Type.

So what is wrong with it? Well, everything. The language doesn't provide these capabilities, so I basically had to nip and tuck it to provide them, and the result is some freakish plastic surgery where I've grafted exceptions onto somebody who didn't want exceptions. The fact is that, once I moved to other languages, I could see just how freakish some of the stuff I implemented in VB was. That implementation was obviously not thread-safe, for example, but that didn't matter, because there was no threading support.

Looking forward

With that in mind, it can be valuable to consider one's current perspectives and how they may be misguided by that same sort of devotion. This is particularly important when dealing with things that you only have a passing knowledge of. Recognizing flaws is easier once you've gained real experience with something, but it's also easy to find flaws- or to repaint features as flaws- simply to make yourself feel wiser for not having used it sooner.

Posted By: BC_Programming
Last Edit: 26 Sep 2018 @ 01:38 PM

 31 Aug 2018 @ 7:45 PM 

When I was implementing BASeTris, my Tetris clone, I thought it would be nifty to have controller support, so I could use the XBox One Controller that I have attached to my PC. My last adventure with game controllers, in BASeBlock, ended poorly- it has incredibly poor support for them overall. Revisiting the problem, with an eye towards XInput rather than DirectInput this time, I eventually found XInput.Wrapper, a rather simple, single-class approach to handling XInput.

The way that BASeTris handles input reflects my attempt at separating different input methods from the start. The game state interface has a single HandleGameKey routine which effectively handles a single press. That itself gets called by the actual input routines, which also include some additional management for features like DAS repeat for certain game keys. The XInput wrapper, of course, was not like this: it is not particularly event-driven, and works differently.

I did mess about with its "Polling" feature for some time before eventually creating my own implementation of the same. The biggest thing I needed was a "translation" layer that could see when keys were pressed and released, track that information, and translate it to appropriate GameKey presses. This was the rather small class that I settled on for this purpose and currently have implemented in BASeTris:
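(The actual class is in the BASeTris source; this is a minimal sketch of the idea, with illustrative names- polling the pad for its currently-held buttons is left to the caller:)

    using System;
    using System.Collections.Generic;
    using System.Linq;

    // Sketch: diff successive button snapshots and raise press/release events,
    // which the game's input routines translate into HandleGameKey calls.
    public class GamepadStateTracker
    {
        private readonly Func<ISet<string>> _pollButtons; // returns buttons currently held
        private ISet<string> _previous = new HashSet<string>();

        public event Action<string> ButtonPressed;
        public event Action<string> ButtonReleased;

        public GamepadStateTracker(Func<ISet<string>> pollButtons)
        {
            _pollButtons = pollButtons;
        }

        // Called each game tick (or on a timer): anything newly held is a press,
        // anything no longer held is a release.
        public void Poll()
        {
            var current = _pollButtons();
            foreach (var button in current.Except(_previous)) ButtonPressed?.Invoke(button);
            foreach (var button in _previous.Except(current)) ButtonReleased?.Invoke(button);
            _previous = current;
        }
    }

The game can then subscribe ButtonPressed to whatever maps buttons to GameKey presses, while the DAS repeat logic stays where it already lives.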

It is a bit strange that I needed to create a wrapper for what is itself a wrapper, but it wasn't as if I was going to find a ready-made solution that integrated with how I had designed input in BASeTris anyway- some massaging was always going to be necessary.

Posted By: BC_Programming
Last Edit: 31 Aug 2018 @ 07:45 PM

Categories: C#, Programming
 10 Jun 2018 @ 10:23 AM 

It suddenly occurred to me in the last week that I don't really have a proper system in place for software downloads here on my website, nor for build integration with source control so that projects are built as needed when commits are made. Having set up a Jenkins build environment for the software I work on at my job, I thought it reasonable to make the same demands of myself.

One big reason to do this, IMO, is that it can actually encourage me to create new projects. The work of packaging up the result and making it easily accessible or usable is often, I find, a demotivator for starting new projects. Having an established "system" in place, whereby I can make changes on github and have, say, installer files "appear" properly on my website as needed, can be a motivator- I don't have to build the program, copy files, run installation scripts, etc. manually every time. I just need to configure it all once and it all "works" by itself.

To that end, I've set up Jenkins appropriately on one of my "backup" computers. It's rather tame in its capabilities- only 4GB of RAM and an AMD 5350- but it should get the job done, I think. I would use my QX6700-based system, but the AMD system uses far less power. I also considered having Jenkins straight-up on my main system, but thought that could get in the way and just be annoying. Besides, this gives that system a job to do.

With the implementation at work, there were so many interdependent projects- and we pretty much always want "everything"- that I just made it a single job which builds everything at once. This way everything is properly up to date. The alternative was fiddling with 50+ different projects and figuring out the appropriate dependencies to build based on when other projects were updated and such- something of a mess. Not to mention it's all in one repository anyway, which goes against that idea as well.

In the case of my personal projects on Github, they are already separate repositories, so I will simply have them built as separate projects; Jenkins itself understands upstream/downstream relationships, and I can use that as needed.

I've successfully configured the new Jenkins setup and it is now building BASeTris, a Tetris clone game I decided to write a while ago. It depends on BASeScores and Elementizer, so those two projects are in Jenkins as well.

BASeTris’s final artifact is an installer.

But of course, that installer isn't much good just sitting on my CI server! However, I also don't want to expose the CI server as a "public" page- there are security considerations, even if I disregard upload bandwidth issues. To that end, I constructed a small program which uploads files to my website using SSH. It runs once a day and is given a directory; it looks in all the immediate subdirectories of that directory, finds the most recent file in each, and uploads it to a corresponding remote directory if it hasn't already been uploaded. I configured BASeTris to copy its final artifact into an appropriate folder there.
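(The utility's core loop is simple enough; a minimal sketch, assuming the SSH.NET (Renci.SshNet) library, with hypothetical paths and credentials, and assuming the remote project folders already exist:)

    using System;
    using System.IO;
    using System.Linq;
    using Renci.SshNet;

    class UploadNewestArtifacts
    {
        static void Main()
        {
            const string localRoot = @"C:\CI\artifacts";   // hypothetical layout
            using (var sftp = new SftpClient("example.com", "user", "password"))
            {
                sftp.Connect();
                foreach (var dir in Directory.GetDirectories(localRoot))
                {
                    // The most recent artifact in this project's folder.
                    var newest = new DirectoryInfo(dir).GetFiles()
                        .OrderByDescending(f => f.LastWriteTimeUtc)
                        .FirstOrDefault();
                    if (newest == null) continue;

                    string remotePath = "/downloads/CI/" + Path.GetFileName(dir)
                        + "/" + newest.Name;
                    // Skip anything that has already been uploaded.
                    if (sftp.Exists(remotePath)) continue;

                    using (var stream = newest.OpenRead())
                        sftp.UploadFile(stream, remotePath);
                }
                sftp.Disconnect();
            }
        }
    }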

Alternatively, it is possible to have each project upload its artifacts via SSH as a post-build step. However, I opted not to do that, because I would rather not have a series of changes throughout the day result in a bunch of new uploads- those would consume space and not be particularly useful. Instead, I've opted to have all the projects I want to upload uploaded once a day, and only if there have been changes. This should help reduce the redundancy (and space usage) of those uploads.

My "plan" is to have a proper PHP script or something that can enumerate the folders and provide a better interface for downloads. If nothing else, I would like each CI project's folder to have a "project_current.php" file which automatically sends the latest build- then I can simply link to that on the blog download pages for each project, and only update those pages to describe new features or content.

As an example, http://bc-programming.com/downloads/CI/basetris/ is the location that will contain BASeTris version downloads.

There is still much work to do, however. The program(s) do have git hash metadata added at build time, so they have access to their git commit hash, but currently they do not actually present that information. I think it should, for example, be displayed in the title bar, alongside other build information such as the build date, if possible. I've tried to come up with a good way to have the version auto-increment, but I think I'll just tweak that as the project(s) change.

Heck- the SSH Uploader utility seems like a good candidate for yet another project to add to github, if I can genericize it so it isn’t hard-coded for my site and purpose.

Posted By: BC_Programming
Last Edit: 10 Jun 2018 @ 10:23 AM

Comments Off on About Time I had a CI Server, Methinks
 06 Jun 2018 @ 7:46 PM 

Flash memory, like anything, is no stranger to illegitimate products. You can find "2TB" flash drives on eBay for 40 bucks, for example. These claim to be 2TB and show up as 2TB- but write data beyond a much smaller size and the flash data is corrupted, because the drive actually writes to an earlier location. My first experience with this was actually with my younger brother's Gamecube; when he got it, he also got two "16MB" memory cards (16 megabit, so 2 megabytes). However, they would rather frequently corrupt data. Looking back, I suspect it was much the same mechanism- the memory card was "reporting" as larger than it was, and writing beyond the end was corrupting the information on it.

This brings me to today. You can still find cheap memory cards for those systems which claim sizes such as 128MB. Even at the "real" size of 128 megabits, that's still 16MB, which is quite substantial. I've recently done some experiments with 4 cheap "128MB" Gamecube memory cards that I picked up. Some of the results are quite interesting.

First, I should note that my "main" memory cards for the system are similar cheap cards I picked up online 12 years ago or thereabouts. One is a black card that simply says "Wii/NGC 128 MEGA" on it; the other is a KMD-brand 128MB. The cheap ones I picked up recently have the same case as the KMD and, internally, look much the same, though they feel cheaper; they are branded "HDE". Now, I'm fairly sure the ones I already have are legitimate, but not 100%- the flash chips inside are 128 megabit, and one is even 256 megabit. (Of course, this means "128 Mega" and "128 MB" actually mean 128 megabits- that is, 16MB- but whatever.)

Since the 4 cards were blank, I decided to do a bit of experimenting with a little program called GCMM, or Gamecube Memory Manager. This is a piece of homebrew that allows you to do pretty much whatever you want with the data on memory cards, including making backups to an SD card, restoring from an SD card, copying any file between memory cards, etc. The first simple test is easy- just do a backup and a restore; it shouldn't matter too much that the card is blank. I backed up the new card no problem. However, when I tried to restore it, it gave a write error at block 1024. This is right at the halfway point, and no matter what, I couldn't get past that point for any of the "new" cards. This indicates to me that the cards are actually 8MB cards, with approximately 1024 blocks of storage. What a weird "counterfeit" approach- 8MB is already a rather substantial amount of space, so why "ruin" the device by having it report the wrong size and allow data corruption? I found that I could make raw restores succeed by taking the card out during the restore process right before it got to block 1024.

This discovery is consistent with what I understand of counterfeit flash- the controller will basically write to earlier areas of the memory when instructed to write beyond the "real" size, and will usually overwrite, say, file system structures, requiring the card to be formatted. Interestingly, if I rip the card out before the restore gets to that point, everything written up to that point is intact.

Something else interesting turned up when I looked inside the raw dump I originally created from one of the "new" cards. The file system itself was clean, but older data remained in the flash and was still there for viewing. I could see that Wrestlemania 2002 was probably used for testing the card at some point, as there was "w_mania2002" in the raw data, as well as a number of other tidbits referencing wrestlers that appeared in that game. What I found much more interesting, however, were a number of other strings. "V402021 2010-06-08" suggests a date the card might have been manufactured. "Linux-2.6.23.17_stm23_A18B-pdk71"... Now this is interesting! Linux was involved in some way? That wouldn't be surprising if the card was built from some sort of embedded system, but it doesn't make a lot of sense for this to appear in the memory card data itself. Similarly, I found various configuration files:

NetType=1
Language=0
Em10Mode=0
ConTimeout=30
ProductID=00100199007011400002D0154ADF986E
Licence=222
ServiceUser=0512200052225
ServicePwd=282026
PppoeUser=0512200052225@vod
PppoePwd=282026
DHCPPUser=0512200052225@vod
DHCPPPw=282026
IpAddr=192.168.1.12
NetMask=255.255.0.0
GateWay=
DNS=
MacAddr=D0:15:4A:DF:98:6E
Volume=100
TimeZone=8
ProxyFlag=0
AcceptCookie=99

Given all the network information- WLAN IDs and the like- my suspicion is that these flash chips are not actually new, but were taken from some sort of networking device, such as a router or switch. This is supported by the fact that googling a few of the configuration settings seems to always lead me to some sort of Chinese ADSL provider, so I suspect these flash chips were re-used from old networking equipment. That, in itself, adds another concern about these memory cards: if the chips were used before they found themselves here, how much were they used? And how? Were they used to contain firmware, for example? Or a small file system for the networking device?

Overall, for something so seemingly mundane, I found this to be a very interesting distraction, and perhaps this information could prove useful, or at least interesting, to others.

Posted By: BC_Programming
Last Edit: 06 Jun 2018 @ 07:46 PM

Comments Off on Memory Card Adventures
Categories: Programming
 25 Nov 2017 @ 7:11 PM 

The code for this post can be found in this github project.

Occasionally you may present an interface which allows the user to select a subset of specific items. You may have a setting which allows the user to configure, for example, a set of plugins, turning certain plugins or features on or off.

At the same time, it may be desirable to present an abbreviated notation for those items. As an example, if you were presenting a selection of alphabetic characters, you may want to present them as a series of ranges; if you had A, B, C, D, E, and Q selected, for example, you may want to show it as "A-E,Q".

The first step, then, would be to define a range. We can then take appropriate inputs, generate a list of ranges, and then convert that list of ranges into a string expression to provide the output we are looking for.

For flexibility, we would like many aspects to be adjustable. In particular, it would be nice to be able to adjust the formatting of each range based on other information, so rather than hard-coding an appropriate ToString() routine, we'll have it call a custom function:
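(A minimal sketch of the range type's shape- the github project linked above has the real version:)

    using System;

    // Sketch: a start, an end, and caller-supplied formatting.
    public class Range<TElement>
    {
        public TElement Start { get; set; }  // first item of the run
        public TElement End { get; set; }    // last item of the run

        // Custom formatter, so output isn't hard-coded into ToString().
        public Func<Range<TElement>, string> Formatter { get; set; }

        public override string ToString()
        {
            if (Formatter != null) return Formatter(this);
            return Equals(Start, End) ? Start.ToString() : $"{Start}-{End}";
        }
    }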

Pretty straightforward- a starting point, an ending point, and some string formatting. Now, one might wonder about the lack of an IComparable constraint on the type parameter. That would make sense for certain types of data being collated but in some cases the “data” doesn’t have a type-specific succession.

Now we need to write a routine that will return an enumeration of these ranges, given a list of all the items and a list of the selected items. This, too, is relatively straightforward. Instead of a free-standing routine, this could also be encapsulated as a separate class, with member variables to customize the formatted output. As with any programming problem, there are many ways to do things, and the trick is finding the right balance; in some cases a more structured approach to a problem is suitable.
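(A minimal sketch of such a routine, using the Range type sketched above- the real one, with its formatting customizations, is in the github project:)

    using System.Collections.Generic;

    public static class RangeBuilder
    {
        // Walk the full, ordered item list and emit one Range for each
        // consecutive run of selected items.
        public static IEnumerable<Range<T>> GetRanges<T>(
            IEnumerable<T> allItems, ISet<T> selected)
        {
            Range<T> current = null;
            foreach (var item in allItems)
            {
                if (selected.Contains(item))
                {
                    if (current == null) current = new Range<T> { Start = item };
                    current.End = item;  // extend the current run
                }
                else if (current != null)
                {
                    yield return current;  // run ended; emit it
                    current = null;
                }
            }
            if (current != null) yield return current;
        }
    }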

Sometimes you might not have a full list of the items in question, but you might be able to determine what an item is followed by. For this, I constructed a separate routine with a similar structure, which instead uses a callback function to determine the item that follows another. For integer types we can just add 1, for example.

This is expanded in the github project I linked above, which also features a number of other helper routines for a few primitive types, as well as example usage. In particular, the most useful "helper" is the routine that simply joins the results of these functions into a single string:
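(Again as a sketch rather than the project's actual code:)

    using System.Collections.Generic;
    using System.Linq;

    public static class RangeFormatting
    {
        // Format each range and comma-separate the results.
        public static string FormatSelection<T>(IEnumerable<T> allItems, ISet<T> selected)
        {
            return string.Join(",",
                RangeBuilder.GetRanges(allItems, selected).Select(r => r.ToString()));
        }
    }

With the alphabet as the item list and A, B, C, D, E, and Q selected, this yields the "A-E,Q" notation from the example above.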

Posted By: BC_Programming
Last Edit: 25 Nov 2017 @ 07:11 PM

Comments Off on List Selection Formatting
Categories: .NET, C#, Programming
 16 Jun 2017 @ 3:43 PM 

Windows 10 introduced a new software development platform: the Universal Windows Platform, or UWP. In some respects it builds upon the earlier Windows Runtime that was introduced with Windows 8. One interesting aspect of the platform is that, properly used, it allows software to be built once and distributed to a number of platforms running Microsoft operating systems, such as the XBox One.

I've fiddled a bit with UWP, but honestly I found it tricky to determine what it's for. As it is, its API, as well as its set of third-party portable libraries, simply isn't anywhere near that of a typical application targeting the desktop via WPF or even Windows Forms. But I think that is intentional; these aren't built towards the same purpose. Instead, the main advantage of UWP appears to be the ability to deploy to multiple Windows platforms. Unfortunately, that is an advantage I don't think I can really utilize. However, I expect it will be well used for future applications- it has already been put to good use for games like Forza Horizon 3, which utilized it for "Play Anywhere" so the game can be played not only on the XBox console but on any capable Windows 10 system. Forza 7 will also be using it to much the same effect.

Even if I won't utilize it, it probably makes a lot of sense to cover it. My recent coding-related posts always seem to involve Windows Forms. Perhaps I should work instead to learn UWP, and then cover that learning experience within new posts? If I am encountering these hurdles, then I don't think it is entirely unreasonable to think others are as well.

I've also gotten to thinking that perhaps I have become stuck in my ways, as I'm not partial to the approach that brings web technologies to the desktop; even today I find web applications, and UIs designed around the web, to have a "feel" that lags behind a traditional desktop application in usability. That said, I'm also not about to quit my job just because it involves "legacy" frameworks. We're talking about quite an old codebase- bringing it forward through library and platform upgrades would mean no time for adding new features that customers actually want. That, and the upgrade path is incredibly murky and unclear, with about 50 different approaches for every 50 different problems we might encounter, not to mention things like deciding on Framework versions and editions and such.

I know I was stuck in my ways previously, so it's hardly something that isn't worth considering- I stuck with VB6 for far too long, figuring it was fine and that these newfangled .NET things were unnecessary and complicated. But as it happens, I was wrong about that. So it's possible I am wrong about UWP; and if so, then a lot of the negative discussion about UWP may be driven by the same attitude and thinking. Is it that it is something rather large and imposing that I would need to learn that results in me perceiving it so poorly? I think that is very likely.

Which is not to suggest, of course, that UWP is perfect and it is I who am wrong for not recognizing it; but perhaps it is the potential of UWP as a platform that I have failed to assess. While there are many shortcomings, future revisions and additions to the platform are likely to resolve those problems, as long as enough developers hop on board. And it does make sense for there to be a reasonable trust model, where you "know" what information an application actually uses or requests, rather than it being pretty much either limited user accounts or Administrator accounts with no knowledge of exactly what is being used.

It may be time to come up with a project idea and implement it as a UWP application to start that learning experience. I did it for C# and Windows Forms, I did it for WPF, and I don't see why the same approach couldn't work for UWP- unless it's impossible to learn new stuff after turning 30, which I'm pretty sure is not the case! If there is a way to execute other programs from UWP, perhaps the Repeater program I'm working on could be adapted. That is a fairly straightforward program.

Posted By: BC_Programming
Last Edit: 19 Jun 2017 @ 08:59 PM

Comments Off on A UWP Discussion
Categories: Programming
 09 Jun 2017 @ 11:06 PM 

This is part of an occasionally updated series on various programming languages. It should not be interpreted as a benchmark, but rather as a casual look at various programming languages and how they might be used by somebody for a practical purpose.
Currently, there are articles written regarding Python, C#, Java and VB6 (merged for some reason), Scala, F# & Ruby, Perl, Delphi, PHP, C++, Haskell, D, VB.NET, and even QuickBASIC.

QuickBASIC is an out-of-place choice when compared to most other languages I've written about in this series. Why would I jump so far backwards to QuickBASIC?

There are actually a number of reasons. The first is that QuickBASIC imposes a number of limitations. Aside from the more limited programming language compared to, say, C#, any solution also needs to contend with issues such as memory usage and open file handles on MS-DOS. At the same time, a lot of the development task is actually simpler; one doesn't need to fiddle with designers, or property pages, or configuration tabs, or anything of that sort. You open a text file and start writing the program.

The first task is to determine an algorithm. Of course, we know the algorithm- it's been described previously. However, in this instance we don't have hashmaps available; furthermore, even if we wanted to implement one ourselves, we cannot keep all the information in memory. As a result, one compromise is to instead keep an array of index information in memory; that array can contain the sorted word as well as a record index into another random-access file. So, to start, we have these two TYPE structures:
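(A hypothetical reconstruction consistent with the sizes discussed below- a 28-character sorted word plus a 4-byte LONG gives the 32-byte index record:)

    ' Index record kept in memory: 28 + 4 = 32 bytes.
    TYPE SORTINDEX
        SORTED AS STRING * 28   ' the word's letters, sorted alphabetically
        OFFSET AS LONG          ' record number within the scratch file
    END TYPE

    ' Record kept in the scratch file: the words sharing one sorted key.
    ' (QuickBASIC 4.5 doesn't allow arrays in TYPEs, so a fixed string
    ' holding up to 16 28-character slots stands in for SORTWORDS here.)
    TYPE SORTRECORD
        WORDCOUNT AS INTEGER
        SORTWORDS AS STRING * 448
    END TYPE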

By writing and reading directly from a scratch file whenever we need to add a new word to the "hash", we can avoid having any of the SORTRECORD structures in memory except the one we are working with. This drastically reduces our memory usage, as does the discovery that the longest word in the dictionary is 28 characters/bytes. The algorithm thus becomes similar to before: given a word, we sort the word's letters, and then we consult the array of SORTINDEX types. If we find one with the sorted word, we take the OFFSET, read in the SORTRECORD at that offset, increment the word count, add the word to SORTWORDS, and PUT it back into the scratch file. If it isn't found in the SORTINDEX, we create a new entry- saving a new record with the word to the scratch file, and recording the offset and sorted text in the index for that record.
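(As a sketch of that lookup/insert step- illustrative only; SortLetters$ is an assumed helper that returns a word's letters in sorted order, and the real program's bookkeeping differs:)

    SORTED$ = SortLetters$(WORD$)
    FOUND% = 0
    FOR I% = 1 TO INDEXCOUNT%
        IF RTRIM$(INDEXES(I%).SORTED) = SORTED$ THEN
            ' Existing entry: pull the record, append the word, put it back.
            GET #SCRATCH%, INDEXES(I%).OFFSET, REC
            REC.WORDCOUNT = REC.WORDCOUNT + 1
            MID$(REC.SORTWORDS, (REC.WORDCOUNT - 1) * 28 + 1, 28) = WORD$
            PUT #SCRATCH%, INDEXES(I%).OFFSET, REC
            FOUND% = 1: EXIT FOR
        END IF
    NEXT
    IF FOUND% = 0 THEN
        ' New entry: write a fresh record and index it.
        INDEXCOUNT% = INDEXCOUNT% + 1
        INDEXES(INDEXCOUNT%).SORTED = SORTED$
        INDEXES(INDEXCOUNT%).OFFSET = INDEXCOUNT%
        REC.WORDCOUNT = 1
        REC.SORTWORDS = WORD$
        PUT #SCRATCH%, INDEXCOUNT%, REC
    END IF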

Of course, this does have several inefficiencies that I won't address. The first is that the search for the matching sorted word is effectively a sequential search. Ideally, the in-memory index would be kept sorted, so that searches could use a binary search. I guess, if somebody is interested, I "left it as an exercise for the reader".

Otherwise, all seems well. But not so fast- the dict.txt file has 45,402 words. Our type definition is 32 bytes, which means that for all words to be stored in the index, we would need 1,452,864 bytes- far beyond the conventional memory limits we are under. So we need to drastically reduce the memory usage of our algorithm. And we had something so promising! It seems like it's back to the drawing board.

Or is it? Instead of trying to reduce how much memory our algorithm uses, we could reduce how much data it works with at a time. We can split the original dictionary file into chunks; as it happens, since words of different lengths cannot be anagrams of each other, we can merely split the file into separate files organized by word length. Then we perform the earlier algorithm on each of those files and append the resulting anagram list of each to one output file. That gives us one file listing all anagrams, without exceeding memory limitations!

Before we get too excited, let's make sure that the largest "chunk" would be small enough. Using another QuickBASIC program (because, what the hell, right?), I counted the words of each particular length. In this case, the chunk with the most words is the 7-letter one, of which there are 7,371 in the test dictionary file. This would require 235,872 bytes of storage, which is well within our 640K conventional memory limit.

Of course, there is a minor caveat: we do need to start QuickBASIC with certain command-line arguments, as, by default, the dynamic array maximum is actually 64K. We do this by launching it with the /AH command-line parameter. Otherwise, we might find that it encounters "Subscript out of range" errors once we get beyond around the 2000 mark for our 32-byte index record type.

Another consideration I encountered was open files. I had it opening all the dictionary output files at once, but it maxed out at 16 files, so I had to refactor it to be much slower: reading a line, determining the file to open, writing the line, and then closing the file. Again, there may be a better technique here to increase performance. For reference, I wasn't able to find a way to increase the limit, either (adjusting config.sys didn't help).

After that, it worked a treat- the primary algorithm runs on each length subset, and writes the results to an output file.

Without further ado- the full source of this "solution":
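(In outline, it has roughly this shape- SplitByLength and ProcessChunk stand in for the real routines, and the file names are illustrative:)

    ' Outline only- not the original listing.
    DECLARE SUB SplitByLength (SourceFile$)
    DECLARE SUB ProcessChunk (ChunkFile$, OutFile$)

    SplitByLength "DICT.TXT"                 ' pass 1: one chunk file per word length
    FOR L% = 1 TO 28
        CHUNK$ = "LEN" + LTRIM$(STR$(L%)) + ".TXT"
        ProcessChunk CHUNK$, "ANAGRAMS.TXT"  ' pass 2: index each chunk, append results
    NEXT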

And there you have it: an anagram search program written in QuickBASIC. Of course, it is rather basic, and it is a bit picky about preconditions (hard-coded for a specific file, for example), but then it was largely written against my test VM.

Posted By: BC_Programming
Last Edit: 09 Jun 2017 @ 11:06 PM

Comments Off on Anagram Search Program Part XVI: QuickBASIC
Categories: Programming
 03 Jun 2017 @ 3:00 AM 

One of the fun parts of personal projects is, well, you can do whatever you want. Come up with a silly or even dumb idea, and you can implement it if you want. That is effectively how I've approached BASeBlock. It's sort of depressing to play now- held back by older technologies like Windows Forms and GDI+, and higher-resolution screens make it look quite awful too. Even so, when I fire it up, I can't help but be happy with what I did. Anyway, I had a lot of pretty crazy ideas for things to add to BASeBlock. Some fit, and were even rather fun to play- like a "Snake" boss that was effectively made out of bricks. Others were, well, strange- like my Pac-Man boss which attempts to eat the ball. At some point, I decided that the paddle being able to shoot lightning, Palpatine-style, wasn't totally ridiculous.

Which naturally led to the question: how can we implement lightning in a way that sort of, kind of looks believable in a mostly low-resolution way, such that if you squint at the right angle you go "yeah, I can sort of see that possibly being lightning"? For that, I basically considered the recursive "tree drawing" concept. One of the common examples of recursion is drawing a tree: first you draw the trunk, then you draw some branches coming out of the trunk, then branches from those branches, and so on. For lightning, I adopted the same idea. The essential algorithm I came up with was thus:

  1. From the starting point, draw a line in the specified direction at the specified "velocity".
  2. From that end point, choose a random number of forks. For each fork, pick an angle within 45 degrees of the angle between the starting point and the end point, and take the specified velocity and randomly add or subtract up to a maximum of 25% of it.
  3. If any of the generated forks now have a velocity of 0 or less, ignore them.
  4. Otherwise, recursively call this same routine and start another "lightning" from the fork position at the fork's velocity.
  5. Proceed until there are no forks to draw, or a specified maximum number of recursions has been reached.

Of course, as I mentioned, this is a very crude approximation; lightning doesn't just randomly strike and stop short of the ground, and this doesn't really seek out a path to ground or anything along those lines. Again, it's a crude approximation to at least mimic lightning. The result in BASeBlock looked something like this:

[Image: BB_lightning- the lightning effect as it appears in BASeBlock]

Now, there are a number of other details in the actual implementation. First, it is written against the game engine, so it "draws" using the game's particle system, and also uses other engine features to, for example, stop short on blocks and do "damage" to blocks that are impacted. There are also short delays between each fork (which, again, is totally not how lightning works, but I'm taking creative license). The result does look far more like a tree when you look at it, but the animation and how quickly it disappears (paired with the sound effect) are enough, I think, to at least make it "passably" lightning.

But all this talk, and no code, huh? Well, since this starts from the somewhat typical "draw a shrub" concept, applied recursively and with some randomization, let's just build that- the rest, as they say, will come on its own. And by that, I suppose they mean you can adjust and tweak it as needed until you get the desired effect. Or maybe you want to draw a shrubbery; I'm not judging. With that in mind, here's a quick little method that does this against a GDI+ Graphics object. Why a GDI+ Graphics object? Well, there isn't really any other way of doing standard bitmap drawing on a canvas-type object, as far as I know. Also, as usual, I just sort of threw this together, so I didn't have time to paint it and it might not be to scale or whatever.
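(A minimal sketch of the idea- a reconstruction rather than the original listing, so the signature and constants are illustrative:)

    using System;
    using System.Drawing;

    static class ShrubSketch
    {
        private static readonly Random rnd = new Random();

        // Recursively draw a branch: a line segment, then a few randomized forks.
        // Angle is in radians; maxDepth caps the recursion per step 5 above.
        public static void DrawBranch(Graphics g, Pen pen, PointF start,
                                      double angle, double velocity, int maxDepth)
        {
            if (velocity <= 0 || maxDepth <= 0) return;  // steps 3 and 5

            // Step 1: draw a line from the start point in the given direction.
            var end = new PointF(
                start.X + (float)(Math.Cos(angle) * velocity),
                start.Y + (float)(Math.Sin(angle) * velocity));
            g.DrawLine(pen, start, end);

            // Step 2: random forks, each within 45 degrees of the current
            // heading, with the velocity adjusted by up to 25% either way.
            int forks = rnd.Next(2, 4);
            for (int i = 0; i < forks; i++)
            {
                double forkAngle = angle + (rnd.NextDouble() * 2 - 1) * (Math.PI / 4);
                double forkVelocity = velocity * (0.75 + rnd.NextDouble() * 0.5);
                // Step 4: recurse from the end point.
                DrawBranch(g, pen, end, forkAngle, forkVelocity, maxDepth - 1);
            }
        }
    }

Calling it with a start point at the bottom of the image and an angle of -Math.PI / 2 (straight up) gives output like the image below.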

What amazingly beautiful output do we get from this? Whatever this is supposed to be:

[Image: bluetree- the output of the routine above]

It does sort of look like a shrubbery, I suppose- aside from it being blue, that is. It looks nothing like lightning, mind you. Though, in my defense, when electricity tunnels through certain materials it often leaves tree-like patterns like this. Yeah, so it's totally electricity-related.

This is all rather unfulfilling, so as a bonus- how about making lightning in Photoshop:

  1. Start with a gradient.
  2. Apply the "Difference Clouds" filter, with white and black selected as the main and secondary colours.
  3. Invert the image colours, then adjust the levels to get a more pronounced "beam".
  4. Finally, add a layer on top and use the Overlay blend mode to add a vibrant hue- yellow, red, orange, pink, whatever. I'm not your mom. I made it cyan for some reason here.

Posted By: BC_Programming
Last Edit: 03 Jun 2017 @ 03:00 AM

Comments Off on Faking “Lightning” in a 2-D game
Categories: .NET, C#, Programming
 13 May 2017 @ 4:19 PM 

Storing, calculating, and working with currency data types seems like one of the few things that still provides a mixed bag of emotions. In C#, on the one hand you have the decimal data type; on the other, you have pretty much no framework functions which actually accept a decimal data type or return one.

As it is the most suitable, the decimal data type is largely recommended for financial calculations, and this makes the most sense. And while, as mentioned, there are many functions and calls you might need to make which will result in casts back and forth to other data types like double or even float, if you design your software and data from the ground up to deal with it, you can usually accommodate these issues.

The problems arise, as usual, when we start looking at existing systems. For example, your old product might be working reasonably well, with only a few problems, for a customer with loads of data. They aren't going to be as likely to hop aboard your new system if it means re-entering that data, and they aren't going to like seeing errors from their current data appear in the new system- it was working before, after all. These old systems might be using ISAM databases, and may have used floating point internally for calculations; even if the old software doesn't use its own math routines, you've got to consider that whatever programming environment it was built in might not follow the same mathematical rules as the new system, and so you have to decide how to proceed. Something as simple as a rounding function handling a corner case differently could result in massive amounts of manual data entry for either the customer or even yourself. On the other hand, using floating point types and writing wrappers to mimic the foibles of the old functions effectively builds technical debt into the new product. The compromise solution- some sort of configuration option that switches between a floating-point compatibility mode and a native decimal mode- would involve a lot of ground-up architecture to implement; database schemas will differ, and you can't just willy-nilly swap the option, either.
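That rounding corner case is easy to demonstrate with a small standalone example (mine, not from any particular legacy system): .NET's Math.Round defaults to banker's rounding, where many older environments round half away from zero, and binary floating point cannot represent many decimal fractions exactly.

    using System;

    class RoundingCornerCases
    {
        static void Main()
        {
            // Banker's rounding (round half to even) is the .NET default:
            Console.WriteLine(Math.Round(2.5m));  // 2
            Console.WriteLine(Math.Round(3.5m));  // 4
            // An older system rounding half away from zero would produce 3 here:
            Console.WriteLine(Math.Round(2.5m, MidpointRounding.AwayFromZero));  // 3

            // And the representation problem that makes decimal preferable:
            Console.WriteLine(0.1 + 0.2 == 0.3);     // False with double
            Console.WriteLine(0.1m + 0.2m == 0.3m);  // True with decimal
        }
    }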

It's the sort of problem that doesn't seem to get covered in academia, but it comes up often, and each decision must be made carefully in order to avoid alienating the customer base while avoiding unnecessary technical debt- particularly since technical debt is often why the new system is being implemented to begin with. Bringing forward technical debt from the replaced system sort of defeats the purpose.

Posted By: BC_Programming
Last Edit: 14 May 2017 @ 07:08 PM

Comments Off on Numeric Types and historical technical debt
Categories: .NET, C#
 07 May 2017 @ 10:03 AM 

Upgrading library components across an entire suite of applications is not without its risks, as one may shortly learn when upgrading from Npgsql 2 to Npgsql 3. The same applies between any two versions- who knows what bugs might be added, or what bugs might be fixed that you relied upon previously, either intentionally or without even being aware that the behaviour you relied on was unintended.

As it happens, Npgsql 3 has such particulars when it comes to upgrading from Npgsql 2. On some sites, and with some workstations, we received reports of exceptions and errors after the upgrade was deployed. These were IOExceptions from within the Npgsql library when we ran a query, of the "a connection was forcibly closed by the remote host" variety. Even adding some retry loops didn't resolve the issue; it would hit its retry limit, as if at some point the NpgsqlConnection simply refused to work.

After some investigation, it turned out to be a change in how Npgsql 3 manages pooling. In this case, a connection in the pool was being closed by the postgres server. This was masked with previous versions because in Npgsql 2, pooled connections would be re-opened if they were found to be closed. Npgsql 3 changed this, both for performance reasons and to be consistent with other data providers. This change meant that our code was creating a connection and opening it- but that connection was actually a remotely-closed connection from the pool, so attempts to query against it would throw exceptions.

Furthermore, because of the nature of the problem, there was no clear workaround. We could trap the exception, but at that point, how do we actually handle it? If we re-open the connection, we'd just get the same closed connection back from the pool. It would be possible to disable pooling and open a fresh connection in that case, but there isn't much reason to have that take place only when that specific exception occurs, and it means adding handling everywhere we perform an SQL query- and that exception might not specifically be caused by the pooling behaviour, either.

The fix we used was to add new global configuration options to our software which add parameters to the connection string to disable pooling and/or enable Npgsql's keepalive feature. The former sidesteps the issue by not using pooled connections; the latter prevents the connections from being closed remotely (except when there is a serious problem, of course, in which case we want the exception to take place). So far this has been successful in resolving the problem on affected systems.
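(A minimal sketch of what those options amount to- the host and credentials are illustrative; Pooling and KeepAlive are Npgsql connection-string settings, with KeepAlive expressed in seconds:)

    using Npgsql;

    static class ConnectionFactory
    {
        // Builds a connection honouring the global "disable pooling" and
        // "keepalive" configuration options described above.
        public static NpgsqlConnection Open(bool disablePooling, int keepAliveSeconds)
        {
            var builder = new NpgsqlConnectionStringBuilder
            {
                Host = "dbserver", Database = "appdb",
                Username = "appuser", Password = "secret"
            };
            if (disablePooling) builder.Pooling = false;
            if (keepAliveSeconds > 0) builder.KeepAlive = keepAliveSeconds;

            var conn = new NpgsqlConnection(builder.ConnectionString);
            conn.Open();
            return conn;
        }
    }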

Posted By: BC_Programming
Last Edit: 07 May 2017 @ 10:03 AM

Comments Off on Npgsql 3 And the mysterious Socket Error
Categories: .NET, C#, Database, Programming
