23 Mar 2019 @ 2:15 PM 

There are a lot of components of Windows 10 that we, as users, are not “allowed” to modify. Even when we find a way to do so- such as by disabling services or scheduled tasks from a command prompt running under the SYSTEM account- it often isn’t enough, because the next time updates are installed those settings are frequently reset. There are also background tasks and services intended specifically for “healing” tasks, which is a pretty friendly way to describe a trojan downloader.

One common way to “assert” control is via the registry and the Image File Execution Options key, found at:

HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Image File Execution Options

By adding a key here with the name of an executable, one can set additional execution options for it. The one of importance here is a string value called Debugger. When that value is present, Windows will not start the executable itself; instead it launches the executable listed for the “Debugger” value, passing the executable that was being run (and its arguments) as parameters.

We can use this for two purposes. The most obvious is that we can simply swap in an executable that does nothing at all, and effectively prevent a given program from running. For example, if we add “C:\Windows\System32\systray.exe” as the Debugger value for an executable, then when that executable is run, the systray.exe stub runs instead, does nothing, and exits, and the executable that was being launched never starts. As a quick aside: systray.exe is a stub that doesn’t actually do anything- it used to provide the built-in notification icons on Windows 9x, and it remains because some software would actually check whether that file existed to know whether it was running on Windows 95 or later.
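For illustration, here’s a minimal way to set that up from C# using the Microsoft.Win32 registry API. This is just a sketch- it needs to run elevated, and compattelrunner.exe is only an example target, not a recommendation:

    using Microsoft.Win32;

    class RedirectExample
    {
        static void Main()
        {
            // The key name under Image File Execution Options is just the executable's file name.
            const string ifeoKey = @"SOFTWARE\Microsoft\Windows NT\CurrentVersion\Image File Execution Options\compattelrunner.exe";

            // Requires administrative rights; CreateSubKey opens the key, creating it if necessary.
            using (RegistryKey key = Registry.LocalMachine.CreateSubKey(ifeoKey))
            {
                // Windows will now launch systray.exe (which does nothing and exits)
                // whenever compattelrunner.exe is started.
                key.SetValue("Debugger", @"C:\Windows\System32\systray.exe", RegistryValueKind.String);
            }
        }
    }

The same change can of course be made by hand in regedit; the point is just that it is a single string value on a single key.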

The second way we can use it is to instead insert our own executable as the debugger value. Then we can log and record each invocation of any redirected program. I wanted to record the invocations of some built-in Windows executables I had disabled, so I created a simple stub program for this purpose:

IFEOSettings.cs
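The settings class amounts to something along these lines- a rough sketch, where the class shape and property name are illustrative rather than the exact listing:

    namespace IFEORedirect
    {
        // Sketch of the settings holder; kept separate so it can grow later.
        public class IFEOSettings
        {
            // Hard-coded log location for now; the folder is created ahead of time.
            public string LogFolder { get; set; } = @"C:\IMEO_Logs";
        }
    }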

I decided to separate the settings out for future editing. For my usage, I just have it hard-coded to C:\IMEO_Logs right now, and I create the folder beforehand. The bulk of the program, of course, is the entry point class:
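The gist of the entry point is to append one log line per invocation attempt and then simply exit; a minimal sketch of that idea, with the log file naming and line format being illustrative:

    using System;
    using System.IO;

    namespace IFEORedirect
    {
        public static class Program
        {
            public static void Main(string[] args)
            {
                var settings = new IFEOSettings();
                Directory.CreateDirectory(settings.LogFolder);

                // args[0] is the executable Windows was asked to run; the rest are its arguments.
                string target = args.Length > 0 ? Path.GetFileName(args[0]) : "unknown";
                string logFile = Path.Combine(settings.LogFolder, target + ".log");

                // Append one line per invocation attempt, then exit without launching
                // anything- the redirected program never runs.
                File.AppendAllText(logFile,
                    $"{DateTime.Now:yyyy-MM-dd HH:mm:ss} {string.Join(" ", args)}{Environment.NewLine}");
            }
        }
    }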

I’ve used this for a few weeks, by manually altering the Image File Execution Options entries that previously redirected some executables (compattelrunner.exe, wsqmcons.exe, and a number of others) to systray.exe so that they redirect to this program instead. It then logs every attempt to invoke those executables, along with details like the arguments that were passed in.

Posted By: BC_Programming
Last Edit: 23 Mar 2019 @ 02:15 PM

 14 Mar 2019 @ 6:56 PM 

Over the last few years it has become apparent, of course, that many people building and using PCs are using physical media less and less. One thing I have noticed is that a lot of people who go “optical drive free” seem to evangelize it and assume that everybody who uses DVDs or physical media is some kind of intellectually incognizant doofus- that optical media is unnecessary in general and nobody needs it, which is of course a very silly statement.

Of course it’s "unnecessary"; a graphics card is "unnecessary" and a sound card is "unnecessary", and both have been for years, but people still buy and use them. Optical drives are probably more in the camp of the sound card than the graphics card, since a graphics card is arguably a necessity for "gaming" whereas optical drives are certainly not- at least not in general.

But like they say- different people have different needs; or perhaps a better term would be different uses for the same hardware.

Just speaking personally, my main system has both a Blu-ray burner and a DVD drive installed. I use the Blu-ray burner for watching Blu-rays, as I prefer physical media, and I’ve found BD-R discs great for making hard-copy backups. Why not use a USB drive? I have USB flash drives and external USB drives/enclosures, but I’ve found them incredibly uneconomical for long-term hard backups. With Blu-ray discs, if I want a hard-copy backup, it’s something I want to burn, label, and basically file away. Flash drives and external drives wouldn’t work like that- they would constantly be changing alongside the data source being backed up, making them more of a redundancy than an actual backup solution over the longer term. Another problem is that a good external drive isn’t cheap. The external drives Seagate and WD sell are reasonably cheap for the capacity, mind, but those are dogshit; WD and Seagate both build their externals out of their shittiest drives. Sorry, but I don’t trust a Seagate Sunfish (or whatever they call their low-end model) or a WD Green drive as a safe backup drive any more than I’d trust the safe deposit boxes of a bank that operates out of the back of a Toyota Tercel. That makes a good backup drive less economical, because it means getting a good enclosure (eSATA and USB 3 are an obvious must here) as well as a good drive.

Cost per GB is also better with BD-R (though perhaps less so compared with spindles of DVDs).

Another aspect is that I also have a number of older game titles on DVDs. Some are available on Steam, but I’m not about to buy them again. Fuck that noise.

Of course, I *could* do all of this with an external drive. But because I actually utilize the physical media rather frequently, going without the drives wouldn’t be economical time-wise. So it’s sort of like somebody who, for some physiological reason, never shits saying that having a toilet inside is unnecessary. They are strictly right, but I’m not going to start shitting in a chamberpot or an outhouse.

Conversely, it’s not necessary- my laptop doesn’t have an optical drive, for example, and that hasn’t affected anything, as it’s not a gaming machine and doesn’t keep anything special that I need to back up to start with.

Posted By: BC_Programming
Last Edit: 14 Mar 2019 @ 06:56 PM

Categories: Programming
 28 Jan 2019 @ 4:53 PM 

Recently, a Microsoft engineer had this to say with regard to Mozilla and Firefox:

Thought: It’s time for @mozilla to get down from their philosophical ivory tower. The web is dominated by Chromium, if they really *cared* about the web they would be contributing instead of building a parallel universe that’s used by less than 5%?

As written, this naturally got a lot of less-than-optimistic responses. Here are some follow-up tweets wherein they explain their position:

I don’t neglect the important work Mozilla has contributed, but here’s a few observations shapes my perspective:

1) The modern web platform is incredible complex. Today it’s an application runtime comparable to the Java or .net framework.

2) This complexity it’s incredibly expensive to implement a web runtime. Even for Google/Microsoft it’s hard to justify such investment that would take thousands of engineers in multiple years. The web has become too capable for multi engines, just like many frameworks.

3) Contribution can happen on many levels, and why is it given that each browser vendor has to land their contributions in *their own* engine? What isn’t the question what drives most impact for the web as a holistic platform?

4) My problem with Mozilla’s current approach is that they are *preaching* their own technology instead of asking themselves how they can contribute most and deliver most impact for the web? Deliver value to 65% of the market or less than 5%?

5) This leads to my bigger point: In a world where the web platform has evolved into a complex application runtime, maybe it’s time to revise the operation and contribution model. Does the web need a common project and an open governance model like fx Node Foundation?

6) What if browser vendors contributed to a "common webplat core" built together and each vendor did their platform specific optimizations instead of building their own reference implementations off a specification from a WG? That’s what I mean by "parallel universes".

7) I believe Mozilla can be much more impactful on the holistic web platform if they took a step back and revised their strategy instead of throwing rocks after Google/MS/etc.

8) I want the web to win, but we need collaboration not parallel universes. Writing specs together is no longer enough. The real threat to the web platform is not another browser engine, but native platforms, as they don’t give a damn about an open platform.

That’s a lot to take in; however, my general “summary” would be “Why have these separate implementations of the same thing when there can be one?”- which is pretty much a case for promoting code reuse. However, that idea doesn’t really hold up in this context, which may be why the statement was so widely criticized on Twitter.

In an ideal world, of course, the idea that we could have, as they describe, a single “common webplat core” that every vendor can freely contribute to- and over which no one vendor has any absolute or direct control or veto power- is a good one. But it is definitely not what we have, nor is it something that seems to be in development right now. That “common webplat core built together by every vendor” is most definitely NOT Chromium, or the Blink engine, so it’s sort of a red herring here. Chromium is heavily influenced by and practically “under the control” of Google, an advertising company. Microsoft- another company with a large advertising component- has now opted to use the same Blink rendering engine and Chromium underpinnings that are used in Chrome, via a re-engineering of the Microsoft Edge browser. That’s two companies shoulder-deep in the advertising and marketing space, with a history of working in their own best interests rather than the best interests of end users, with a hand on the reins of Chromium. Not exactly the open and free “common webplat core” that they described!

Given this, Mozilla seems to be the only browser/rendering-engine vendor that is committed to an open web. The idyllic scenario they describe only makes sense if we start from the assumption that all Open Source software is inherently free of any sort of corporate influence, which simply is not the case. Furthermore, the entire point of Open Source projects is to provide alternatives, not a single be-all, end-all implementation- the entire idea of Open Source is to provide choices, not take them away. There is no single Desktop Environment, Shell, Email Server, Web Server, text editor, and so on; think of a type of software and the Open Source community has numerous different implementations. This is because, realistically, there is no “be-all, end-all” implementation for any non-trivial software product, and implementations of an open web fall under that umbrella. Suggesting that a single standard implementation be used for every single web browser is actually completely contrary to the way Open Source already works.

Posted By: BC_Programming
Last Edit: 28 Jan 2019 @ 04:53 PM

Categories: Programming
 26 Sep 2018 @ 1:38 PM 

I have a feeling this will be a topic I will cover at length repeatedly, and each time I will have learned things since my previous installments. The Topic? Programming Languages.

I find it quite astonishing just how much polarization and fanaticism we can find over what is essentially a syntax for describing operations to a computer. A quick google can reveal any number of arguments about languages: people telling you why Java sucks, people telling you why C# is crap, people telling you why Haskell is useless for real-world applications, people telling you that Delphi has no future, people telling you that there is no need for value semantics on variables, people telling you mutable state is evil, people telling you that garbage collection is bad, people telling you that manual memory management is bad, and so on. It’s an astonishing, never-ending trend. And it’s really quite fascinating.

Why?

I suppose the big question is- why? Why do people argue about languages, language semantics, capabilities, and paradigms? This is a very difficult question to answer. I’ve always felt that polarization and fanaticism are far more likely to occur when you only know and understand one programming language. Of course, I cannot speak for everybody, only from experience. When I only knew one language “fluently”, I was quick to leap to its defense. It had massive issues that I can see now, looking back, but which I didn’t see at the time. I justified omissions as being things you didn’t need or could create yourself. I called features in newer languages ‘unnecessary’ and ‘weird’. So the question really is, who was I trying to prove this to? Was I arguing against those I was replying to- or was it all for my own benefit? I’m adamant that the reasons for my own behaviour- and, to jump to a premature and biased conclusion, possibly the behaviour of those in whom I see the same pattern with other languages- were the result of feeling trivialized by attacks on the language I was using. Basically, it’s the result of programmers rating themselves based on what languages they know and use every day. This is a natural- if erroneous- method of measuring one’s capabilities. I’ve always been a strong proponent of the idea that it isn’t the programming language that matters, but rather your understanding of programming concepts and how you apply them, as well as not subscribing to the religious dogmas that generally surround a specific language design. (I’m trying very hard not to cite specific languages here.) Programming languages generally have set design goals. As a result, they typically encourage a style of programming- or even enforce it through artificial limitations. Additionally, the limitations that do exist (generally for design reasons) get worked around by competent programmers in the language. So when the topic turns to their favourite language not supporting Feature X, they can quickly retort that “you don’t need Feature X, because you can use Features Q, P and R to create something that functions the same”. But that rather misses the point, I feel.

I’ve been careful not to mention specific languages, but here I go: take Visual Basic 6- that is, pre-.NET. As a confession, for a very long time Visual Basic 6 was the only language I knew well enough to do anything particularly useful with. Looking back- and having to support my legacy applications, such as BCSearch- I’m astonished by two things that are almost polar opposites. The first is simply how limited the language is. For example, if you had an object of type CSomeList and wanted to ‘cast’ it to an IList interface, you would have to do this:

Basically, you ‘cast’ by assigning the object directly to a variable of the desired type. These sorts of little issues and limitations really add up. The other thing that astonished me was the ingenuity of how I dealt with those limitations. At the time, I didn’t really consider some of these things limitations, and I didn’t think of how I dealt with them as workarounds. For example, I found the above casting requirement annoying, so I ended up creating a GlobalMultiUse class (which means all the procedures within are public); in this case the function might be called “ToIList()” and would attempt to cast the parameter to an IList and return it. Additionally, at some point I must have learned about exception handling in other languages, and I actually created a full-on implementation of exception handling for Visual Basic 6. Visual Basic 6’s error handling was, for those that aren’t aware, rather simple: you could basically say “On Error Goto…” and redirect program flow to a specific label when an error occurred. All you would know about the error was the error number, though. My “Exception” implementation built upon this. To throw an exception, you would create it (usually with an aforementioned public helper) and then throw it. In the Exception’s “Throw()” method, it would save itself as the active unwind exception (a global variable) and then raise an application-defined error. Handlers were required to recognize that error number and grab the active exception (using GetException(), if memory serves). GetException would also recognize many error codes and construct instances of the appropriate Exception type to represent them, so in many cases you didn’t need to check for that error code at all. The result? Code like this:

would become:

There was also a facility to throw inner exceptions, by using ThrowInner() with the retrieved Exception Type.

So what is wrong with it? Well, everything. The language doesn’t provide these capabilities, so I basically had to nip and tuck it to provide them, and the result is some freakish plastic surgery where I’ve grafted exceptions onto somebody who didn’t want exceptions. The fact is that, once I moved to other languages, I could see just how freakish some of the stuff I implemented in VB was. That implementation was obviously not thread-safe, for example, but that didn’t matter, because there was no threading support anyway.

Looking forward

With that in mind, it can be valuable to consider one’s current perspectives and how they may be misguided by that same sort of devotion. This is particularly important when dealing with things you have only a passing knowledge of. It’s one thing to recognize flaws in something you have real experience with, but it’s easy to find flaws- or repaint features or aspects as flaws- for the purpose of making yourself feel wiser for not having used it sooner.

Posted By: BC_Programming
Last Edit: 26 Sep 2018 @ 01:38 PM

Comments Off on Programming Languages (2)
 31 Aug 2018 @ 7:45 PM 

When I was implementing BASeTris, my Tetris clone, I thought it would be nifty to have controller support, so I could use the XBox One Controller that I have attached to my PC. My last adventure with game controllers ended poorly- BASeBlock has incredibly poor support for them overall. Revisiting the subject, this time with an eye toward XInput rather than DirectInput, I eventually found XInput.Wrapper, which is a rather simple, single-class approach to handling XInput.

The way that BASeTris handles input is my attempt at separating different input methods from the start. The game state interface has a single HandleGameKey routine which effectively handles a single press. That itself gets called by the actual input routines, which also include some additional management for features like DAS repeat for certain game keys. The XInput wrapper, of course, was not built like this; it is not particularly event-driven and works differently.

I did mess about with its “Polling” feature for some time before eventually creating my own implementation of the same. The biggest thing I needed was a “translation” layer where I could see when buttons were pressed and released, track that information, and translate it into the appropriate GameKey presses. This was the rather small class that I settled on for this purpose and currently have implemented in BASeTris:
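The idea is simple enough to sketch: assuming the underlying wrapper can be polled for the set of buttons currently held (the PadButton enum and the polling delegate below are my own stand-ins, not XInput.Wrapper’s actual API), the translation layer just diffs consecutive polls and raises pressed/released notifications:

    using System;
    using System.Collections.Generic;

    // Stand-in for whatever button identifiers the underlying wrapper exposes.
    public enum PadButton { A, B, X, Y, DPadLeft, DPadRight, DPadUp, DPadDown, Start, Back }

    public class GamepadTranslator
    {
        private readonly Func<ISet<PadButton>> _poll;   // returns the buttons currently held
        private HashSet<PadButton> _previous = new HashSet<PadButton>();

        public event Action<PadButton> ButtonPressed;
        public event Action<PadButton> ButtonReleased;

        public GamepadTranslator(Func<ISet<PadButton>> poll) { _poll = poll; }

        // Call once per frame (or on a timer): compares the current state to the
        // previous one and fires events for any transitions.
        public void Update()
        {
            var current = new HashSet<PadButton>(_poll());
            foreach (var b in current)
                if (!_previous.Contains(b)) ButtonPressed?.Invoke(b);
            foreach (var b in _previous)
                if (!current.Contains(b)) ButtonReleased?.Invoke(b);
            _previous = current;
        }
    }

Pressed events then map onto HandleGameKey calls, with the DAS-repeat handling staying where it already lives in the existing input routines.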

It is a bit strange that I needed to create a wrapper for what is itself a wrapper, but it wasn’t as though I was going to find a ready-made solution that integrated with how I had designed input in BASeTris anyway- some massaging was always going to be necessary.

Posted By: BC_Programming
Last Edit: 31 Aug 2018 @ 07:45 PM

Comments Off on A Wrapper for… The XInput Wrapper (?)
Categories: C#, Programming
 10 Jun 2018 @ 10:23 AM 

It suddenly occurred to me in the last week that I don’t really have a proper system in place, not only for software downloads here on my website, but also for build integration with source control so that projects are built as needed when commits are made. Having set up a Jenkins build environment for the software I work on at my job, I thought it reasonable to make the same demands of myself.

One big reason to do this, IMO, is that it can actually encourage me to create new projects. The work of packaging up the result and making it easily accessible or usable is often a demotivator for starting new projects. Having an established “system” in place whereby I can make changes on GitHub and have, say, installer files “appear” on my website as needed can be a motivator- I don’t have to build the program, copy files, run installation scripts, and so on manually every time. I just need to configure it all once and it all “works” by itself.

To that end, I’ve set up Jenkins on one of my “backup” computers. It’s rather tame in its capabilities- only 4GB of RAM and an AMD 5350- but it should get the job done, I think. I would use my QX6700-based system, but the AMD system uses far less power. I also considered putting Jenkins straight on my main system, but thought that could get in the way and just be annoying. Besides- this gives that system a job to do.

With the implementation at work, there were so many interdependent projects, and we pretty much always want “everything”, that I just made it a single Jenkins project which builds everything at once. This way everything is properly up to date. The alternative was fiddling with 50+ different projects and figuring out the appropriate dependencies to build based on when other projects were updated- something of a mess. Not to mention it’s all in one repository anyway, which goes against that idea as well.

In the case of my personal projects on GitHub, they are already separate repositories, so I will simply have them built as separate projects; with Jenkins itself understanding upstream/downstream relationships, I can use that as needed.

I’ve successfully configured the new Jenkins setup and it is now building BASeTris, a Tetris clone game I decided to write a while ago. It depends on BASeScores and Elementizer, so those two projects are in Jenkins as well.

BASeTris’s final artifact is an installer.

But of course, that installer isn’t much good just sitting on my CI server! However, I also don’t want to expose the CI server as a “public” page- there are security considerations, even if I disregard upload bandwidth issues. To that end, I constructed a small program which uploads files to my website using SSH. It runs once a day and is given a directory; it looks in each immediate subdirectory of that directory, finds the most recent file, and uploads it to a corresponding remote directory if it hasn’t already been uploaded. I configured BASeTris to copy its final artifact into an appropriate folder there.
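The logic of that little program is roughly the following- a sketch that assumes SSH.NET (Renci.SshNet) for the SFTP side, with the host, credentials, and remote path as placeholders rather than my actual configuration:

    using System;
    using System.IO;
    using System.Linq;
    using Renci.SshNet;

    class ArtifactUploader
    {
        static void Main(string[] args)
        {
            string localRoot = args[0];                  // e.g. the Jenkins artifact drop folder
            string remoteRoot = "/downloads/CI";         // placeholder remote path

            using (var sftp = new SftpClient("example.com", "user", "password")) // placeholders
            {
                sftp.Connect();
                foreach (var dir in Directory.GetDirectories(localRoot))
                {
                    // Most recent artifact in this project's folder.
                    var newest = new DirectoryInfo(dir).GetFiles()
                        .OrderByDescending(f => f.LastWriteTimeUtc).FirstOrDefault();
                    if (newest == null) continue;

                    string remoteDir = remoteRoot + "/" + Path.GetFileName(dir);
                    string remotePath = remoteDir + "/" + newest.Name;

                    // Skip anything that was already uploaded on a previous run.
                    if (sftp.Exists(remotePath)) continue;
                    if (!sftp.Exists(remoteDir)) sftp.CreateDirectory(remoteDir);

                    using (var stream = newest.OpenRead())
                        sftp.UploadFile(stream, remotePath);
                }
                sftp.Disconnect();
            }
        }
    }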

Alternatively, it is possible to have each project upload its artifacts via SSH as a post-build step. However, I opted not to do that, because I would rather not have a series of changes throughout the day result in a bunch of new uploads- those would consume space and not be particularly useful. Instead, I’ve opted to have all the projects I want to publish uploaded once a day, and only if there have been changes. This should help reduce the redundancy (and space usage) of those uploads.

My “plan” is to have a proper PHP script or something that can enumerate the folders and provide a better interface for downloads. If nothing else, I would like each CI project’s folder to have a “project_current.php” file which automatically sends the latest build- then I can simply link to that on the blog download page for each project and only update the page to indicate new features or content.

As an example, http://bc-programming.com/downloads/CI/basetris/ is the location that will contain BASeTris version downloads.

There is still much work to do, however. The programs do have git hash metadata added at build time, so they have access to their git commit hash, but currently they do not actually present that information. I think it should, for example, be displayed in the title bar alongside other build information such as the build date, if possible. I’ve tried to come up with a good way to have the version auto-increment, but I think I’ll just tweak that as the projects change.
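For the display side, one minimal approach- assuming the build stamps the commit hash into the assembly’s informational version, which is an assumption about the build setup rather than what the projects currently do- is to read it back via reflection:

    using System.Reflection;

    static class BuildInfo
    {
        // Returns something like "1.0.0+3f5c2ab" if the build wrote the commit hash
        // into AssemblyInformationalVersion; falls back to the plain version otherwise.
        public static string VersionText()
        {
            var asm = Assembly.GetExecutingAssembly();
            var info = asm.GetCustomAttribute<AssemblyInformationalVersionAttribute>();
            return info?.InformationalVersion ?? asm.GetName().Version.ToString();
        }
    }

    // Usage in a WinForms window, for example: this.Text = "BASeTris " + BuildInfo.VersionText();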

Heck- the SSH uploader utility seems like a good candidate for yet another project to add to GitHub, if I can genericize it so it isn’t hard-coded for my site and purpose.

Posted By: BC_Programming
Last Edit: 10 Jun 2018 @ 10:23 AM

Comments Off on About Time I had a CI Server, Methinks
 06 Jun 2018 @ 7:46 PM 

Flash memory, like anything, is no stranger to illegitimate products. You can find 2TB flash drives on eBay for 40 bucks, for example. These claim to be 2TB and show up as 2TB- but attempt to write data beyond a much smaller real capacity, and the data is corrupted, because the drive actually writes to an earlier location. My first experience with this was actually with my younger brother’s Gamecube; when he got it, he also got two “16MB” memory cards (16 megabits, so 2 megabytes). However, they would rather frequently corrupt data. Looking back, I suspect it was much the same mechanism- the memory card was “reporting” itself as larger than it was, and writing beyond the end was corrupting the information on it.

This brings me to today. You can still find cheap memory cards for those systems which claim sizes such as 128MB. Even at the “real” 128-megabit size, that’s still 16MB, which is quite substantial. I’ve recently done some experiments with four cheap “128MB” Gamecube memory cards that I picked up, and some of the results are quite interesting.

First, I should note that my “main” memory cards for the system are similar cheap cards I picked up online 12 years ago or thereabouts. One is a black card that simply says “Wii/NGC 128 MEGA” on it; the other is a KMD-brand 128MB. The cheap ones I picked up recently have the same case as the KMD and, internally, look much the same, though they feel cheaper; they are branded “HDE”. Now, for the ones I already have, I’m fairly sure they are legitimate, but not 100%- the flash chips inside are 128 megabits, and one is even 256 megabits. (Of course, this means “128 MEGA” and “128 MB” actually mean 16MB and 128 megabits, but whatever.)

Since the four new cards were blank, I decided to do a bit of experimenting with a little program called GCMM, or Gamecube Memory Manager. This is a piece of homebrew that allows you to do pretty much whatever you want with the data on memory cards, including making backups to an SD card, restoring from an SD card, copying files between memory cards, and so on. The first simple test is easy- just do a backup and a restore; it shouldn’t matter too much that the card is blank. I backed up the new card with no problem. However, when I tried to restore it, I got a write error at block 1024. This is right at the halfway point, and no matter what, I couldn’t get past that point on any of the “new” cards. This indicates to me that the cards are actually 8MB cards, with approximately 1024 blocks of storage. What a weird “counterfeit” approach- 8MB is already a rather substantial amount of space, so why “ruin” the device by having it report the wrong size and allow data corruption? I found that I could make raw restores succeed if I took the card out during the restore process right before it reached block 1024.

This discovery is consistent with what I understand of counterfeit flash- the controller will basically write to earlier areas of the memory when instructed to write beyond the “real” size, and will usually overwrite, say, file system structures, forcing a reformat. Interestingly, if I rip the card out before the restore gets there, everything backed up to that point is intact. I found something else interesting by looking inside the raw dump I originally created from one of the “new” cards. The file system itself was clean, but old data remains in the flash, and it was still there for viewing. I could see that Wrestlemania 2002 was probably used for testing the card at some point, as there was “w_mania2002” in the raw data, as well as a number of other tidbits referencing wrestlers that appeared in that game. What I found much more interesting, however, were a number of other strings: “V402021 2010-06-08” suggests a date the card might have been manufactured. “Linux-2.6.23.17_stm23_A18B-pdk71”… now this is interesting! Linux was involved in some way? That wouldn’t be surprising if the card was built with some sort of embedded system, but it doesn’t make a lot of sense for it to appear in the memory card data itself. Similarly, I found various configuration entries:

NetType=1
Language=0
Em10Mode=0
ConTimeout=30
ProductID=00100199007011400002D0154ADF986E
Licence=222
ServiceUser=0512200052225
ServicePwd=282026
PppoeUser=0512200052225@vod
PppoePwd=282026
DHCPPUser=0512200052225@vod
DHCPPPw=282026
IpAddr=192.168.1.12
NetMask=255.255.0.0
GateWay=
DNS=
MacAddr=D0:15:4A:DF:98:6E
Volume=100
TimeZone=8
ProxyFlag=0
AcceptCookie=99

Due to the amount of network information, WLAN IDs, and the like, my suspicion is that these flash chips are not actually new, but were taken from some sort of networking device, such as a router or switch. This is supported by the fact that googling a few of the configuration settings always seems to lead me to some sort of Chinese ADSL provider, so I suspect these flash chips were re-used from old networking equipment. That, in itself, adds another concern about these memory cards: if the chips were used before they found themselves in these memory cards, how much were they used, and how? Were they used to contain firmware, for example, or to hold a small file system for the networking device?

Overall, for something so seemingly mundane, I found this to be a very interesting distraction, and perhaps this information could prove useful- or at least interesting- to others.

Posted By: BC_Programming
Last Edit: 06 Jun 2018 @ 07:46 PM

Comments Off on Memory Card Adventures
Categories: Programming
 25 Nov 2017 @ 7:11 PM 

The code for this post can be found in this github project.

Occasionally you may present an interface which allows the user to select a subset of specific items. For example, you may have a setting which allows the user to configure a set of plugins, turning certain plugins or features on or off.

At the same time, it may be desirable to present an abbreviated notation for those items. As an example, if you were presenting a selection from alphabetic characters, you might want to present it as a series of ranges; if A, B, C, D, E, and Q were selected, for example, you might want to show it as “A-E,Q”.

The first step, then, would be to define a range. We can then take appropriate inputs, generate a list of ranges, and then convert that list of ranges into a string expression to provide the output we are looking for.

For flexibility we would like many aspects to be adjustable; in particular, it would be nice to be able to adjust the formatting of each range based on other information, so rather than hard-coding an appropriate ToString() routine, we’ll have it call a custom function.
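A minimal sketch of such a range type (the names here are illustrative rather than the exact code from the linked project):

    using System;

    // A contiguous run of selected items, from Start to End inclusive, with a
    // caller-supplied formatter so the output can be customized per range.
    public class ItemRange<T>
    {
        public T Start { get; }
        public T End { get; }
        private readonly Func<ItemRange<T>, string> _formatter;

        public ItemRange(T start, T end, Func<ItemRange<T>, string> formatter)
        {
            Start = start;
            End = end;
            _formatter = formatter;
        }

        public override string ToString() => _formatter(this);
    }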

Pretty straightforward- a starting point, an ending point, and some string formatting. Now, one might wonder about the lack of an IComparable constraint on the type parameter. That would make sense for certain types of data being collated, but in some cases the “data” doesn’t have a type-specific succession.

Now we need to write a routine that will return an enumeration of these ranges given a list of all the items and a list of the selected items. This, too, is relatively straightforward. Instead of a free-standing routine, this could also be encapsulated as a separate class with member variables to customize the formatted output. As with any programming problem, there are many ways to do things; the trick is finding the right balance, and in some cases a more structured approach to a problem can be suitable.
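Again as a sketch rather than the linked project’s exact code: walk the full item list in order and collapse each contiguous run of selected items into a single ItemRange:

    using System;
    using System.Collections.Generic;
    using System.Linq;

    public static class RangeBuilder
    {
        // Walks allItems in order and yields one ItemRange per contiguous run of
        // selected items. defaultFormat supplies each range's ToString() behaviour.
        public static IEnumerable<ItemRange<T>> BuildRanges<T>(
            IEnumerable<T> allItems, ISet<T> selected,
            Func<ItemRange<T>, string> defaultFormat)
        {
            var items = allItems.ToList();
            int i = 0;
            while (i < items.Count)
            {
                if (!selected.Contains(items[i])) { i++; continue; }
                int start = i;
                while (i + 1 < items.Count && selected.Contains(items[i + 1])) i++;
                yield return new ItemRange<T>(items[start], items[i], defaultFormat);
                i++;
            }
        }
    }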

Sometimes you might not have a full list of the items in question, but you might be able to indicate what a given item is followed by. For this, I constructed a separate routine with a similar structure which instead uses a callback function to determine the item that follows another- for integer types we can just add 1, for example.

This is expanded in the GitHub project I linked above, which also features a number of other helper routines for a few primitive types, as well as example usage. In particular, the most useful “helper” is the routine that simply joins the results of these functions into a resulting string:
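In the same sketchy spirit, the join amounts to formatting each range and gluing the results together with a separator:

    using System;
    using System.Collections.Generic;
    using System.Linq;

    public static class RangeFormatting
    {
        // Formats a selection as e.g. "A-E,Q": single-item ranges print one value,
        // longer ranges print "start-end".
        public static string FormatSelection<T>(IEnumerable<T> allItems, ISet<T> selected)
        {
            Func<ItemRange<T>, string> fmt = r =>
                EqualityComparer<T>.Default.Equals(r.Start, r.End)
                    ? r.Start.ToString()
                    : $"{r.Start}-{r.End}";
            return string.Join(",", RangeBuilder.BuildRanges(allItems, selected, fmt)
                                                .Select(r => r.ToString()));
        }
    }

    // Example: FormatSelection("ABCDEFGHIJKLMNOPQ".ToCharArray(),
    //          new HashSet<char>("ABCDEQ")) returns "A-E,Q".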

Posted By: BC_Programming
Last Edit: 25 Nov 2017 @ 07:11 PM

Comments Off on List Selection Formatting
Categories: .NET, C#, Programming
 16 Jun 2017 @ 3:43 PM 

Windows 10 introduced a new software development platform: the Universal Windows Platform, or UWP. In some respects it builds upon the earlier Windows Runtime that was introduced with Windows 8. One interesting aspect of the platform is that, properly used, it can be utilized to build software once and distribute it to a number of platforms running Microsoft operating systems, such as the XBox One.

I’ve fiddled a bit with UWP, but honestly I found it tricky to determine what it’s for. As it is, its API, as well as the set of third-party portable libraries, simply isn’t anywhere near what a typical application targeting the desktop via WPF or even Windows Forms can draw on. But I think that is intentional; they aren’t built for the same purpose. Instead, the main advantage of UWP appears to be the ability to deploy to multiple Windows platforms. Unfortunately, that is an advantage I don’t think I can really utilize. However, I expect it will be put to good use for future applications- it has already been well used for games like Forza Horizon 3, which utilized it for “Play Anywhere” so it can be played not only on the XBox console but on any capable Windows 10 system. Forza 7 will also be using it to much the same effect.

Even if I won’t utilize it, it probably makes sense to cover it. My recent coding-related posts always seem to involve Windows Forms. Perhaps I should work instead to learn UWP and then cover that learning experience in new posts? If I am encountering these hurdles, then I don’t think it is entirely unreasonable to think others are as well.

I’ve also got to thinking that perhaps I have become stuck in my ways, as I’m not partial to the approach that brings web technologies to the desktop; even today I find web applications, and UIs designed around the web, have a “feel” that lags behind a traditional desktop application in usability. That said, I’m also not about to quit my job just because it involves “legacy” frameworks. We’re talking about quite an old codebase- bringing it forward through library and platform upgrades would mean no time for adding the new features that customers actually want. That, and the upgrade path is incredibly murky and unclear, with about 50 different approaches for every 50 different problems we might encounter, not to mention decisions like which Framework versions and editions to target.

I know I was stuck in my ways previously, so it’s hardly something that isn’t worth considering- I stuck with VB6 for far too long, figured it was fine, and thought these newfangled .NET things were unnecessary and complicated. But as it happens, I was wrong about that. So it’s possible I am wrong about UWP; and if so, then a lot of the negative discussion about UWP may be driven by the same attitude and thinking. Is it that it is something rather large and imposing that I would need to learn that results in me perceiving it so poorly? I think that is very likely.

Which is not to suggest, of course, that UWP is perfect and it is I who am wrong for not recognizing it; but perhaps it is the potential of UWP as a platform that I have failed to assess. While there are many shortcomings, future revisions and additions to the platform are likely to resolve those problems, as long as enough developers hop on board. And it does make sense to have a reasonable trust model where you “know” what information an application actually uses or requests, rather than it being pretty much either limited user accounts or Administrator accounts with no way to know exactly what is being used.

It may be time to come up with a project idea and implement it as a UWP application to start that learning experience. I did it for C# and Windows Forms, I did it for WPF, and I don’t see why the same approach couldn’t work for UWP. (Unless it’s impossible to learn new stuff after turning 30, which I’m pretty sure is not the case!) If there is a way to execute other programs from UWP, perhaps the Repeater program I’m working on could be adapted- that is a fairly straightforward program.

Posted By: BC_Programming
Last Edit: 19 Jun 2017 @ 08:59 PM

Comments Off on A UWP Discussion
Categories: Programming
 09 Jun 2017 @ 11:06 PM 

This is part of an occasionally updated series on various programming languages. It should not be interpreted as a benchmark, but rather as a casual look at various programming languages and how they might be used by somebody for a practical purpose.
Currently, there are articles written regarding Python, C#, Java and VB6 (merged for some reason), Scala, F# & Ruby, Perl, Delphi, PHP, C++, Haskell, D, VB.NET, and even QuickBASIC.

QuickBASIC is an out-of-place choice when compared to most of the other languages I’ve covered in this series. Why would I jump so far backwards to QuickBASIC?

There are actually a number of reasons. The first is that QuickBASIC actually imposes a number of limitations. Aside from the more limited programming language compared to, say, C#, it also means any solution needs to appropriately contend with issues such as memory usage and open file handles on MS-DOS. At the same time, a lot of the development task is actually simpler; one doesn’t need to fiddle with designers, property pages, configuration tabs, or anything of that sort. You open a text file and start writing the program.

The first task is to determine an algorithm. Of course, we know the algorithm- it has been described previously- however, in this instance we don’t have hashmaps available; furthermore, even if we wanted to implement one ourselves, we cannot keep all the information in memory. As a result, one compromise is to instead keep an array of index information in memory; that array can contain the sorted word as well as a record index into another random-access file. So, to start, we have these two TYPE structures:

By writing and reading directly from a scratch file when we need to add a new entry to the “hash”, we can avoid having any of the SORTRECORD structures in memory except the one we are working with. This drastically reduces our memory usage- as did determining that the longest word is 28 characters/bytes, which fixes the size of the sorted-word field in SORTINDEX. The algorithm thus becomes similar to before: given a word, we sort the word’s letters and then consult the array of SORTINDEX entries. If we find one with the sorted word, we take its OFFSET, read in the SORTRECORD at that offset, increment the word count, add the word to the SORTWORDS array, and PUT it back into the scratch file. If it isn’t found in the SORTINDEX, we create a new entry- saving a new record with the word to the scratch file and recording the offset and sorted text in the index for that record.

Of course, this does have several inefficiencies that I won’t address. The first is that the search for the matching sorted word is effectively a sequential search. Ideally, the in-memory index would be kept sorted so searches could use a binary search. I guess if somebody is interested, I’ve “left it as an exercise for the reader”.

Otherwise all seems well. But not so fast- the dict.txt file has 45,402 words. Our index type is 32 bytes, which means that for all words to be stored in the index we would need 1,452,864 bytes, which is far beyond the conventional memory limits we are under. So we need to drastically reduce the memory usage of our algorithm- and we had something so promising! Seems like it’s back to the drawing board.

Or is it? Instead of trying to reduce how much memory our algorithm uses, we could reduce how much data it works with at a time. We can split the original dictionary file into chunks; and as it happens, since words of different lengths cannot be anagrams of each other, we can simply split the file into separate files organized by word length. Then we perform the earlier algorithm on each of those files and append the resulting anagram list of each to one output file. That gives us one file listing all anagrams without exceeding memory limitations!

Before we get too excited, let’s make sure that the largest “chunk” would be small enough. Using another QuickBASIC program (because, what the hell, right?) I counted the words of each particular length. The chunk with the most words is the 7-letter one, of which there are 7,371 in the test dictionary file. That would require 235,872 bytes of index storage, which is well within our 640K conventional memory limit.

Of course, there is a minor caveat: we do need to start QuickBASIC with certain command-line arguments, as by default the dynamic array maximum is 64K. We do this by launching it with the /Ah command-line parameter; otherwise, we might find it encounters “Subscript out of range” errors once we get beyond around the 2,000 mark for our 32-byte index record type.

Another consideration I encountered was open files. I originally had it opening all the dictionary output files at once, but it maxed out at 16 files, so I had to refactor it to be much slower: reading a line, determining the file to open, writing the line, and then closing the file. Again, there may be a better technique here to increase performance. For reference, I wasn’t able to find a way to increase the limit, either (adjusting config.sys didn’t help).

After that, it worked a treat- the primary algorithm runs on each length subset and writes the results to an output file.

Without further ado- the full source of this “solution”:

And there you have it: an anagram search program written in QuickBASIC. Of course, it is rather basic and a bit picky about preconditions (it’s hard-coded for a specific file, for example), but it was largely written against my test VM.
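As a point of comparison- and a reminder of why the missing hashmap mattered- here is a sketch of roughly what the same search boils down to in C#, where none of the memory or file-handle juggling above is needed (my own sketch, not the code from the earlier C# installment):

    using System;
    using System.IO;
    using System.Linq;

    class AnagramSketch
    {
        static void Main()
        {
            // Group every word by its letters sorted alphabetically; each group of
            // two or more words is a set of anagrams. This is the luxury QuickBASIC
            // doesn't offer: the whole dictionary lives in memory at once.
            var groups = File.ReadLines("dict.txt")
                .GroupBy(w => new string(w.OrderBy(c => c).ToArray()))
                .Where(g => g.Count() > 1);

            foreach (var g in groups)
                Console.WriteLine(string.Join(" ", g));
        }
    }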

Posted By: BC_Programming
Last Edit: 09 Jun 2017 @ 11:06 PM

Comments Off on Anagram Search Program Part XVI: QuickBASIC
Categories: Programming
