23 Nov 2019 @ 7:21 PM 

Since the software that I contribute to as part of my day job involves printing to receipt printers, I’ve been keeping my finger on the pulse of eBay and watching for cheap listings for models I’m familiar with. Recently I stumbled upon a “too good to be true” listing: an Epson TM-T88IV for $35. The only caveat I could see was that it was a serial printer; that is, it used an old-style RS-232 port. I figured that might be annoying, but hey, I’ve got Windows 10 PCs with serial ports on the motherboard and a null-modem serial cable- how hard could it be?

Famous last words, as it happened, because some might call the ensuing struggle a nightmare.

When the printer arrived, my first act was of course to verify it worked on its own. It powered up and correctly printed its test page, so the printer itself was fine. Next up, of course, was getting it to communicate with a computer. I connected it with a null-modem cable, adjusted the DIP switches for 38400 baud, 8 data bits, 1 stop bit, no parity, and DSR/DTR flow control, printed the test page again to confirm the settings, then installed the Epson OPOS ADK for .NET (as I was intending to use it with .NET). I configured everything on the software side to match, but CheckHealth failed. I fiddled with it for some time, trying all the different connection methods, to no avail.

I fired up RealTerm and squirted data directly over the COM port. All I could get was garbage text on the printout. I tried changing the COM port settings in Device Manager to force a specific baud rate as well, but that didn’t help.

I had a second computer- my system built in 2008- which didn’t have a COM *port*, but did have the header for one. I took the LPT and COM bracket from an older Pentium system and slapped it in there for testing, and spent a similar amount of time with exactly the same results. I was starting to think that the printer was simply broken, or that the interface card inside it was faulty in some way.

Then I connected it to a computer running Windows XP, and there it worked exactly as intended: I could squirt data directly to the printer and it would print, and I could even set up an older version of the OPOS ADK and CheckHealth passed. Clearly the receipt printer was working- so something was messed up with how I was using it. I put an install of Windows 7 on one of the Windows 10 PCs I was testing and found I got the same results. Nonetheless, after some more research and testing, it seems that Windows 10 no longer allows the use of motherboard serial or parallel ports. Whether this is a bug or intentional is unclear. I would guess it was imposed at the same point in development when Windows 10 dropped floppy support; people spoke up and got floppy support back in, but perhaps parallel and serial/RS-232 support stayed unavailable. Unlike that case, though, the ports do appear in Device Manager and are accessible as devices- they just don’t work correctly when used.

Since the software I wanted to work on would be running on Windows 10- or, if nothing else, certainly not Windows XP- I had to get it working there. I found that a USB RS-232 adapter worked, which meant I could finally start writing code.

The first thing was that a receipt printer shouldn’t be necessary to test the code, or for, say, unit tests. So I developed an interface. This interface could be used for mocking, and would cover the basic features as required. The absolute basics were:

  • Print lines of text
  • Enable and disable the underlying device
  • Claim and release the “printer” for exclusive use
  • Open and close the printer
  • Retrieve the length of a line in characters
  • Print a bitmap
  • Cut the paper
  • A boolean property indicating whether OPOS format characters are supported

I came up with an IPosPrinter interface that allows for this:
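The actual interface isn’t reproduced here, but a minimal sketch along those lines- with member names of my own choosing- might look like this:

```csharp
// A sketch of an IPosPrinter covering the basics listed above; member names are illustrative.
public interface IPosPrinter
{
    bool OposFormatCharactersSupported { get; } // whether OPOS format/escape characters can be used
    int LineCharacterLength { get; }            // printable characters per line

    void Open();
    void Close();
    void Claim(int timeoutMilliseconds);        // claim the device for exclusive use
    void Release();
    void SetDeviceEnabled(bool enabled);

    void PrintLine(string text);
    void PrintBitmap(System.Drawing.Bitmap image);
    void CutPaper();
}
```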

From there, I could make a “mock” implementation, which effectively implemented a ‘receipt print’ by sending it directly to a text file.
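A text-file-backed implementation might look something like the following sketch- illustrative rather than the actual class:

```csharp
using System.Collections.Generic;
using System.Diagnostics;
using System.IO;

// Sketch of a mock IPosPrinter that "prints" to a text file.
public class TextFilePosPrinter : IPosPrinter
{
    private readonly string _outputPath;
    private readonly List<string> _lines = new List<string>();
    private readonly bool _openInEditor;

    public TextFilePosPrinter(string outputPath, bool openInEditor)
    {
        _outputPath = outputPath;
        _openInEditor = openInEditor;
    }

    public bool OposFormatCharactersSupported { get { return false; } }
    public int LineCharacterLength { get { return 42; } } // assumed width of a typical 80mm receipt

    public void Open() { }
    public void Close()
    {
        File.WriteAllLines(_outputPath, _lines);
        if (_openInEditor)
            Process.Start(_outputPath); // hand the "printout" to the default text editor
    }
    public void Claim(int timeoutMilliseconds) { }
    public void Release() { }
    public void SetDeviceEnabled(bool enabled) { }

    public void PrintLine(string text) { _lines.Add(text); }
    public void PrintBitmap(System.Drawing.Bitmap image) { _lines.Add("[bitmap]"); }
    public void CutPaper() { _lines.Add(new string('-', LineCharacterLength)); }
}
```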

This implementation can also optionally shell the resulting text data to the default text editor, providing a quick way of testing a “printout”. However, this interface isn’t sophisticated enough to be usable for a nice receipt printer implementation; in particular, the actual code doing the printing is going to want to use columns to separate data. That shouldn’t be directly in the interface, however. Instead, a separate class can be defined which composites an implementation of IPosPrinter and provides the additional functionality. This allows any implementation of IPosPrinter to benefit, without requiring additional implementations.

Since our primary feature is having columns, we’ll want to define those columns; a ColumnDefinition class is just the ticket. We can then tell the main ReceiptPrinter class about the columns, and have a params array accept the print data so it can handle the columns automatically. Here is the ColumnDefinition class:
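Something along these lines captures the idea (a sketch, not the actual class- property names are illustrative):

```csharp
// Sketch of a ColumnDefinition.
public class ColumnDefinition
{
    public string Heading { get; set; }
    public int Width { get; set; }           // width of the column in characters
    public int ShrinkPriority { get; set; }  // lower-priority columns get dropped first on narrow printers
    public bool AlignRight { get; set; }

    public ColumnDefinition(string heading, int width, int shrinkPriority, bool alignRight)
    {
        Heading = heading;
        Width = width;
        ShrinkPriority = shrinkPriority;
        AlignRight = alignRight;
    }
}
```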

At this point, we want a primary helper routine within that ReceiptPrinter class, which can be used internally to handle printing on behalf of the more easily used methods intended for client code:
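As a sketch of the approach (not the actual code), the helper fits the configured columns to the printer’s line width and drops the lowest-priority columns when space runs out:

```csharp
using System.Collections.Generic;
using System.Linq;
using System.Text;

// Sketch of a ReceiptPrinter with the internal column-printing helper described here.
public class ReceiptPrinter
{
    private readonly IPosPrinter _printer;
    private readonly List<ColumnDefinition> _columns = new List<ColumnDefinition>();

    public ReceiptPrinter(IPosPrinter printer) { _printer = printer; }
    public void AddColumn(ColumnDefinition column) { _columns.Add(column); }

    // Core helper: drop the lowest ShrinkPriority columns until the rest fit the
    // printer's line width, then pad/trim each value into its column and print.
    private void PrintColumnLine(string[] values)
    {
        int lineWidth = _printer.LineCharacterLength;
        var kept = _columns.Select((col, index) => new { col, index }).ToList();
        while (kept.Sum(k => k.col.Width) > lineWidth && kept.Count > 1)
            kept.Remove(kept.OrderBy(k => k.col.ShrinkPriority).First());

        var sb = new StringBuilder();
        foreach (var k in kept.OrderBy(k => k.index))
        {
            string cell = k.index < values.Length ? (values[k.index] ?? "") : "";
            if (cell.Length > k.col.Width) cell = cell.Substring(0, k.col.Width);
            sb.Append(k.col.AlignRight ? cell.PadLeft(k.col.Width) : cell.PadRight(k.col.Width));
        }
        _printer.PrintLine(sb.ToString());
    }

    // The sort of easily-used method client code would call.
    public void PrintRow(params string[] values) { PrintColumnLine(values); }
}
```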

This implementation also incorporates a shrink priority that can be given to each column. Columns with a higher priority will be given precedence to remain; otherwise, columns may be entirely eliminated from the output if the width of the receipt output is too low. This allows some “intelligent” customization for specific printers: some have fewer characters per line, so redundant or less-needed columns can be eliminated on those while still being included on printers with wider output. The actual ReceiptPrinter class in all its glory- not to mention the implementations of IPosPrinter beyond the text output, particularly the one that delegates to a .NET OPOS ADK device and outputs to a physical printer- will require more explanation, so they will appear later in a Part 2.

Posted By: BC_Programming
Last Edit: 23 Nov 2019 @ 07:21 PM

 19 Oct 2019 @ 3:22 PM 

Over the last few years – more than a decade, really – it seems that, somehow, *nix- and Linux in particular- has been tagged as being some sort of OS ideal. It’s often been cited as a "programmer’s OS", and I’ve seen claims that Win32 is terrible and that people would take the "Linux API" over it any day. That is a verbatim quote, as well.

However, I’ve thought about it some, and after some rather mixed experiences trying to develop software for Linux, I think I feel something of the opposite. Perhaps, in some ways, it really depends on what you learned first.

One of the advantages of a Linux-based (or, to some extent, UNIX-based) system is that there is a lot of compartmentalization. The user has a number of choices, and those choices affect what is available to applications.

I’d say that, generally, the closest thing to a "Linux API" that applications utilize would probably just be the Linux kernel userspace API.

Beyond that, though, as a developer, you have to start making choices.

Since the user can swap out different parts, outside of the Linux kernel userspace API pretty much nothing is really "standardized". Truth be told, most of that kernel userspace API isn’t even used directly by applications; it usually gets utilized through function calls to the C standard library.

The Win32 API has much more breadth but tends to be more "simple", which tends to make using it more complicated. Since it’s a C API, you aren’t going to be passing around OO instances or interfaces; typically more complicated functions accept a struct. Hard to do much better than that with a C API.

However, with Windows, every window is created with CreateWindowEx() or CreateWindow(). No exceptions. None. Even UWP windows use CreateWindow() and have registered window classes. Even if it’s perhaps not the most pleasant base to look at, at least there is some certainty that, on Windows, everything is dealt with at that level and with those functions.

With Linux, because of the choices, things get more complicated. Since so many parts are interchangeable, you can’t strictly call most of what is made available to and used by applications a "Linux API", since it isn’t going to exist on many distributions. X11, for example, is available most of the time, but there are Linux distributions that use Wayland or Mir instead. Even using just X11, it only defines the very basic functions- it’s a rare piece of software that actually interacts directly with X11 via its server protocol; usually software uses a programming interface instead. But which one? For X11 you’ve got Xlib or xcb. Which is better? I dunno. Which is standard? Neither. And then once you get down to it, you find that it only provides the very basics- what you really need are X11 extensions. X11 is really only designed to be built on top of, with a desktop environment.

Each desktop environment provides its own programming interface. GNOME, as I recall, uses "dbus"; KDE uses- what was it? kdetool? Both of these are CLI tools that other programs are supposed to call to interact with the desktop environment. I’m actually not 100% on this, but all the docs I’ve found seem to suggest that, at the lowest level, aspects of the desktop environment are handled through calls to those CLI tools.

So at this point our UI API consists of calling a CLI application which interacts with a desktop environment which utilizes X11 (or other supported GUI endpoints) to show a window on screen.

How many software applications are built by directly interacting with and calling these CLI application endpoints? Not many- they are really only useful for one-off tasks.

Now you get to the real UI "API": the UI toolkits- things like GTK+ or Qt. These abstract yet again and more or less provide a function-based, C-style API for UI interaction. Which, yes, accepts pointers to structs which themselves often have pointers to other structs, making the Win32 API criticism that some make rather ironic. I think it may arise because those raising the criticism are using GTK+ through specific language bindings, which typically make those C bindings more pleasant- typically with some sort of OO layer. Now, you have to choose your toolkit carefully. Code written against GTK+ can’t simply be recompiled to work with Qt, for example. And different UI toolkits have different available language bindings as well as different supported UI endpoints. Many of them actually support Windows as well, which is a nice bonus, and usually they can be made to look fairly platform-native- also a great benefit.

It seems, however, that a lot of people who raise grievances with Win32 aren’t comparing it to the direct equivalents on Linux. Instead they are perhaps looking at the Python GTK+ bindings and comparing them to interacting directly with the Win32 API. It should really be no surprise that the Python GTK+ bindings are better; that’s several layers higher than the Win32 API. It’s like comparing Windows Forms to X11’s server protocol and claiming Windows is better.

Interestingly, over the years, I’ve come to have a slight distaste for Linux for some of the same reasons that everybody else seems to love about it- namely, how heavily it was modelled on UNIX.

Just in the last few years, the number of people who seem to be flocking to OS X or Linux and holding up their UNIX origins (obviously more so OS X than Linux, strictly speaking) as if that somehow stands on its own absolutely boggles my mind. I can’t stand much about the UNIX design or philosophy, and I don’t know why it is so constantly held up as some superior OS design.

And don’t think I’m comparing it to Windows- or, heaven forbid, MS-DOS- here. Those don’t even enter this consideration. If anything can be said, it’s that Windows wasn’t even a proper competitor until Windows NT anyway, and even then, Windows NT’s kernel definitely had a lot of hardware capability and experience to build on that UNIX never had in the 70s- specifically, a lot of concepts were adopted from some of the contemporaries that UNIX competed against.

IMO, ITS and MULTICS were both far better designed, engineered, and constructed than any UNIX was, and yet they faded into obscurity. People often point at Windows and say "the worst seems to get the most popular!", but if anything UNIX is the best example of that. So now we’re stuck with people who think the best OS design is one where the shell is responsible for wildcard expansion and the underlying scheduler is non-preemptive. I wouldn’t be surprised if the UNIX interrupt-during-syscall issue was still present, where instead of re-entering the syscall it returned an error code, making it the application’s responsibility to check for the error and restart the syscall.

It seems to me that one of the axioms behind many of the proclamations that "*nix is better designed" is a definition of "better designed" that corresponds to how *nix does things- the conclusion before the reason, basically.

Posted By: BC_Programming
Last Edit: 19 Oct 2019 @ 03:22 PM

 12 Sep 2019 @ 11:33 AM 

C# 1.0 was something of a first pass at a language design. It received refinements and improvements, and started to create its own unique identity with 2.0. C# 2.0, like many C# versions, relied heavily on changes made to the CLR; that is, a C# 2.0 program typically could not run on the .NET Framework 1.1 runtime. This set it apart from Java, where language features are typically designed to run on any Java Virtual Machine.

Generics

One of the biggest features added to the language with C# 2.0 was generics. Generics allow you to define type parameters which can “stand in for” other types, based on how you use the generic class itself. Good examples of generics can be found in their use within the .NET Base Class Library. With .NET 2.0 we got new strongly-typed classes such as List<T> and Dictionary<K,V>; these allow you to create strongly typed collections of any type, and dictionaries using almost any type for the key and value. This provides a wealth of flexibility, as well as additional type safety through compile-time type checking. Their non-generic counterparts, ArrayList and Hashtable, would let you add anything: even if your code expected all items in the ArrayList to be one type, you could introduce errors without realizing it by adding a string or a number, and it would still compile and run- you would only receive errors at run time.
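A quick illustration of the difference:

```csharp
using System.Collections;
using System.Collections.Generic;

class GenericsExample
{
    static void Main()
    {
        // Non-generic: anything goes, and mistakes only surface at run time.
        ArrayList untyped = new ArrayList();
        untyped.Add(5);
        untyped.Add("five");             // compiles fine, but is probably a bug
        // int oops = (int)untyped[1];   // would throw InvalidCastException at run time

        // Generic: the element type is checked at compile time.
        List<int> numbers = new List<int>();
        numbers.Add(5);
        // numbers.Add("five");          // does not compile
        int doubled = numbers[0] * 2;    // no cast needed
    }
}
```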

C#’s generics implementation is “first-class”: the generic types are preserved as part of the compiled assembly. As a result, through reflection, it is possible to construct instances of a generic class with any type parameters. This comes from the feature being implemented as part of the runtime, in contrast to how the same feature was implemented in Java. In Java, in order to allow code that utilized generics to run on older Virtual Machine implementations, generics are implemented as part of compilation. This results in something known as “type erasure”: effectively, the generic type parameters cease to exist, and Java instead compiles the class as if the strongly-typed generic type definitions were the base Object type, after performing, of course, the appropriate compile-time checks. In some instances the definition is replaced with the first bound class when type constraints are utilized. In either case, the downside of this implementation is that reflection at run time is unable to construct generic type instances in the same manner as in C#, resulting in much more complicated workarounds if that is desired.
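For example, because the generic type information survives into the assembly, a closed generic type can be built at run time from the open List<> definition:

```csharp
using System;
using System.Collections.Generic;

class ReflectionGenericsExample
{
    static void Main()
    {
        Type open = typeof(List<>);                        // the open generic definition
        Type closed = open.MakeGenericType(typeof(DateTime));
        object list = Activator.CreateInstance(closed);    // an actual List<DateTime>
        Console.WriteLine(list.GetType());                 // System.Collections.Generic.List`1[System.DateTime]
    }
}
```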

Static Classes

C# 2.0 also introduced the idea of a static class: a class that cannot be instantiated and can only contain static members. Strictly speaking, classes that function identically could be constructed by merely ensuring that a class only contained such members, but with the static keyword supported on class definitions this became a compile-time check.
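For instance:

```csharp
// A static class: no instances, only static members- both enforced by the compiler.
public static class TemperatureConverter
{
    public static double ToFahrenheit(double celsius)
    {
        return celsius * 9.0 / 5.0 + 32.0;
    }
}
// Usage: double f = TemperatureConverter.ToFahrenheit(20.0);
// "new TemperatureConverter()" is a compile-time error.
```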

Nullable Types

Nullable types also got their start as early as C# 2.0. A nullable type allows you to effectively treat a value type like a reference type that can be null. This can carry an extra data point (e.g. a Nullable<int> field on a data class could accept null to indicate specific behaviour when being interpreted). It can be particularly useful when working with certain databases, as many fields map to value types within .NET except for the fact that they can also be null within the database.
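A small, made-up example of that database-style usage:

```csharp
public class CustomerRecord
{
    // Maps naturally to a nullable integer column: null means "never rated".
    public int? Rating;   // equivalent to Nullable<int>

    public string Describe()
    {
        return Rating.HasValue ? "Rated " + Rating.Value : "Not yet rated";
    }
}
```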

Anonymous Methods

Anonymous methods allowed C# 2.0 code to define delegates within the body of other routines, in contrast to defining a separate named routine and referencing it to construct the delegate. The big advantage of an anonymous method is that it can eliminate the use of fields, in that the anonymous method body can use the local variables in scope where it is defined. There are a few caveats regarding which variables are closed over, but suffice it to say that there was some debate over whether anonymous methods actually provided closures, due to some of the specifics of how local variables at the same scope as the anonymous method are handled.
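For example, using the C# 2.0 delegate syntax and closing over a local variable:

```csharp
using System;
using System.Collections.Generic;

class AnonymousMethodExample
{
    static void Main()
    {
        int threshold = 10;   // a local the anonymous method closes over
        List<int> values = new List<int>(new int[] { 3, 12, 7, 25 });

        // No separate named method is needed to build the Predicate<int> delegate.
        List<int> big = values.FindAll(delegate(int v) { return v > threshold; });

        Console.WriteLine(big.Count); // 2
    }
}
```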

Partial Types

Partial types primarily benefit code generators, as they allow a single class definition to be split across multiple files, with each part marked with the partial keyword. The reason this mainly benefits code generators is that if a hand-written class were large enough that splitting it into multiple files seemed sensible, it would usually be more prudent to consider more extensive refactoring, such as creating new classes to handle some of the behaviour and code implementations.
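The classic example is the Windows Forms designer split (file names here are just illustrative):

```csharp
// Form1.Designer.cs - the generated half of the class
public partial class Form1
{
    private void InitializeComponent()
    {
        // generated layout code lives here
    }
}

// Form1.cs - the hand-written half of the same class
public partial class Form1
{
    public Form1()
    {
        InitializeComponent();
    }
}
```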

Property Access Modifiers

With C# 2.0, we got the ability to restrict the accessibility of the get or set accessor of a property beyond the access level of the property itself. A property could be marked public, but its setter could be protected or even private. This allowed for properties that were, for example, read-only from the perspective of public clients, but which derived or internal routines could still set. The way that was accomplished previously was by only defining the get accessor, making the backing field protected or private, and accessing the backing field directly.
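For example:

```csharp
public class Order
{
    private decimal _total;

    // Publicly read-only; only code in this class can assign it.
    public decimal Total
    {
        get { return _total; }
        private set { _total = value; }
    }

    public void AddItem(decimal price)
    {
        Total = Total + price;   // allowed here, not from outside the class
    }
}
```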

In addition to the additions to the language itself, since C# 2.0 was paired with .NET Framework 2.0, it came with a bunch of new improvements, changes, and additions to the .NET Base Class Library. I won’t be covering those in depth here, but aside from things utilizing new CLR features, such as the generic collection classes, .NET 2.0 also added features to XmlDocument, Remoting, ASP.NET, and ADO.NET.

Posted By: BC_Programming
Last Edit: 12 Sep 2019 @ 11:33 AM

Categories: C#, Programming
 22 Jun 2019 @ 8:34 AM 

For some time now, I’ve occasionally created a relatively simple game, and typically I don’t bother getting into fancy “game engines” or special rendering; usually I just have a Windows Forms application, a game loop, and paint routines working with the System.Drawing.Graphics canvas. “BASeTris”, a Tetris clone, was my latest effort using this technique.

While much maligned, it is indeed possible to make that work and maintain fairly high framerates; one has to be careful about what gets drawn and when, and eliminate unnecessary operations. By way of example, within my Tetris implementation, the blocks that are “set” on the field are drawn onto a separate bitmap only when they change; the main paint routine then draws that bitmap in one go, instead of individually drawing each block, which would involve bitmap scaling and such each time. Effectively I attack the problem by using separate “layers” which get rendered to individually, and those layers are then painted unscaled each “frame”.
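A rough sketch of that layering idea (not the actual BASeTris code):

```csharp
using System.Drawing;

// The settled blocks are rendered to a cached bitmap only when the field changes;
// the per-frame paint just blits that bitmap once, unscaled.
class FieldLayer
{
    private Bitmap _cache;
    private bool _dirty = true;

    public void Invalidate() { _dirty = true; }   // call when a block locks into the field

    public void Draw(Graphics target, Size fieldPixelSize)
    {
        if (_dirty || _cache == null)
        {
            if (_cache != null) _cache.Dispose();
            _cache = new Bitmap(fieldPixelSize.Width, fieldPixelSize.Height);
            using (Graphics g = Graphics.FromImage(_cache))
            {
                // ...draw each settled block onto g here...
            }
            _dirty = false;
        }
        target.DrawImageUnscaled(_cache, 0, 0);   // one draw per frame instead of per-block scaling
    }
}
```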

Nonetheless, it is a rather outdated approach, so I decided I’d give SkiaSharp a go. SkiaSharp is a cross-platform wrapper around the Skia graphics library, which is used in many programs, such as Google Chrome. For the most part, the feature set is conceptually very similar to GDI+, though it tends to be more powerful, more reliable, and, of course, portable, since it runs across different systems as well as other languages. It’s also hardware accelerated, which is a nice-to-have.

The first problem, of course, was that much of the project was tightly coupled to GDI+. For example, elements that appear within the game typically have a routine to perform a frame of animation and a routine that draws to a System.Drawing.Graphics. Now, it would be possible to amend the interface such that there is an added Draw routine for each implementation, but this would clog up a lot of the internals of the logic classes.

Render Providers

I hit upon the idea- which is obviously not original- of separating the rendering logic into separate classes. I came up with this basic interface for those definitions:
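The shape of it is roughly the following sketch- the type parameter and member names here are illustrative, and the implementations in the comments are hypothetical:

```csharp
// A render provider knows how to draw one kind of element onto one kind of "canvas".
// TRenderTarget: the canvas (e.g. System.Drawing.Graphics or SkiaSharp's SKCanvas).
// TRenderSource: the game element being drawn.
// TDataElement : extra per-call information, which varies by implementation.
public interface IRenderingHandler<TRenderTarget, TRenderSource, TDataElement>
{
    void Render(TRenderTarget target, TRenderSource source, TDataElement data);
}

// Hypothetical pairings:
// class GDIBlockRenderer  : IRenderingHandler<System.Drawing.Graphics, Block, BlockDrawData> { ... }
// class SkiaBlockRenderer : IRenderingHandler<SkiaSharp.SKCanvas,      Block, BlockDrawData> { ... }
```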

The idea is that implementations implement the appropriate generic interface for the class they can draw, the “canvas” object they are able to draw onto (the target), and additional information which can vary based on said implementation. I also expanded things to create an interface specific to “game states”; the game, of course, is in one state at a time, which is represented by an abstract class implementation for the menu, the gameplay itself, the pause screen, as well as the pause transitions and so on.

Even at this point I can already see several issues with the design. The biggest one is that all the details of drawing each object and each state effectively need to be duplicated per render target. The alternative, it seems, would be to construct a “wrapper” able to handle various operations, in a generic but still powerful way, painting to both SKCanvas and System.Drawing.Graphics. I’ve decided against this approach because, realistically, once a SkiaSharp implementation is working, GDI+ is pretty much just legacy stuff that I could arguably remove altogether anyway. Furthermore, that sort of abstraction would prevent- or at least make more difficult- using features specific to one implementation or the other within the client drawing code, and would just mean that the drawing logic is now coupled to whatever abstraction I created.

There is still the problem of game elements using data types such as PointF and RectangleF, and particularly Image and Bitmap, to represent positions, bounds, and loaded images, so I suspect things outside the game “engine” will require modification; but it has provided a scaffolding upon which I can build the new implementations. Seeing working code, I find, tends to motivate further changes- sort of a tame form of Test Driven Development, I suppose.

I have managed to implement some basic handlers, so hopefully I can get a SkiaSharp implementation using an SKControl as the drawing surface sorted out. I decided to implement this before, for example, trying to create a title screen menu, because that would be yet another state with drawing code I’d need to port over.

Some of the direct translations were interesting. They also gave peripheral exposure to what look like very powerful features available in SkiaSharp that would give a lot of power for drawing special effects compared to GDI+. For example, using BlendFilters, it appears it would be fairly straightforward to apply a blur effect to the play field while the game is paused, which I think would look pretty cool.

Posted By: BC_Programming
Last Edit: 22 Jun 2019 @ 08:40 AM

Categories: .NET, C#, Programming
 23 Mar 2019 @ 2:15 PM 

There are a lot of components of Windows 10 that we, as users, are not “allowed” to modify. It isn’t even enough when we find a way to do so, such as by disabling services or scheduled tasks from a command prompt running under the SYSTEM account, because when you next install updates, those settings are often reset. There are also background tasks and services intended specifically for “healing” tasks- which is a pretty friendly way to describe a trojan downloader.

One common way to “assert” control is via the registry and the Image File Execution Options key, found at:

HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Image File Execution Options

By adding a key here with the name of an executable, one can add additional execution options for it. The one of importance here is a string value called debugger. When a debugger value is present, Windows will not start the executable itself; instead it launches the executable listed in the “debugger” value, with the executable that was being run passed to it as a parameter.

We can use this for two purposes. The most obvious is that we can simply swap in an executable that does nothing at all, and thereby prevent a given executable from running. For example, if we add “C:\Windows\System32\systray.exe” as the debugger value for an executable, then when that executable is run, the systray.exe stub will run instead, do nothing, and exit, and the executable that was being launched will not run. As a quick aside- systray.exe is a stub that doesn’t actually do anything; it used to provide built-in notification icons for Windows 9x, and it remains because some software would check whether that file existed to determine if it was running on Windows 95 or later.
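For illustration, setting that up from code might look like this (the target executable and paths are examples, and it needs to run elevated):

```csharp
using Microsoft.Win32;

class IfeoExample
{
    static void Main()
    {
        // Create the IFEO key for the executable and point its debugger value at the do-nothing stub.
        const string ifeoPath =
            @"SOFTWARE\Microsoft\Windows NT\CurrentVersion\Image File Execution Options\compattelrunner.exe";
        using (RegistryKey key = Registry.LocalMachine.CreateSubKey(ifeoPath))
        {
            key.SetValue("Debugger", @"C:\Windows\System32\systray.exe");
        }
    }
}
```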

The second way we can use it is to instead insert our own executable as the debugger value. Then we can log and record each invocation of any redirected program. I wanted to record the invocations of some built-in Windows executables I had disabled, so I created a simple stub program for this purpose:

IFEOSettings.cs
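The original file isn’t shown here; a sketch of the sort of settings class described would be:

```csharp
// Sketch of IFEOSettings- hard-coded log folder for now, created ahead of time.
public class IFEOSettings
{
    public string LogFolder = @"C:\IMEO_Logs";
}
```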

I decided to separate out the settings for future editing. For my usage, I just have it hard-coded to C:\IMEO_Logs right now, and I create the folder beforehand. The bulk of the program, of course, is the entry point class:
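Again as a sketch rather than the original source, the entry point boils down to logging the invocation and its arguments:

```csharp
using System;
using System.IO;

class Program
{
    static void Main(string[] args)
    {
        // When launched via the IFEO debugger value, args[0] is the executable that was
        // being started and the remaining items are its original arguments.
        var settings = new IFEOSettings();
        string target = args.Length > 0 ? Path.GetFileName(args[0]) : "unknown";
        string logFile = Path.Combine(settings.LogFolder, target + ".log");

        string entry = string.Format("{0:u} {1}", DateTime.Now, string.Join(" ", args));
        File.AppendAllText(logFile, entry + Environment.NewLine);

        // To let the original program still run, it could be relaunched here with
        // System.Diagnostics.Process.Start - this stub deliberately does not.
    }
}
```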

I’ve used this for a few weeks by manually altering the Image File Execution Options so that the executables I had previously redirected to systray.exe (compattelrunner.exe, wsqmcons.exe, and a number of others) now redirect to this program instead. It then logs all attempts to invoke those executables, alongside details like the arguments that were passed in.

Posted By: BC_Programming
Last Edit: 23 Mar 2019 @ 02:15 PM

Comments Off on Taking Control of Windows 10 with Image File Execution Options
 14 Mar 2019 @ 6:56 PM 

Over the last few years it’s become apparent, of course, that many people building and using PCs are using physical media less and less. One thing I have noticed is that a lot of people who go “optical drive free” seem to evangelize it and assume that everybody who uses DVDs or physical media is some kind of intellectually incognizant doofus- that optical media is unnecessary in general and nobody needs it, which is of course a very silly statement.

Of course it’s "unnecessary"; a graphics card is "unnecessary" and a sound card is "unnecessary", and both have been for years, but people still buy and use them. Optical drives are probably more in the camp of the sound card than the graphics card, since a graphics card is arguably a necessity for "gaming" whereas optical drives are certainly not- at least not in general.

But like they said- different people have different needs; or perhaps a better term would be different uses for them.

Just speaking personally, my main system has both a Blu-ray burner and a DVD drive installed. I use the Blu-ray burner for watching Blu-rays, as I prefer physical media, and I’ve found BD-R discs great for making hard-copy backups. Why not use a USB drive? I have USB flash drives and external USB drives/enclosures, but I’ve found them incredibly uneconomical for long-term hard backups. With Blu-ray discs, if I want a hard-copy backup, it’s something I want to burn, label, and basically file away. Flash drives and external drives wouldn’t work like that- they would be constantly changing alongside the data source being backed up, making them more of a redundancy than an actual backup solution over the longer term. Another problem is that a good one isn’t cheap. The external drives Seagate and WD sell are reasonably cheap for the capacity, mind, but those are dogshit; WD and Seagate both use their shittiest drives to build externals. Sorry, but I don’t trust a Seagate Sunfish (or whatever they call their low-end model) or a WD Green drive as a safe backup drive any more than I’d trust the safe deposit boxes of a bank that operates out of the back of a Toyota Tercel. That makes a good backup drive less economical, because it means getting a good enclosure (eSATA and USB 3 are an obvious must here) as well as a good drive.

Physical storage space per GB is also better with BD-R (perhaps less so with spindles of DVDs).

Another aspect is that I also have a number of older game titles on DVDs. Some are available on Steam, but I’m not about to buy them again. Fuck that noise.

Of course, I *could* do all this with an external drive. But because I actually utilize the physical media, it’s not economical time-wise- I use it rather frequently. So it’s sort of like somebody who, for some physiological reason, never shits saying that having a toilet inside is unnecessary. They are strictly right, but I’m not going to start shitting in a chamberpot or an outhouse.

Conversely, it’s not necessary- my laptop doesn’t have an optical drive, for example, and that hasn’t affected anything, as it’s not a gaming machine and doesn’t keep anything special that I need to back up to start with.

Posted By: BC_Programming
Last Edit: 14 Mar 2019 @ 06:56 PM

Comments Off on "DVD Drives are unnecessary in modern PCs"
Categories: Programming
 28 Jan 2019 @ 4:53 PM 

Recently, a Microsoft engineer had this to say with regards to Mozilla and Firefox:

Thought: It’s time for @mozilla to get down from their philosophical ivory tower. The web is dominated by Chromium, if they really *cared* about the web they would be contributing instead of building a parallel universe that’s used by less than 5%?

As written, this naturally got a lot of less-than-optimistic responses. Here are some follow-up tweets wherein they explain their position:

I don’t neglect the important work Mozilla has contributed, but here’s a few observations shapes my perspective:

1) The modern web platform is incredible complex. Today it’s an application runtime comparable to the Java or .net framework.

2) This complexity it’s incredibly expensive to implement a web runtime. Even for Google/Microsoft it’s hard to justify such investment that would take thousands of engineers in multiple years. The web has become too capable for multi engines, just like many frameworks.

3) Contribution can happen on many levels, and why is it given that each browser vendor has to land their contributions in *their own* engine? What isn’t the question what drives most impact for the web as a holistic platform?

4) My problem with Mozilla’s current approach is that they are *preaching* their own technology instead of asking themselves how they can contribute most and deliver most impact for the web? Deliver value to 65% of the market or less than 5%?

5) This leads to my bigger point: In a world where the web platform has evolved into a complex .application runtime, maybe it’s time to revise the operation and contribution model. Does the web need a common project and an open governance model like fx Node Foundation?

6) What if browser vendors contributed to a "common webplat core" built together and each vendor did their platform specific optimizations instead of building their own reference implementations off a specification from a WG? That’s what I mean by "parallel universes".

7) I believe Mozilla can be much more impactful on the holistic web platform if they took a step back and revised their strategy instead of throwing rocks after Google/MS/etc.

8) I want the web to win, but we need collaboration not parallel universes. Writing specs together is no longer enough. The real threat to the web platform is not another browser engine, but native platforms, as they don’t give a damn about an open platform.

That’s a lot to take in; however, my general “summary” would be “why have these separate implementations of the same thing when there can be one?”- which is pretty much a case for promoting code reuse. That idea doesn’t really hold up in this context, though, which may be why the statement was so widely criticized on Twitter.

In an ideal world, of course, the idea that we could have, as they describe, a single “common webplat core” that every vendor can freely contribute to, and over which no one vendor has any absolute or direct control or veto power, is a good one. But it is definitely not what we have, nor is it something that seems to be in development right now. That “common webplat core built together by every vendor” is most definitely NOT Chromium or the Blink engine, so it’s something of a red herring here. Chromium is heavily influenced by, and practically under the control of, Google, an advertising company. Microsoft- another company with a large advertising component- has now opted to use the same Blink rendering engine and Chromium underpinnings used in Chrome, via a re-engineering of the Microsoft Edge browser. That’s two companies that are shoulder-deep in the advertising and marketing space, with a history of working in their own best interests rather than the best interests of end users, with a hand on the reins of Chromium. Not exactly the open and free ‘common webplat core’ that they described!

Given this, Mozilla seems to be the only browser/rendering engine vendor that is committed to an open web. The idyllic scenario they describe only makes sense if we start from the assumption that all Open Source software is inherently free of any sort of corporate influence, which simply is not the case. Furthermore, the entire point of Open Source projects is to provide alternatives, not to provide a single be-all, end-all implementation- the entire idea of Open Source is to provide choices, not take them away. There is no single desktop environment, shell, email server, web server, text editor, etc.; think of a type of software and the Open Source community has numerous different implementations. This is because, realistically, there is no “be-all, end-all” implementation of any non-trivial software product, and implementations of an open web fall under that umbrella. Suggesting that there be only one standard implementation used for every single web browser is actually completely contrary to the way Open Source already works.

Posted By: BC_Programming
Last Edit: 28 Jan 2019 @ 04:53 PM

Comments Off on Why did the Microsoft Engineer Tweet about an Open Web? Because they are now on the other side.
Categories: Programming
 26 Sep 2018 @ 1:38 PM 

I have a feeling this will be a topic I will cover at length repeatedly, and each time I will have learned things since my previous installments. The Topic? Programming Languages.

I find it quite astonishing just how much polarization and fanaticism we can find over what is essentially a syntax for describing operations to a computer. A quick Google search can reveal any number of arguments about languages: people telling you why Java sucks, why C# is crap, why Haskell is useless for real-world applications, that Delphi has no future, that there is no need for value semantics on variables, that mutable state is evil, that garbage collection is bad, that manual memory management is bad, etc. It’s an astonishing, never-ending trend. And it’s really quite fascinating.

Why?

I suppose the big question is: why? Why do people argue about languages, language semantics, capabilities, and paradigms? This is a very difficult question to answer. I’ve always felt that polarization and fanaticism are far more likely to occur when you only know and understand one programming language. Of course, I cannot speak for everybody, only from experience. When I only knew one language “fluently”, I was quick to leap to its defense. It had massive issues that I can see now, looking back, but which I didn’t see at the time. I justified omissions as being things you didn’t need or could create yourself. I called features in newer languages ‘unnecessary’ and ‘weird’. So the question really is: who was I trying to prove this to? Was I arguing against those I was replying to, or was it all for my own benefit? I’m adamant that the reasons for my own behaviour- and, to jump to a premature and biased conclusion, possibly that of those in whom I see similar behaviour over other languages- were the result of feeling trivialized by the attacks on the language I was using. Basically, it’s the result of programmers rating themselves based on what languages they know and use every day. This is a natural- if erroneous- method of measuring one’s capabilities. I’ve always been a strong proponent of the idea that it isn’t the programming language that matters, but rather your understanding of programming concepts and how you apply them, as well as not submitting to the religious dogmas that generally surround a specific language design. (I’m trying very hard not to cite specific languages here.) Programming languages generally have set design goals. As a result, they typically encourage a style of programming- or even enforce it through artificial limitations. Additionally, those limitations that do exist (generally for design reasons) are worked around by competent programmers in the language. So when the topic turns to their favourite language not supporting Feature X, they can quickly retort that “you don’t need Feature X, because you can use Features Q, P, and R to create something that functions the same”. But that rather misses the point, I feel.

I’ve been careful not to mention specific languages, but here I go: take Visual Basic 6- that is, pre-.NET. As a confession, I was trapped knowing only Visual Basic 6 well enough to do anything particularly useful with it for a very long time. Looking back- and having to support my legacy applications, such as BCSearch- I’m astonished by two things that are almost polar opposites. The first is simply how limited the language is. For example, if you had an object of type CSomeList and wanted to ‘cast’ it to an IList interface, you would have to do this:

Basically, you ‘cast’ by assigning the object directly to a variable of the desired type. These sorts of little issues and limitations really add up. The other thing that astonished me was the ingenuity of how I dealt with the limitations. At the time, I didn’t really consider some of these things limitations, and I didn’t think of how I dealt with them as workarounds. For example, I found the above casting requirement annoying, so I ended up creating a GlobalMultiUse class (which means all the procedures within are public); in this case the function might be called “ToIList()” and would attempt to cast the parameter to an IList and return it. Additionally, at some point I must have learned about exception handling in other languages, because I actually created a full-on implementation of exception handling for Visual Basic 6. Visual Basic 6’s error handling was, for those that aren’t aware, rather simple: you could basically say “On Error Goto…” and redirect program flow to a specific label when an error occurred. All you would know about the error was the error number, though. My “Exception” implementation built upon this. To throw an exception, you would create it (usually with an aforementioned public helper), and then throw it. In the Exception’s “Throw()” method, it would save itself as the active unwind exception (a global variable) and then raise an application-defined error. Handlers were required to recognize that error number and grab the active exception (using GetException(), if memory serves). GetException would also recognize many error codes and construct instances of the appropriate Exception type to represent them, so in many cases you didn’t need to check for that error code at all. The result? Code like this:

would become:

There was also a facility to throw inner exceptions, by using ThrowInner() with the retrieved Exception Type.

So what is wrong with it? Well, everything. The language doesn’t provide these capabilities, so I basically had to nip and tuck it to provide them, and the result is some freakish plastic surgery where I’ve grafted exceptions onto somebody who didn’t want exceptions. The fact is that, once I moved to other languages, I could see just how freakish some of the stuff I implemented in VB was. That implementation was obviously not thread safe, but that didn’t matter, because there was no threading support, for example.

Looking forward

With that in mind, it can be valuable to consider one’s current perspectives and how they may be misguided by that same sort of devotion. This is particularly important when dealing with things you only have a passing knowledge of. It’s perhaps more applicable once you’ve gained enough experience with something to recognize its flaws, but it’s easy to find flaws- or repaint features or aspects as flaws- for the purpose of making yourself feel wiser for not having used it sooner.

Posted By: BC_Programming
Last Edit: 26 Sep 2018 @ 01:38 PM

Comments Off on Programming Languages (2)
 31 Aug 2018 @ 7:45 PM 

When I was implementing BASeTris, my Tetris clone, I thought it would be nifty to have controller support, so I could use the Xbox One controller I have attached to my PC. My last adventure with game controllers ended poorly- BASeBlock has incredibly poor support for them overall. Revisiting the idea, this time with an eye toward XInput rather than DirectInput, I eventually found XInput.Wrapper, a rather simple, single-class approach to handling XInput.

The way that BASeTris handles input reflects my attempt at separating different input methods from the start. The game state interface has a single HandleGameKey routine which effectively handles a single press. That itself gets called by the actual input routines, which also include some additional management for features like DAS repeat for certain game keys. The XInput Wrapper, of course, was not like this; it is not particularly event driven and works differently.

I did mess about with its “Polling” feature for some time before eventually creating my own implementation of the same. The biggest thing I needed was a “translation” layer where I could see when keys were pressed and released, track that information, and translate it into the appropriate GameKey presses. This is the rather small class that I settled on for the purpose and currently have implemented in BASeTris:
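The actual class isn’t reproduced here, but the core of it is simple edge detection: compare the buttons held this poll against the previous poll, and raise press/release notifications for the differences. A generic sketch of that idea (TButton standing in for whatever button type the wrapper exposes):

```csharp
using System;
using System.Collections.Generic;

// Sketch of a press/release translator; the real class maps these events onto GameKey presses.
public class ControllerKeyTranslator<TButton>
{
    private readonly HashSet<TButton> _previouslyDown = new HashSet<TButton>();

    public event Action<TButton> ButtonPressed;
    public event Action<TButton> ButtonReleased;

    // Called once per poll with the buttons currently held down.
    public void Update(IEnumerable<TButton> currentlyDown)
    {
        var current = new HashSet<TButton>(currentlyDown);

        foreach (TButton b in current)
            if (!_previouslyDown.Contains(b) && ButtonPressed != null) ButtonPressed(b);

        foreach (TButton b in _previouslyDown)
            if (!current.Contains(b) && ButtonReleased != null) ButtonReleased(b);

        _previouslyDown.Clear();
        _previouslyDown.UnionWith(current);
    }
}
```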

It is a bit strange that I needed to create a wrapper for what is itself a wrapper, but it wasn’t as if I was going to find a ready-made solution that integrated with how I had designed input in BASeTris anyway- some massaging was expected to be necessary.

Posted By: BC_Programming
Last Edit: 31 Aug 2018 @ 07:45 PM

Comments Off on A Wrapper for… The XInput Wrapper (?)
Categories: C#, Programming
 10 Jun 2018 @ 10:23 AM 

It suddenly occurred to me in the last week that I don’t really have a proper system in place for software downloads here on my website, nor appropriate build integration with source control for building projects as needed when commits are made. Having set up a Jenkins build environment for the software I work on for my job, I thought it reasonable to make the same demands of myself.

One big reason to do this, IMO, is that it can actually encourage me to create new projects. The work of packaging up the result and making it easily accessible or usable is often a demotivator, I find, for starting some new projects. Having an established “system” in place whereby I can push changes to GitHub and have, say, installer files “appear” properly on my website as needed can be a motivator- I don’t have to build the program, copy files, run installation scripts, etc. manually every time; I just need to configure it all once and it all “works” by itself.

To that end, I’ve set up Jenkins on one of my “backup” computers. It’s rather tame in its capabilities- only 4GB of RAM and an AMD 5350- but it should get the job done, I think. I would use my QX6700-based system, but the AMD system uses far less power. I also considered having Jenkins straight-up on my main system, but thought that could get in the way and just be annoying. Besides, this gives that system a job to do.

With the implementation for work, there were so many interdependent projects- and we pretty much always want “everything”- that I just made it a single project which builds everything at once. This way everything is properly up to date. The alternative was fiddling with 50+ different projects and figuring out the appropriate dependencies to build based on when other projects were updated and such- something of a mess. Not to mention it’s all in one repository anyway, which goes against that idea as well.

In the case of my personal projects on GitHub, they are already separate repositories, so I will simply have them built as separate projects; with Jenkins itself understanding upstream/downstream relationships, I can use that as needed.

I’ve successfully configured the new Jenkins setup, and it is now building BASeTris, the Tetris clone game I decided to write a while ago. It depends on BASeScores and Elementizer, so those two projects are in Jenkins as well.

BASeTris’s final artifact is an installer.

But of course, that installer isn’t much good just sitting on my CI server! However, I also don’t want to expose the CI server as a “public” page- there are security considerations, even if I disregard upload bandwidth issues. To that end, I constructed a small program which uploads files to my website using SSH. It runs once a day and is given a directory; it looks in all the immediate subdirectories of that directory, gets the most recent file in each, and uploads it to a corresponding remote directory if it hasn’t already been uploaded. I configured BASeTris to copy its final artifact into an appropriate folder there.
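The core of that uploader logic looks roughly like the following sketch- this version assumes the SSH.NET (Renci.SshNet) library and uses made-up paths and credentials, so it is illustrative rather than the actual tool:

```csharp
using System.IO;
using System.Linq;
using Renci.SshNet;

class ArtifactUploader
{
    static void Main()
    {
        string localRoot = @"C:\CI_Artifacts";   // example values only
        string remoteRoot = "/downloads/CI";

        using (var sftp = new SftpClient("example.com", "user", "password"))
        {
            sftp.Connect();
            foreach (string dir in Directory.GetDirectories(localRoot))
            {
                // Newest file in each immediate subdirectory...
                FileInfo newest = new DirectoryInfo(dir).GetFiles()
                    .OrderByDescending(f => f.LastWriteTimeUtc).FirstOrDefault();
                if (newest == null) continue;

                // ...uploaded to a matching remote folder, unless it is already there.
                string remoteDir = remoteRoot + "/" + Path.GetFileName(dir);
                string remotePath = remoteDir + "/" + newest.Name;
                if (sftp.Exists(remotePath)) continue;

                if (!sftp.Exists(remoteDir)) sftp.CreateDirectory(remoteDir);
                using (FileStream fs = newest.OpenRead())
                    sftp.UploadFile(fs, remotePath);
            }
            sftp.Disconnect();
        }
    }
}
```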

Alternatively, it would be possible to have each project upload its artifacts via SSH as a post-build step. However, I opted not to do that, because I would rather not have a series of changes throughout the day result in a bunch of new uploads- those would consume space without being particularly useful. Instead, I’ve opted to have all the projects I want to upload uploaded once a day, and only if there have been changes. This should help reduce the redundancy (and space usage) of those uploads.

My “plan” is to have a proper PHP script or something that can enumerate the folders and provide a better interface for downloads. If nothing else, I would like each CI project’s folder to have a “project_current.php” file which automatically sends the latest build- then I can simply link to that on the blog download pages for each project and only update those pages to indicate new features or content.

As an example, http://bc-programming.com/downloads/CI/basetris/ is the location that will contain BASeTris version downloads.

There is still much work to do, however. The program(s) do have git hash metadata added to the project build, so they have access to their git commit hash, but currently they do not actually present that information. I think it should, for example, be displayed in the title bar, alongside other build information such as the build date, if possible. I’ve tried to come up with a good way to have the version auto-increment, but I think I’ll just tweak that as the project(s) change.

Heck- the SSH Uploader utility seems like a good candidate for yet another project to add to github, if I can genericize it so it isn’t hard-coded for my site and purpose.

Posted By: BC_Programming
Last Edit: 10 Jun 2018 @ 10:23 AM

Comments Off on About Time I had a CI Server, Methinks
