22 Jun 2019 @ 8:34 AM 

For some time now, I’ve occasionally created relatively simple games, and I typically don’t bother with fancy “game engines” or special rendering. Usually I just have a Windows Forms application, a game loop, and paint routines working with the System.Drawing.Graphics canvas. “BASeTris”, a Tetris clone, was my latest effort using this technique.

While much maligned, it is indeed possible to make that work and maintain fairly high framerates; one has to be careful about what gets drawn and when, and eliminate unnecessary operations. By way of example, within my Tetris implementation, the blocks that are “set” on the field are drawn onto a separate bitmap only when they change. For the main paint routine, that bitmap gets drawn in one go instead of individually drawing each block, which would involve bitmap scaling and such every time. Effectively, I attack the problem by using separate “layers” which get rendered to individually; those layers are then painted unscaled each “frame”.
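As a rough illustration of that layering idea (this is a sketch of the general technique, not the actual BASeTris code; the names are mine):

    using System.Drawing;

    // The "field" layer caches the set blocks in a Bitmap and only redraws it when told to.
    class FieldLayer
    {
        private Bitmap _cache;
        private bool _dirty = true;

        // Call this whenever the "set" blocks change.
        public void Invalidate() { _dirty = true; }

        public void DrawTo(Graphics target, Size fieldSize)
        {
            if (_dirty || _cache == null)
            {
                _cache?.Dispose();
                _cache = new Bitmap(fieldSize.Width, fieldSize.Height);
                using (Graphics g = Graphics.FromImage(_cache))
                {
                    // ...draw each set block here, with any scaling, only when something changed...
                }
                _dirty = false;
            }
            // Every frame: one unscaled blit instead of many scaled per-block draws.
            target.DrawImageUnscaled(_cache, 0, 0);
        }
    }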

Nonetheless, it is a rather outdated approach, so I decided I’d give SkiaSharp a go. SkiaSharp is a cross-platform .NET wrapper around the Skia Graphics Library, which is used in many programs, such as Google Chrome. For the most part, the featureset is conceptually very similar to GDI+, though it tends to be more powerful, reliable, and, of course, portable, since it runs across different systems as well as other languages. It’s also hardware accelerated, which is a nice-to-have.

The first problem, of course, was that much of the project was tightly coupled to GDI+. For example, elements that appear within the game typically have a routine to perform a frame of animation and a routine that is capable of drawing to a System.Drawing.Graphics. Now, it would be possible to amend the interface so that there is an added Draw routine for each implementation, but this would clog up a lot of the internals of the logic classes.

Render Providers

I hit upon the idea, which is obviously not original, to separate the rendering logic into separate classes. I came up with this basic interface for those definitions:
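(What follows is a rough sketch of the shape being described; the identifiers are my own guesses, not the project’s actual declarations.)

    // Non-generic base so handlers can be stored and dispatched uniformly.
    public interface IRenderingHandler
    {
        void Render(object target, object source, object element);
    }

    // TTarget  : the "canvas" being drawn onto (e.g. SKCanvas or System.Drawing.Graphics)
    // TSource  : the game element this handler knows how to draw
    // TElement : additional per-implementation data (position, scaling, etc.)
    public interface IRenderingHandler<in TTarget, in TSource, in TElement> : IRenderingHandler
    {
        void Render(TTarget target, TSource source, TElement element);
    }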

The idea is that implementations implement the appropriate generic interface for the class they can draw, the “canvas” object they are able to draw onto (the target), and additional information which can vary based on said implementation. I also expanded things to create an interface specific to “Game States”; the game, of course, is in one state at a time, each represented by an abstract class implementation for the menu, the gameplay itself, the pause screen, as well as the pause transitions and so on.

Even at this point I can already observe many issues with the design. The biggest one is that all the details of drawing each object and each state effectively need to be duplicated for every target. The alternative, it seems, would be to construct a “wrapper” that is able to handle various operations, in a generic but still powerful way, and paint on both SKCanvas and System.Drawing.Graphics. I’ve decided against this approach because realistically, once a SkiaSharp implementation is working, GDI+ is pretty much just legacy stuff that I could arguably remove altogether anyway. Furthermore, that sort of abstraction would prevent, or at least make more difficult, using features specific to one implementation or the other within the client code doing the drawing, and would just mean that the drawing logic is now coupled to whatever abstraction I created.

There is still the problem of game elements using data types such as PointF and RectangleF, and particularly Image and Bitmap, to represent positions, bounds, and loaded images, so I suspect things outside the game “engine” will require modification; but it has provided a scaffolding upon which I can build the new implementations. Seeing working code, I find, tends to motivate further changes. Sort of a tame form of Test Driven Development, I suppose.

I have managed to implement some basic handlers, so hopefully I can get a SkiaSharp implementation using an SKControl as the drawing surface sorted out. I decided to implement this before, for example, trying to create a title screen menu, because that would be yet another state and more drawing code I’d need to port over.

Some of the direct translations were interesting. They also gave peripheral exposure to what look like very powerful features available in SkiaSharp that would give a lot of power for drawing special effects compared to GDI+. For example, using image filters, it appears it would be fairly straightforward to apply a blur effect to the play field while the game is paused, which I think would look pretty cool.
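As a sketch of that pause-blur idea (assuming the playfield has already been rendered to an SKBitmap; this reaches for SKImageFilter.CreateBlur, which is one way to do it, not necessarily whatever the final implementation will use):

    using SkiaSharp;

    static class PauseEffects
    {
        // Draw a previously rendered playfield bitmap with a blur applied.
        public static void DrawBlurredField(SKCanvas canvas, SKBitmap fieldBitmap)
        {
            using (var paint = new SKPaint())
            {
                // The image filter blurs whatever is drawn with this paint.
                paint.ImageFilter = SKImageFilter.CreateBlur(8, 8);
                canvas.DrawBitmap(fieldBitmap, 0, 0, paint);
            }
        }
    }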

Posted By: BC_Programming
Last Edit: 22 Jun 2019 @ 08:40 AM

Categories: .NET, C#, Programming

 20 Apr 2019 @ 12:57 PM 

“Dark Mode” settings have become a big-ticket concern over the last few years. Applications and apps have started to add “Dark Mode” visuals as an option, and more recently, Mac OS X (now macOS, because that’s not confusing when you are interested in old software/hardware!) as well as Windows 10 have introduced their own “Dark Mode” featureset in the OS.

However, I’ve found Windows 10’s implementation confusing and actually a bit disturbing.

To explain, I’ll start at the beginning.

Graphical environments in general have held to the idea that, for the most part, standard graphical elements are managed by the OS. For example, on the Macintosh you would create software and it would use standard buttons, listboxes, etc., and the behaviour of those was handled by the OS. Your software didn’t have to handle detecting mouse clicks, drawing the button, changing its appearance when it’s clicked, and so on. This concept was of course shared by Windows. On Windows 3.0 and 3.1, the system had “System Colors” that defined how different elements were drawn. Windows itself would use those colours where appropriate, for things like title bars and the title bar font, and applications could simply use the current system setting (and respond appropriately to the broadcast message sent when system settings change, to deal with those colours changing). The system shipped with various “Themes”, which were effectively sets of those colours, and you could customize those colours to your liking.
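For example, a .NET application might lean on the system colours like this (a minimal sketch; the underlying Win32 broadcast being reacted to is WM_SYSCOLORCHANGE):

    using System.Drawing;
    using Microsoft.Win32;

    class ThemedPanelColours
    {
        // Track the user's current "Window" colour rather than hard-coding white.
        public Color Background { get; private set; } = SystemColors.Window;

        public ThemedPanelColours()
        {
            // Re-read the colour whenever the user changes the system colour scheme.
            SystemEvents.UserPreferenceChanged += (s, e) => Background = SystemColors.Window;
        }
    }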

[Image: Windows 3.1 Dark Mode]

Up through to System 7, the Macintosh held fast to most of its original UI design in terms of visuals. Originally grayscale, later support for colour did add little bits here and there, primarily for the icons, but the main user interface was largely white with black or gray lines, or with rather subtle colouring.

[Image: Mac OS 8 with System 7 Theme via Kaleidoscope]

System 7, however, on capable systems, also added a new feature that was available as a downloadable add-on from Apple: the Appearance Manager. This was effectively a “plugin” that would take over the task of drawing standard elements. Elements were given a 3-D appearance; buttons “popped out” instead of being black chamferboxes, progress bars got fancy gradients, and so on. This was part of the standard install with Mac OS 8 as well. These offered a lot of customization out of the box, and even more with software like Kaleidoscope. The standard appearance provided by the Appearance Manager was known as “Apple Platinum” and offered a number of colour options (mostly, the colours affected the selection colour and the colour of progress bars, from what I can tell).

[Image: Mac OS 8 Apple Platinum Theme (default)]

Not to be outdone, Windows 95 introduced 3-D theming to the Windows environment, providing a similar set of changes to the standard appearance. Unlike Appearance Manager, one could also set the “3-D Colour” which affected the colour of most elements. This facilitated the creation of what could be called “Dark” themes.

[Image: Windows 95 with “Dark Theme”]

It wasn’t until Windows XP that Windows had a feature similar in concept to Mac OS’s Appearance Manager, through the introduction of Visual Styles. Visual Styles worked in much the same way: a Visual Style defined custom images that were used to draw particular window elements, allowing a richer and more thematic styling to be applied. With Windows XP, in addition to the default Visual Style, an Olive and a Silver Theme/Visual Style were also included. A “Theme”, which previously was a set of system colours, was changed to also include the Visual Style option. Additionally, you could decide to disable Visual Styles and use the “Windows Classic” Theme, which would not use the “Luna” window decorations. Interestingly, with the Classic theme style, one could adjust the colour options in much the same way as on previous Windows releases, creating “Dark Mode” colour schemes if desired.

[Image: XP (Default)]

[Image: XP (Olive)]

[Image: XP (Silver)]

[Image: XP (Classic, “Dark”)]

Around that same time frame, the Macintosh operating system was migrated to OS X, something of a hybrid of the older Mac system and the NeXTSTEP operating system. This introduced the concept of a “composited desktop” to the mainstream. A traditional desktop environment operates on a single output “image”: when you move a window, it gets redrawn in the new location, and any revealed sections of the screen below need to be redrawn as well. A composited desktop keeps all the necessary information in memory- for example, it may hold the bitmap representing each window as a texture- and merely composes them together to create the final image, usually through the use of 3-D accelerated video hardware. With capable hardware, this approach was much faster and in general much cleaner. Internally, there was a framework for UI element drawing; externally, however, it was necessary to use third-party software to reskin the styles of the OS (ShapeShifter, for example).

Windows Vista brought this same composited desktop experience to the Windows platform; the new technology was Aero. This underlying composited desktop has been used up through to Windows 10. Aero has similar capabilities to Luna, in that Visual Styles can customize almost every element of the system. “Aero Glass”, which many associate with Aero, was an enhancement that allowed fancy effects to be done using the 3-D rendering performed on the composited information- in its case, providing a sort of “translucent glass” effect which blurs the content behind the “glass” areas of a window (typically, the title bar).

[Image: Windows Vista]

Basically, over the years there have been a number of solutions and options for a central, system-controlled set of colours and repeated thematic elements such as buttons. Which brings me, finally, to why I find Windows 10’s Dark Mode both confusing and disturbing: it leverages none of these technologies!

The Dark Mode feature of Windows 10 is implemented effectively as an on-off flag which does not change Windows’ behaviour. Instead, applications all need to check this flag and act appropriately. The libraries behind UWP apps perform this check and change their visual theming appropriately- and that is all. Win32 applications are unaffected. To implement Dark Mode in File Explorer, for example, Microsoft’s developers changed File Explorer to look at the flag and use different dark colours for all UI elements if it is set.
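For what it’s worth, the flag itself is (as far as I can tell- this is the commonly cited location, not a documented API) a per-user registry value, so a Win32 or .NET application that wants to follow suit ends up doing something like this:

    using Microsoft.Win32;

    static class DarkModeCheck
    {
        // Returns true if the user has chosen the "Dark" app mode.
        public static bool AppsUseDarkMode()
        {
            using (var key = Registry.CurrentUser.OpenSubKey(
                @"Software\Microsoft\Windows\CurrentVersion\Themes\Personalize"))
            {
                // 0 means dark; 1 (or a missing value) means light.
                return key?.GetValue("AppsUseLightTheme") is int light && light == 0;
            }
        }
    }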

But it makes no sense. Every piece of Windows now needs to be altered to allow for this. And even if every part of Windows has these changes made to support it, third-party applications aren’t guaranteed to support it either. Lastly, nothing about the Dark Mode support is standard. From an application perspective, if Dark Mode is on, you cannot rely on the Visual Style- so what should a button look like in dark mode? A combo box? A scrollbar? Even the colours have no standard; it’s all up to the application.

The implementation of Dark Mode makes no sense, because it should have been a new Theme shipped with Windows, with appropriate dark colours, paired with a new Visual Style that gives all the visual elements a darker appearance. If Dark Mode is on, no application should see “white” for the Window background colour and then be expected to notice the Dark Mode flag, disregard that colour, and substitute “a dark colour” of some sort that isn’t standardized for any of the different elements.

Posted By: BC_Programming
Last Edit: 20 Apr 2019 @ 12:57 PM

Categories: Windows

 23 Mar 2019 @ 2:15 PM 

There are a lot of components of Windows 10 that we, as users, are not “allowed” to modify. It isn’t even enough when we find a way to do so, such as by disabling services or scheduled tasks from a command prompt running under the SYSTEM account, because when you next install updates, those settings are often reset. There are also background tasks and services intended specifically for “healing” tasks, which is a pretty friendly way to describe a trojan downloader.

One common way to “assert” control is by using the registry and the Image File Execution Options key, found at:

HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Image File Execution Options

By adding a key here with the name of the executable, one can specify additional execution options. The one of importance here is a string value called Debugger. When you add a Debugger value, Windows will not start the executable and will instead launch the executable listed in the “Debugger” value, with the executable that was being run passed as a parameter.

We can use this for two purposes. The most obvious is that we can simply swap in an executable that does nothing at all, and thereby prevent a given executable from running. For example, if we add “C:\Windows\System32\systray.exe” as the Debugger value for an executable, then when that executable is run, the systray.exe stub will run instead, do nothing, and exit, and the executable that was being launched never starts. As a quick aside: systray.exe is a stub that doesn’t actually do anything- it used to provide built-in notification icons for Windows 9x, and it remains because some software would check whether that file existed to know if it was running on Windows 95 or later.
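As an illustration, setting up such a redirect programmatically might look like this (a sketch only- compattelrunner.exe is just one of the executables mentioned later, and this needs to run elevated):

    using Microsoft.Win32;

    class IFEORedirect
    {
        static void Main()
        {
            const string ifeo = @"SOFTWARE\Microsoft\Windows NT\CurrentVersion\Image File Execution Options";
            // Anything launched as compattelrunner.exe will now start the stub instead.
            using (var key = Registry.LocalMachine.CreateSubKey(ifeo + @"\compattelrunner.exe"))
            {
                key.SetValue("Debugger", @"C:\Windows\System32\systray.exe");
            }
        }
    }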

The second way we can use it is to instead insert our own executable as the debugger value. Then we can log and record each invocation of any redirected program. I wanted to record the invocations of some built-in Windows executables I had disabled, so I created a simple stub program for this purpose:

IFEOSettings.cs
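(The original file isn’t shown here; a hedged guess at its shape, based on the description below, would be something like this.)

    // A guess at the general shape of IFEOSettings- the real file isn't reproduced in this post.
    public class IFEOSettings
    {
        // Hard-coded for now, as described below; the folder is created ahead of time.
        public string LogFolder { get; set; } = @"C:\IMEO_Logs";
    }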

I decided to separate the settings for future editing. For my usage, I just have it hard-coded to C:\IMEO_Logs right now and create the folder beforehand. The bulk of the program, of course, is the entry point class:
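(Again, a reconstruction rather than the original listing; it just logs the invocation and its arguments, and never launches the target.)

    using System;
    using System.IO;

    class Program
    {
        static void Main(string[] args)
        {
            var settings = new IFEOSettings(); // from the sketch above
            // args[0] is the executable Windows was asked to run; anything after it is its arguments.
            string target = args.Length > 0 ? Path.GetFileName(args[0]) : "(none)";
            string logFile = Path.Combine(settings.LogFolder, target + ".log");
            string entry = DateTime.Now.ToString("yyyy-MM-dd HH:mm:ss") + " invoked: " + string.Join(" ", args);
            File.AppendAllText(logFile, entry + Environment.NewLine);
            // Deliberately do not start the target, so the redirected executable never actually runs.
        }
    }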

I’ve used this for a few weeks by manually altering the Image File Execution Options entries that previously redirected some executables (compattelrunner.exe, wsqmcons.exe, and a number of others) to systray.exe, so that they now redirect to this program instead. It then logs every attempt to invoke those executables, alongside details like the arguments that were passed in.

Posted By: BC_Programming
Last Edit: 23 Mar 2019 @ 02:15 PM


 14 Mar 2019 @ 6:56 PM 

Over the last few years it’s become apparent, of course, that many people building and using PCs are using physical media less and less. One thing I have noticed is that a lot of people who go “optical drive free” seem to evangelize it and assume everybody who uses DVDs or physical media is some kind of intellectually incognizant doofus- that optical media is unnecessary in general and nobody needs it, which is of course a very silly statement.

Of course it’s "unnecessary"; a graphics card is "unnecessary" and a sound card is "unnecessary", and both have been for years, but people still buy and use them. Optical drives are probably more in the camp of the sound card than the graphics card, since the latter is arguably a necessity for "gaming" whereas optical drives are certainly not- at least not in general.

But like they said- different people have different needs; or perhaps a better way to put it would be, different uses for them.

Just speaking personally, my main system has both a Blu-Ray burner and a DVD drive installed. I use the Blu-Ray burner for watching Blu-Rays, as I prefer physical media, and I’ve found BD-R discs great for making hard-copy backups. Why not use a USB drive? I have USB flash drives and external USB drives/enclosures, but I’ve found them incredibly uneconomical for long-term hard backups. With Blu-Ray discs, if I want a hard-copy backup it’s something I want to burn, label, and basically file away. Flash drives and external drives wouldn’t work like that- they would constantly be changing alongside the data source being backed up, making them more a redundancy than an actual backup solution over the longer term. Another problem is that a good one isn’t cheap. The external drives Seagate and WD sell are reasonably cheap for the capacity, mind, but those are dogshit; WD and Seagate both use their shittiest drives to create externals. Sorry, but I don’t trust a Seagate Sunfish (or whatever they call their low-end model) or a WD Green drive as a safe backup drive any more than I’d trust the safe deposit boxes of a bank that operates out of the back of a Toyota Tercel. Which makes a good backup drive less economical, because it means getting a good enclosure (eSATA and USB 3 are an obvious must here) as well as a good drive.

Space per GB is also better with BD-R (perhaps less with spindles of DVDs).

Another aspect is that I also have a number of older game titles on DVDs. Some are available on Steam, but I’m not about to buy them again. Fuck that noise.

Of course, I *could* do all this with an external drive. But because I actually utilize the physical media it reads, and use it rather frequently, it’s not economical time-wise. So it’s sort of like somebody who, for some physiological reason, never shits saying that having a toilet inside is unnecessary. They are strictly right, but I’m not going to start shitting in a chamberpot or outhouse.

Conversely, it’s not necessary- my laptop doesn’t have an optical drive, for example, and that hasn’t affected anything, as it’s not a gaming machine and doesn’t keep anything special that I need to back up to start with.

Posted By: BC_Programming
Last Edit: 14 Mar 2019 @ 06:56 PM

Categories: Programming

 14 Mar 2019 @ 6:51 PM 

Alternate Title: Software Licenses and implicit trust

It is interesting to note that in many circles proprietary software is inherently considered untrustworthy. That is, of course, not for no reason- it is much more difficult to audit and verify that the software does what it is supposed to, and to check for possible security problems. Conversely, however, a lot of Open Source software seems to get a sort of implicit trust applied to it. The claim is that if there isn’t somebody sifting through and auditing software, you don’t know what is in there- and, conversely, that if something is open source, we do know what is in there.

But I would argue that the binaries are possibly more trustworthy when attempting to determine what a piece of software is doing, simply by virtue of being literally what is executed. Even if we consider the scenario of auditing source code and building the binaries ourselves, we have to trust the binary of the compiler not to be injecting malicious code, too.

I’ve found that this sort of rabbit hole is something that a lot of Open Source advocates will happily woosh down as far as possible for proprietary software, but seem to avoid falling into for Open Source software. Much of the same logic that gets applied to justify distrust of proprietary binary code should cause distrust in areas of Open Source, but for some reason a lot of aspects of Open Source and the Free Software community are free from the sort of cynicism that is applied to proprietary software, even though there is no reason to think that software falling under a specific license makes it inherently more or less trustworthy. If we can effectively assume malicious motives for proprietary software developers, why do we presume the opposite for Open Source- particularly since it is now such a better target for malicious actors, precisely because it is so often implicitly trusted?

Source code provided with a binary doesn’t mean anything, because- even assuming users capable of auditing said code- there is no way to reliably and verifiably know that the source code is what was used to build the binary. Trust-building measures like hashes or MD5 sums can be adjusted, collided, or changed, and web servers can be hacked to make illegitimate binary releases appear legitimate, propagating undesirable code which simply doesn’t appear in the source code associated with a supposed release (as happened with Linux Mint). Additionally, the non-deterministic nature of many modern build processes means that compiling the same source more than once can give different results, so you cannot really verify that the source matches a given binary by rebuilding the source and comparing the resulting binary to the one being verified.

Therefore, it would seem the only reasonable recourse is to only run binaries that you build yourself, from source that has been appropriately audited.

Thusly, we will want to audit the source code. And the first step is getting that source code. A naive person might think a git pull is sufficient. But no, no- that is a security risk. What if GitHub is compromised to specifically deliver malicious files with that repository, hiding secret exploits deep within the codebase? Too dangerous. Even with your careful audit, you could miss those exploits altogether.

Instead, the only reasonable way to acquire the source code to a project is to discover reliable contact details for the project maintainer and send them a PGP-encrypted message requesting that they provide the source code either at a designated drop point- which will have to be inconspicuous and under surveillance by an unaffiliated third party trusted by both of you- or via a secure, asymmetrically encrypted message containing the source tarball.

Once you have the source, now you have to audit the entire codebase. Sure, you could call it quits and go "the developer says it’s clean, I trust him". Fine. Be a fool. Be a foolish fool, you fooly foolerson, because even if you know the tarball came from the developer, and you trust them- do you trust their wife? Their children? Their pets? Their neighbors? You shouldn’t. In fact, you shouldn’t even trust yourself. But you should, because I said you shouldn’t and you shouldn’t trust me. On the other hand, that’s exactly what I might want you to think.

"So what if I don’t trust their hamster, what’s the big deal"

Oh, of course. Mr Security suddenly decides that something is too off-the-wall.

Hamsters can be trained. Let that sink in. Now you know why you should never trust them. Sure, they look all cute running around their little cage, being petted by the developer’s cute 11-year-old daughter, but looks can be deceiving. For all you know, the daughter is a secret Microsoft agent and the hamster has been trained or brainwashed- using evil, proprietary, and patent-encumbered technology, no doubt- to act as a subversive undercurrent within that source repository. With full commit access to the project’s git repository, the hamster can execute remote commands issued over an undocumented wireless protocol that has no man page, which will cause it to perform all sorts of acts of terror on the git repository: inserting NOP sleds before security code, adding JMP labels where they aren’t necessary, even adding buffer overflows by introducing off-by-one errors as part of otherwise benign bugfixes.

Is it very likely? No. But it’s *possible*, so it cannot be ignored.

Let’s say you find issues and report them.

Now, eventually, the issues will be fixed. The lead developer might accept a pull request, and claim that it fixes the issue.

Don’t believe the lies. You must audit the pull yourself and find out what sinister motives underlie the so-called "fix". "Oh, so you thought you could just change that if condition, did you? Well, did you know that on an old version of the PowerPC compiler, this generates code that allows for a sophisticated remote execution exploit when running under Mac OS 9?" Trust nobody. No software is hamster-proof.

Posted By: BC_Programming
Last Edit: 14 Mar 2019 @ 06:51 PM


 28 Jan 2019 @ 4:53 PM 

Recently, a Microsoft engineer had this to say with regards to Mozilla and Firefox:

Thought: It’s time for @mozilla to get down from their philosophical ivory tower. The web is dominated by Chromium, if they really *cared* about the web they would be contributing instead of building a parallel universe that’s used by less than 5%?

As written this naturally got a lot of less-than optimistic responses. Here are some follow up tweets wherein they explain their position:

I don’t neglect the important work Mozilla has contributed, but here’s a few observations shapes my perspective:

1) The modern web platform is incredible complex. Today it’s an application runtime comparable to the Java or .net framework.

2) This complexity it’s incredibly expensive to implement a web runtime. Even for Google/Microsoft it’s hard to justify such investment that would take thousands of engineers in multiple years. The web has become too capable for multi engines, just like many frameworks.

3) Contribution can happen on many levels, and why is it given that each browser vendor has to land their contributions in *their own* engine? What isn’t the question what drives most impact for the web as a holistic platform?

4) My problem with Mozilla’s current approach is that they are *preaching* their own technology instead of asking themselves how they can contribute most and deliver most impact for the web? Deliver value to 65% of the market or less than 5%?

5) This leads to my bigger point: In a world where the web platform has evolved into a complex .application runtime, maybe it’s time to revise the operation and contribution model. Does the web need a common project and an open governance model like fx Node Foundation?

6) What if browser vendors contributed to a "common webplat core" built together and each vendor did their platform specific optimizations instead of building their own reference implementations off a specification from a WG? That’s what I mean by "parallel universes".

7) I believe Mozilla can be much more impactful on the holistic web platform if they took a step back and revised their strategy instead of throwing rocks after Google/MS/etc.

8) I want the web to win, but we need collaboration not parallel universes. Writing specs together is no longer enough. The real threat to the web platform is not another browser engine, but native platforms, as they don’t give a damn about an open platform.

That’s a lot to take in; however, my general “summary” would be “why have these separate implementations of the same thing when there can be one?”, which is pretty much a case for promoting code reuse. However, that idea doesn’t really hold up in this context, which may be why the statement was so widely criticized on Twitter.

In an ideal world, of course, the idea that we could have, as they describe, a single “common webplat core” that every vendor can freely contribute to, and over which no one vendor has any absolute or direct control or veto power, is a good one. But it is definitely not what we have, nor is it something that seems to be in development right now. That “common webplat core built together by every vendor” is most definitely NOT Chromium or the Blink engine, so it’s something of a red herring here. Chromium is heavily influenced by, and practically under the control of, Google, an advertising company. Microsoft- another company with a large advertising component- has now opted to use the same Blink rendering engine and Chromium underpinnings used in Chrome, via a re-engineering of the Microsoft Edge browser. That’s two companies shoulder-deep in the advertising and marketing space, with a history of working in their own best interests rather than the best interests of end users, with a hand on the reins of Chromium. Not exactly the open and free “common webplat core” that was described!

Given this, Mozilla seems to be the only browser/rendering-engine vendor that is committed to an open web. The idyllic scenario described only makes sense if we start from the assumption that all Open Source software is inherently free of corporate influence, which is simply not the case. Furthermore, the entire point of Open Source projects is to provide alternatives, not a single be-all, end-all implementation- the whole idea of Open Source is to provide choices, not take them away. There is no single desktop environment, shell, email server, web server, text editor, etc.; think of a type of software and the Open Source community has numerous different implementations. This is because, realistically, there is no “be-all, end-all” implementation for any non-trivial software product, and implementations of an open web fall under that umbrella. Suggesting that there be only one standard implementation used for every single web browser is actually completely contrary to the way Open Source already works.

Posted By: BC_Programming
Last Edit: 28 Jan 2019 @ 04:53 PM

Comments Off on Why did the Microsoft Engineer Tweet about an Open Web? Because they are now on the other side.
Categories: Programming

 22 Dec 2018 @ 5:14 PM 

There has been a lot of recent noise regarding the demographic makeup of typical software developers and people working in CS, and a lot of “pushback” against it, which is a bit unusual. There is really no denying it: look at almost any CS-related software team and you will find it is almost completely made up of nerdy, young white males. They think they got there through hard work and that the demographic dominates because its members are simply the best, but that is simply not true- it’s a self-perpetuating monoculture. Hell, I’m a nerdy white male (not so young now, mind…); I like programming and do it in my spare time, but somehow that "feature" has become an almost implicit requirement. You need to find somebody who has a healthy GitHub contribution history and wastes a lot of their spare time fucking around on computers. That fits me, but the fact is it simply shouldn’t be a requirement. A team shouldn’t be comprised of one type of software developer- and that applies to attitude as well as demographics.

There is also this weird idea that a software developer who doesn’t work on software in their spare time is some kind of perversion. "So what personal projects do you have?" is a question I can answer, but if somebody cannot answer it, or the answer is "none", I don’t get why that is an instant minus point. Bridge-building engineers and contractors don’t get points taken off in interviews if they don’t spend their spare time designing and building bridges, but somehow in software development there is this implicit idea that we must all dedicate all of our spare time to it. Just because somebody doesn’t like to work on software in their spare time doesn’t mean they aren’t going to be absolutely spectacular at it. Hell, if anything it’s those of us who finish work and basically just switch to a personal project who are trying to compensate by constantly cramming for the next workday, as if we have to combat our own ineptitude by repetition at all times.

I think the relatively recent "pushback" against the idea of actually introducing any sort of diversity- by trying to break up the self-perpetuating loop of young white guys only wanting to work with other young white guys- really illustrates how necessary it was. You had people (young white male nerds, surprise) complaining about "diversity quotas" and basically starting with the flawed assumption that the reason their team consisted of young white male nerds was that they were the most qualified. No, it was because the rest of the team was young white male nerds, and anybody else being considered had to go to ridiculous lengths to prove themselves before they were even considered as fitting the "culture"- because the culture is one of, you guessed it, young white male nerds. A mediocre "young white male nerd" is often more likely to get hired than a demonstrably more skilled person of a different race or (god forbid, apparently) a woman.

Even an older guy is probably less likely to be brought on board. You can have some grizzled 50-year-old software veteran who has forgotten more than the rest of the team knows put together, but not having memorized modern frameworks and buzzwords is going to prevent him from coming on board, even though he brings countless skills and experience that no amount of GitHub commits can hope to match. Can you imagine how much ridiculous skill and ability a 60-year-old woman would have to bring to the table to get hired as a software developer? You get these 24-something white dudes going "well, I wrote an expression evaluator" and the interviewer is like "Oh cool, and it even does complex numbers, awesome", but a 60-year-old woman could say "well, I wrote a perfect simulation of the entire universe down to the atom, running at one Planck time every 2 seconds, as you can see on my resume" and the completely unimpressed interviewer would be like "Yeah, but we’re looking for somebody with CakePHP experience".

I think "young white male nerds" reject the idea that they have any sort of privilege in this field because they feel it means they didn’t work as hard. Well, yeah. We didn’t. get over it. We had things handed to us easily that we wouldn’t have if we were older, a different race, or women. We need to stop complaining that reality doesn’t match our ego and trying to stonewall what we term "diversity hires" and actually respect the fact that we aren’t a fucking master race of developers and women and minorities are fully capable of working in software, and cherrypicking racist and sexist statistics to support the perpetuation of the blindingly-white sausage fest just makes us look like babies trying to deny reality.

Posted By: BC_Programming
Last Edit: 22 Dec 2018 @ 05:14 PM

Comments Off on The self-perpetuating monoculture of Software Development

 17 Nov 2018 @ 2:23 PM 

“Rogue-like” games have existed for quite a while. Effectively, these titles have some form of permadeath, or perhaps a heavy penalty when your character dies or loses, but the most important part is that many aspects of the game are randomized. The idea, effectively, is that every playthrough is going to be different- contrary to most titles, where each playthrough is the same.

However, while the idea dates back some ways, “randomizer” software has lately started to gain popularity: it randomizes formerly static games in order to effectively “create” new games from them. Typically, the randomization is constrained so that every required item can still be obtained.

Over the past week or so I’ve been using my SD2SNES SNES flash cartridge to play a randomized Link to the Past. It’s been an interesting experience; a number of times it seemed like I was “stuck”, only to find a required item secluded in some random treasure chest. Effectively, the randomizer shuffles the locations of items and also changes a number of other aspects. It has certainly made the game interesting, to say the least.

One can find a list of game randomizers here. I personally have had a lot of fun previously with a tool called “ObHack”, a randomized Doom II WAD generator that was a modified version of Oblige. I also changed the Lua so that I could generate ridiculous quantities of monsters, because why not. Often in the “open” levels I would be immediately spotted by vast hordes of enemies, as well as bosses in the distance, so I would have to dodge attacks while battling through the enemies nearby. When this was the first level, staying alive became a trick of its own.

Paired with User-generated content such as ROM hacks and custom level sets, Game Randomizers can make a playthrough of your favourite games less samey than usual.

Posted By: BC_Programming
Last Edit: 17 Nov 2018 @ 02:23 PM

Comments Off on Video Game Randomizers
Categories: Games

 22 Oct 2018 @ 7:26 PM 

Nowadays, we’ve got two official ways of measuring memory and storage. They can be measured via the standard metric prefixes, such as megabytes (MB) or kilobytes (KB), or via the binary prefixes, such as mebibytes (MiB) and kibibytes (KiB). And yet, a lot of software uses the former when it really means the latter. Why is that, exactly?

Well, part of the reason is that the official binary prefixes didn’t exist until the late 1990s (and weren’t folded into the ISO/IEC standards until 2008), and were introduced to address growing ambiguities between the two usages- ambiguities which had been building for decades.

From the outset, when memory and storage were first developed for computers, it became clear that some sort of notation would be necessary to measure memory size and storage space other than by directly indicating bytes.

Initially, storage was measured in bits. A bit, of course, is a single element of data- a 0 or a 1. To represent other data and numbers, multiple bits are utilized. While the sizes under common discussion were small, bits were commonly referenced. In fact, even early on there arose something of an ambiguity: when discussing transfer rates and/or memory chip sizes, one would often hear “kilobit” or “megabit”; these were 1,000 bits and 1,000 kilobits respectively, not base 2. However, when referring to either storage space or memory in terms of bytes, a kilobyte or a megabyte would be 1,024 bytes or 1,024 kilobytes respectively.

One of the simplest ways of organizing memory was in powers of two, as this allowed a minimum of logic to address specific areas of the memory unit. Because the smallest addressable unit of storage was the byte, which is 8 bits, it meant that most memory was manufactured to be a multiple of 1,024 bits- 1,024 being the nearest power of 2 to 1,000 that is also divisible by 8. For the most part, rather than adhering strictly to the SI definitions of the prefixes, there was an industry convention that effectively said that, within the context of computer storage, the SI prefixes were binary prefixes.

For storage, for a time, the same conventions applied, resulting in total capacities measured in the same units. For example, a single-sided 180K floppy diskette had 512 bytes per sector, 9 sectors per track, and 40 tracks per side. That is 184,320 bytes; in today’s terms, with the standardized binary prefixes, this would be 180KiB.

360K diskettes had a similar arrangement but were double-sided; they were 368,640 bytes- again, a binary prefix was being used in advertising.

Same with 720K 3.5" diskettes: 512 bytes/sector, 9 sectors per track, 80 tracks per side, two sides. That’s 737,280 bytes, or 720KiB.

The IBM XT 5160 came with a drive advertised as 10MB in size. The disk has 512 bytes per sector, 306 cylinders, 4 heads, and 17 sectors per track. One cylinder is reserved for diagnostic purposes and unusable, giving a usable CHS of 305/4/17. At 512 bytes/sector, that was 10,618,880 bytes of addressable space. (This was actually more than 10MiB, as some defects were expected from the factory.) The 20MB drive had a similar story: 615 (-1 diagnostic) cylinders, 4 heads, 17 sectors per track at 512 bytes a sector gives about 20.38MiB. The later 62MB drive was 940 (-1 diagnostic) cylinders, 8 heads, 17 sectors/track at 512 bytes/sector, which gives ~62.36MiB.

The "1.2MB" and "1.44MB" Floppy diskettes are when things started to get spitballed by marketing departments for ease of advertising and blazed an early trail for things to get even more misleading. The High density "1.2MB" diskettes were 512 bytes a sector, 15 sectors per track, 80 sectors per side, and double sided. That’s a total of 1,228,800 Bytes. or 1200 KiB, But they were then advertised as 1.2MB, Which is simply wrong altogether. It’s either ~1.7MiB, or it is ~1.23MB. it is NOT 1.2MB because that figure is determined by dividing the KiB by 1000 which doesn’t make sense. Same applies to "1.44MB" floppy diskettes, which are actually 1440KB due to having 18 sectors/track. (512 * 18 * 80 * 2=1474560 Bytes. That is either 1.47456MB, or 1.40625MiB, but was advertised as 1.44MB because it was 1440KiB (and presumably easier to write).

Hard drive manufacturers took it from there, first by rounding up a tiny bit: a 1987 Quantum LPS ProDrive advertised as 50MB was, for example, about 49.9MiB (752 cylinders, 8 heads, 17 sectors per track). I mean, OK, sure- 49.9 is a weird number to advertise, I suppose…

It’s unclear when the first intentional and gross misrepresentation of HDD size was actually done, where the SI prefix definition was used to call a drive X MB. It was a gradual change: people started to accept the rounding, and HDD manufacturers got bolder- eventually one of them released an "X MB" drive that they knew full well people would interpret as X MiB, and when called out on it, claimed they were using the "official SI prefix", as if there wasn’t already a decades-old de facto standard in the industry regarding how storage was represented.

For the most part, it is this confusion persisting forward that is how we ended up with the official binary prefixes.

And yet- somewhat ironically- most OS software doesn’t use them. Microsoft Windows still uses the SI prefixes with binary values. As I recall, OS X provides for it as an option. Older operating systems and software will never use them, as they won’t be updated.

The way I see it, HDD manufacturers have won. They are now selling drives listed as "1TB" which are roughly 931GiB, but because it’s 1,000,000,000,000 bytes, or somewhere close, it’s totally cool, because they are using the SI prefix.

Posted By: BC_Programming
Last Edit: 23 Oct 2018 @ 07:15 PM

Comments Off on How HDD Manufacturer’s Shaped the Metric System

 26 Sep 2018 @ 1:38 PM 

I have a feeling this will be a topic I will cover at length repeatedly, and each time I will have learned things since my previous installments. The Topic? Programming Languages.

I find it quite astonishing just how much polarization and fanaticism we can find over what is essentially a syntax for describing operations to a computer. A quick Google search can reveal any number of arguments about languages: people telling you why Java sucks, people telling you why C# is crap, people telling you why Haskell is useless for real-world applications, people telling you that Delphi has no future, people telling you that there is no need for value semantics on variables, people telling you mutable state is evil, people telling you that garbage collection is bad, people telling you that manual memory management is bad, etc. It’s an astonishing, never-ending trend. And it’s really quite fascinating.

Why?

I suppose the big question is- why? Why do people argue about languages, language semantics, capabilities, and paradigms? This is a very difficult question to answer. I’ve always felt that polarization and fanaticism are far more likely to occur when you only know and understand one programming language. Of course, I cannot speak for everybody, only from experience. When I only knew one language “fluently”, I was quick to leap to its defense. It had massive issues that I can see now, looking back, but which I didn’t see at the time. I justified omissions as being things you didn’t need or could create yourself. I called features in newer languages ‘unnecessary’ and ‘weird’. So the question really is, who was I trying to prove this to? Was I arguing against those I was replying to- or was it all for my own benefit? I’m adamant that the reasons for my own behaviour- and, to jump to a premature and biased conclusion, possibly the behaviour of those I see acting similarly over other languages- were the result of feeling trivialized by the attacks on the language I was using. Basically, it’s the result of programmers rating themselves based on what languages they know and use every day. This is a natural- if erroneous- method of measuring one’s capabilities. I’ve always been a strong proponent of the view that it isn’t the programming language that matters, but rather your understanding of programming concepts and how you apply them, as well as not subscribing to the religious dogmas that generally surround a specific language design. (I’m trying very hard not to cite specific languages here.)

Programming languages generally have set design goals. As a result, they typically encourage a style of programming- or even enforce it through artificial limitations. Additionally, those limitations that do exist (generally for design reasons) are worked around by competent programmers in the language. So when the topic turns to their favourite language not supporting Feature X, they can quickly retort that “you don’t need Feature X, because you can use Features Q, P and R to create something that functions the same”. But that rather misses the point, I feel.

I’ve been careful not to mention specific languages, but here I go: take Visual Basic 6- that is, pre-.NET. As a confession, for a very long time I was trapped knowing only Visual Basic 6 well enough to do anything particularly useful with it. Looking back- and having to support my legacy applications, such as BCSearch- I’m astonished by two things that are almost polar opposites. The first is simply how limited the language is. For example, if you had an object of type CSomeList and wanted to ‘cast’ it to an IList interface, you would have to do this:
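(A sketch of the pattern being described, not the original code:)

    ' VB6: the only way to "cast" is to assign to a variable declared as the target interface.
    Dim Target As IList
    Set Target = SomeList   ' SomeList is the CSomeList instance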

Basically, you ‘cast’ by assigning the object directly to a variable of the desired type. These types of little issues and limitations really add up. The other thing that astonished me was the ingenuity of how I dealt with the limitations. At the time, I didn’t really consider some of these things limitations, and I didn’t think of how I dealt with them as workarounds. For example, I found the above casting requirement annoying, so I ended up creating a GlobalMultiUse class (which means all the procedures within are public); in this case the function might be called “ToIList()” and would attempt to cast the parameter to an IList and return it. Additionally, at some point I must have learned about exception handling in other languages, and I actually created a full-on implementation of exception handling for Visual Basic 6. Visual Basic 6’s error handling was, for those that aren’t aware, rather simple: you could basically say “On Error Goto…” and redirect program flow to a specific label when an error occurred. All you would know about the error is the error number, though. My “Exception” implementation built upon this. To throw an exception, you would create it (usually with an aforementioned public helper), and then throw it. In the exception’s “Throw()” method, it would save itself as the active unwind exception (a global variable) and then raise an application-defined error. Handlers were required to recognize that error number and grab the active exception (using GetException(), if memory serves). GetException would also recognize many error codes and construct instances of the appropriate exception type to represent them, so in many cases you didn’t need to check for that error code at all. The result? Code like this:
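(The next two snippets are reconstructions of the kind of code being described, not the original listings. Classic VB6 error handling looked roughly like this:)

    Private Sub LoadData()
        On Error GoTo Handler
        ' ... work that might fail ...
        Exit Sub
    Handler:
        ' All you get is the error number; you have to know which codes matter.
        If Err.Number = 53 Then ' 53 = File not found
            ' handle the missing file
        End If
    End Sub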

would become:
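(Again a reconstruction; CException and the related class names are guesses on my part, beyond the Throw()/GetException() helpers described above.)

    Private Sub LoadData()
        On Error GoTo Handler
        ' ... work that might fail ...
        Exit Sub
    Handler:
        Dim Ex As CException            ' CException: hypothetical base exception class
        Set Ex = GetException()         ' maps the raised error back to an exception object
        If TypeOf Ex Is CFileNotFoundException Then
            ' handle the missing file
        End If
    End Sub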

There was also a facility to throw inner exceptions, by using ThrowInner() with the retrieved Exception Type.

So what is wrong with it? Well, everything. The language doesn’t provide these capabilities, so I basically had to nip and tuck it to provide them, and the result is some freakish plastic surgery where I’ve grafted exceptions onto something that didn’t want exceptions. The fact is that, once I moved to other languages, I could see just how freakish some of the stuff I implemented in VB was. That implementation was obviously not thread-safe, for example- but that didn’t matter, because there was no threading support either.

Looking forward

With that in mind, it can be valuable to consider one’s current perspectives and how they may be misguided by that same sort of devotion. This is particularly important when dealing with things you have only a passing knowledge of. Having real experience with something makes it easier to recognize genuine flaws, but it is also easy to find flaws- or repaint features as flaws- purely for the purpose of making yourself feel wiser for not having used it sooner.

Posted By: BC_Programming
Last Edit: 26 Sep 2018 @ 01:38 PM

Comments Off on Programming Languages (2)




