23 Mar 2019 @ 2:15 PM 

There are a lot of components of Windows 10 that we, as users, are not “allowed” to modify. It isn’t even enough when we find a way to do so, such as disabling services or scheduled tasks from a command prompt running under the SYSTEM account, because when you next install updates, those settings are often reset. There are also background tasks and services intended specifically for “healing” those components, which is a pretty friendly way to describe a trojan downloader.

One common way to “assert” control is using the registry and the Image File Execution Options key, found at:

HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Image File Execution Options

By adding a key here with the name of the executable, one can add additional execution options. The one that matters here is a string value called Debugger. When you add a Debugger value, Windows will basically not start the executable and will instead launch the executable listed for the “Debugger” value, with the executable that was being run passed as a parameter.

We can use this for two purposes. The most obvious is that we can simply swap in an executable that does nothing at all, and thereby prevent any executable from running. For example, if we add “C:\Windows\System32\systray.exe” as the Debugger value for an executable, then when the executable in question is run, the systray.exe stub will run instead, do nothing, and exit- and the executable that was being launched will not run at all. As a quick aside- systray.exe is a stub that doesn’t actually do anything; it used to provide built-in notification icons for Windows 9x, and it remains because some software would actually check whether that file existed to know whether it was running on Windows 95 or later.
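For the curious, creating one of these entries from code only takes a few lines. Here is a rough C# sketch of the idea (it has to run elevated; regedit or a .reg file works just as well, and wsqmcons.exe is simply an example target):

// Redirect wsqmcons.exe to the do-nothing systray.exe stub via Image File Execution Options.
using Microsoft.Win32;

class SetIfeoDebugger
{
    static void Main()
    {
        const string ifeoKey =
            @"SOFTWARE\Microsoft\Windows NT\CurrentVersion\Image File Execution Options\wsqmcons.exe";

        // Requires administrative rights, since this lives under HKEY_LOCAL_MACHINE.
        using (RegistryKey key = Registry.LocalMachine.CreateSubKey(ifeoKey))
        {
            // Windows will now launch systray.exe in place of wsqmcons.exe,
            // passing the original command line along to it.
            key.SetValue("Debugger", @"C:\Windows\System32\systray.exe", RegistryValueKind.String);
        }
    }
}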

The second way we can use it is to instead insert our own executable as the debugger value. Then we can log and record each invocation of any redirected program. I wanted to record the invocations of some built-in Windows executables I had disabled, so I created a simple stub program for this purpose:

IFEOSettings.cs
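The settings class amounts to little more than a holder for the log folder path- something along these lines (a simplified sketch; the member names here are illustrative rather than the exact listing):

// IFEOSettings.cs - holds the few configurable bits, separated out so they can
// be made properly configurable later.
namespace IFEOLogger
{
    public static class IFEOSettings
    {
        // Hard-coded for now; the folder is created ahead of time.
        public static string LogFolder { get; set; } = @"C:\IMEO_Logs";
    }
}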

I decided to separate the settings for future editing. For my usage, I just have it hard-coded to C:\IMEO_Logs right now and create the folder beforehand. The bulk of the program, of course, is the entry point class:
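In outline it amounts to something like this (again, a condensed sketch of the idea rather than the exact listing): Windows invokes the “debugger” with the original executable’s path and arguments appended, so the stub just records those details and exits without launching anything.

// Program.cs - log the redirected invocation and exit without starting anything,
// so the target executable stays blocked.
using System;
using System.IO;

namespace IFEOLogger
{
    class Program
    {
        static void Main(string[] args)
        {
            if (args.Length == 0) return; // run directly rather than as an IFEO "debugger"

            // args[0] is the executable that was being launched; the rest is its command line.
            string target = args[0];
            string arguments = string.Join(" ", args, 1, args.Length - 1);

            Directory.CreateDirectory(IFEOSettings.LogFolder);
            string logFile = Path.Combine(IFEOSettings.LogFolder,
                Path.GetFileNameWithoutExtension(target) + ".log");

            File.AppendAllText(logFile,
                $"{DateTime.Now:yyyy-MM-dd HH:mm:ss} {target} {arguments}{Environment.NewLine}");

            // Deliberately no Process.Start(target) - the whole point is that it never runs.
        }
    }
}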

I’ve used this for a few weeks now; I manually altered my existing Image File Execution Options entries so that the executables I had redirected to systray.exe (compattelrunner.exe, wsqmcons.exe, and a number of others) redirect to this program instead. It then logs every attempt to invoke those executables, along with details like the arguments that were passed in.

Posted By: BC_Programming
Last Edit: 23 Mar 2019 @ 02:15 PM

 14 Mar 2019 @ 6:51 PM 

Alternate Title: Software Licenses and implicit trust

It is interesting to note that in many circles proprietary software is inherently considered untrustworthy. That is, of course, not without reason- it is much more difficult to audit and verify that the software does what it is supposed to, and to check for possible security problems. Conversely, however, a lot of Open Source software seems to get a sort of implicit trust applied to it. The claim is that if there isn’t somebody sifting through and auditing software, you don’t know what is in there- and, conversely, that if something is open source, we do know what is in there.

But I would argue that the binaries are possibly more trustworthy for determining what a piece of software is doing, simply by virtue of being literally what is executed. Even if we consider the scenario of auditing source code and building the binaries ourselves, we have to trust the binary of the compiler not to be injecting malicious code, too.

I’ve found that this is the sort of rabbit hole a lot of Open Source advocates will happily whoosh down as far as possible for proprietary software, but seem to avoid falling into for Open Source software. Much of the same logic that gets applied to justify distrust of proprietary binary code should cause distrust in areas of Open Source, but for some reason a lot of aspects of Open Source and the Free Software community are free from the sort of cynicism that is applied to proprietary software- even though there is no reason to think that software falling under a specific license makes it inherently more or less trustworthy. If we can effectively assume malicious motives on the part of proprietary software developers, why do we presume the opposite for Open Source- particularly since it is now so much better a target for malicious actors, precisely because it is so often implicitly trusted?

Source code provided with a binary doesn’t mean anything, because- even assuming users capable of auditing said code- there is no way to reliably and verifiably know that the source code is what was used to build the binary. Trust-building exercises like hashes or MD5sums can be adjusted, collided, or changed, and web servers can be hacked to make illegitimate binary releases appear legitimate, propagating undesirable code which simply doesn’t appear in the source code associated with a supposed release (see Linux Mint). Additionally, the non-deterministic nature of most modern build processes means that compiling the same source twice will seldom produce bit-identical results, so you cannot really verify that the source matches a given binary by rebuilding the source and comparing the resulting binary to the one being verified.

Therefore, it would seem the only reasonable recourse is to only run binaries that you build yourself, from source that has been appropriately audited.

Thusly, we will want to audit the source code. And the first step is getting that source code. A naive person might think a git pull is sufficient. But no no- That is a security risk. What if GitHub is compromised to specifically deliver malicious files with that repository, hiding secret exploits deep within the source codebase? Too dangerous. Even with your careful audit, you could miss those exploits altogether.

Instead, the only reasonable way to acquire the source code for a project is to discover reliable contact details for the project maintainer and send them a PGP-encrypted message requesting that they provide the source code, either at a designated drop point- which will have to be inconspicuous and under surveillance by an unaffiliated third party trusted by both of you- or via a secure, asymmetrically encrypted message containing the source tarball.

Once you have the source, now you have to audit the entire codebase. Sure, you could call it quits and go "Developer says it’s clean, I trust him" fine. be a fool. be a foolish fool you fooly foolerson, because even if you know the tarball came from the developer, and you trust them- do you trust their wife? their children? their pets? Their neighbors? You shouldn’t. In fact, you shouldn’t even trust yourself. But you should, because I said you shouldn’t and you shouldn’t trust me. On the other hand, that’s exactly what I might want you to think.

"So what if I don’t trust their hamster, what’s the big deal"

Oh, of course. Mr Security suddenly decides that something is too off-the-wall.

Hamsters can be trained. Let that sink in. Now you know why you should never trust them. Sure, they look all cute running in their little cage, being petted by the developer’s cute 11-year-old daughter, but looks can be deceiving. For all you know, the daughter is a secret Microsoft agent and the hamster has been trained or brainwashed- using evil, proprietary and patent-encumbered technology, no doubt- to act as a subversive undercurrent within that source repository. With full commit access to the project’s git repository, the hamster can execute remote commands issued via an undocumented wireless protocol that has no man page, causing it to perform all sorts of acts of terror on the git repository: inserting NOP sleds before security code, adding JMP labels where they aren’t necessary, even adding buffer overflows by slipping off-by-one errors into otherwise benign bugfixes.

Is it very likely? No. But it’s *possible*, so it cannot be ignored.

Let’s say you find issues and report them.

Now, eventually, the issues will be fixed. The lead developer might accept a pull request and claim that it fixes the issue.

Don’t believe the lies. You must audit the pull yourself and find out what sinister motives underlie the so-called "fix". "Oh, so you thought you could just change that if condition, did you? Well, did you know that on an old version of the PowerPC compiler, this generates code that allows for a sophisticated remote execution exploit if running under Mac OS 9?" Trust nobody. No software is hamster-proof.

Posted By: BC_Programming
Last Edit: 14 Mar 2019 @ 06:51 PM

 22 Dec 2018 @ 5:14 PM 

There has been a lot of recent noise regarding the demographic makeup of typical software developers and people working in CS, and a lot of “pushback” against it, which is a bit unusual. There is really no denying it: look at almost any CS-related software team and you will find it is almost completely made up of nerdy, young white males. They think they got there through hard work, and that the demographic dominates because its members are simply the best, but that is simply not true- it’s a self-perpetuating monoculture. Hell, I’m a nerdy white male (not so young now, mind…); I like programming and do it in my spare time, but somehow that "feature" has become an almost implicit requirement. You need to find somebody who has a healthy GitHub contribution history and wastes a lot of their spare time fucking around on computers. That fits me, but the fact is it simply shouldn’t be a requirement. A team shouldn’t be composed of only one type of software developer- and that applies to attitude as well as demographics.

There is also this weird idea that a software developer who doesn’t work on software in their spare time is some kind of perversion. "So what personal projects do you have?" is a question I can answer, but if somebody cannot answer it, or the answer is "none", I don’t get why that is an instant minus point. Bridge-building engineers and contractors don’t get points taken off in interviews if they don’t spend their spare time designing and building bridges, but somehow in software development there is this implicit idea that we must all dedicate all of our spare time to it. Just because somebody doesn’t like to work on software in their spare time doesn’t mean they aren’t going to be absolutely spectacular at it. Hell, if anything, it’s those of us who finish work and basically just switch to a personal project who are trying to compensate- constantly cramming for the next workday, as if we have to combat our own ineptitude by repetition at all times.

I think the relatively recent "pushback" against the idea of actually introducing any sort of diversity- of trying to break up the self-perpetuating loop of young white guys only wanting to work with other young white guys- really illustrates how necessary it was. You had people (young white male nerds, surprise) complaining about "diversity quotas" and basically starting from the flawed assumption that the reason their team consisted of young white male nerds was because they were the most qualified. No- it was because the rest of the team was young white male nerds, and anybody else being considered had to go to ridiculous lengths to prove themselves before they were even considered as fitting the "culture"- because the culture is one of, you guessed it, young white male nerds. A mediocre "young white male nerd" is often more likely to get hired than a demonstrably more skilled person of a different race or (god forbid, apparently) a woman.

Even an older guy is probably less likely to be brought on board. You can have some grizzled 50-year-old software veteran who has forgotten more than the rest of the team knows put together, but not having memorized modern frameworks and buzzwords will keep him from coming on board, even though he brings countless skills and experience that no amount of GitHub commits can hope to give a "young white male nerd". Can you imagine how much ridiculous skill and ability a 60-year-old woman would have to bring to the table to get hired as a software developer? You get these 24-something white dudes going "well, I wrote an expression evaluator" and the interviewer is like "Oh cool, and it even does complex numbers, awesome", but a 60-year-old woman could be like "well, I wrote a perfect simulation of the entire universe down to the atom, at a speed of 1 Planck time every 2 seconds, as you can see on my resume", and the completely unimpressed interviewer would be like "Yeah, but we’re looking for somebody with CakePHP experience".

I think "young white male nerds" reject the idea that they have any sort of privilege in this field because they feel it means they didn’t work as hard. Well, yeah. We didn’t. get over it. We had things handed to us easily that we wouldn’t have if we were older, a different race, or women. We need to stop complaining that reality doesn’t match our ego and trying to stonewall what we term "diversity hires" and actually respect the fact that we aren’t a fucking master race of developers and women and minorities are fully capable of working in software, and cherrypicking racist and sexist statistics to support the perpetuation of the blindingly-white sausage fest just makes us look like babies trying to deny reality.

Posted By: BC_Programming
Last Edit: 22 Dec 2018 @ 05:14 PM

 22 Oct 2018 @ 7:26 PM 

Nowadays, we’ve got two official ways of measuring memory and storage. They can be measured via the standard metric prefixes, such as Megabytes (MB) or Kilobytes (KB), or via the binary prefixes, such as Mebibytes (MiB) and Kibibytes (KiB). And yet a lot of software uses the former prefixes while meaning the latter quantities. Why is that, exactly?

Well, part of the reason is that the official binary prefixes are relatively recent- the IEC introduced them in 1998, and they were only folded into the ISO/IEC 80000 standard in 2008- and they were created precisely to address an ambiguity that had, by then, been growing for decades.

From the outset, when memory and storage were first developed for computers, it became clear that some sort of notation would be needed to describe memory size and storage space, other than by directly counting bytes.

Initially, storage was measured in bits. A bit, of course, is a single element of data- a 0 or a 1. In order to represent other data and numbers, multiple bits would be utilized. While the sizes in common discussion were small, bits were commonly referenced. In fact, even early on, something of an ambiguity arose: when discussing transfer rates and/or memory chip sizes, one would often hear "kilobit" or "megabit"; these were 1,000 bits and 1,000 kilobits respectively, and were not base 2. However, when referring to either storage space or memory in terms of bytes, a kilobyte or a megabyte would be 1,024 bytes or 1,024 kilobytes respectively.

One of the simplest ways of organizing memory was using powers of two; this allowed a minimum of logic to access specific areas of the memory unit. Because the smallest addressable unit of storage was the byte, which was 8 bits, most memory was manufactured in multiples of 1,024 bits- 1,024 being the nearest power of 2 to 1,000 that was also divisible by 8. For the most part, rather than adhering strictly to the SI definitions of the prefixes, there was an industry convention that effectively indicated that, within the context of computer storage, the SI prefixes were binary prefixes.

For storage, for a time, the same conveniences applied, which resulted in total capacities measured in the same units. For example, a single-sided 180K floppy diskette had 512 bytes per sector, 9 sectors per track, and 40 tracks per side. That works out to 184,320 bytes; in today’s terms, with the standardized binary prefixes, that would be 180KiB.

360K diskettes had a similar arrangement but were double-sided, for 368,640 bytes- again, the prefix used in advertising was the binary one.

Same with 720K 3-1/2" diskettes: 512 bytes per sector, 9 sectors per track, 80 tracks per side, two sides. That’s 737,280 bytes, or 720KiB.

The IBM XT 5160 came with a drive advertised as 10MB in size. The disk has 512 bytes per sector, 306 cylinders, 4 heads, and 17 sectors per track. One cylinder is reserved for diagnostic purposes and unusable, which gives a usable CHS of 305/4/17. At 512 bytes per sector, that is 10,618,880 bytes of addressable space (actually more than 10MiB, as some defects were expected from the factory). The 20MB drive had a similar story: 615 (-1 diagnostic) cylinders, 4 heads, 17 sectors per track at 512 bytes a sector- 20.38MiB. The later 62MB drive was 940 (-1 diagnostic) cylinders, 8 heads, 17 sectors per track at 512 bytes per sector, which gives ~62.36MiB.

The "1.2MB" and "1.44MB" Floppy diskettes are when things started to get spitballed by marketing departments for ease of advertising and blazed an early trail for things to get even more misleading. The High density "1.2MB" diskettes were 512 bytes a sector, 15 sectors per track, 80 sectors per side, and double sided. That’s a total of 1,228,800 Bytes. or 1200 KiB, But they were then advertised as 1.2MB, Which is simply wrong altogether. It’s either ~1.7MiB, or it is ~1.23MB. it is NOT 1.2MB because that figure is determined by dividing the KiB by 1000 which doesn’t make sense. Same applies to "1.44MB" floppy diskettes, which are actually 1440KB due to having 18 sectors/track. (512 * 18 * 80 * 2=1474560 Bytes. That is either 1.47456MB, or 1.40625MiB, but was advertised as 1.44MB because it was 1440KiB (and presumably easier to write).

Hard drive manufacturers started to take it from there. First by rounding up a tiny bit- A 1987 Quantum LPS Prodrive advertised as 50MB was for example 49.87MB (752 cylinders, 8 heads, 17 sectors per track). I mean, OK- sure, 49.87 is a weird number to advertise I suppose…

It’s unclear when the first intentional, gross misrepresentation of HDD size happened- the first time the SI definition of the prefix was used to call a drive "X MB". It was a gradual change: people started to accept the rounding, and HDD manufacturers got bolder. Eventually one of them released an "X MB" drive that they KNEW full well people would interpret as X MiB, and when called out on it claimed they were using the "official SI prefix"- as if there wasn’t already a decades-old de-facto standard in the industry for how storage was represented.

For the most part, it is that persistent confusion that led to the official binary prefixes.

And yet- somewhat ironically- most OS software doesn’t use them. Microsoft Windows still uses the standard prefixes (with their binary meanings); as I recall, OS X provides the binary prefixes as an option. Older operating systems and software will never use them, as they won’t be updated.

The way I see it, HDD manufacturers have won. They are now selling drives listed as "1TB" which hold about 931GiB, but because that’s 1,000,000,000,000 bytes or somewhere close, it’s totally cool- they’re using the SI prefix.

Posted By: BC_Programming
Last Edit: 23 Oct 2018 @ 07:15 PM

 26 Sep 2018 @ 1:38 PM 

I have a feeling this will be a topic I will cover at length repeatedly, and each time I will have learned things since my previous installments. The Topic? Programming Languages.

I find it quite astonishing just how much polarization and fanaticism we can find over what is essentially a syntax for describing operations to a computer. A quick Google can reveal any number of arguments about languages: people telling you why Java sucks, people telling you why C# is crap, people telling you why Haskell is useless for real-world applications, people telling you that Delphi has no future, people telling you that there is no need for value semantics on variables, people telling you mutable state is evil, people telling you that garbage collection is bad, people telling you that manual memory management is bad, and so on. It’s an astonishing, never-ending trend. And it’s really quite fascinating.

Why?

I suppose the big question is- why? Why do people argue about languages, language semantics, capabilities, and paradigms? This is a very difficult question to answer. I’ve always felt that polarization and fanaticism are far more likely to occur when you only know and understand one programming language. Of course, I cannot speak for everybody, only from experience. When I only knew one language “fluently”, I was quick to leap to its defense. It had massive issues that I can see now, looking back, but which I didn’t see at the time. I justified omissions as being things you didn’t need or could create yourself. I called features in newer languages ‘unnecessary’ and ‘weird’. So the question really is, who was I trying to prove this to? Was I arguing against those I was replying to- or was it all for my own benefit?

I’m adamant that the reason for my own behaviour- and, to jump to a premature and biased conclusion, possibly the behaviour I see around other languages- was the feeling of being trivialized by attacks on the language I was using. Basically, it’s the result of programmers rating themselves based on what languages they know and use every day. This is a natural- if erroneous- method of measuring one’s capabilities. I’ve always been a strong proponent of the idea that it isn’t the programming language that matters, but rather your understanding of programming concepts and how you apply them- as well as not succumbing to the religious dogmas that generally surround a specific language design. (I’m trying very hard not to cite specific languages here.)

Programming languages generally have set design goals. As a result, they typically encourage a style of programming- or even enforce it through artificial limitations. Additionally, those limitations that do exist (generally for design reasons) are worked around by competent programmers in the language. So when the topic turns to their favourite language not supporting Feature X, they can quickly retort that “you don’t need Feature X, because you can use Features Q, P and R to create something that functions the same”. But that rather misses the point, I feel.

I’ve been careful not to mention specific languages, but here I go. Take Visual Basic 6- that is, pre-.NET. As a confession, for a very long time I was trapped knowing only Visual Basic 6 well enough to do anything particularly useful with it. Looking back- and having to support my legacy applications, such as BCSearch- I’m astonished by two things that are almost polar opposites. The first is simply how limited the language is. For example, if you had an object of type CSomeList and wanted to ‘cast’ it to an IList interface, there was no cast expression at all- you ‘cast’ by using Set to assign the object into a variable already declared as the target type.

These types of little issues and limitations really add up. The other thing that astonished me was the ingenuity with which I dealt with those limitations. At the time, I didn’t really consider some of these things limitations, and I didn’t think of how I dealt with them as workarounds. For example, I found the above casting requirement annoying, so I ended up creating a GlobalMultiUse class (which means all the procedures within are public); in this case the function might be called “ToIList()” and would attempt to cast the parameter to an IList and return it. Additionally, at some point I must have learned about exception handling in other languages, and I actually created a full-on implementation of exception handling for Visual Basic 6. Visual Basic 6’s error handling was, for those that aren’t aware, rather simple: you could basically say “On Error Goto…” and redirect program flow to a specific label when an error occurred. All you would know about the error is the error number, though. My “Exception” implementation built upon this. To throw an exception, you would create it (usually with an aforementioned public helper) and then throw it. In the Exception’s “Throw()” method, it would save itself as the active unwind Exception (a global variable) and then raise an application-defined error. Handlers were required to recognize that error number and grab the active exception (using GetException(), if memory serves). GetException would also recognize many error codes and construct instances of the appropriate Exception type to represent them, so in many cases you didn’t need to check for that error code at all. The result was that the usual On Error boilerplate- check the error number, branch accordingly- became something much closer to a structured Try/Catch, with handlers working against typed Exception objects instead of bare error numbers.

There was also a facility to throw inner exceptions, by using ThrowInner() with the retrieved Exception Type.

So what is wrong with it? Well, everything. The language doesn’t provide these capabilities, so I basically had to nip and tuck it to provide them, and the result is some freakish plastic surgery where I’ve grafted Exceptions onto somebody who didn’t want Exceptions. The fact is that, once I moved to other languages, I could see just how freakish some of the stuff I implemented in VB was. That implementation was obviously not thread-safe, but that didn’t matter, because there was no threading support either.

Looking forward

With that in mind, it can be valuable to consider one’s current perspectives and how they may be misguided by that same sort of devotion. This is particularly important when dealing with things you only have a passing knowledge of. Genuine experience with something puts you in a better position to recognize its flaws- but it’s all too easy to repaint features or aspects you haven’t used as flaws, just to feel wiser for not having used them sooner.

Posted By: BC_Programming
Last Edit: 26 Sep 2018 @ 01:38 PM

 28 Feb 2018 @ 9:59 PM 

Nowadays, game music is all digitized. For the most part, it sounds identical between different systems- with only small variations, and the speakers are typically the deciding factor when it comes to sound.

But this was not always the case. There was a time when computers were simply not performant enough- and disk space was at too high a premium- to use digital audio tracks directly as game music.

Instead, if games had music, they would typically use sequenced music. Early on there were a number of competing standards, but eventually General MIDI won out. The idea was that the software would instruct the hardware what notes to play and how to play them, and the synthesizer would handle the nitty-gritty details of turning that into audio you could hear.

The result of this implementation was that the same music could sound quite different because of the way the MIDI sequence was synthesized.

FM Synth

The lowest-end implementations used FM synthesis, typically found in lower-cost sound cards and devices. The instrument sounds were simulated via math functions, and oftentimes the approximation was poor- but that also gave the music a “unique” feel. Nowadays FM synth has become popular with enthusiasts of old hardware; cards built around the Yamaha OPL3 chip, for example, are particularly prized as a “good” sound option for DOS. In fact, the OPL3 has something of a cult following, to the point that source ports of some older games which use MIDI music will often incorporate “emulators” that mimic the output of the OPL3. It’s also possible to find “SoundFonts” for more recent audio cards that mimic the audio output of an OPL3, too.
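To give a sense of what “simulated via math functions” actually means: classic two-operator FM is just a sine-wave carrier whose phase gets wobbled by a second sine wave. A toy sketch of the idea (nowhere near OPL3-accurate- the real chip adds envelopes, feedback, and multiple operator algorithms):

// Toy two-operator FM synthesis: a sine carrier phase-modulated by a second sine wave.
// Changing the frequency ratio and modulation index is what produces the
// characteristic FM-synth timbres.
using System;

class TwoOperatorFm
{
    static float[] Synthesize(double carrierHz, double ratio, double modIndex, int sampleRate = 44100)
    {
        double modulatorHz = carrierHz * ratio;
        var samples = new float[sampleRate]; // one second of audio
        for (int n = 0; n < samples.Length; n++)
        {
            double t = (double)n / sampleRate;
            double modulator = Math.Sin(2 * Math.PI * modulatorHz * t);
            samples[n] = (float)Math.Sin(2 * Math.PI * carrierHz * t + modIndex * modulator);
        }
        return samples;
    }

    static void Main()
    {
        float[] tone = Synthesize(440.0, ratio: 2.0, modIndex: 3.0);
        Console.WriteLine($"Generated {tone.Length} samples; first sample: {tone[0]}");
    }
}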

Sample-Based Synth

Sample-based synthesis is the most common form of MIDI synthesis. Creative Labs referred to their implementation as “wavetable synthesis”, but that is not an accurate description of what their synthesizer actually does. A sample-based synthesizer has a sampled piece of audio from the instrument and adjusts its pitch and other qualities based on playback parameters. So, for example, it might have a sampled piece of audio from a tuba and then adjust the pitch as needed to generate other notes. This typically produces a “better”, more realistic sound than FM synth.
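That “adjust its pitch” step is, at its most naive, just resampling: to play a recorded note some number of semitones higher, you step through the sample faster by a factor of 2^(semitones/12). A bare-bones sketch of that idea (a real synthesizer layers looping, envelopes, and filtering on top):

// Naive sample-based pitch shifting by resampling with linear interpolation.
using System;

class NaiveSamplePitcher
{
    static float[] Repitch(float[] source, double semitones)
    {
        double step = Math.Pow(2.0, semitones / 12.0); // playback-rate ratio
        var output = new float[(int)(source.Length / step)];
        for (int i = 0; i < output.Length; i++)
        {
            double pos = i * step;
            int index = (int)pos;
            double frac = pos - index;
            float next = index + 1 < source.Length ? source[index + 1] : source[index];
            output[i] = (float)((1 - frac) * source[index] + frac * next);
        }
        return output;
    }

    static void Main()
    {
        // Stand-in for a recorded instrument note; a real player would load a sample from disk.
        var recorded = new float[44100];
        for (int n = 0; n < recorded.Length; n++)
            recorded[n] = (float)Math.Sin(2 * Math.PI * 110.0 * n / 44100.0);

        float[] upAFifth = Repitch(recorded, 7); // seven semitones up, so shorter and higher
        Console.WriteLine($"{recorded.Length} samples in, {upAFifth.Length} samples out");
    }
}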

Wavetable Synthesis

Wavetable synthesis proper is a much more involved form of synthesis- like FM synth on steroids. Where FM synth tended to use simple waveforms, wavetable synthesis attempts to reproduce the sound of instruments by modelling them with a large number of complicated math functions and calculations, as well as by mixing numerous pieces of synthesized audio together to create a believable instrument sound. I’m not personally aware of any hardware implementations- though, not being anything of a music expert, I’m sure there are some- but software implementations tend to be present as plugins or features of most music creation software.

Personally, I’m of the mind that the best sample-based synthesis is better than the FM synth that seems to be held up on a pedestal; those were lower-end cards built down to a price, which is why they used the much simpler FM synthesis approach in the first place. That unique audio is what a lot of people heard while playing games on that sort of low-end audio hardware, so to a lot of people, FM synth is how games like Doom or Monkey Island are “supposed” to sound. I think sample-based synth sounds better- but, on the other hand, that is how I first played most of those games, so I’m really just falling into the same trap.

Posted By: BC_Programming
Last Edit: 28 Feb 2018 @ 09:59 PM

 30 Jan 2018 @ 9:02 PM 

Software and computer security has always been a rather important topic. As our systems become more interdependent and connected- and we expose ourselves and our important information more and more- it is becoming even more important. Operating system and software updates are issued to address security problems, and these problems are given the utmost importance, with users urged to install the updates as soon as possible. Many operating systems- such as Windows 10- disable or restrict the ability to prevent updates (it seems to require Pro to adjust the settings so that updates only happen when the user initiates them, for example). This is considered by many to be a positive change, the idea being that it will prevent systems from being compromised through those security exploits.

And, certainly, that is true. Installing security patches will, obviously, prevent the exploits they resolve from being used for malicious purposes. However, I think the impact those exploits have on your typical end user has been overstated.

Based on my own experience, I am adamant that the vast majority of end-user malware infections are not perpetuated or contributed to in any notable way by the sort of issues resolved by security updates. Those updates are more applicable to servers, data centers, and corporate environments. On end-user PCs, it is seldom that you find a malware infection that was not caused in some way by trojan-horse malware — something the user explicitly downloaded and ran themselves, which had the unintended side effect of releasing malware onto their system. Pirated software, game mods, “keygens”, and so on and so forth; screensavers, greeting-card executables, applications that disguise themselves as images. Something as seemingly innocuous as an aftermarket Windows theme could very easily contain an unwanted payload, and it won’t matter whether the system is fully up to date or not if you allow it to install.

The Security Circus

I call the general concept of overstating those concerns the “security circus”. It infects certain styles of thinking and makes itself a self-perpetuating concept over time. As an example scenario, a user may come to an IT repairperson with issues on their PC, and it may be determined that those issues are caused by malware. The “security circus” contribution to this scenario could be that the repair person discovers the system is out of date and missing a number of critical security updates. Because they have learned, over time, that security updates are critical and prevent infections, they may — and very often do — assume that the malware made its way onto the PC via that infection vector. Over time these occurrences pile up, and that particular IT person can state, without any intention of lying, that they have seen plenty of systems that were compromised through vulnerabilities- even though, realistically, they don’t actually know whether the vulnerabilities were even responsible.

The “acts” of this security circus seem largely to revolve around Fear, Uncertainty, and Doubt being spread and manipulated. Coincidentally, I notice that oftentimes these efforts work in favour of a number of corporate interests. Forced OS updates, for example, benefit the OS manufacturer, particularly as updates may very well deliver any number of other pieces of software which provide “diagnostic” information that can be used by that company for marketing efforts. Security updates benefit security firms and security software vendors, whose products are used to “prevent” the problems until that critical patch arrives to fix the issue, or who release security “scanners” which analyze and report whether a system is susceptible to the vulnerability.

Some recent security scares come to mind when I think about the “security circus”.

Wannacry

The Wannacry ransomware provides some good examples of the operation of this “security circus”. Articles and postings on the issue are often decidedly vague about the extent of the vulnerability that drives it, and often overstate its capability; users are urged to update as soon as possible, and in some cases I’ve seen it argued that the vulnerability allows the malware to be installed over the Internet.

The reality, however, is that Wannacry had a distribution method that could exploit a vulnerability in SMBv1 in order to spread to other systems accessible on the LAN from an infected system. This means that, on a network with vulnerable systems, those systems will spread the infection once one of them gets infected; however, that “patient zero” cannot be infected remotely. Wannacry would still only be installed on, and infect, a “patient zero” LAN system through some other infection vector- and that infection vector was almost certainly trojan-horse malware of some description.

Which is not, of course, to understate that that is certainly a concern. If Little Jimmy runs an infected game mod installer and his system gets infected, other vulnerable computers on the same network would eventually be compromised. However, I think the critical thing is not the security updates but, in that scenario, the user education needed to avoid installing malicious software to begin with. In that scenario, for example: why did Little Jimmy trust the game mod installer? Should Little Jimmy even have user permissions to install software? What sort of education can be provided so that users with “vulnerable” habits can adjust those habits and avoid problems? Installing security updates, security software, firewalls, etc. is, IMO, largely a way of avoiding those questions- and unfortunately it pairs poorly with them, because a user with “vulnerable” habits is often the sort who will happily disable their anti-virus when, say, a game modification installer says it is a “false positive”, or who will happily grant administrator permissions — if they can — to applications based on promised functionality.

Game “cheat” software often takes that approach: making a promise and then requesting elevation with that promise in mind is enough to convince some users to “take a chance”. These same “vulnerable” users are also susceptible to phishing scams, or to software that steals login information for online accounts. A specific example would be the simple applications which claim to “give OP” to a Minecraft player. All you need to do is give your username and password! But of course it does not give you OP; instead it simply e-mails your login information to a specified account. It doesn’t work, the user deletes the program, but perhaps never thinks about changing their login information, because as far as they know the program just “didn’t work”- until one day they cannot log in. Or, for MMOs, they suddenly find their character is poorly equipped, or perhaps banned, because it was used for some nefarious in-game activity.

Speaking for myself, aside from the occasional Malwarebytes scan, I don’t run any sort of background AV software or firewall. In fact, I disable the built-in Windows software that provides those features. To my knowledge, I’ve not been infected in over 10 years. And even then, what I was infected with- Virut/Sality- wasn’t being picked up by any security software, even fully updated. Since then I’ve had systems that lacked important security updates magically not get infected in the ways the aforementioned “security circus” would have me believe. It seems — at least from where I am standing — that the implications of security vulnerabilities for your typical end-user system are vastly overstated, and the focus on them as the means to prevent people from getting infected may be a focus on the wrong area. Instead, users should receive education so that their “vulnerable” habits can be eliminated, or at the very least so that they are aware of them. Better education about computer systems in general can help as well; knowing the difference between an svchost.exe where it should be and an svchost.exe where it shouldn’t be can make it possible to identify unwanted software even if the installed security software isn’t picking it up.

Spectre/Meltdown

Another topic of some interest that has taken the security world by storm is the Meltdown and Spectre speculative-execution security problems found in many microprocessors (Meltdown being specific to Intel chips). These security concerns relate to the ability to access memory that should otherwise be unavailable, by exploiting speculative execution and cache behaviour. Meltdown involves carefully crafted machine language instructions that trick speculative execution into providing access to small pockets of memory that would not otherwise be accessible- kernel-mode pages that are mapped into the application’s virtual address space. Spectre functions similarly, but uses carefully crafted machine code to perform various memory operations and carefully measure them, in order to guess at the data found in certain areas of those kernel-mode pages within the process’s virtual address space.

I feel, again, that the security circus has somewhat overstated the dangers involved with these security problems. In particular, it is too common to see statements that this could be exploited through web-based code, such as Javascript- which would itself require escaping the Javascript sandbox, something with wider security implications anyway. Additionally, it seems to presume that this could be used to steal web or system passwords, when realistically it will only enable viewing tiny pockets of driver-related process memory, and things like ASLR could very easily mitigate any directed attack looking for specific data.

But, the reality hardly sells articles, and certainly doesn’t sell security software- which, I might add, by sheer coincidence tends to be either a sponsor or major advertiser for many of the wider publicized security publications. Coincidence, no doubt.

Posted By: BC_Programming
Last Edit: 31 Jan 2018 @ 07:51 PM

 09 Dec 2017 @ 12:27 PM 

Winamp is a rather old program, and to some people it represents a bygone era- the late 90’s and early 2000’s in particular. However I’ve not found any “modern” software that compares. There is plenty of software- MediaMonkey, MusicBee- etc which attempts to mimic Winamp, or provides the same general capability of managing a local music library, but they either don’t support Winamp Plugins, don’t work properly with many such plugins- or, most importantly, don’t add anything.

Not adding anything is the important one here. At best, I’m getting the same experience as I do with Winamp, so I’m not gaining anything. People ask, “Why don’t you switch” and the default answer is “Why should I?” If the only reason is because what I am currently using is “outdated” and no longer cool, then maybe I should stick with it because we have something in common.

Typically, I’m losing functionality, though. With Winamp I’ve got everything set up largely how I want. More importantly, it spans not only my FLAC and MP3 music files; my music library also incorporates various video game music formats for various systems, with complete audio libraries for any number of game titles that I can pull up easily. These are native formats which are much smaller than those tracks would be if encoded as MP3 or FLAC, and since they are native formats they rely on Winamp plugins, which provide additional features for adjusting audio playback. These plugins simply don’t exist for, or don’t work with, modern software, so I’d have to relegate those video game music formats to specific, individual players if I were to switch to, say, MusicBee for my local music library.

Nowadays, even the concept of a local audio library is practically unheard of. People “listen to music” using streaming services or even just via YouTube videos, and typically it is all done on a smartphone, where storage space tends to be at a greater premium as well. I find that I detest playing music on my phone (a Nexus 6), simply because there is no good software for managing music saved to local storage, and it gets awful battery life when used this way. This is why I use an older 16GB Sony Walkman MP3 player instead; the battery can probably manage a good continuous 48 hours of playback, and it is much more compact than the phone. And even if this means an extra piece of “equipment” when I go somewhere, it means I’m not wasting my phone’s battery life to play music.

Recently, I had the need to do something that is nearly as “outdated” as the program I elected to do it with: burning an audio CD. I’ve found this to be the easiest way to transfer music to my original Xbox console to create custom soundtracks (a feature which seems to be unique among consoles altogether). So I popped in a CD-RW, opened Winamp, clicked on the CD recorder… and got a BSOD: DPC_WATCHDOG_VIOLATION.

Well, that isn’t supposed to happen. After determining it was reproducible, I looked further into it. In particular, I found that within the Current Control Set, my CDROM hardware had a LowerFilters driver specified: PxHlpa64. So I set about finding out what that was.

I found that PxHlpa64 is a driver by “Sonic Solutions” which is used by some CD recording software. I couldn’t find any such software installed, so I merely renamed the affected value and rebooted. The problem went away and everything was as it should be (and I subsequently wiped out the directory containing the driver file). I suspect I had previously installed a program which used the driver, and its uninstaller didn’t remove it, for any of a number of reasons.
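For anyone who wants to check their own system for the same sort of thing: the filter entries live under the optical drive device class key, and reading them only takes a few lines. A quick sketch (the GUID below is the standard CDROM device class; export the key before changing anything):

// List any UpperFilters/LowerFilters drivers registered for the CDROM device class.
using System;
using Microsoft.Win32;

class CdromFilterCheck
{
    static void Main()
    {
        const string classKey =
            @"SYSTEM\CurrentControlSet\Control\Class\{4D36E965-E325-11CE-BFC1-08002BE10318}";

        using (RegistryKey key = Registry.LocalMachine.OpenSubKey(classKey))
        {
            if (key == null) { Console.WriteLine("CDROM class key not found."); return; }

            // Both values are REG_MULTI_SZ lists of filter driver names (e.g. PxHlpa64).
            foreach (string valueName in new[] { "UpperFilters", "LowerFilters" })
            {
                var filters = key.GetValue(valueName) as string[];
                Console.WriteLine($"{valueName}: " +
                    (filters == null ? "(not set)" : string.Join(", ", filters)));
            }
        }
    }
}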

One of the advantages of having a bit of an idea of what is going on with Windows (or any OS, really) is that you can more intelligently attempt to solve these sorts of unexpected problems when you encounter them. Since I was aware of issues involving optical drives and driver “filter” settings, I was able to find and fix the cause of my issue fairly quickly.

Posted By: BC_Programming
Last Edit: 09 Dec 2017 @ 12:27 PM

 30 Oct 2017 @ 6:52 AM 

A couple of weeks ago, I thought it would be neat to get a computer similar to my first PC; which was a 286. I’d actually been considering the prospect for some time, but the prices on a DTK 286 (DTK was the brand I had) were a bit high. However I stumbled on a rather cheap listing for a DTK 286 PC; it wasn’t identical to the one I had but was a similar model, which had a slightly reduced case size but seemed otherwise the same, so I snapped it up.

It arrived a little worse for wear from the journey- the front of the case, which was attached via plastic standoffs screwed into the metal case itself, had all of those plastic snaps come off. However, this shouldn’t be too much of a problem, as I’m sure I can get it to stay attached for presentation purposes.

When I opened it up to see if anything else had been damaged, I found the network card was out of its slot, so I pushed it in. Then I noticed the slot was PCI. 286 systems had 8-bit and 16-bit ISA, so already I knew something was up. The fact that the processor had a heatsink and sat in a Socket 7 meant this was clearly not a 286 system.

Instead, the system is a Pentium 133 (non-MMX) on Socket 7, with 64MB of RAM, a 900MB hard drive, an ATI Mach64, and 10/100 Ethernet. The floppy diskette drive wasn’t working correctly, so I swapped it for one of my other floppy drives. I also attached one of my CD-RW drives so I could burn data discs and install programs onto the Windows 95 install that was running on the system.


Now, arguably this could be a claim to make against the seller, but I think it was sold this way by accident. It seems to be using a specialized industrial motherboard intended to be placed in these sorts of Baby AT cases- I don’t think a standard consumer board had Socket 7 alongside the large, older keyboard DIN connector. The motherboard is apparently quite uncommon, more so with Socket 7 rather than Socket 5. It also has a motherboard cache “card” installed, which doesn’t look to be particularly difficult to find, but goes for about half what I paid for the entire unit. The motherboard is unusual in that it seems to be missing things such as shrouds around the IDE connectors, and has no serial number listed where one is specified in the center of the board.

My original intent was to fiddle with MS-DOS and Windows 3.1, and realistically this Pentium system could work for that purpose; I have a few older IDE hard drives I could swap in and set up a dual-boot between MS-DOS/Windows 3.1 and Windows 95. The Mach64 is an older card but is well supported on both Windows 95 and Windows 3.1 as well as MS-DOS, so it seems like a good fit. It only has 1MB of RAM, so higher resolutions drop the colour depth- 1024×768 is only doable in 256-colour modes, for example- and I might want to get some DIP chips to upgrade the VRAM, as it has two empty sockets. (It might be cheaper, ironically, to just get another Mach64 with the chips already installed, which is odd.) I was also able to add a Creative AudioPCI card I had lying around without too much hassle, though there are better options for ideal MS-DOS and Windows 95 audio that I might explore later. My main limitation so far is the lack of a PS/2 connector for a mouse, and I don’t have a serial mouse- so I found an old InPort mouse with a serial adapter on eBay to serve that purpose, as having a mouse would be nice.

One thing I was struck by- much as with things like the iMac G3 I wrote about previously- is that, despite being quite old, it still performs rather well with things like Office 97. Basically, it just proves my theory that if you fit your software choices to the hardware, old hardware is still quite capable. I could write up documents in Word or create spreadsheets in Excel without too much bother and without really missing anything available on a newer system, and the system works well with most older MS-DOS games too. Older titles are helped by the turbo switch, which oddly doesn’t actually do anything with the button itself, but uses Control-Alt-Minus and Control-Alt-Plus to change the speed, with the turbo light changing accordingly (it switches between 133MHz and 25MHz, making the latter about equivalent to a fast 386).

I might even experiment with connecting it to my network- perhaps even try to get Win95 working with shared directories from Windows 10, which would be rather funny. (Though I suspect I might need to open up security holes like SMBv1 to get that working….)

Posted By: BC_Programming
Last Edit: 30 Oct 2017 @ 06:52 AM

 07 Aug 2017 @ 6:24 PM 

A while ago, it came out that Microsoft Paint would be deprecated going forward on Windows 10, replaced, instead, with Paint 3D. There have been loads of articles, forum threads, and general griping about this across the Internet. Nonetheless, Paint is hardly the first “casualty” of Windows as it has moved forward; nor is its loss, realistically, a big one.

A History

“Paint” has existed in some form or another dating back to the original Windows release. Like many parts of Windows, it was based on an existing product, but stripped down; in this case, Windows Paintbrush was effectively PC Paintbrush 1.05 for Windows, stripped down so as not to compete with the full product.

Windows 1.04

Paint on Windows 1.04

Aside from a smaller set of tools, it appears that another limitation of the included program is that it can only work with monochrome bitmaps. For the time period, that isn’t a surprising limitation though- The Apple Macintosh’s MacDraw program had a similar color limitation.

Windows /286

PAINT running on Windows /286

Windows /286 didn’t change the included PAINT program very much- I wasn’t able to find any significant differences myself, at least; it seems to have the same limitations. I wasn’t able to get Windows /386 working, however I presume PAINT is the same program between them, given that the major difference is enhancements for the 386.

Windows 3.0

Paintbrush running on Windows 3.0

It was with Windows 3.0 that PBRUSH was effectively created. While still seeming to be based largely on PC Paintbrush, the Windows 3.0 version, aside from changing the program title to “Windows Paintbrush” from “PAINT” as well as the executable, also redesigned part of the User Interface. Interestingly, this interface is more similar to the more complete PC Paintbrush product as provided on Windows /286, but of course it did not provide the full toolset of the commercial product either.

Windows 3.1

Paintbrush on Windows 3.1

PBRUSH didn’t see any significant changes from Windows 3.0. It still had a number of annoying limitations that plagued previous releases; in particular, tools couldn’t work with data outside the visible canvas. This meant you couldn’t even paste a screenshot into the program- It would be cropped. You can see this below- this is after performing a floodfill on the outer area of the above, then scrolling down- the exposed canvas was not affected by the operation.

Win 3.1 Paint floodfill failure

Windows 95

MSPaint on Windows 95

Windows 95 saw PBRUSH deprecated in favour of MSPAINT- not just deprecated, mind you, but altogether removed; however, you could still invoke PBRUSH, thanks to the new “App Paths” feature of Windows. This capability persists to this day: as in Win95, there is no PBRUSH.EXE in Windows 10, but running PBRUSH will start MSPaint, as it has since Windows 95. The new Windows 95 version of Paint is now “Microsoft Paint” rather than “Windows Paintbrush”, and sports a new executable as well. It also redesigns the interface to adhere to the new “3D” style that Windows 95 introduced, and makes use of other Windows features that had been enhanced. For example, while you could edit colors in the older Windows Paintbrush, that program used a set of three sliders for the customization; Windows 95 added a new Custom Color dialog, which Microsoft Paint made use of for customizing the palette entries. Thanks to how that dialog worked, you could save several custom colors outside of the normal palette and swap between them, too. It also adds a status bar, which was coming into its own as a convention with Windows 95; this included “tip” text appearing on the left as well as other information appearing in additional panes on the status bar.

Windows 98

MSPaint on Windows 98SE

Windows 98’s release of Microsoft Paint seems to have removed the ability to load and save Custom Colour Palettes. Additionally, it also dropped the ability to save to the .PCX format, while gaining the ability to use certain installed image filters, allowing it to save to .PNG for example, if certain other software is installed.

Windows ME

MSPaint on Windows ME

The Windows ME version of MSPaint appears to be identical to the Windows 98SE Version, however, the executables are not identical- I’m not sure what difference there might be beyond the header indicating it is for a different Windows Version, though. It’s here for completeness.

Windows 2000

MSPaint on Windows 2000

Another entry for completeness as, like Windows ME, I cannot find any differences between it and the Windows 98SE release of MSPaint.

Windows XP

MSPaint on Windows XP

Windows XP introduced a few major revisions to MSPaint. First, it could acquire images from a scanner or any TWAIN device (such as a digital camera). Moreover, it now had native support for the JPEG, GIF, TIFF and PNG file formats, without any additional software installs.

Windows Vista

MSPaint running on Windows Vista

The Windows Vista release of Paint changes the default colour palette, has a set of new tool icons, and reorganizes some of the UI (the colour palette is moved, for example). It changes the undo stack to 10 levels deep rather than 3, and saves to JPEG by default- which suggests that it was largely intended, or expected, to be used for acquiring and saving photos.

Windows 7

MSPaint as included in Windows 7.

Windows 7 is another major overhaul of the program, on the same level as the change from the PaintBrush program in Windows 3.1 to MSPaint in Windows 95. This redesigns the interface around the “Ribbon” concept, and adds a number of capabilities, brushes, and a few tools. It also now has anti-aliasing.

Windows 8

This version is pretty much identical to the Windows 7 release; though there are some minor adjustments to the Ribbon.

Future

Microsoft Paint is now deprecated, but this doesn’t prevent you from using it; even when it is removed from the default installation, it will still be made available as a free download from the store. You can also copy/paste a version of paint from a previous Windows 10 install to avoid dealing with an appx container file or any tracking that comes with using the Windows Store, if desired. I think the fuss over this change is a bit of an overreaction. There are plenty of other free programs that can accomplish the same tasks and while it is a bit annoying to have to download them, Windows will still include Paint 3D which should be capable of the same standard tasks people want the older Paint program for, such as screenshots.

The old PBRUSH application running on Windows 10. It’s a Miracle.

What is this witchcraft? Windows NT 3.51 was 32-bit, but was based around Windows 3.1, so it got a 32-bit version of the same old PBRUSH program from Windows 3.1. That can be copied from an NT 3.51 install and run directly on Windows 10. Pretty interesting- Though of arguably limited usefulness, beyond putting it at the end of blog posts to pad out the length for no reason.

Posted By: BC_Programming
Last Edit: 07 Aug 2017 @ 06:24 PM

