10 Jun 2018 @ 10:23 AM 

It occurred to me this past week that I don't really have a proper system in place for software downloads here on my website, nor for build integration with source control so that projects are built as needed when commits are made. Having set up a Jenkins build environment for the software I work on at my job, I thought it reasonable that I make the same demands of myself.

One big reason to do this, IMO, is that it can actually encourage me to create new projects. I find that the prospect of packaging up the result and making it easily accessible or usable is often a demotivator for starting something new. Having an established "system" in place whereby I can push changes to GitHub and have, say, installer files "appear" on my website as needed can be a motivator- I don't have to build the program, copy files, run installation scripts, and so on manually every time; I just configure it all once and it "works" by itself.

To that end, I've set up Jenkins appropriately on one of my "backup" computers. It's rather tame in its capabilities- only 4GB of RAM and an AMD 5350- but it should get the job done, I think. I would use my QX6700-based system, but the AMD system uses far less power. I also considered putting Jenkins straight on my main system, but thought that could get in the way and just be annoying. Besides, this gives that system a job to do.

With the implementation for work, there were so many interdependent projects- and we pretty much always want "everything"- that I just made it a single Jenkins project which builds everything at once. This way everything is properly up to date. The alternative was fiddling with 50+ different projects and figuring out the appropriate dependencies to rebuild whenever other projects were updated- something of a mess. Not to mention it's all in one repository anyway, which works against that idea as well.

In the case of my personal projects on GitHub, they are already separate repositories, so I will simply have them built as separate projects; since Jenkins itself understands upstream/downstream relationships, I can use those as needed.

I've successfully configured the new Jenkins setup and it is now building BASeTris, a Tetris clone game I decided to write a while ago. It depends on BASeScores and Elementizer, so those two projects are in Jenkins as well.

BASeTris’s final artifact is an installer.

But of course, that installer isn't much good just sitting on my CI server! However, I also don't want to expose the CI server as a "public" page- there are security considerations, even if I disregard upload bandwidth issues. To that end, I constructed a small program which uploads files to my website using SSH. It runs once a day and is given a directory; it looks in all the immediate subdirectories of that directory, gets the most recent file in each, and uploads it to a corresponding remote directory if it hasn't already been uploaded. I configured BASeTris to copy its final artifact into an appropriate folder there.
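The uploader itself is nothing fancy. The sketch below shows the general logic (illustrative only- it assumes the SSH.NET library, and the host, credentials, and paths are placeholders rather than what the real utility uses):

using System;
using System.IO;
using System.Linq;
using Renci.SshNet;

class ArtifactUploader
{
    static void Main()
    {
        // Placeholders; the real tool reads these from configuration.
        string localRoot = @"C:\CI\Artifacts";
        string remoteRoot = "/var/www/downloads/CI";

        using (var sftp = new SftpClient("example.com", "uploaduser", "password"))
        {
            sftp.Connect();
            foreach (string dir in Directory.GetDirectories(localRoot))
            {
                // Most recent artifact in this project's folder.
                FileInfo newest = new DirectoryInfo(dir).GetFiles()
                    .OrderByDescending(f => f.LastWriteTimeUtc)
                    .FirstOrDefault();
                if (newest == null) continue;

                string remoteDir = remoteRoot + "/" + Path.GetFileName(dir);
                string remotePath = remoteDir + "/" + newest.Name;

                if (!sftp.Exists(remoteDir)) sftp.CreateDirectory(remoteDir);
                if (sftp.Exists(remotePath)) continue; // already uploaded

                using (FileStream stream = newest.OpenRead())
                    sftp.UploadFile(stream, remotePath);
            }
            sftp.Disconnect();
        }
    }
}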

Alternatively, it is possible to configure each project to upload its artifacts via SSH as a post-build step. However, I opted not to do that, because I would rather not have a series of changes throughout the day result in a bunch of new uploads- those would consume space and not be particularly useful. Instead, all the projects I want to upload are uploaded once a day, and only if there have been changes. This should help reduce redundancy (and space usage) of those uploads.

My "plan" is to have a proper PHP script or something that can enumerate the folders and provide a better interface for downloads. If nothing else, I would like each CI project's folder to have a "project_current.php" file which automatically sends the latest build- then I can simply link to that on the blog download page for each project and only update the page to describe new features or content.

As an example, http://bc-programming.com/downloads/CI/basetris/ is the location that will contain BASeTris version downloads.

There is still much work to do, however. The program(s) do have git hash metadata added at build time, so they have access to their git commit hash, but currently they do not actually present that information. I think it should, for example, be displayed in the title bar, alongside other build information such as the build date, if possible. I've tried to come up with a good way to have the version auto-increment, but I think I'll just tweak that as the project(s) change.
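Surfacing it would be fairly little code. As a sketch- assuming the build stamps the commit hash into the AssemblyInformationalVersion attribute, which is only one of several ways it could be embedded- something like this could decorate a form's title bar:

using System;
using System.IO;
using System.Reflection;
using System.Windows.Forms;

static class BuildInfo
{
    // Assumes the commit hash was stamped into AssemblyInformationalVersion at
    // build time; adjust to wherever the metadata actually lands.
    public static void ApplyTo(Form form)
    {
        Assembly asm = Assembly.GetExecutingAssembly();
        string hash = asm.GetCustomAttribute<AssemblyInformationalVersionAttribute>()
                         ?.InformationalVersion ?? "unknown";
        // Approximate the build date from the assembly file itself.
        DateTime buildDate = File.GetLastWriteTime(asm.Location);
        form.Text = $"{form.Text} - {hash} ({buildDate:yyyy-MM-dd})";
    }
}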

Heck- the SSH uploader utility seems like a good candidate for yet another project to add to GitHub, if I can genericize it so it isn't hard-coded for my site and purpose.

Posted By: BC_Programming
Last Edit: 10 Jun 2018 @ 10:23 AM


 06 Jun 2018 @ 7:46 PM 

Flash memory, like anything, is no stranger to illegitimate products. You can find 2TB flash drives on eBay for 40 bucks, for example. These claim to be 2TB and show up as 2TB- but try to write data beyond a much smaller size and the flash data is corrupted, because the drive actually writes it to an earlier location. My first experience with this was actually with my younger brother's GameCube system; when he got it, he also got two "16MB" memory cards (16 megabit, so 2 megabytes). However, they would rather frequently corrupt data. I suspect, looking back, that it was much the same mechanism- the memory card was "reporting" as larger than it was, and writing beyond the end was corrupting the information on it.
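The usual way to expose this sort of fake is to fill the drive with data that records its own position, then read it all back and see where the record stops matching; tools like h2testw work on roughly this principle. A simplified sketch of the idea (the drive letter and sizes are placeholders, and a real test needs to fill the whole drive and avoid the OS cache):

using System;
using System.IO;

class CapacityCheck
{
    static void Main()
    {
        string testFile = @"F:\capacity_test.bin"; // a file on the suspect drive
        const int blockSize = 1024 * 1024;         // 1 MiB blocks
        const long blockCount = 64;                // a real test uses all the free space

        byte[] block = new byte[blockSize];

        // Write numbered blocks; each block carries its own index at the start.
        using (var fs = File.Create(testFile))
        {
            for (long i = 0; i < blockCount; i++)
            {
                BitConverter.GetBytes(i).CopyTo(block, 0);
                fs.Write(block, 0, block.Length);
            }
        }

        // Read them back; on counterfeit flash, later blocks come back with the
        // wrong index because the controller wrapped around and overwrote data.
        using (var fs = File.OpenRead(testFile))
        {
            for (long i = 0; i < blockCount; i++)
            {
                int read = fs.Read(block, 0, block.Length);
                long stamp = BitConverter.ToInt64(block, 0);
                if (read < block.Length || stamp != i)
                    Console.WriteLine($"Block {i} reads back as {stamp} - real capacity ends before here.");
            }
        }
    }
}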

This brings me to today. You can still find cheap memory cards for those systems which claim sizes such as 128MB. Even at the "real" 128 megabit size, that's still 16MB, which is quite substantial. I've recently done some experiments with 4 cheap "128MB" GameCube memory cards that I picked up. Some of the results are quite interesting.

First, I should note that my "main" memory cards for the system are similar cheap cards I picked up online 12 years ago or thereabouts. One is a black card that simply says "Wii/NGC 128 MEGA" on it; the other is a KMD-brand 128MB. The cheap ones I picked up recently have the same case as the KMD and, internally, look much the same, though they feel cheaper; they are branded "HDE". Now, for the older ones, I'm fairly sure they are legitimate, but not 100%- the flash chips inside are 128 megabit, and one is even 256 megabit. (Of course this means "128 MEGA" and "128MB" actually mean 16MB and 128 megabits respectively, but whatever.)

Since the 4 cards were blank, I decided to do a bit of experimenting with a little program called GCMM, or GameCube Memory Manager. This is a piece of homebrew that lets you do pretty much whatever you want with the data on memory cards, including making backups to an SD card, restoring from an SD card, copying any file between memory cards, and so on. The first simple test is easy- just do a backup and a restore. It shouldn't matter too much that the card is blank. I backed up the new card no problem. However, when I tried to restore it, I got a write error at block 1024. This is right at the halfway point. No matter what, I couldn't get past that point for any of the "new" cards. This indicates to me that the card(s) are actually 8MB cards, with approximately 1024 blocks of storage. What a weird "counterfeit" approach- 8MB is already a rather substantial amount of space, so why "ruin" the device by having it report the wrong size and allow data corruption? I found that I could make raw restores succeed if I pulled the card out during the restore process right before it reached block 1024.

This discovery is consistent with what I understand of counterfeit flash- the controller will basically write to earlier areas of the memory when instructed to write beyond the "real" size, usually overwriting things like file system structures, so the card then needs to be reformatted. Interestingly, if I pull the card out before the restore gets there, everything written up to that point is intact. Something else interesting turned up when I looked inside the raw dump I originally created from one of the "new" cards. The file system itself was clean, but old data remains in the memory and was still there for viewing. I could see that Wrestlemania 2002 was probably used for testing the card at some point, as there was "w_mania2002" in the raw data, along with a number of other tidbits referencing wrestlers that appeared in that game. What I found much more interesting, however, were a number of other strings: "V402021 2010-06-08" suggests a date the card might have been manufactured. "Linux-2.6.23.17_stm23_A18B-pdk71"... Now this is interesting! Linux was involved in some way? That wouldn't be surprising if the card was produced with some sort of embedded system, but it doesn't make a lot of sense for it to appear in the memory card data itself. Similarly, I found various configuration settings:

NetType=1
Language=0
Em10Mode=0
ConTimeout=30
ProductID=00100199007011400002D0154ADF986E
Licence=222
ServiceUser=0512200052225
ServicePwd=282026
PppoeUser=0512200052225@vod
PppoePwd=282026
DHCPPUser=0512200052225@vod
DHCPPPw=282026
IpAddr=192.168.1.12
NetMask=255.255.0.0
GateWay=
DNS=
MacAddr=D0:15:4A:DF:98:6E
Volume=100
TimeZone=8
ProxyFlag=0
AcceptCookie=99

Given all the network information, WLAN IDs, and so on, my suspicion is that these flash chips are not actually new, but were pulled from some sort of networking device, such as a router or switch. This is supported by the fact that googling a few of the configuration settings always seems to lead me to some sort of Chinese ADSL provider, so I suspect the flash chips were re-used from old networking equipment. That, in itself, adds another concern about these memory cards- if the chips were used before they found themselves in these memory cards, how much were they used? And how? Were they used to contain firmware, for example? Or were they used to hold a small file system for the networking device?

Overall, for something so seemingly mundane, I found this to be a very interesting distraction, and perhaps this information could prove useful or at least interesting to others.

Posted By: BC_Programming
Last Edit: 06 Jun 2018 @ 07:46 PM

Categories: Programming

 08 Apr 2018 @ 9:45 PM 

I've seen, unusually, a few discussions revolving around Apple products which proceed from the assumption that the original 128K Macintosh was a failure. I found this intriguing. Specifically, I've seen it said to have failed because it was expensive, underpowered, incompatible with the IBM PC, and didn't have much memory, and that Apple would have had more success if they had released the Macintosh OS as a desktop environment on top of Windows. I think this argument comes from a lack of understanding of the early computer ecosystem- not to mention that many of the points are simply incorrect.

One of the bigger draws of the Macintosh was that it was actually fairly affordable for what it provided. At $2,499, the original Macintosh 128K was cheaper than an IBM PC equipped with 128K of memory by nearly $1,000- and that is compared to a base model with an MDA adapter and no monitor. Realistically, the only systems more affordable than the Macintosh at the time were systems like the Apple II, TRS-80, and Commodore 64.

The system was quite cutting-edge for the time period. The 512×342 display was black and white, and didn't have the resolution or colours of the EGA adapter available for the IBM PC, but it didn't cost anything extra either. Additionally, since it was in every Macintosh, it was naturally something that all Macintosh software- at least for a time- was designed for. Unlike MS-DOS applications, there was no need for special BGI drivers or display card selections and options. Another advantage was that the graphics operated through DMA, meaning some processing was off-loaded from the otherwise anemic CPU. This was one of the factors that made desktop environments on the IBM PC slow until video accelerator cards appeared- it is no coincidence, I think, that Windows only truly started to take off after video accelerator cards arrived on the market.

Compatibility is something we take for granted today, even between otherwise disparate systems. You can plug in your smartphone and transfer files to your PC or laptop without much fuss, for example, or share flash drives or burned optical media between different systems with ease. In 1984, this simply wasn't the case; there was very little in the way of standardization, and while many systems used the same physical floppy diskettes, they seldom used compatible file systems. The TRS-80, Commodore 64, Apple II, and IBM PC, for example, could all accept the same form factor of 5-1/4" diskette, but you couldn't share data between them directly because they formatted the disks differently and used different file systems. Even as late as 1984 the IBM PC hadn't truly established itself as a "standard" of its own, and there were still innumerable standards vying for the attention of the typical user. One could just as easily argue that the IBM PC would have been more successful if it had been compatible with Apple II software.

Heck- whether an MS-DOS program even ran on a given computer that ran MS-DOS was not really a given. Software often had to be ported between IBM clone systems for it to work properly. Lotus 1-2-3 often couldn't be run on many systems that ran MS-DOS any more than it could run on a Mac 128K... except that you could run it on a Mac 128K with add-ons like the MacCharlie- making a suitably equipped Mac 128K more IBM compatible than many clone systems!

Remember that this was a time frame in which the idea of a GUI was, in and of itself, a "killer app". On MS-DOS, for example, you couldn't show charts or graphs in a spreadsheet at the same time as the spreadsheet itself; you had to shift to a graphics mode where you saw only the graph or chart. Possibly there would be a way to preview the printed output, but it was separate from actual editing, all of which took place in a system-standard fixed-width font, so you got no feedback about things like spacing while changing the document if you wanted proportional print output. Programs would convey certain aspects within those limitations- "bold" text or headers might be indicated by surrounding them with smiley-face characters, for example (PC-Write).

The fact that you could manipulate text on-screen and it would show you, reasonably accurately, what it would look like on the printed page was HUGE. It was the big reason the Macintosh jump-started the idea of desktop publishing and then dominated that space for years. This might all seem to support the "desktop environment" argument, since a Mac OS desktop environment on the PC would have worked just as well- except it wouldn't have. One of the things that made the Mac's fast GUI possible was the video DMA capability of the system, and that fast GUI is what made the rest possible. If the Mac system software had been a PC OS, it would simply have blended in with the countless other slow and clunky graphical environments that were available.

Posted By: BC_Programming
Last Edit: 08 Apr 2018 @ 09:45 PM

Categories: Macintosh

 11 Mar 2018 @ 4:39 PM 

Just a sort of sidebar note- I've been running this blog/website since around 2009. A few times, I tried to "monetize" it with advertisements. However, I found they were either annoying or simply not worthwhile. I even fiddled with ad blocker detectors and had a little banner for that. For a few years now, though, I've removed all advertisements and things like Google Analytics from the site wherever I could find them.

I have no plans to change this approach. This blog is my megaphone for presenting information to the world, not a funnel that I can use to make money. And with all the concerns surrounding advertisements and their potential for tracking as well as infection of the system, and websites complaining when you use an ad blocker, I’ve decided that simply not having any advertisements at all is one way to set my blog apart.

One interesting aspect is that a lot of things don't really declare that they include tracking. Including something like the Facebook "Like" button can often be enough for Facebook to track users across your entire website, which means that even if I'm not tracking users myself (I'm neither interested nor equipped!), other entities could be tracking traffic to my website and using it for traffic shaping or targeted advertisements aimed at my visitors- and finding and eliminating all of those sources is not entirely obvious.

For added security, as well as to get ahead of some upcoming changes in the most popular browsers, I switched the website's default protocol to HTTPS a while ago.

I may not be posting as frequently as I used to but I want my posts and information to be provided in the interest of sharing and not through some implicit moral contract that visitors will support me by allowing advertisements and/or tracking.

Posted By: BC_Programming
Last Edit: 11 Mar 2018 @ 04:39 PM

Categories: Site News

 28 Feb 2018 @ 9:59 PM 

Nowadays, game music is all digitized. For the most part, it sounds identical between different systems- with only small variations, and the speakers are typically the deciding factor when it comes to sound.

But this was not always the case. There was a time when computers were simply not performant enough- and disk space was at too high a premium- to use digital audio tracks directly as game music.

Instead, if games had music, they would typically use sequenced music. Early on, there were a number of standards, but eventually General MIDI was settled on as a standard. The idea was that the software would instruct the hardware what notes to play and how to play them, and the synthesizer would handle the nitty-gritty details of turning that into audio you could hear.

The result was that the same music could sound quite different from system to system, depending on how the MIDI sequence was synthesized.

FM Synth

The lowest-end implementations used FM synthesis, typically found in lower-cost sound cards and devices. The instrument sounds were simulated via math functions, and oftentimes the approximation was poor. However, this also contributed a "unique" feel to the music. Nowadays FM synth has become popular with enthusiasts of old hardware. Chips like the Yamaha OPL3, for example, are particularly sought after in a "good" sound card for DOS. In fact, the OPL3 has something of a cult following, to the point that source ports of some older games which use MIDI music will often incorporate emulators that mimic the output of the OPL3, and it's also possible to find "SoundFonts" for more recent audio cards that mimic the audio output of an OPL3.
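At its core, two-operator FM synthesis is surprisingly little code: a carrier sine wave whose phase is modulated by a second sine wave. The sketch below is purely illustrative- real chips like the OPL3 add envelopes, feedback, and multiple operators per voice- but it shows the basic math:

using System;

class FmSynthDemo
{
    // Two-operator FM: a carrier sine wave whose phase is modulated by a second
    // sine wave. The carrier-to-modulator ratio and the modulation index are
    // what shape the "instrument".
    static float[] FmTone(double carrierHz, double ratio, double modIndex,
                          double seconds, int sampleRate = 44100)
    {
        int count = (int)(seconds * sampleRate);
        var samples = new float[count];
        double modHz = carrierHz * ratio;
        for (int n = 0; n < count; n++)
        {
            double t = (double)n / sampleRate;
            double modulator = Math.Sin(2 * Math.PI * modHz * t);
            samples[n] = (float)Math.Sin(2 * Math.PI * carrierHz * t + modIndex * modulator);
        }
        return samples;
    }

    static void Main()
    {
        // A one-second 440 Hz tone; feed the samples to any audio output or WAV writer.
        float[] tone = FmTone(440.0, 2.0, 3.0, 1.0);
        Console.WriteLine($"Generated {tone.Length} samples.");
    }
}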

Sample-Based Synth

Sample-based synth is the most common form of MIDI synthesis. Creative Labs referred to their implementation as "wavetable synthesis", but that is not an accurate description of what their synthesizer actually does. A sample-based synthesizer has a sampled piece of audio from the instrument and adjusts its pitch and other qualities based on playback parameters. For example, it might have a sampled piece of audio from a tuba and then adjust the pitch as needed to generate other notes. This typically produces a "better", more realistic sound than FM synth.
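The core trick- re-pitching one recorded note- is easy to sketch. This is a bare-bones illustration; real synthesizers also loop sustain portions, apply envelopes, and use multiple samples per instrument:

using System;

class SamplePlayback
{
    // Sample-based synthesis in miniature: one recorded note is re-pitched by
    // stepping through it faster or slower. A shift of N semitones changes the
    // step rate by 2^(N/12); linear interpolation fills in between samples.
    static float[] Repitch(float[] recordedNote, int semitones)
    {
        double step = Math.Pow(2.0, semitones / 12.0);
        int outLength = (int)(recordedNote.Length / step);
        var result = new float[outLength];
        for (int i = 0; i < outLength; i++)
        {
            double pos = i * step;
            int idx = (int)pos;
            double frac = pos - idx;
            float a = recordedNote[idx];
            float b = recordedNote[Math.Min(idx + 1, recordedNote.Length - 1)];
            result[i] = (float)(a + (b - a) * frac);
        }
        return result;
    }
}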

Wavetable Synthesis

Wavetable synthesis is a much more involved form of synthesis, something like FM synth on steroids; where FM synth tended to use simple waveforms, wavetable synth attempts to reproduce the sound of instruments by modelling them with a large number of more complicated functions and calculations, and by mixing numerous pieces of synthesized audio together to create a believable instrument sound. I'm not personally aware of any hardware implementations- though, not being anything of a music expert, I'm sure there are some- but software implementations tend to be present as plugins or features of most music creation software.

Personally, I'm of the mind that the best sample-based synthesis is better than the FM synth that seems to be held up on a pedestal; FM cards were lower-end hardware built down to a price, which is why they used the much simpler FM synthesis approach in the first place. Its unique audio captured a lot of people who played games on that sort of low-end hardware, so to a lot of people, FM synth is how games like Doom or Monkey Island are "supposed" to sound. I think sample-based synth sounds better- but, on the other hand, that is how I first played most of those games, so I'm really just falling into the same trap.

Posted By: BC_Programming
Last Edit: 28 Feb 2018 @ 09:59 PM


 02 Feb 2018 @ 5:50 PM 

The way we refer to CPU architectures sits in a sort of middle ground: the terms need to be technically accurate, but over time the terminology can change. I thought it would be interesting to look into the origins of the names of the two most widely used CPU architectures for desktop systems today. Admittedly, this is fresh on my mind from some research I was doing.

x86

Nowadays, most 32-bit software for desktops and laptops is referred to as being built for "x86". What does this mean, exactly? Well, as expected, we have to go back quite a ways. After the 8086, Intel released the 80186, 80286, and 80386. The common architecture and instructions behind these CPUs came to be known, understandably, as "80x86 instructions". The 486 that followed the 80386 officially dropped the 80 from the name- inspection tools would "imply" its existence, but Intel never truly called their 486 CPUs "80486". It's possible this is how the 80 got dropped. Another theory is that it was simply dropped for convenience- x86 was enough to identify what was being referenced, after all.

The term survived up to today, even though, starting with the Pentium, the processors themselves never truly bore the “mark” of an x86 processor.

x64

x64 is slightly more interesting in its origins. 64-bit computing had existed on other architectures before, but x64 now refers to the "typical" x86-compatible 64-bit operating mode. Intel's first foray into this field was the Itanium processor. This is a 64-bit processor, and its instruction set is called "IA-64" (as in "Intel Architecture 64"). This did not work well, as it was not directly compatible with x86 and therefore required software emulation to run existing x86 code.

It was AMD who extended the existing x86 instruction set to add support for 64-bit through a new operating mode. Much as 32-bit instructions were added to the 80386 while preserving compatibility by adding a new "operating mode" to the CPU, the same was done here: 64-bit operations are exclusive to the 64-bit long mode, 32-bit software still runs in 32-bit protected mode, and the CPU remains compatible with real mode.

This AMD architecture was called “AMD64” and the underlying architecture that it implemented was “x86-64”.

Intel, as part of a series of settlements, licensed AMD's new architecture and implemented x86-64. This implementation went through a few names- x86E, EM64T- but Intel eventually settled on Intel64. Intel64 and AMD64 aren't identical, so software targets the common subset- and that subset is where we get the name x64.

Posted By: BC_Programming
Last Edit: 02 Feb 2018 @ 05:50 PM

Categories: Hardware

 30 Jan 2018 @ 9:02 PM 

Software and computer security has always been a rather important topic. As our systems become more interdependent and connected- and as we expose ourselves and our important information more and more- it is becoming even more important. Operating system and software updates are issued to address security problems, and these are given the utmost importance, with users urged to install them as soon as possible. Many operating systems, such as Windows 10, disable or restrict the ability to prevent updates (it seems to require Pro to adjust the settings so updates happen only when the user initiates them, for example). This is considered by many to be a positive change, the idea being that it will prevent systems from being compromised through those security exploits.

And, certainly, that is true. Installing security patches will, obviously, prevent the vulnerabilities they resolve from being exploited for malicious purposes. However, I think the impact those exploits have on your typical end user has been overstated.

Based on my own experience, I would estimate that the vast majority of end-user malware infections are not perpetuated or contributed to in any notable way by the sort of issues resolved by security updates. Those updates are more applicable to servers, data centers, and corporate environments. On end-user PCs, it is rare to find a malware infection that was not caused in some way by trojan-horse malware: something the user explicitly downloaded and ran themselves, which had the unintended side effect of releasing malware onto their system. Pirated software, game mods, "keygens", and so on and so forth; screensavers, greeting-card executables, applications disguised as images. Something as seemingly innocuous as an aftermarket Windows theme could very easily contain an unwanted payload, and it won't matter whether the system is fully up to date if you allow it to install.

The Security Circus

I call the general concept of overstating those concerns the "security circus". It infects certain styles of thinking and sort of makes itself a self-perpetuating concept over time. As an example scenario, a user may come to an IT repairperson with issues on their PC, and it may be determined that those issues are caused by malware. The "security circus" contribution to this scenario could be that the repairperson discovers the system is out of date and missing a number of critical security updates. Because they have learned, over time, that security updates are critical and prevent infections, they may- and very often do- assume that the malware made its way onto the PC via that infection vector. Over time these occurrences pile up, and that particular IT staffer can state, without any intention of lying, that they have seen plenty of systems compromised by vulnerabilities, even though, realistically, they don't actually know whether the vulnerabilities were even responsible.

The "acts" of this security circus seem to largely revolve around Fear, Uncertainty, and Doubt being spread and manipulated. Coincidentally, I notice that these efforts often work in favour of a number of corporate interests. Forced OS updates, for example, benefit the OS manufacturer, particularly as updates may very well deliver any number of other components that report "diagnostic" information which can be used by that company for marketing efforts. Security updates benefit security firms and security software vendors, whose products are used to "prevent" the problems until that critical patch arrives to fix the issue, or who release security "scanners" which analyze and report whether a system is susceptible to the vulnerability.

Some recent security scares come to mind when I think about the “security circus”.

WannaCry

The WannaCry ransomware provides some good examples of how this "security circus" operates. Articles and postings on the issue are often decidedly vague about the extent of the vulnerability involved, and often overstate its capability; users are urged to update as soon as possible, and in some cases I've seen it argued that the vulnerability allows the malware to be installed over the Internet.

The reality, however, is that WannaCry had a distribution method that could exploit a vulnerability in SMBv1 in order to spread to other systems accessible on the LAN from an infected system. This means that a network with vulnerable systems will see the infection spread among them once one gets infected; however, that "patient zero" cannot be infected remotely. WannaCry would still only be installed on a "patient zero" LAN system through some other infection vector, and that infection vector was almost certainly trojan-horse malware of some description.

Which is not, of course, to understate that this is certainly a concern. If Little Jimmy runs an infected game mod installer and his system gets infected, other vulnerable computers on the same network would eventually be compromised. However, I think the critical thing is not the security updates but, in that scenario, the user education needed to avoid installing malicious software to begin with. Why did Little Jimmy trust the game mod installer? Should Little Jimmy even have user permissions to install software? What sort of education can be provided so that users with "vulnerable" habits can adjust those habits and avoid problems? Installing security updates, security software, firewalls, etc. is, IMO, largely a way of avoiding that question, and unfortunately it works poorly on its own, because a user with "vulnerable" habits is often the sort who will happily disable their anti-virus when, say, a game modification installer claims it is a "false positive", or who will happily grant administrator permissions- if they can- to applications based on promised functionality.

Game "cheat" software often takes that approach; making a promise and then requesting elevation with that promise in mind is enough to convince some users to "take a chance". These same "vulnerable" users are also susceptible to phishing scams and to software that steals login information for, say, online accounts. A specific example would be the simple applications that claim to "give OP" to a Minecraft player. All you need to do is give your username and password! But of course it does not give you OP; instead it simply e-mails your login information to a specified account. It doesn't work, the user deletes the program, but perhaps never thinks about changing their login information because, as far as they know, the program just "didn't work". Until one day they cannot log in. Or, for MMOs, they suddenly find their character is poorly equipped, or perhaps banned because it was used for some nefarious in-game activity.

Speaking for myself, aside from the occasional Malwarebytes scan, I don't run any sort of background AV software or firewall. In fact, I disable the built-in Windows software that provides those features. To my knowledge, I've not been infected in over 10 years. And even then, what I was infected with, Virut/Sality, wasn't being picked up by any security software, even fully updated. Since then I've had systems that lacked important security updates magically not get infected in the ways the aforementioned "security circus" would have me believe. It seems, at least from where I am standing, that the implications of security vulnerabilities for your typical end-user system are vastly overstated, and that focusing on them as a means of preventing infections may be a focus in the wrong area. Instead, users should receive education so that their "vulnerable" habits can be eliminated, or at the very least so that they are made aware of them. Better education about computer systems in general can help as well; knowing the difference between an svchost.exe where it should be and an svchost.exe where it isn't can make it possible to identify unwanted software even when installed security software isn't picking it up.

Spectre/Meltdown

Another topic of some interest that has taken the security world by storm is the Meltdown and Spectre speculative-execution security problems found in many microprocessors (Meltdown being specific to Intel chips). These security concerns relate to the ability to access memory that would otherwise be unavailable, by exploiting speculative execution and cache behaviour. Meltdown involves carefully crafted machine language instructions that trick speculative execution into providing access to small pockets of memory that should not be accessible- kernel-mode pages that are mapped into the application's virtual address space. Spectre functions similarly, using carefully crafted machine code to perform various memory operations and carefully measure them in order to infer the data found in certain areas of those kernel-mode pages within the process's virtual address space.

I feel, again, that the security circus has somewhat overstated the dangers involved with these security problems; in particular, it is too common to see statements that this could be exploited through web-based code such as JavaScript, which itself would require escaping the JavaScript sandbox- something with wider security implications anyway. Additionally, it seems to presume that this could be used to steal web or system passwords, when realistically it would only enable viewing tiny pockets of driver-related process memory, and things like ASLR could very easily mitigate any directed attack looking for specific data.

But the reality hardly sells articles, and certainly doesn't sell security software- which, I might add, by sheer coincidence tends to be either a sponsor or a major advertiser for many of the more widely publicized security publications. Coincidence, no doubt.

Posted By: BC_Programming
Last Edit: 31 Jan 2018 @ 07:51 PM


 09 Dec 2017 @ 12:27 PM 

Winamp is a rather old program, and to some people it represents a bygone era- the late '90s and early 2000s in particular. However, I've not found any "modern" software that compares. There is plenty of software- MediaMonkey, MusicBee, etc.- which attempts to mimic Winamp or provides the same general capability of managing a local music library, but it either doesn't support Winamp plugins, doesn't work properly with many such plugins, or- most importantly- doesn't add anything.

Not adding anything is the important one here. At best, I'm getting the same experience as I do with Winamp, so I'm not gaining anything. People ask, "Why don't you switch?" and the default answer is "Why should I?" If the only reason is that what I am currently using is "outdated" and no longer cool, then maybe I should stick with it, because we have something in common.

Typically, though, I'd be losing functionality. With Winamp I've got everything set up largely how I want. More importantly, my music library spans not only FLAC and MP3 files but also various video game music formats for various systems, with complete audio libraries for any number of game titles that I can pull up easily. These native formats are much smaller than if those tracks were encoded as MP3 or FLAC, and since they are native formats they rely on Winamp plugins, which also provide additional features for adjusting the audio. These plugins simply don't exist or don't work with modern software, so I'd have to relegate those video game music formats to specific, individual players if I were to switch to, say, MusicBee for my local music library.

Nowadays, even the concept of a local audio library is practically unheard of. People "listen to music" using streaming services or even just YouTube videos, and typically it is all done on a smartphone, where storage space tends to be at a greater premium as well. I find that I detest playing music on my phone (a Nexus 6) simply because there is no good software for managing music saved to local storage, and it gets awful battery life when used this way. This is why I use an older 16GB Sony Walkman MP3 player instead; it could probably play back for a good continuous 48 hours, and it is much more compact than the phone. Even if this means an extra piece of "equipment" when I go somewhere, it means I'm not wasting my phone's battery life to play music.

Recently, I had the need to do something nearly as "outdated" as the program I elected to do it with: burning an audio CD. I've found this to be the easiest way to transfer music to my original Xbox console to create custom soundtracks (a feature which seems to be unique among consoles altogether). So I popped in a CD-RW, opened Winamp, clicked on the CD Recorder... and got a BSOD. DPC_WATCHDOG_VIOLATION.

Well, that isn't supposed to happen. After determining it was reproducible, I looked further into it. In particular, I found that within the Current Control Set information for my CD-ROM hardware there was a LowerFilters entry specifying a driver called PxHlpa64. So I set about finding out what that was.

It turns out PxHlpa64 is a driver by "Sonic Solutions" which is used by some CD recording software. I couldn't find any such software installed, so I merely renamed the affected entry and rebooted. The problem went away and everything was as it should be. (I subsequently wiped out the directory containing the driver file as well.) I suspect I previously installed a program which used the driver, and its uninstaller didn't remove it for any of a number of reasons.
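For reference, the place to look is the optical drive class key in the registry. A read-only sketch of inspecting those filter entries (the GUID is the standard CD/DVD device class; actually renaming or deleting a filter entry is something to do deliberately, with a backup):

using System;
using Microsoft.Win32;

class FilterDriverCheck
{
    static void Main()
    {
        // Device class key for CD/DVD drives; UpperFilters/LowerFilters here name
        // driver services that get inserted into every optical drive's device stack.
        const string cdromClassKey =
            @"SYSTEM\CurrentControlSet\Control\Class\{4D36E965-E325-11CE-BFC1-08002BE10318}";

        using (RegistryKey key = Registry.LocalMachine.OpenSubKey(cdromClassKey))
        {
            if (key == null) return;
            foreach (string valueName in new[] { "UpperFilters", "LowerFilters" })
            {
                if (key.GetValue(valueName) is string[] filters)
                    Console.WriteLine($"{valueName}: {string.Join(", ", filters)}");
            }
        }
    }
}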

One of the advantages of having a bit of an idea of what is going on with Windows (or any OS, really) is that you can more intelligently attempt to solve these sorts of unexpected problems when you encounter them. Since I was aware of issues involving optical drives and driver "filter" settings, I was able to find and fix the cause of my issue fairly quickly.

Posted By: BC_Programming
Last Edit: 09 Dec 2017 @ 12:27 PM


 25 Nov 2017 @ 7:11 PM 

The code for this post can be found in this GitHub project.

Occasionally you may present an interface which allows the user to select a subset of specific items. You may have a setting which allows the user to configure, for example, a set of plugins, turning certain plugins or features on or off.

At the same time it may be desirable to present an abbreviated notation for those items. As an example, if you were presenting a selection from alphabetic characters, you may want to present them as a series of ranges; if you had A,B,C,D,E, and Q selected, for example, you may want to show it as “A-E,Q”.

The first step, then, would be to define a range. We can then take appropriate inputs, generate a list of ranges, and then convert that list of ranges into a string expression to provide the output we are looking for.

For flexibility we would like many aspects to be adjustable; in particular, it would be nice to be able to adjust the formatting of each range based on other information, so rather than hard-coding an appropriate ToString() routine, we'll have it call a custom function.
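The linked project has the actual definition; a sketch of the shape it takes (names here are illustrative rather than copied from the repository):

using System;

public class Range<T>
{
    public T Start { get; set; }
    public T End { get; set; }

    // Formatting is delegated so the caller decides how a range (or a single
    // item) should appear in the final string.
    public Func<Range<T>, string> FormatFunction { get; set; }

    public Range(T start, T end, Func<Range<T>, string> formatFunction)
    {
        Start = start;
        End = end;
        FormatFunction = formatFunction;
    }

    public override string ToString() => FormatFunction(this);
}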

Pretty straightforward- a starting point, an ending point, and some string formatting. Now, one might wonder about the lack of an IComparable constraint on the type parameter. That would make sense for certain types of data being collated but in some cases the “data” doesn’t have a type-specific succession.

Now, we need to write a routine that will return an enumeration of these ranges given a list of all the items and a list of the selected items. This, too, is relatively straightforward. Instead of a routine this could also be encapsulated as a separate class with member variables to customize the formatted output. As with any programming problem, there are many ways to do things, and the trick is finding the right balance, and in some cases a structured approach to a problem can be suitable.
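Something along these lines (again a sketch, not the repository code verbatim): walk the full, ordered list and emit one range per unbroken run of selected items.

using System;
using System.Collections.Generic;

public static class RangeBuilder
{
    // Emits one Range per consecutive run of selected items, in the order the
    // items appear in the full list.
    public static IEnumerable<Range<T>> GetRanges<T>(
        IList<T> allItems, ISet<T> selected, Func<Range<T>, string> format)
    {
        int runStart = -1;
        for (int i = 0; i <= allItems.Count; i++)
        {
            bool isSelected = i < allItems.Count && selected.Contains(allItems[i]);
            if (isSelected && runStart < 0)
            {
                runStart = i; // a new run begins here
            }
            else if (!isSelected && runStart >= 0)
            {
                yield return new Range<T>(allItems[runStart], allItems[i - 1], format);
                runStart = -1;
            }
        }
    }
}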

Sometimes you might not have a full list of the items in question, but you might be able to indicate what item follows another. For this, I constructed a separate routine with a similar structure which instead uses a callback function to determine the item that follows another. For integer types we can just add 1, for example.
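A sketch of that variant (illustrative; it assumes the selected items are already in order):

using System;
using System.Collections.Generic;
using System.Linq;

public static class RangeBuilderSuccessor
{
    // Builds ranges from an ordered selection using a callback that yields the
    // item following a given item (for ints, i => i + 1).
    public static IEnumerable<Range<T>> GetRanges<T>(
        IEnumerable<T> selected, Func<T, T> next, Func<Range<T>, string> format)
    {
        var items = selected.ToList();
        if (items.Count == 0) yield break;

        T start = items[0], previous = items[0];
        foreach (T item in items.Skip(1))
        {
            // If this item doesn't follow the previous one, the run has ended.
            if (!EqualityComparer<T>.Default.Equals(item, next(previous)))
            {
                yield return new Range<T>(start, previous, format);
                start = item;
            }
            previous = item;
        }
        yield return new Range<T>(start, previous, format);
    }
}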

This is expanded in the GitHub project I linked above, which also features a number of other helper routines for a few primitive types, as well as example usage. In particular, the most useful "helper" is the routine that simply joins the results of these functions into a resulting string:
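A minimal sketch of that helper, along with the sort of format callback that produces the "A-E,Q" style output from the earlier example (the real helpers in the project are more featureful):

using System;
using System.Collections.Generic;
using System.Linq;

public static class RangeFormatting
{
    // Joins the formatted ranges into the final abbreviated string, e.g. "A-E,Q".
    public static string JoinRanges<T>(IEnumerable<Range<T>> ranges, string separator = ",")
    {
        return string.Join(separator, ranges.Select(r => r.ToString()));
    }
}

// Example format callback: single items appear alone, longer runs as "Start-End".
// Func<Range<char>, string> format =
//     r => r.Start.Equals(r.End) ? r.Start.ToString() : $"{r.Start}-{r.End}";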

Posted By: BC_Programming
Last Edit: 25 Nov 2017 @ 07:11 PM

Categories: .NET, C#, Programming

 30 Oct 2017 @ 6:52 AM 

A couple of weeks ago, I thought it would be neat to get a computer similar to my first PC, which was a 286. I'd actually been considering the prospect for some time, but the prices on a DTK 286 (DTK was the brand I had) were a bit high. However, I stumbled on a rather cheap listing for a DTK 286 PC; it wasn't identical to the one I had, but it was a similar model with a slightly smaller case that seemed otherwise the same, so I snapped it up.

It arrived a little worse for wear from the journey- the front of the case, which was attached via plastic standoffs screwed into the metal case itself, had all of those plastic snaps come off. However, this shouldn't be too much of a problem, as I'm sure I can get it to stay attached for presentation purposes.

When I opened it up to see if anything else had been damaged, I found the network card was out of its slot, so I pushed it back in. Then I noticed the slot was PCI. 286 systems had 8-bit and 16-bit ISA, so already I knew something was up. The fact that the processor had a heatsink and sat in a Socket 7 meant this was clearly not a 286 system.

Instead, the system is a Pentium 133 (non-MMX) on Socket 7, with 64MB of RAM, a 900MB hard drive, an ATI Mach 64, and 10/100 Ethernet. The floppy diskette drive wasn't working correctly, so I swapped it for one of my other floppy drives. I also attached one of my CD-RW drives so I could burn data discs and install programs onto the Windows 95 install that was running on the system.

[Photo: the Pentium 133 system]

Now, arguably this could be grounds for a claim against the seller, but I think it was sold this way by accident; it seems to be using a specialized industrial motherboard intended for these sorts of Baby AT cases- I don't think a standard consumer board combined Socket 7 with the large, older keyboard DIN connector. The motherboard is apparently quite uncommon, more so with Socket 7 than Socket 5. It also has a motherboard cache "card" installed, which doesn't look to be particularly difficult to find but goes for about half what I paid for the entire unit. The motherboard is unusual in that it seems to be missing things such as shrouds around the IDE connectors, and it has no serial number listed where one is specified in the center of the board.

My original intent was to fiddle with MS-DOS and Windows 3.1, and realistically this Pentium system could work for that purpose; I have a few older IDE hard drives I could swap in and set up a dual-boot between MS-DOS/Windows 3.1 and Windows 95. The Mach 64 is an older card, but it is well supported on Windows 95 and Windows 3.1 as well as MS-DOS, so it seems like a good fit. It only has 1MB of video memory, so higher resolutions drop the colour depth- 1024×768 is only doable in 256-colour modes, for example- so I might want to get some DIP chips to upgrade the VRAM, as it has two empty sockets. (It might, ironically, be cheaper to just get another Mach 64 with the chips already installed, which is odd.) I was also able to add a Creative AudioPCI card I had lying around without too much hassle, though there are better options for ideal MS-DOS and Windows 95 audio that I might explore later. My main limitation so far is the lack of a PS/2 connector for the mouse, and I don't have a serial mouse- I found an old InPort mouse with a serial adapter on eBay to serve that purpose, however, as having a mouse would be nice.

One thing I was struck by- much as with things like the iMac G3 I wrote about previously- is that despite being quite old, it still performs rather well with things like Office 97. Basically, it just supports my theory that if you fit your software choices to the hardware, old hardware is still quite capable. I could write up documents in Word or create spreadsheets in Excel without too much bother and without really missing anything available on a newer system, and it works well with most older MS-DOS games too. Older titles are helped by the Turbo switch, which oddly doesn't actually do anything via the button itself; instead, Control-Alt-Minus and Control-Alt-Plus change the speed, and the turbo light changes accordingly (it switches between 133MHz and 25MHz, the latter being roughly equivalent to a fast 386).

I might even experiment with connecting it to my network, and perhaps even try to get Windows 95 working with shared directories from Windows 10, which would be rather funny. (Though I suspect I might need to open up security holes like SMBv1 to get that working...)

Posted By: BC_Programming
Last Edit: 30 Oct 2017 @ 06:52 AM





