25 Jan 2020 @ 11:16 PM 

I don’t get why Linux is so heavily associated with programming, or why it is said to be "good for programmers".

Now, I have some personal systems running Linux, and I’m relatively familiar with it. It’s a fine system. But I still don’t get why people associate Linux with programming- or rather, why people frame that association so positively. I can’t even realistically use those systems for my work!

I can think of four main reasons. One is that you have GCC, as well as any number of other compilers, built into most Linux distributions, or at least easily accessible. Source control systems are usually developed for Linux first- and hell, Git, one of the most popular, was originally developed for the Linux kernel.

The second reason I can think of is that it takes a programmer to actually customize it properly. That isn’t, strictly speaking, a bad thing, but for all the problems people claim of Windows, many of them can be fixed in exactly the same way. People often talk about customizing their Linux distributions by programming new components, and then say "you can’t do that on Windows"… but that doesn’t make sense. Of course you can! I’ve written software specifically to customize my Windows experience on my Windows systems just as I’ve done for Linux. When Windows 10’s network connection “foldout” was, for lack of a better word, awful, I took it upon myself to solve the issue and created my own program that allowed connecting to networks via the notification area. It uses rasphone, and what I’ve found is that for some reason it is more reliable than the built-in Windows interface.

Another angle is that “programmer” nowadays is often used to mean “web developer”, via the development of web ‘applications’ and websites, for which Linux makes perfect sense: it is the basis of the common LAMP stack that is found and supported on most VPS systems, it can be administered quite easily over SSH, it’s common to use things like Postfix and Dovecot for managing mail, and it can all be set up for only the cost of hardware. For that it certainly makes sense to work on Linux rather than try to test your web application on Windows, because if you use Linux your testing environment will be the same as the one you eventually deploy to.

One contributing factor, I think, is that a lot of the best programmers in the world have used *nix-like environments. However, programming- like most of "STEM"- is full of hubris, so even though 99% of programmers *can’t* be the top 1%, we all seem to think we are top-shelf product. Too many programmers don’t even consider that maybe the top 1% are people who are literally smarter than them; they think instead that they must have hit some kind of “glass ceiling” because of their toolset or their OS. Some of us are so used to being "top of our class", or the "smartest one in the room", or rockstar programmers at small companies, that we forget that in a larger scope we are merely average- and a lot of programmers/software developers cannot accept that. Clearly, it must SIMPLY be the toolset these top 1% use. So they start mimicking some of the best programmers, people like Paul Graham. They read his essays, see him making negative remarks about "mediocre" developers and the "masses" who use "Visual Blub" (Visual Studio), and they laugh and go "haha, so true!" or repeat quotes to others, with zero self-awareness. They start writing programs in Lisp, but their expressiveness is absolute shit, and they still don’t even know what the fuck a cons pair is, but they’ll pretend if you ask them. They start writing code in Emacs or vi. But they can’t figure out why they aren’t shooting to the top. Could it be that they aren’t as smart as somebody else? "No…. no, that isn’t true. That’s impossible!"

Emulating that top 1% by using the same toolsets as them doesn’t magically make you a better programmer. It’s equivalent to finding out they all drive, say, Toyotas and figuring that if you drive a Toyota you will become a good programmer. You don’t become a good programmer by following others; the top 1% that people try to emulate didn’t follow anybody else to get there, they followed their own path. Perhaps all those paths lead to *nix. Perhaps not. But you aren’t going to become a good programmer by simply taking a shortcut to the end of the path, because what actually makes you more proficient at software development and programming is the journey, not the final destination.

Posted By: BC_Programming
Last Edit: 25 Jan 2020 @ 11:16 PM

Categories: Linux, Programming
 23 Nov 2019 @ 7:21 PM 

Since the software that I contribute to as part of my day job involves printing to receipt printers, I’ve been keeping my finger on the pulse of eBay and watching for cheap listings for models I’m familiar with. Recently I stumbled upon a “too good to be true” listing- an Epson TM-T88IV listed for $35. The only caveat I could see was that it was a serial printer; that is, it used the old-style RS-232 ports. I figured that might be annoying but, hey, I’ve got Windows 10 PCs that have serial ports on the motherboard, and a null modem serial cable- how hard could it be?

Famous last words, as it happened, because some might call the ensuing struggle a nightmare.

When the printer arrived, my first act was of course to verify it worked on its own. It powered up and printed its test page correctly, so the printer itself was fine. Next up, of course, was to get it to communicate with a computer. I used a null-modem cable to connect it, adjusted the DIP switches for 38400 baud, 8 data bits, 1 stop bit, no parity, DSR/DTR flow control, printed the test page again, then installed the Epson OPOS ADK for .NET (as I was intending to use it with .NET). I configured everything identically, but CheckHealth failed. I fiddled with it for some time- trying all the different connection methods- to no avail.

I fired up RealTerm and squirted data directly over the COM port. All I could get was garbage text printing out- I tried changing the COM port settings in Device Manager to set a specific baud rate as well, but that didn’t work.
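For reference, doing the same sort of thing from .NET rather than RealTerm looks something like the sketch below. The port name and the settings here are assumptions that just mirror the DIP switch configuration described above, not the exact code I used at the time:

```csharp
using System;
using System.IO.Ports;
using System.Text;

class SerialPrintTest
{
    static void Main()
    {
        // Match the printer's DIP switch settings: 38400 baud, 8 data bits,
        // 1 stop bit, no parity. "COM1" is a placeholder for the actual port.
        using (var port = new SerialPort("COM1", 38400, Parity.None, 8, StopBits.One))
        {
            port.Handshake = Handshake.None;
            port.DtrEnable = true; // the printer is configured for DSR/DTR control

            port.Open();

            // Plain text followed by a line feed prints on most ESC/POS printers.
            byte[] data = Encoding.ASCII.GetBytes("Hello, TM-T88IV!\n");
            port.Write(data, 0, data.Length);
        }
    }
}
```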

I had a second computer- one built in 2008- which, while it didn’t have a COM *port*, did have the header for one. I took the LPT and COM bracket from an older Pentium system and slapped it in there for testing, and spent a similar amount of time with exactly the same results. I was starting to think that the printer was simply broken, or that the interface card inside it was damaged in some way.

Then, I connected it to a computer running Windows XP. There, it worked exactly as intended; I could squirt data directly to the printer and it would print, and I could even set up an older version of the OPOS ADK and CheckHealth passed. Clearly the receipt printer was working- so there was something messed up with how I was using it. I put an install of Windows 7 on one of the Windows 10 PCs I was testing and got the same results. Nonetheless, after some more research and testing, it seems that Windows 10 no longer allows the use of motherboard serial or parallel ports. Whether this is a bug or intentional is unclear. I would guess it was imposed around the same time during development that Windows 10 dropped floppy support; people spoke up and got floppy support back in, but perhaps parallel and serial/RS-232 stayed unavailable. Unlike that case, though, the ports do appear in Device Manager and are accessible as devices- they just don’t work correctly when utilized.

Since the software I wanted to work on would be running on Windows 10- or if nothing else, certainly not Windows XP- I had to get it working there. I found that a USB-to-RS-232 adapter worked, which meant I could finally start writing code.

The first consideration was that a receipt printer shouldn’t be necessary to test the code, or for, say, unit tests. So I developed an interface that could be used for mocking and that would cover the basic features required. The absolute basics were:

  • Ability to print lines of text
  • Ability to enable and disable the underlying device
  • Ability to claim and release the “printer” for exclusive use
  • Ability to open and close the printer
  • Ability to retrieve the length of a line in characters
  • Ability to print a bitmap
  • Ability to cut the paper
  • A boolean property indicating whether OPOS format characters are supported

I came up with an IPosPrinter interface that allowed for this:
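In rough outline, the interface looks something like the following sketch; the member names here are illustrative reconstructions from the feature list above rather than the exact originals:

```csharp
using System.Drawing;

/// <summary>
/// Minimal receipt printer abstraction; a real device implementation delegates
/// to the OPOS ADK, while a mock can write its output somewhere harmless.
/// </summary>
public interface IPosPrinter
{
    /// <summary>Whether OPOS format/escape characters are supported.</summary>
    bool FormatCharactersSupported { get; }

    /// <summary>Number of characters that fit on one printed line.</summary>
    int LineLength { get; }

    void Open();
    void Close();

    /// <summary>Claim or release the underlying device for exclusive use.</summary>
    void Claim(int timeoutMilliseconds);
    void Release();

    /// <summary>Enable or disable the underlying device.</summary>
    bool DeviceEnabled { get; set; }

    void PrintLine(string text);
    void PrintBitmap(Image image);
    void CutPaper();
}
```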

From there, I could make a “mock” implementation, which effectively implements a ‘receipt printout’ by writing it directly to a text file.
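Roughly, and with details elided, that mock looks something like this (a sketch against the interface above, not the original listing):

```csharp
using System.Collections.Generic;
using System.Diagnostics;
using System.Drawing;
using System.IO;

/// <summary>
/// "Prints" by collecting lines and writing them to a text file; optionally
/// opens the result in the default text editor when the printer is closed.
/// </summary>
public class TextFilePosPrinter : IPosPrinter
{
    private readonly string _outputPath;
    private readonly bool _openWhenClosed;
    private readonly List<string> _lines = new List<string>();

    public TextFilePosPrinter(string outputPath, bool openWhenClosed = false)
    {
        _outputPath = outputPath;
        _openWhenClosed = openWhenClosed;
    }

    public bool FormatCharactersSupported { get { return false; } }
    public int LineLength { get { return 42; } } // arbitrary mock default for an 80mm-style receipt
    public bool DeviceEnabled { get; set; }

    public void Open() { }
    public void Claim(int timeoutMilliseconds) { }
    public void Release() { }

    public void PrintLine(string text) { _lines.Add(text); }
    public void PrintBitmap(Image image) { _lines.Add("[bitmap]"); }
    public void CutPaper() { _lines.Add(new string('-', LineLength)); }

    public void Close()
    {
        File.WriteAllLines(_outputPath, _lines);
        if (_openWhenClosed) Process.Start(_outputPath); // shell to the default text editor
    }
}
```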

This implementation can also optionally shell the resulting text data to the default text editor, providing a quick way of testing a “printout”. However, this interface isn’t sophisticated enough on its own for a nice receipt printing implementation; in particular, the actual printing code is going to want to use columns to separate data. That shouldn’t be directly in the interface, however- instead, a separate class can be defined which composes an implementation of IPosPrinter and provides the additional functionality. This allows any implementation of IPosPrinter to benefit, without requiring additional implementations of each one.

Since our primary feature is having columns, we’ll want to define those columns; a ColumnDefinition class would be just the ticket. We can then tell the main ReceiptPrinter class about the columns, then have a params array accept the print data, and it can handle the columns automatically. Here is the ColumnDefinition class:
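A sketch of it follows- again, the member names are my illustrative choices rather than the exact originals:

```csharp
/// <summary>
/// Describes a single receipt column: a heading, a preferred width in
/// characters, and a priority used when columns must be dropped.
/// </summary>
public class ColumnDefinition
{
    public string Heading { get; set; }

    /// <summary>Preferred width of the column, in characters.</summary>
    public int Width { get; set; }

    /// <summary>
    /// Higher-priority columns are kept when the printer's line length is too
    /// short to fit every column; lower-priority columns get dropped first.
    /// </summary>
    public int ShrinkPriority { get; set; }

    public ColumnDefinition(string heading, int width, int shrinkPriority = 0)
    {
        Heading = heading;
        Width = width;
        ShrinkPriority = shrinkPriority;
    }
}
```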

At this point, we just want a primary helper routine within that ReceiptPrinter class, which the more convenient methods intended for client code can use to handle the actual printing:
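The following is a sketch of that helper, assuming the members from the earlier sketches; the shrink-priority behaviour described in the next paragraph is folded into it:

```csharp
using System.Collections.Generic;
using System.Linq;
using System.Text;

public class ReceiptPrinter
{
    private readonly IPosPrinter _printer;
    private readonly List<ColumnDefinition> _columns;

    public ReceiptPrinter(IPosPrinter printer, params ColumnDefinition[] columns)
    {
        _printer = printer;
        _columns = new List<ColumnDefinition>(columns);
    }

    // Core helper: lays out one row of values according to the column
    // definitions, dropping the lowest-priority columns when the printer's
    // line length can't fit them all, then hands the finished line to IPosPrinter.
    private void PrintRow(params string[] values)
    {
        // Keep the highest-priority columns that still fit on one printed line.
        var kept = new List<int>();
        int used = 0;
        foreach (int index in Enumerable.Range(0, _columns.Count)
                                        .OrderByDescending(c => _columns[c].ShrinkPriority))
        {
            if (used + _columns[index].Width <= _printer.LineLength)
            {
                kept.Add(index);
                used += _columns[index].Width;
            }
        }
        kept.Sort(); // restore the original left-to-right column order

        var line = new StringBuilder();
        foreach (int i in kept)
        {
            string value = i < values.Length ? values[i] ?? "" : "";
            line.Append(value.Length > _columns[i].Width
                ? value.Substring(0, _columns[i].Width)
                : value.PadRight(_columns[i].Width));
        }
        _printer.PrintLine(line.ToString().TrimEnd());
    }

    // The methods used by client code build on the helper; for example:
    public void PrintHeader()
    {
        PrintRow(_columns.Select(c => c.Heading).ToArray());
    }

    public void PrintItem(params string[] values)
    {
        PrintRow(values);
    }
}
```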

This implementation also incorporates a shrink priority that can be given to each column. Columns with a higher priority will be given precedence to remain, but columns may be entirely eliminated from the output if the width of the receipt output is too low. This allows for some “intelligent” customization for specific printers: some have fewer characters per line, and redundant or less-needed columns can be eliminated on those, while still being included on printers with wider output. The actual ReceiptPrinter class in all its glory- not to mention the implementations of IPosPrinter beyond the text output, particularly the one that actually delegates to a .NET OPOS ADK device and outputs to a physical printer- will require more explanation, so it will appear later in a Part 2.

Posted By: BC_Programming
Last Edit: 23 Nov 2019 @ 07:21 PM

 19 Oct 2019 @ 3:22 PM 

Over the last few years – more than a decade, really – it seems that, somehow, *nix- and Linux in particular- has been tagged as some sort of OS ideal. It’s often cited as a "programmer’s OS", and I’ve seen claims that Win32 is terrible and that people would take the "Linux API" over it any day. That is a verbatim quote, as well.

However, I’ve thought about it some, and after some rather mixed experiences trying to develop software for Linux, I think I feel something of the opposite. Perhaps, in some ways, it really depends what you learned first.

One of the advantages of a Linux-based (or to some extent, UNIX-based) system is there is a lot of compartmentalization. The user has a number of choices and these choices will affect what is available to the applications.

I’d say that generally, the closest thing to a "Linux API" that applications utilize would probably just be the Linux kernel userspace API.

Beyond that, though, as a developer, you have to start making choices.

Since the user can swap out different parts, outside of the Linux kernel userspace API pretty much nothing is really "standardized". Truth be told, most of that kernel userspace API isn’t even used directly by applications- usually it gets used through function calls to the C standard library.

The Win32 API has much more breadth but tends to be more "simple"- which, somewhat paradoxically, tends to make using it more complicated. Since it’s a C API, you aren’t going to be passing around OO instances or interfaces; typically, more complicated functions accept a struct. Hard to do much better than that with a C API.

However, with Windows, every window is created with CreateWindowEx() or CreateWindow(). No exceptions. None. Even UWP windows use CreateWindow() and have registered window classes. Even if it’s perhaps not the most pleasant base to look at, at least there is some certainty that, on Windows, everything is dealt with at that level and with those functions.

With Linux, because of the choices, things get more complicated. Since so many parts are interchangeable, you can’t strictly call most of what is made available to and used by applications a "Linux API", since it isn’t going to exist on many distributions. X11, for example, is available most of the time, but there are Linux distributions that use Wayland or Mir instead. Even using just X11, it only defines the very basic functions- it’s a rare piece of software that actually interacts directly with X11 via its server protocol; usually software is going to use a programming interface instead. But which one? For X11 you’ve got Xlib or xcb. Which is better? I dunno. Which is standard? Neither. And then once you get down to it you find that it only actually provides the very basics- what you really need are X11 extensions. X11 is really only designed to be built on top of, with a desktop environment.

Each desktop environment provides its own programming interface. GNOME, as I recall, uses "dbus"; KDE uses- what was it? kdetool? Both of these are CLI programs that other programs are supposed to call to interact with the desktop environment. I’m actually not 100% on this, but all the docs I’ve found seem to suggest that at the lowest level, aspects of the desktop environment are handled through calls to those CLI tools.

So at this point our UI API consists of calling a CLI application which interacts with a desktop environment which utilizes X11 (or other supported GUI endpoints) to show a window on screen.

How many software applications are built by directly interacting with and calling these CLI application endpoints? Not many- they are really only useful for one-off tasks.

Now you get to the real UI "API": the UI toolkits- things like GTK+ or Qt. These abstract yet again and more or less provide a function-based, C-style API for UI interaction. Which, yes- accepts pointers to structs which themselves often have pointers to other structs, making the Win32 API criticism that some make rather ironic. I think it may arise because those raising the criticism are using GTK+ through specific language bindings, which typically make those C bindings more pleasant- typically with some sort of OO. Now, you have to choose your toolkit carefully. Code written against GTK+ can’t simply be recompiled to work with Qt, for example. And different UI toolkits have different available language bindings as well as different supported UI endpoints. Many of them actually support Windows as well, which is a nice bonus, and usually they can be made to look rather platform-native- also a great benefit.

It seems, however, that a lot of people who raise grievances with Win32 aren’t comparing it to the direct equivalents on Linux. Instead they are perhaps looking at Python GTK+ bindings and comparing them to interacting directly with the Win32 API. It should really be no surprise that the Python GTK+ bindings are nicer; that’s several layers higher than the Win32 API. It’s like comparing Windows Forms to X11’s server protocol, and claiming Windows is better.

Interestingly, over the years, I’ve come to have a slight distaste for Linux for some of the same reasons that everybody seems to love about it, which is how it was modelled so heavily on UNIX.

Just in the last few years, the number of people who seem to be flocking to OS X or Linux and holding up their UNIX origins (obviously more so OS X than Linux, strictly speaking) as if that somehow stands on its own absolutely boggles my mind. I can’t stand much about the UNIX design or philosophy, and I don’t know why it is so constantly held up as some superior OS design.

And don’t think I’m comparing it to Windows- or, heaven forbid, MS-DOS- here. Those don’t even enter the consideration at this point. If anything can be said, it’s that Windows wasn’t even a proper competitor until Windows NT anyway, and even then, Windows NT’s kernel definitely had a lot of hardware capability and experience to build off that UNIX never had in the 70s- specifically, a lot of concepts were adopted from some of the contemporaries that UNIX competed against.

IMO, ITS and MULTICS were both far better designed, engineered, and constructed than any UNIX was. And yet they faded into obscurity. People often point at Windows and say "the worst seems to get the most popular!", but if anything UNIX is the best example of that. So now we’re stuck with people who think the best OS design is one where the shell is responsible for wildcard expansion and the underlying scheduler is non-preemptive. I wouldn’t be surprised if the UNIX interrupt-during-syscall issue was still present, where instead of re-entering the syscall it returned an error code, making it the application’s responsibility to check for the error and re-enter the syscall.

It seems to me that one of the axioms behind many of the proclamations that "*nix is better designed" is a definition of "better designed" that corresponds to however *nix does things- the conclusion before the reason, basically.

Posted By: BC_Programming
Last Edit: 19 Oct 2019 @ 03:22 PM

 20 Apr 2019 @ 12:57 PM 

“Dark Mode” settings have become a big-ticket concern over the last few years. Applications and apps have started to add “Dark Mode” visuals as an option and, more recently, Mac OS X (now “macOS”, because that’s not confusing at all when you are interested in old software/hardware!) as well as Windows 10 have introduced their own “Dark Mode” feature set in the OS.

However, I’ve found Windows 10’s implementation confusing and actually a bit disturbing.

To explain, I’ll start at the beginning.

Graphical environments have generally held to the idea that, for the most part, standard graphical elements are managed by the OS. For example, on the Macintosh you would create software and it would use standard buttons, listboxes, etc., and the behaviour of those was handled by the OS. Your software didn’t have to handle detecting mouse clicks, drawing the button, changing its appearance when clicked, and so on. This concept was of course shared by Windows. On Windows 3.0 and 3.1, the system had “System Colors” that defined how different elements were drawn. Windows itself would use those colours where appropriate, for things like title bars and title bar fonts, and applications could simply use the setting to follow the current system colours (and respond appropriately to the broadcast message sent when system settings change, to deal with those colours changing). The system shipped with various “Themes”, which were effectively sets of those colours, and you could customize the colours to your liking.
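As a rough illustration of what that means in code (a hypothetical sketch, not code from any particular program): a Win32 application asks the OS for these colours via the documented GetSysColor() call rather than hard-coding them, and repaints when told they have changed.

```csharp
using System;
using System.Runtime.InteropServices;

static class SystemColours
{
    // Indices from winuser.h: COLOR_WINDOW is the standard window background,
    // COLOR_WINDOWTEXT the standard text colour.
    const int COLOR_WINDOW = 5;
    const int COLOR_WINDOWTEXT = 8;
    const int WM_SYSCOLORCHANGE = 0x0015; // broadcast when the user changes the colours

    [DllImport("user32.dll")]
    static extern uint GetSysColor(int nIndex);

    static void Main()
    {
        // Ask the OS what the current colours are instead of assuming black-on-white.
        uint background = GetSysColor(COLOR_WINDOW);
        uint text = GetSysColor(COLOR_WINDOWTEXT);
        Console.WriteLine("Window background COLORREF: 0x{0:X6}", background);
        Console.WriteLine("Window text COLORREF:       0x{0:X6}", text);
        // A real window would also handle WM_SYSCOLORCHANGE in its window
        // procedure and repaint using the new colours.
    }
}
```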

[Image: Windows 3.1 Dark Mode]

Up through System 7, the Macintosh held fast to most of its original UI design in terms of visuals. Originally grayscale, later support for colour added little bits here and there, primarily for the icons, but the main user interface was largely white, with black or gray lines, or with rather subtle colouring.

[Image: Mac OS 8 with System 7 Theme via Kaleidoscope]

System 7, however, on capable systems, also added a new feature that was available as a downloadable add-on from Apple: the Appearance Manager. This was effectively a “plugin” that would take over the task of drawing standard elements. Elements were given a 3-D appearance; buttons “popped out” instead of being black chamfered boxes, progress bars got fancy gradients, and so on. This was part of the standard install with Mac OS 8 as well. These offered a lot of customization out of the box- even more with software like Kaleidoscope. The standard appearance provided by the Appearance Manager was known as “Apple Platinum” and offered a number of colour options. (Mostly, the colours affected the selection colour and the colour of progress bars, from what I can tell.)

[Image: Mac OS 8 Apple Platinum Theme (default)]

Not to be outdone, Windows 95 introduced 3-D theming to the Windows environment, providing a similar set of changes to the standard appearance. Unlike the Appearance Manager, one could also set the “3-D Colour”, which affected the colour of most elements. This facilitated the creation of what could be called “Dark” themes.

[Image: Windows 95 with “Dark Theme”]

It wasn’t until Windows XP that Windows had a feature similar in concept to the Mac OS Appearance Manager, through the introduction of Visual Styles. Visual Styles worked in much the same way- a Visual Style defined custom images that were used to draw particular window elements, allowing a richer and more thematic styling to be applied. With Windows XP, in addition to the default Visual Style, Olive and Silver Visual Styles were also included. A “Theme”, which was previously a set of system colours, was changed to also include the Visual Style option. Additionally, you could disable Visual Styles and use the “Windows Classic” theme, which would not use the “Luna” window decorator. Interestingly, with the Classic theme, one could adjust the colour options in much the same way as on previous Windows releases, creating “Dark Mode” colour schemes if desired.

[Image: XP (Default)]

[Image: XP (Olive)]

[Image: XP (Silver)]

[Image: XP (Classic, “Dark”)]

Around that same time frame, the Macintosh operating system was migrated to OS X, something of a hybrid of the older Mac system and the NeXTSTEP operating system. This introduced the concept of a “composited desktop” to the mainstream. A traditional desktop environment operates on a single output “image”: when you move a window, it gets redrawn in the new location, and any revealed sections of the screen below need to be redrawn as well. A composited desktop keeps all of the necessary information in memory- for example, it may hold the bitmap that represents each window as a texture and merely compose them together to create the final image, usually through the use of 3-D accelerated video hardware. With capable hardware, this approach was much faster and in general much cleaner. Internally, there was a framework for UI element drawing; externally, however, it was necessary to use third-party software to reskin the styles of the OS (ShapeShifter, for example).

Windows Vista brought this same composited desktop experience to the Windows platform; the new technology was Aero. This underlying composited desktop has been used up through Windows 10. Aero has similar capabilities to Luna, in that Visual Styles can customize almost every element of the system. “Aero Glass”, which many associate with Aero, was an enhancement that allowed fancy effects to be done using the 3-D rendering performed on the composited information- in its case, providing a sort of “translucent glass” effect which blurs the content behind the “glass” areas of a window (typically, the title bar).

[Image: Windows Vista]

Basically, over the years there have been a number of solutions and options for a central, system-controlled set of colours and repeated thematic elements such as buttons. Which, of course, finally brings me to why I find Windows 10’s Dark Mode both confusing and disturbing: it leverages none of these technologies!

The Dark Mode feature of Windows 10 is implemented effectively as an on-off flag which does not change Windows’ behaviour. Instead, applications all need to check this flag and act appropriately. The libraries behind UWP apps perform this check and change their visual theming appropriately- and that is all. Win32 applications are unaffected. To implement Dark Mode in File Explorer, for example, Microsoft’s developers changed File Explorer to check the flag and, if it is set, use different dark colours for all UI elements.
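There is no Visual Style behind it, so a Win32 application that wants to follow the setting ends up reading the flag itself and inventing its own colours. As a rough sketch- the registry location below is the commonly used one rather than a documented Win32 API, so treat it as an assumption:

```csharp
using Microsoft.Win32;

static class DarkModeCheck
{
    // Reads the per-user "apps use light theme" flag that Windows 10's Settings
    // app writes. 0 means dark mode; 1 (or a missing value) means light mode.
    public static bool AppsUseDarkMode()
    {
        using (var key = Registry.CurrentUser.OpenSubKey(
            @"Software\Microsoft\Windows\CurrentVersion\Themes\Personalize"))
        {
            object value = key?.GetValue("AppsUseLightTheme");
            return value is int light && light == 0;
        }
    }
}
```

Even after doing that, the application is still on its own for deciding what "dark" actually looks like for every control it draws.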

But it makes no sense. Every piece of Windows now needs to be altered to allow for this. And even if every part of Windows has these changes made to support it, third-party applications aren’t guaranteed to support it either. Lastly, nothing about the Dark Mode support is standard: from an application’s perspective, if Dark Mode is on, you cannot use the Visual Style- so what should a button look like in dark mode? A combo box? A scrollbar? Even the colours have no standard- it’s all up to the application.

The implementation of Dark Mode makes no sense, because it should have been a new theme, shipped with Windows, with appropriate dark colours and a new Visual Style that gives all the visual elements a darker appearance. If Dark Mode is on, no application should see “white” for the window background colour and then be expected to disregard it and substitute “a dark colour” of some sort that isn’t standardized for the different elements.

Posted By: BC_Programming
Last Edit: 20 Apr 2019 @ 12:57 PM

Categories: Windows
 23 Mar 2019 @ 2:15 PM 

There are a lot of components of Windows 10 that we, as users, are not “allowed” to modify. It isn’t even enough when we find a way to do so, such as disabling services or scheduled tasks from a command prompt running under the SYSTEM account, because when you next install updates, those settings are often reset. There are also background tasks and services intended specifically for “healing” tasks- which is a pretty friendly way to describe a trojan downloader.

One common way to “assert” control is using the registry and the Image File Execution Options key, found as:

HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Image File Execution Options

By adding a key here with the name of an executable, one can set additional execution options for it. The one of importance here is a string value called Debugger. When you add a Debugger value, Windows will not start the executable itself; instead it launches the executable listed in the Debugger value, with the path of the executable that was being run passed as a parameter.

We can use this for two purposes. The most obvious is that we can simply swap in an executable that does nothing at all, and thereby prevent a given executable from running. For example, if we add “C:\Windows\System32\systray.exe” as the Debugger value for an executable, then when that executable is run, the systray.exe stub will run instead, do nothing, and exit, and the executable that was being launched will not run. As a quick aside: systray.exe is a stub that doesn’t actually do anything- it used to provide built-in notification icons on Windows 9x, and it remains because some software would check whether that file existed to know whether it was running on Windows 95 or later.
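For illustration, setting up such a redirect programmatically might look something like the sketch below; “someprogram.exe” is a placeholder, and writing to this key requires administrator rights:

```csharp
using Microsoft.Win32;

static class IfeoRedirect
{
    // Points the named executable at a do-nothing stub (here, systray.exe)
    // by writing an IFEO "Debugger" value for it.
    public static void RedirectToStub(string exeName, string debuggerPath)
    {
        string keyPath =
            @"SOFTWARE\Microsoft\Windows NT\CurrentVersion\Image File Execution Options\" + exeName;
        using (var key = Registry.LocalMachine.CreateSubKey(keyPath))
        {
            key.SetValue("Debugger", debuggerPath, RegistryValueKind.String);
        }
    }
}

// Example: IfeoRedirect.RedirectToStub("someprogram.exe", @"C:\Windows\System32\systray.exe");
```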

The second way we can use it is to instead insert our own executable as the debugger value. Then we can log and record each invocation of any redirected program. I wanted to record the invocations of some built-in Windows executables I had disabled, so I created a simple stub program for this purpose:

IFEOSettings.cs
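A minimal sketch of what that settings class amounts to- the member naming here is my own, illustrative rather than the original listing:

```csharp
/// <summary>
/// Settings for the logging stub, kept in their own class for future editing.
/// </summary>
public class IFEOSettings
{
    /// <summary>
    /// Folder the per-executable logs are written to. Hard-coded for now;
    /// the folder is expected to already exist.
    /// </summary>
    public string LogFolder { get; set; } = @"C:\IMEO_Logs";
}
```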

I decided to separate the settings for future editing. For my usage, I just have it hard-coded to C:\IMEO_Logs right now and create the folder beforehand. The bulk of the program, of course, is the entry point class:
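A sketch of that entry point, under the assumption that all it needs to do is record the target executable and its arguments and then exit without launching anything:

```csharp
using System;
using System.IO;

class Program
{
    // When Windows honours the IFEO "Debugger" value, this stub is invoked as:
    //   <stub> <path of the redirected exe> [original arguments...]
    // It records the attempt and exits without starting the target.
    static void Main(string[] args)
    {
        var settings = new IFEOSettings();
        string target = args.Length > 0 ? args[0] : "(unknown)";
        string arguments = args.Length > 1 ? string.Join(" ", args, 1, args.Length - 1) : "";

        string logFile = Path.Combine(settings.LogFolder,
            Path.GetFileNameWithoutExtension(target) + ".log");

        string entry = string.Format("{0:yyyy-MM-dd HH:mm:ss}  {1}  {2}",
            DateTime.Now, target, arguments);

        File.AppendAllText(logFile, entry + Environment.NewLine);
    }
}
```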

I’ve used this for a few weeks by manually altering the Image File Execution Options entries that had redirected some executables (compattelrunner.exe, wsqmcons.exe, and a number of others) to systray.exe so that they redirect to this program instead. It then logs all attempts to invoke those executables, alongside details like the arguments that were passed in.

Posted By: BC_Programming
Last Edit: 23 Mar 2019 @ 02:15 PM

 14 Mar 2019 @ 6:51 PM 

Alternate Title: Software Licenses and implicit trust

It is interesting to note that in many circles proprietary software is inherently considered untrustworthy. That is, of course, not for no reason- it is much more difficult to audit and verify that the software does what it is supposed to, and to check for possible security problems. Conversely, though, a lot of Open Source software seems to get a sort of implicit trust applied to it. The claim is that if there isn’t somebody sifting through and auditing software, you don’t know what is in there- and, conversely, that if something is open source, we do know what is in there.

But, I would argue that the binaries are possibly more trustworthy in attempting to determine what a piece of software is doing simply by virtue of it being literally what is being executed. Even if we consider the scenario of auditing source code and building binaries ourself, we have to trust the binary of the compiler to not be injecting malicious code, too.

I’ve found that this sort of rabbit hole is something that a lot of Open Source advocates will happily whoosh down as far as possible for proprietary software, but seem to avoid falling into for Open Source software. Much of the same logic that gets applied to justify distrust of proprietary binary code should cause distrust in areas of Open Source, but for some reason a lot of aspects of Open Source and the Free Software community are free from the same sort of cynicism that is applied to proprietary software, even though there is no reason to think that software falling under a specific license makes it inherently more or less trustworthy. If we can effectively assume malicious motives for proprietary software developers, why do we presume the opposite for Open Source- particularly since it is now such a better target for malicious actors, precisely because it is so often implicitly trusted?

Source code provided with a binary doesn’t mean anything, because- even assuming users capable of auditing said code- there is no way to reliably and verifiably know that the source code is what was used to build the binary. Trust-building measures like hashes or MD5sums can be adjusted, collided, or changed, and web servers hacked, to make illegitimate binary releases appear legitimate and propagate undesirable code which simply doesn’t appear in the associated source code of a supposed release (see Linux Mint). Additionally, the non-deterministic nature of modern compilers means that even compiling the same source more than once can give different results, so you cannot really verify that the source matches a given binary by rebuilding the source and comparing the resulting binary to the one being verified.

Therefore, it would seem the only reasonable recourse is to only run binaries that you build yourself, from source that has been appropriately audited.

Thusly, we will want to audit the source code. And the first step is getting that source code. A naive person might think a git pull is sufficient. But no no- That is a security risk. What if GitHub is compromised to specifically deliver malicious files with that repository, hiding secret exploits deep within the source codebase? Too dangerous. Even with your careful audit, you could miss those exploits altogether.

Instead, the only reasonable way to acquire the source code to a project is to discover reliable contact details for the project maintainer and send them a PGP-encrypted message requesting that they provide the source code, either at a designated drop point- which will have to be inconspicuous and under surveillance by an unaffiliated third party trusted by both of you- or via a secure, asymmetrically encrypted message containing the source tarball.

Once you have the source, now you have to audit the entire codebase. Sure, you could call it quits and go "the developer says it’s clean, I trust him". Fine. Be a fool. Be a foolish fool, you fooly foolerson, because even if you know the tarball came from the developer, and you trust them- do you trust their wife? Their children? Their pets? Their neighbors? You shouldn’t. In fact, you shouldn’t even trust yourself. But you should, because I said you shouldn’t and you shouldn’t trust me. On the other hand, that’s exactly what I might want you to think.

"So what if I don’t trust their hamster, what’s the big deal"

Oh, of course. Mr Security suddenly decides that something is too off-the-wall.

Hamsters can be trained. Let that sink in. Now you know why you should never trust them. Sure, they look all cute running in their little cage, being petted by the developer’s cute 11-year-old daughter, but looks can be deceiving. For all you know, their daughter is a secret Microsoft agent and the hamster has been trained or brainwashed- using evil, proprietary and patent-encumbered technology, no doubt- to act as a subversive undercurrent within that source repository. With full commit access to the project’s git repository, the hamster can execute remote commands issued over an undocumented wireless protocol that has no man page, which will cause it to perform all sorts of acts of terror on the git repository: inserting NOP sleds before security code, adding JMP labels where they aren’t necessary, even adding buffer overflows by slipping off-by-one errors into otherwise benign bugfixes.

Is it very likely? No. But it’s *possible*, so it cannot be ignored.

Let’s say you find issues and report them.

Now, eventually, the issues will be fixed. The lead developer might accept a pull request and claim that it fixes the issue.

Don’t believe the lies. You must audit the pull yourself and find out what sinister motives underlie the so-called "fix". "Oh, so you thought you could just change that if condition, did you? Well, did you know that on an old version of the PowerPC compiler, this generates code that allows for a sophisticated remote execution exploit when running under Mac OS 9?" Trust nobody. No software is hamster-proof.

Posted By: BC_Programming
Last Edit: 14 Mar 2019 @ 06:51 PM

 22 Dec 2018 @ 5:14 PM 

There has been a lot of recent noise regarding the demographic makeup of typical software developers and people working in CS, and a lot of “pushback” against it, which is a bit unusual, because there is really no denying it. Look at almost any CS-related software team and you will find it is almost completely made up of nerdy, young white males. They think they got there through hard work, and that their demographic dominates because they are simply the best, but that is simply not true- it’s a self-perpetuating monoculture. Hell, I’m a nerdy white male (not so young now, mind…); I like programming and do it in my spare time, but somehow that "feature" has become an almost implicit requirement. You need to find somebody who has this healthy GitHub contribution history and wastes a lot of their spare time fucking around on computers. That fits me, but the fact is it simply shouldn’t be a requirement. A team shouldn’t be composed of one type of software developer, and that applies to attitude as well as demographics.

There is also this weird idea that a software developer who doesn’t work on software in their spare time is some kind of perversion. "So what personal projects do you have?" is a question I can answer, but if somebody cannot answer it, or the answer is "none", I don’t get why that is an instant minus point. Bridge-building engineers and contractors don’t get points taken off in interviews if they don’t spend their spare time designing and building bridges, but somehow in software development there is this implicit idea that we must all dedicate all of our spare time to it. Just because somebody doesn’t like to work on software in their spare time doesn’t mean they aren’t going to be absolutely spectacular at it. Hell, if anything, it’s those of us who finish work and basically just switch to a personal project who are trying to compensate, constantly cramming for the next workday- as if we have to combat our own ineptitude through repetition at all times.

I think the relatively recent "pushback" against the idea of introducing any sort of diversity- of trying to break up the self-perpetuating loop of young white guys only wanting to work with other young white guys- really illustrates how necessary it was. You had people (young white male nerds, surprise) complaining about "diversity quotas" and basically starting with the flawed assumption that the reason their team consisted of young white male nerds was that they were the most qualified. No, it was because the rest of the team was young white male nerds, and anybody else being considered had to go to ridiculous lengths to prove themselves before they were even considered as fitting the "culture"- because the culture is one of, you guessed it, young white male nerds. A mediocre "young white male nerd" is often more likely to get hired than a demonstrably more skilled person of a different race or (god forbid, apparently) a woman.

Even an older guy is probably less likely to be brought on board. You can have some grizzled software veteran of 50 who has forgotten more than the rest of the team knows put together, but not having memorized modern frameworks and buzzwords will keep him from coming on board, even though he brings countless skills and experience that no amount of GitHub commits can hope to give a "young white male nerd". Can you imagine how much ridiculous skill and ability a 60-year-old woman would have to bring to the table to get hired as a software developer? You get these 24-something white dudes going "well, I wrote an expression evaluator" and the interviewer is like "oh cool, and it even does complex numbers, awesome", but a 60-year-old woman could say "well, I wrote a perfect simulation of the entire universe down to the atom, with a speed of 1 Planck time every 2 seconds, as you can see on my resume" and the completely unimpressed interviewer would be like "yeah, but we’re looking for somebody with CakePHP experience".

I think "young white male nerds" reject the idea that they have any sort of privilege in this field because they feel it means they didn’t work as hard. Well, yeah. We didn’t. get over it. We had things handed to us easily that we wouldn’t have if we were older, a different race, or women. We need to stop complaining that reality doesn’t match our ego and trying to stonewall what we term "diversity hires" and actually respect the fact that we aren’t a fucking master race of developers and women and minorities are fully capable of working in software, and cherrypicking racist and sexist statistics to support the perpetuation of the blindingly-white sausage fest just makes us look like babies trying to deny reality.

Posted By: BC_Programming
Last Edit: 22 Dec 2018 @ 05:14 PM

 22 Oct 2018 @ 7:26 PM 

Nowadays, we’ve got two official ways of measuring memory and storage. They can be measured via the standard metric prefixes, such as megabytes (MB) or kilobytes (KB), or via the binary prefixes, such as mebibytes (MiB) and kibibytes (KiB). And yet, a lot of software uses the former when it really means the latter. Why is that, exactly?

Well, part of the reason is that the official binary prefixes didn’t exist until the late 1990s, and were introduced to address ambiguities between the two usages- ambiguities that had been growing for decades.

From the outset, when memory and storage were first developed for computers, it became clear that some sort of notation would be necessary to measure memory size and storage space, other than by directly indicating bytes.

Initially, storage was measured in bits. A bit, of course, is a single element of data- a 0 or a 1. In order to represent other data and numbers, multiple bits are utilized. While the sizes under common discussion were small, bits were commonly referenced. In fact, even early on there arose something of an ambiguity; when discussing transfer rates and/or memory chip sizes, one would often hear "kilobit" or "megabit"- 1,000 bits and 1,000 kilobits respectively, not base 2; however, when referring to either storage space or memory in terms of bytes, a kilobyte or a megabyte would be 1,024 bytes or 1,024 kilobytes, respectively.

One of the simplest ways of organizing memory was in powers of two; this allowed a minimum of logic to access specific areas of the memory unit. Because the smallest addressable unit of storage was the byte, which is 8 bits, most memory was manufactured in multiples of 1,024 bits- possible because 1,024 is the nearest power of 2 to 1,000 that is also divisible by 8. For the most part, rather than adhering strictly to the SI definitions of the prefixes, there was an industry convention that effectively said that, within the context of computer storage, the SI prefixes were binary prefixes.

For storage, for a time, the same conveniences applied, resulting in total capacities measured in the same units. For example, a single-sided 180K floppy diskette had 512 bytes per sector, 9 sectors per track, and 40 tracks per side. That is 184,320 bytes- in today’s terms, with the standardized binary prefixes, 180 KiB.

360K diskettes had a similar arrangement but were double-sided: 368,640 bytes- again, the binary prefix was what was being used in advertising.

Same with 720K 3-1/2" diskettes: 512 bytes per sector, 9 sectors per track, 80 tracks per side, two sides. That’s 737,280 bytes, or 720 KiB.

The IBM XT 5160 came with a drive advertised as 10MB in size. The disk had 512 bytes per sector, 306 cylinders, 4 heads, and 17 sectors per track. One cylinder was reserved for diagnostic purposes and unusable, giving a CHS of 305/4/17. At 512 bytes per sector, that was 10,618,880 bytes of addressable space. (This was actually more than 10MiB, as some defects were expected from the factory.) The 20MB drive had a similar story: 615 (-1 diag) cylinders, 4 heads, 17 sectors per track at 512 bytes a sector- 20.38MiB. The later 62MB drive was 940 (-1 diag) cylinders, 8 heads, 17 sectors per track at 512 bytes per sector, which gives ~62.36MiB.

The "1.2MB" and "1.44MB" Floppy diskettes are when things started to get spitballed by marketing departments for ease of advertising and blazed an early trail for things to get even more misleading. The High density "1.2MB" diskettes were 512 bytes a sector, 15 sectors per track, 80 sectors per side, and double sided. That’s a total of 1,228,800 Bytes. or 1200 KiB, But they were then advertised as 1.2MB, Which is simply wrong altogether. It’s either ~1.7MiB, or it is ~1.23MB. it is NOT 1.2MB because that figure is determined by dividing the KiB by 1000 which doesn’t make sense. Same applies to "1.44MB" floppy diskettes, which are actually 1440KB due to having 18 sectors/track. (512 * 18 * 80 * 2=1474560 Bytes. That is either 1.47456MB, or 1.40625MiB, but was advertised as 1.44MB because it was 1440KiB (and presumably easier to write).

Hard drive manufacturers took it from there- first by rounding up a tiny bit. A 1987 Quantum LPS Prodrive advertised as 50MB was, for example, 49.87MB (752 cylinders, 8 heads, 17 sectors per track). I mean, OK- sure, 49.87 is a weird number to advertise, I suppose…

It’s unclear when the first intentional and gross misrepresentation of HDD size was done- where the SI prefix definition was used to call a drive X MB- but it was a gradual change. People started to accept the rounding, and HDD manufacturers got more bold; eventually one of them released an "X MB" drive that they KNEW full well people would interpret as X MiB, and when called out on it, claimed they were using the "official SI prefix", as if there wasn’t already a decades-old de facto standard in the industry for how storage was represented.

For the most part, this confusion persisting is how we ended up with the official binary prefixes.

And yet- somewhat ironically- most OS software doesn’t use them. Microsoft Windows still uses the standard prefixes with binary values. As I recall, OS X provides for the binary prefixes as an option. Older operating systems and software will never use them, as they won’t be updated.

The way I see it, HDD manufacturers have won. They are now selling drives listed as "1TB" which are ~931GiB, but because they hold 1,000,000,000,000 bytes, or somewhere close, it’s totally cool- they are using the SI prefix.
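The arithmetic behind that figure, for anyone who wants to check it, is just a unit conversion:

```csharp
using System;

class DriveSize
{
    static void Main()
    {
        // "1TB" as sold: 10^12 bytes. What the OS reports: bytes / 2^30, labelled "GB".
        const double advertisedBytes = 1e12;
        double gib = advertisedBytes / (1024d * 1024d * 1024d);
        Console.WriteLine("Advertised: 1 TB ({0:N0} bytes)", advertisedBytes);
        Console.WriteLine("Reported:   {0:F2} GiB", gib); // ~931.32
    }
}
```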

Posted By: BC_Programming
Last Edit: 23 Oct 2018 @ 07:15 PM

 26 Sep 2018 @ 1:38 PM 

I have a feeling this will be a topic I will cover at length repeatedly, and each time I will have learned things since my previous installments. The Topic? Programming Languages.

I find it quite astonishing just how much polarization and fanaticism we can find over what is essentially a syntax for describing operations to a computer. A quick Google search can reveal any number of arguments about languages: people telling you why Java sucks, people telling you why C# is crap, people telling you why Haskell is useless for real-world applications, people telling you that Delphi has no future, people telling you that there is no need for value semantics on variables, people telling you mutable state is evil, people telling you that garbage collection is bad, people telling you that manual memory management is bad, etc. It’s an astonishing, never-ending trend. And it’s really quite fascinating.

Why?

I suppose the big question is- why? Why do people argue about languages, language semantics, capabilities, and paradigms? This is a very difficult question to answer. I’ve always felt that polarization and fanaticism are far more likely to occur when you only know and understand one programming language. Of course, I cannot speak for everybody, only from experience. When I only knew one language “fluently”, I was quick to leap to its defense. It had massive issues that I can see now, looking back, but which I didn’t see at the time. I justified omissions as being things you didn’t need or could create yourself. I called features in newer languages ‘unnecessary’ and ‘weird’. So the question really is, who was I trying to prove this to? Was I arguing against those I was replying to- or was it all for my own benefit? I’m adamant that the reasons for my own behaviour- and, to jump to a premature and biased conclusion, possibly the behaviour of those I see acting similarly over other languages- were the result of feeling trivialized by the attacks on the language I was using. Basically, it’s the result of programmers rating themselves based on what languages they know and use every day. This is a natural- if erroneous- method of measuring one’s capabilities. I’ve always been a strong proponent of the idea that it isn’t the programming language that matters, but rather your understanding of programming concepts and how you apply them, as well as not submitting to the religious dogmas that generally surround a specific language design. (I’m trying very hard not to cite specific languages here.) Programming languages generally have set design goals. As a result, they typically encourage a style of programming- or even enforce it through artificial limitations. Additionally, those limitations that do exist (generally for design reasons) are worked around by competent programmers in the language. So when the topic turns to their favourite language not supporting feature X, they can quickly retort that “you don’t need feature X, because you can use features Q, P, and R to create something that functions the same”. But that rather misses the point, I feel.

I’ve been careful not to mention specific languages, but here I go: take Visual Basic 6. That is, pre-.NET. As a confession, for a very long time I was trapped knowing only Visual Basic 6 well enough to do anything particularly useful with it. Looking back- and having to support my legacy applications, such as BCSearch- I’m astonished by two things that are almost polar opposites. The first is simply how limited the language is. For example, if you had an object of type CSomeList and wanted to ‘cast’ it to an IList interface, you would have to do this:

Basically, you ‘cast’ by assigning the object directly to a variable of the desired type. These types of little issues and limitations really add up. The other thing that astonished me was the ingenuity with which I dealt with the limitations. At the time, I didn’t really consider some of these things limitations, and I didn’t think of how I dealt with them as workarounds. For example, I found the above casting requirement annoying, so I ended up creating a GlobalMultiUse class (which means all the procedures within are public); in this case the function might be called “ToIList()” and would attempt to cast the parameter to an IList and return it. Additionally, at some point I must have learned about exception handling in other languages, and I actually created a full-on implementation of exception handling for Visual Basic 6. Visual Basic 6’s error handling was, for those who aren’t aware, rather simple: you could basically say “On Error Goto…” and redirect program flow to a specific label when an error occurred. All you would know about the error is the error number, though. My “Exception” implementation built upon this. To throw an exception, you would create it (usually with an aforementioned public helper) and then throw it. In the exception’s “Throw()” method, it would save itself as the active unwind exception (a global variable) and then raise an application-defined error. Handlers were required to recognize that error number and grab the active exception (using GetException(), if memory serves). GetException would also recognize many error codes and construct instances of the appropriate exception type to represent them, so in many cases you didn’t need to check for that error code at all. The result? Code like this:

would become:

There was also a facility to throw inner exceptions, by using ThrowInner() with the retrieved Exception Type.

So what is wrong with it? Well, everything. The language doesn’t provide these capabilities, so I basically had to nip and tuck it to provide them, and the result is some freakish plastic surgery where I’ve grafted exceptions onto something that didn’t want exceptions. The fact is that, now that I’ve moved to other languages, I can see just how freakish some of the stuff I implemented in VB was. That implementation was obviously not thread-safe, but that didn’t matter, because there was no threading support, for example.

Looking forward

With that in mind, it can be valuable to consider one’s current perspectives and how they may be misguided by that same sort of devotion. This is particularly important when dealing with things you only have a passing knowledge of. It’s one thing to recognize flaws after gaining experience with something; it’s quite another to find flaws- or repaint features or aspects as flaws- for the purpose of making yourself feel wiser for not having used it.

Posted By: BC_Programming
Last Edit: 26 Sep 2018 @ 01:38 PM

 28 Feb 2018 @ 9:59 PM 

Nowadays, game music is all digitized. For the most part, it sounds identical between different systems, with only small variations, and the speakers are typically the deciding factor when it comes to sound quality.

But this was not always the case. There was a time when computers were simply not performant enough- and disk space was at too high a premium- to use digital audio tracks directly as game music.

Instead, if games had music, they would typically use sequenced music. Early on, there were a number of standards, but eventually General MIDI was settled on as a standard. The idea was that the software would instruct the hardware what notes to play and how to play them, and the synthesizer would handle the nitty-gritty details of turning that into audio you could hear.

The result of this implementation was that the same music could sound quite different because of the way the MIDI sequence was synthesized.

FM Synth

The lowest-end implementations used FM synthesis. This was typically found in lower-cost sound cards and devices. The instrument sounds were simulated via math functions, and oftentimes the approximation was poor. However, this also contributed a “unique” feel to the music. Nowadays FM synth has become popular with enthusiasts of old hardware. Cards built around the Yamaha OPL3, for example, are particularly popular as a “good” sound card for DOS. In fact, the OPL3 has something of a cult following, to the point that source ports of some older games which use MIDI music will often incorporate “emulators” that mimic the output of the OPL3. It’s also possible to find “SoundFonts” for more recent audio cards that mimic the audio output of an OPL3, too.

Sample-Based Synth

Sample-based synthesis is the most common form of MIDI synthesis. Creative Labs referred to their implementation as “wavetable synthesis”, but that is not an accurate description of what their synthesizer actually does. A sample-based synthesizer has a sampled piece of audio from the instrument and adjusts its pitch and other qualities based on playback parameters. So, for example, it might have a sampled piece of audio from a tuba and then adjust the pitch as needed to generate other notes. This typically produces a “better”, more realistic sound than FM synth.

Wavetable Synthesis

Wavetable synthesis is a much more involved form of synthesis- like FM synth on steroids. Where FM synth tended to use simpler waveforms, wavetable synthesis attempts to reproduce the sound of instruments by modelling them with a large number of complicated math functions and calculations, as well as mixing numerous pieces of synthesized audio together to create a believable instrument sound. I’m not personally aware of any hardware implementations- though, not being anything of a music expert, I’m sure there are some- but software implementations tend to be present as plugins or features of most music creation software.

Personally, I’m of the mind that the best sample-based synthesis is better than the FM synth that seems to be held on a pedestal; FM cards were lower-end hardware built down to a price, which is why they used the much more simplistic FM synthesis approach in the first place. Its unique audio captured a lot of people who played games using that sort of low-end audio hardware, so to a lot of people, FM synth is how games like Doom or Monkey Island are “supposed” to sound. I think sample-based synth is better- but, on the other hand, that is how I first played most of those games, so I’m really just falling into the same trap.

Posted By: BC_Programming
Last Edit: 28 Feb 2018 @ 09:59 PM

