27 Oct 2016 @ 12:39 PM 

This is part of a series of posts covering new C# 6 features. Currently there are posts covering the following:
String Interpolation
Expression-bodied Members
Improved Overload resolution
The Null Conditional operator
Auto-Property Initializers

Yet another new feature introduced in C# 6 is the Dictionary Initializer. This is another “syntax sugar” feature that shortens code and makes it more readable- or, arguably, less readable if you aren’t familiar with the feature.

Let’s say we have a Dictionary of countries, indexed by an abbreviation. We might create it like so:
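A representative example of the classic collection-initializer approach (the abbreviations and names here are illustrative):

```csharp
using System;
using System.Collections.Generic;

class Program
{
    static void Main()
    {
        // Classic collection-initializer syntax: each { key, value } pair
        // becomes a call to Dictionary<TKey,TValue>.Add().
        var countries = new Dictionary<string, string>()
        {
            { "CA", "Canada" },
            { "US", "United States" },
            { "GB", "United Kingdom" }
        };
        Console.WriteLine(countries["CA"]); // Canada
    }
}
```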

This is the standard approach to initializing dictionaries as used in previous versions, at least when you want to populate them at the point of declaration. C# 6 adds “dictionary initializers”, which attempt to simplify this:
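The same dictionary in the C# 6 form (values again illustrative):

```csharp
using System;
using System.Collections.Generic;

class Program
{
    static void Main()
    {
        // C# 6 index-initializer syntax: each [key] = value entry is an
        // assignment through the dictionary's indexer.
        var countries = new Dictionary<string, string>()
        {
            ["CA"] = "Canada",
            ["US"] = "United States",
            ["GB"] = "United Kingdom"
        };
        Console.WriteLine(countries["GB"]); // United Kingdom
    }
}
```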

Here we see what is effectively a series of assignments to the standard this[] operator. It’s usually called a Dictionary Initializer, but realistically it can be used to initialize any class that has an indexed property like this. For example, it can be used to construct “sparse” lists which have many empty entries without a bunch of commas:
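The original example isn’t shown here; one way to sketch the idea is with a type whose indexer accepts arbitrary indices- for instance a Dictionary keyed by int standing in for a sparse list (List&lt;T&gt;’s own indexer can’t add new elements):

```csharp
using System;
using System.Collections.Generic;

class Program
{
    static void Main()
    {
        // A "sparse list": only the populated indices are written out,
        // with no need to pad the gaps with runs of commas.
        var sparse = new Dictionary<int, string>()
        {
            [0] = "first",
            [100] = "one hundredth",
            [5000] = "five thousandth"
        };
        Console.WriteLine(sparse.Count); // 3, despite the highest index being 5000
    }
}
```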

The “Dictionary Initializer”, which seems more aptly referred to as the Indexing Initializer, is a very useful and delicious piece of syntax sugar that can help make code easier to understand and read.

Posted By: BC_Programming
Last Edit: 27 Oct 2016 @ 12:40 PM

EmailPermalinkComments (0)
Categories: .NET, C#, Programming

 19 Oct 2016 @ 12:00 PM 

Not strictly programming related, but sparked by some recent UX I had with a few programs, including some I wrote. These are not hard and fast rules, by any stretch of the imagination, but they provide a good checklist of basic things that are sometimes forgotten when it comes to Application development, and presentation. I’ll make additions to these “rules” as I consider them through testing or finding frequent trouble spots.

Don’t Pin yourself when you are installed

To name and shame- Firefox does this, last I checked. Another program that pins itself when it is installed is PowerArchiver, an otherwise excellent archiving tool which only appears one other time in this post. “Pinned” programs have somewhat replaced the Quick Launch bar. Personally, I prefer the Quick Launch toolbar and don’t pin any programs; but the general idea is that the taskbar icon for a pinned program is always present, so you can “switch” to the program, and if it’s not running it will start. (I’m oversimplifying it, mind you.) Now, pinning and unpinning is something which has no programming API; there is no Win32 API function to call to add or remove a pinned program; the idea is that the user decides what is pinned by explicitly pinning it. My understanding of Firefox’s logic is that since Internet Explorer is pinned by default (Edge in Windows 10), they should pin themselves by default. I can sort of see where they are going with that logic- to be entirely fair, Internet Explorer shouldn’t be pinned in a default Windows install- but I don’t think two wrongs really make a right here. By the time I install Firefox, I’ve already unpinned Internet Explorer and the Windows Store and don’t have any pinned programs; I don’t want Firefox pinned either. And if somebody is a Firefox user, or somebody sets up Firefox for somebody else (or Chrome, for that matter, though I don’t think Chrome automatically pins itself on install), then I’m sure they are capable of pinning Firefox themselves if they desire it that way.

The problem is that regardless of the circumstances, it presumes that your application is special. Moving to PowerArchiver: I recently updated to a newer version that was available, and after installing the new version, I found PowerArchiver had automatically pinned itself to the Taskbar. C’mon guys! Very few people use an archiving tool in a manner where they will want to launch it standalone; speaking for myself, I use it through Windows Explorer to extract and very occasionally compress zip or 7z or other archive formats. I seldom launch it on its own, and I imagine that extends to most people. But even if that was not the case- to bust out a rhyme, ahem- “pinning yourself is forgetting yourself.” That was a terrible rhyme, which fits the behaviour. Let the user decide what to pin and whether your program is useful enough to them to pin. The fact that somebody had to grovel through internal Windows functions and structures to find out how they could force their program to be added to the taskbar as a pinned button, I think, just makes it worse- like nobody along the way said “Hey, guys, maybe our program isn’t the awesomest, most useful program ever; perhaps not every single person will want it pinned?”

This extends to a lot of other annoying behaviours. Don’t steal file associations willy-nilly- if your program wants file associations, present a prompt- or, better yet, only associate unassociated file types, and provide an options dialog to allow users to associate file types already associated with other programs. Setting yourself as the default program for things like browsing falls into the same category.

Verify it functions properly at various DPI settings

When you run a program that doesn’t indicate it supports High DPI in its manifest file on a High-DPI display, Windows will try to scale it itself. It effectively does this by allowing the program to think it is running at the standard 96 DPI (100%), and then stretching the image of the client area to the “actual” scaled size. For example, here is Recoder being displayed in that manner:

BASeCamp Recoder running without having declared DPI support on a high-DPI display.

As we can see, this scaling feature allows programs that might not support higher DPI or have issues to remain compatible when run on high DPI displays, at the cost of looking rather blurry. If we add a call to SetProcessDPIAware() or if we declare DPI awareness in the manifest file, it looks much better.

Recoder running with High-DPI support on a High-DPI display.
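Declaring DPI awareness in the application manifest, as mentioned, might look something like the following (the classic dpiAware element; newer SDKs also offer a dpiAwareness setting):

```xml
<!-- app.manifest: declare the process DPI-aware so Windows does not
     bitmap-stretch the client area on high-DPI displays. -->
<assembly xmlns="urn:schemas-microsoft-com:asm.v1" manifestVersion="1.0">
  <application xmlns="urn:schemas-microsoft-com:asm.v3">
    <windowsSettings>
      <dpiAware xmlns="http://schemas.microsoft.com/SMI/2005/WindowsSettings">true</dpiAware>
    </windowsSettings>
  </application>
</assembly>
```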

The caveat, of course, is that your program needs to be- well, DPI aware. Since Windows isn’t going to do any work for you, you’ll need to make sure that your window layout can be properly displayed regardless of the DPI of the user’s monitor. This is particularly troublesome when using Windows Forms: when you save a form in the designer, it saves pixel data that is dependent on your development system’s DPI. On target systems, it attempts to scale based on that and the relative size on the system it is running on, but a bug means that if it attempts to scale to a DPI lower than that of the system on which the designer file was saved, it completely borks a lot of the layout. The workaround is to save on a system with 100% DPI; for my work I’ve had to do that (we still use Windows Forms, as the product is older and far too large to consider moving to a newer tech anytime soon) by using a separate system set to 100% DPI, selecting an element and moving it back and forth (to register a change), and saving and committing the designer.

If your program declares itself “DPI aware” then making sure it’s not a liar by verifying it works on non-standard DPI settings should be part of an exhaustive testing regimen.

Store data in the appropriate location

Windows, as well as other operating systems, establishes conventions for where certain data should be stored. Data should be stored in these locations- such as the Application Data folder, the Common Application Data folder, and so on. If nothing else, storing any additional data after installation in your program’s installation directory is a strict no-no.
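In .NET, for instance, those conventional locations can be retrieved through Environment.GetFolderPath rather than hard-coding paths (a minimal sketch; the “MyApp” subfolder name is illustrative):

```csharp
using System;
using System.IO;

class Program
{
    static void Main()
    {
        // Per-user, roams with the profile: settings and small config files.
        string appData = Environment.GetFolderPath(Environment.SpecialFolder.ApplicationData);
        // Per-user, machine-local: caches and larger data.
        string localData = Environment.GetFolderPath(Environment.SpecialFolder.LocalApplicationData);
        // Shared by all users of the machine.
        string commonData = Environment.GetFolderPath(Environment.SpecialFolder.CommonApplicationData);

        // Each application gets its own subfolder under the conventional root.
        string myFolder = Path.Combine(appData, "MyApp");
        Directory.CreateDirectory(myFolder); // no-op if it already exists
        Console.WriteLine(myFolder);
    }
}
```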


One thing that might get neglected in testing is the behaviour of any program uninstallers. These should be verified to remove the program and, if an option is provided, application data. Ideally, an uninstallation would not delete program configuration data, except where an option is provided to do so, and it is checked off.

Verify multiple operating Modes

Sometimes software might have different operating modes. For example, it may work differently, or even do a completely different task, when run with certain arguments; perhaps a program might display different UI elements, or it might perform operations differently. Regardless, when testing software or making changes, it is a good idea to check that all of these operating modes continue to function as expected. Even if a new feature is added in a way that should “magically” work with the other approaches, such as in a central routine used by all the specific “operating modes”, it should still, of course, be verified. This is easy to skip with the justification that it probably works- but “probably” is not knowledge or verification!

Posted By: BC_Programming
Last Edit: 18 Oct 2016 @ 09:24 PM

Categories: Programming

 19 Oct 2016 @ 3:00 AM 

Previously I wrote about how the onward march of technology has slowed down, but the ‘stigma’ that surrounds using older hardware has not diminished to match. Despite slowing down, technology has certainly improved, particularly as we look back further. This can make for very unique challenges when it comes to maintenance of older systems.

In particular, the ThinkPad T41 that I wrote about in that previous post has a failing hard disk, which I believe I also mentioned. This presents a unique challenge, as it is a laptop EIDE drive. These are available on sites like Amazon and eBay, but that gives the choice of a rather pricey new drive (a few dollars a GB) or a used one of unknown remaining lifespan (eBay). I ended up purchasing a cheap 40GB drive off eBay. However, I discovered that was not my only option, as it turns out products have been released that almost entirely address this issue.

I speak of CompactFlash adapters. These are adapters which connect to a laptop 44-pin EIDE interface and allow you to plug a CompactFlash card into the other side. The device it is plugged into basically just sees a standard HDD. This is an interesting approach because it is, in some sense, an SSD for older systems- perhaps without quite the speed benefit of an SSD, though still with the advantages of solid state.

Since I had already purchased a cheap 40GB drive off eBay, I decided to grab an adapter and a CompactFlash card as well for Benchmark purposes. My expectation was that the CompactFlash card would run much faster.

The first step was deciding what to use to compare. CrystalDiskMark was about as good an option as any, so I went with that. First I tested the 40GB drive I received, then I tested the CompactFlash adapter. The HDD is a Toshiba MK4036GAX. The adapter is a “Syba Connectivity 2.5 Inch IDE 44-pin to Dual Compact-Flash Adapter SD-ADA5006”, and the card I’m using with it is a 32GB Lexar Professional 800x.

Test                 MK4036GAX (MB/s)   CF Adapter + Card (MB/s)
Sequential Read           29.543             88.263
Sequential Write          31.115             29.934
Random Read 4KiB           0.430             12.137
Random Write 4KiB          0.606              0.794
Sequential Read           24.116             87.230
Sequential Write          30.616             19.082
Random Read 4KiB           0.326              3.682
Random Write 4KiB          0.566              0.543

Looking at the table, we see that, unlike modern SSDs, the use of a CompactFlash drive has some trade-offs. They get much faster performance for typical read operations such as sequential reads and random reads, but they falter particularly for random write operations. Or, rather, this particular CF adapter and card had problems with that arrangement.

Another interesting issue I encountered was that neither Windows nor Linux was able to establish a pagefile/swap partition on the CompactFlash card. This is a bit of a problem, though with few exceptions, most programs I use on this laptop would tend not to tax the 2GB of total memory available. That said, a bigger issue that may or may not be related seemed to be that Windows XP cannot install programs that use Windows Installer databases- instead they will endlessly prompt for a disc, even when they don’t use a disc or the disc being installed from is in the drive. I wasn’t able to discover the cause of this problem after investigating it, though I had no issues installing when using the standard HDD.

For now, I’ve got the system back on its “normal” HDD, which, as I noted in the linked post, works just fine- so in that sense, my “upgrade” attempt has failed, which is unfortunate. The system runs well, for what can be expected of it; as mentioned, it is quite snappy. Considering it is regarded as “ancient” by many, it still works respectably for reading most Web content as well as writing blog posts, so the argument that it is out-of-date is hard to properly substantiate. I would certainly find it lacking, mind you, for replacing my everyday tasks or doing things like watching YouTube videos, but despite its age I’ve found it fits well into a niche of usefulness that keeps it from being completely obsolete, at least for me.

When it comes to computers in general, I think you can make use of systems from any era. You can still use older systems largely the same way, for the same tasks they were originally designed for; the main difference is that more recent systems add additional capabilities. For example, you won’t be watching YouTube on a Pentium 133 PC- but you wouldn’t have been watching YouTube on such a system when it was top-of-the-line, either. I find there is something appealing about the simplicity of older systems, while at the same time the limitations of those older systems (where present) can make for an interesting challenge to overcome, and finding the right balance between the software and hardware can be more nuanced than “throw the latest available version on”.

Another consideration is something like security. For example, you might make use of an older IBM PC that boots from floppy diskettes as a central password manager, or to store other sensitive information (with backup copies, of course). This allows the old system to be used beyond just fiddling about, and to fulfill a useful function. However, it would still be far less convenient than, say, KeePass or LastPass or software of that nature. On the other hand, nobody is going to hack into your non-Internet-connected PC without physical access.

Posted By: BC_Programming
Last Edit: 18 Oct 2016 @ 09:19 PM

Categories: Hardware, Programming

 18 Oct 2016 @ 9:13 PM 

My most recent acquisition on this is a Tandy 102 Portable computer.

Tandy 102 Portable

I’ve actually had a spot of fun with the Tandy 102 Portable. Writing BASIC programs on it gave me both an appreciation for the capabilities of modern languages and a more cynical perspective on some of the changes to development ecosystems. With this system, you start BASIC and that’s it. You write the program then and there as line numbers, run it, save it, etc. You don’t build a project framework, or deal with generated boilerplate, or designers or inspectors or IDE software or test cases or mocking or factory interfaces or classes or any of that. When it comes to pure programming, the simplicity can be very refreshing. I’ve found it useful on occasion for short notes. Usually I use EditPad or Notepad for this, but I have found the Tandy 102 Portable to be more “reliable” in that I won’t lose a note or accidentally close the file without saving. (And power outages won’t affect it either, though arguably those are rare enough to not even matter.) The large text also makes it easy to read (with adequate light). Most interesting was plugging it into the “budget build” I blogged about previously and having the two systems communicate directly through the serial port. I was able to transfer files both to and from the system, though to say it was straightforward would be a bit of a fib.

Posted By: BC_Programming
Last Edit: 18 Oct 2016 @ 09:13 PM

Categories: Hardware

 08 Oct 2016 @ 4:38 PM 

One of the old standbys of software development is manipulating bits within bytes. It used to be that this was necessary- when you only have 8K of RAM, you have to make the most of it, which often meant packing information. Nowadays, it’s not quite as necessary, since there is so much more RAM on a typical system and it’s generally not worth the loss of performance that would come from packing and unpacking bits for the tiny memory savings that would be afforded.

There are, of course, exceptions. Sometimes, trying to use conventional data types together with functions or other operations that work with compact data representations (For example, generating a Product Key) can be as awkward as Hitler at a Bar Mitzvah. In those cases, it can be quite useful to be able to pack several Boolean true/false values into a single byte. But in order to do so, we need to do some bit bashing.

Bit Bashing

“Bit Bashing” is the rather crude term for bit manipulation, which is effectively what it says on the tin: the manipulation of the bits making up bytes. Even the oldest microcomputer CPUs worked with more than one bit at a time- the Intel 4004, for example, worked with 4 bits at a time, and the original IBM PC worked with a full byte. This means that operations work at a higher level than the individual bit, so it takes some trickery to work at that level.

The core concepts are easy- you can use bitwise operators on a byte in order to set or retrieve individual bits of the byte.

Setting a Bit

Setting a bit is a different operation depending on whether the bit is being cleared or set (0 or 1). If the bit is being set, one can do so by using a bitwise “or” operation with a shifted value based on the desired bit to change:
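In C#, that might look like the following (the SetBit name is illustrative):

```csharp
using System;

class Bits
{
    // Force bit 'index' (0..7) of 'value' to 1 by OR-ing with a mask
    // built by shifting 1 left by the bit index.
    public static byte SetBit(byte value, int index)
    {
        return (byte)(value | (1 << index));
    }

    static void Main()
    {
        Console.WriteLine(SetBit(0, 3)); // 8 (binary 00001000)
    }
}
```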

Straightforward- create a mask byte based on the specified bit index (0 through 7), then bitwise-or that against the original to force that bit to be set in the result.

The converse is similar, though a teensy bit more complicated. To forcibly set a bit to 0, we need to perform a bitwise “and” operation not against the shifted value, but against the bitwise complement of the shifted value:
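A sketch of the clearing operation (again, the ClearBit name is illustrative):

```csharp
using System;

class Bits
{
    // Force bit 'index' (0..7) of 'value' to 0 by AND-ing with the
    // complement of the mask: every bit is preserved except the target,
    // which the complement's 0 forces to 0.
    public static byte ClearBit(byte value, int index)
    {
        return (byte)(value & ~(1 << index));
    }

    static void Main()
    {
        Console.WriteLine(ClearBit(255, 0)); // 254 (binary 11111110)
    }
}
```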

Retrieving a bit

Retrieving a bit is as simple as seeing if the bitwise and between the value and the shifted byte is non-zero:
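A sketch (GetBit is an illustrative name):

```csharp
using System;

class Bits
{
    // A bit is set exactly when AND-ing the value with that bit's mask
    // leaves a non-zero result.
    public static bool GetBit(byte value, int index)
    {
        return (value & (1 << index)) != 0;
    }

    static void Main()
    {
        Console.WriteLine(GetBit(8, 3)); // True
        Console.WriteLine(GetBit(8, 2)); // False
    }
}
```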

Being able to encode and decode bits within a byte can be a useful capability for certain tasks, even if its necessity due to memory constraints may have long since passed.

Posted By: BC_Programming
Last Edit: 08 Oct 2016 @ 04:38 PM

Categories: .NET, C#, Programming

 08 Oct 2016 @ 12:15 AM 

Had a bit of a hiccup with the move to the VPS- seems I had some WordPress plugins that weren’t PHP 7 compatible and still accessed MySQL via the bad old mysql_ functions. I would have been able to sort it out sooner, but I didn’t have FTP access, as the credentials had changed in the move. I was able to get credentials sorted through support (I had to use the root account with a password I already knew, but had never tried on the root account, since it wasn’t associated with one before).

At any rate, The site is now officially running on a VPS!

Posted By: BC_Programming
Last Edit: 08 Oct 2016 @ 12:15 AM

Categories: News

 17 Sep 2016 @ 3:42 PM 

Previously I wrote about implementing an alpha-blended form in VB.NET. In that implementation, I had an abstract class derive from Form, and then the actual forms derive from that. This causes issues with using the Form in the designer.

To work around that issue, I’ve redone parts of the implementation a bit to get it working as its own separate class. Rather than relying on CreateParams() to adjust the GWL_EXSTYLE when the form is initially created, it merely uses SetWindowLong() to change it at runtime. Otherwise, the core of what it does is largely the same- just refactored into a package that doesn’t break the designer.

It should be noted, however, that adding controls to the form will not function as intended- this is inherent in the Alpha Blending feature, as it effectively just draws the bitmap. This is why it works well for splash screens. Controls will still respond to events and clicks, but they will be invisible; making them visible would require drawing them onto the bitmap and then setting it as the new layered-window bitmap each time the controls change.

Here is the changed code:
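A sketch of the approach described- in C# rather than the original VB.NET, with illustrative names (AlphaFormHelper, ApplyBitmap), not the original listing:

```csharp
using System;
using System.Drawing;
using System.Runtime.InteropServices;
using System.Windows.Forms;

// Sketch: turn an existing Form into a layered (alpha-blended) window at
// runtime, rather than overriding CreateParams (which broke the designer).
static class AlphaFormHelper
{
    const int GWL_EXSTYLE = -20;
    const int WS_EX_LAYERED = 0x80000;
    const int ULW_ALPHA = 0x2;
    const byte AC_SRC_OVER = 0x0;
    const byte AC_SRC_ALPHA = 0x1;

    [StructLayout(LayoutKind.Sequential)]
    struct BLENDFUNCTION
    {
        public byte BlendOp, BlendFlags, SourceConstantAlpha, AlphaFormat;
    }

    [DllImport("user32.dll")] static extern int GetWindowLong(IntPtr hWnd, int nIndex);
    [DllImport("user32.dll")] static extern int SetWindowLong(IntPtr hWnd, int nIndex, int dwNewLong);
    [DllImport("user32.dll")] static extern IntPtr GetDC(IntPtr hWnd);
    [DllImport("user32.dll")] static extern int ReleaseDC(IntPtr hWnd, IntPtr hDC);
    [DllImport("user32.dll")] static extern bool UpdateLayeredWindow(IntPtr hwnd, IntPtr hdcDst,
        IntPtr pptDst, ref Size psize, IntPtr hdcSrc, ref Point pptSrc, int crKey,
        ref BLENDFUNCTION pblend, int dwFlags);
    [DllImport("gdi32.dll")] static extern IntPtr CreateCompatibleDC(IntPtr hDC);
    [DllImport("gdi32.dll")] static extern IntPtr SelectObject(IntPtr hDC, IntPtr hObject);
    [DllImport("gdi32.dll")] static extern bool DeleteDC(IntPtr hDC);
    [DllImport("gdi32.dll")] static extern bool DeleteObject(IntPtr hObject);

    // Applies WS_EX_LAYERED to the form and pushes a 32bpp ARGB bitmap to it.
    public static void ApplyBitmap(Form form, Bitmap bitmap)
    {
        // Adjust GWL_EXSTYLE at runtime instead of via CreateParams.
        SetWindowLong(form.Handle, GWL_EXSTYLE,
            GetWindowLong(form.Handle, GWL_EXSTYLE) | WS_EX_LAYERED);

        IntPtr screenDc = GetDC(IntPtr.Zero);
        IntPtr memDc = CreateCompatibleDC(screenDc);
        IntPtr hBitmap = bitmap.GetHbitmap(Color.FromArgb(0));
        IntPtr oldBitmap = SelectObject(memDc, hBitmap);
        try
        {
            Size size = bitmap.Size;
            Point source = Point.Empty;
            var blend = new BLENDFUNCTION
            {
                BlendOp = AC_SRC_OVER,
                SourceConstantAlpha = 255,   // overall opacity
                AlphaFormat = AC_SRC_ALPHA   // use the bitmap's per-pixel alpha
            };
            // Hand the composed bitmap to the window manager; the window is
            // now the bitmap, blended against whatever is beneath it.
            UpdateLayeredWindow(form.Handle, screenDc, IntPtr.Zero, ref size,
                memDc, ref source, 0, ref blend, ULW_ALPHA);
        }
        finally
        {
            SelectObject(memDc, oldBitmap);
            DeleteObject(hBitmap);
            DeleteDC(memDc);
            ReleaseDC(IntPtr.Zero, screenDc);
        }
    }
}
```

In usage, one would create a borderless Form and call AlphaFormHelper.ApplyBitmap(splashForm, splashBitmap) with a 32bpp ARGB bitmap before showing it.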

In usage it is no more complex than before, really:

The end result is largely the same:


Posted By: BC_Programming
Last Edit: 15 Oct 2016 @ 01:51 PM


 28 Aug 2016 @ 1:08 AM 

As anybody knows, there can be a lot of incorrect information on the internet. Internet “Just so” stories can spread like wildfire if they are believable and explain something neatly. One of those “just so” stories involves older game consoles and computers; over time, we find that our once-white and gray plastics on old systems like Apple II’s, NES consoles, SNES consoles, and so on change colour; they change from white or gray to yellow, and over time that yellow darkens, sometimes even turning brown.

This phenomenon is “explained” here. Or is it? Does what is stated there about the process reflect reality? Does it make chemical sense? To the layman or casual observer- hey, it makes sense. Bromine IS brown, after all, and it’s added to the plastic. But is there a chemical basis and support for it? What reactions actually take place?

“RetroBright”- which is basically just hydrogen peroxide- is commonly recommended to “reverse” the effects. The reason I care about the actual chemical properties is that the yellowing itself going away isn’t an indication that everything is back to how it was. Colour changes can be the result of all sorts of things. More importantly, if we learn the actual chemical processes involved, perhaps we can come up with alternative approaches.

Basically, the story put forth in the article is a rather commonly repeated myth- a chemical “just-so” story of sorts. “Bromine is brown, so that must be it” is the extent of the intellectual discussion regarding the chemistry, more or less. Generally, though, there isn’t much drive to look further into it- it all makes sense to the layman on the surface, or even to one with fairly standard chemistry knowledge. But when you look deeper than the surface of the concept, you see that the commonly held belief that brominated flame retardants are responsible doesn’t hold up.

First we can start with the first inaccuracy in that link: Bromine is not added as a flame retardant. That is flat-out, categorically and completely wrong, and trivially easy to refute. Bromine compounds are added as flame retardants- specifically, chemicals like Tetrabromobisphenol A (C15H12Br4O2). But as they are compounds, the colour of elemental bromine (brown) is irrelevant, because elemental bromine is not added to the plastic.

The article also says that “The problem is that bromine undergoes a reaction when exposed to ultraviolet (UV) radiation”. But bromine doesn’t photo-oxidize. It doesn’t even react with anything in the air on its own; creating bromine dioxide either involves exposing it to ozone at very low temperatures alongside trichlorofluoromethane, or, alternatively, gaseous bromine can be made to react with oxygen by passing a current through it. Neither of these seems like something that takes place in a Super Nintendo. Not to mention elemental bromine is brown, so if it were in the plastic, oxidization would change it from the brown of elemental bromine to the yellow of bromine dioxide.

Back to what IS in the plastic, though: Tetrabromobisphenol A is not photosensitive; it won’t react with oxygen in the air due to UV light exposure, and the bromine cannot be “freed” from the compound and made elemental through some coincidence in a typical environment. It is simply not the cause of the yellowing. (ABS will yellow without BFRs as well, which rather indicates they’re probably not involved.)

The yellowing is inherent to ABS plastics, because it is the ABS plastic itself that is photo-oxidative. On exposure to UV light (or heat, which is why it can happen to systems stored in attics, for example), the butadiene portion of the polymer chain will react with oxygen and form carbonyl-b. That compound is brown. There’s your culprit right there. RetroBright works because those carbonyls react with hydrogen peroxide and create another compound which is colourless, but the butadiene portion of the polymer remains weak- oxalic acid is thought to be one possible way to reverse the original reaction.

So why does it sometimes not affect certain parts of the plastic, or certain systems? Here the “just so” story is a bit closer to reality: the “story” is that the plastic formula has different amounts of brominated flame retardants. This is probably true, but as that compound isn’t photo-reactive or involved in the chemical process, it’s not what matters here. What causes the difference is a variance in a different part of the formula- the UV stabiliser.

UV stabilisers are added to pretty much all ABS plastic intentionally, to try to offset the butadiene reaction and the yellowing effect of the resulting carbonyl. They absorb UV light and dissipate it as infrared-wavelength energy, which doesn’t catalyze a reaction in the butadiene. Less UV stabiliser means more UV gets to the butadiene, causes a reaction, and the plastic yellows more quickly; more UV stabiliser means less UV catalyzes reactions, and the plastic takes longer to change colour.

As with anything related to this, the best way to know is to experiment. I’ve decided to pick up some supplies and test both approaches on a single piece of plastic: some standard “RetroBright” mixture using hydrogen peroxide, and a variation using oxalic acid. I can apply both to the same piece of yellowed plastic and observe the results. Are both effective at removing the yellowing colour? What happens longer term? It should be an interesting experiment.

Posted By: BC_Programming
Last Edit: 28 Aug 2016 @ 01:08 AM


 24 Aug 2016 @ 10:32 PM 

User Account Control, or UAC, is a feature introduced in Windows Vista. In earlier versions of Windows, the default user accounts had full administrative privileges, which meant that any program you launched also had full administrator privileges. The introduction of UAC was an attempt to solve the various issues with running Windows under a Limited User Account, and to make the more advanced security features of Windows far more accessible to the average user. The effective idea was that when you logged in, your security token- which was effectively “given” to any software you launched- would be stripped of admin privileges. In order for a process to get the full token, it would require consent; this consent was implemented via the UAC dialog, allowing users to decide whether to give or deny that full security token.

It was a feature that was not well received; users complained that Vista was restricting them and making them ask for permission for everything- something of a misinterpretation of the feature and how it works, but a somewhat understandable one. Nowadays, it is practically a staple of Windows, being present in the default user accounts through 7, 8, and now 10. Even so, it has had some design changes over the years.

One interesting aspect of the UAC consent dialog is that it differentiates between a “verified” (signed) executable and an unsigned one, displaying slightly different designs based on the evaluation of the executable. A signed executable includes a digital signature which is able to verify that the program has not been altered by a third party- so if you trust the certificate authority as well as the publisher, it should be safe.

Windows Vista

We start our tour, perhaps unsurprisingly, with Vista.


Vista UAC Dialog, shown for an executable with a verified signature.


Vista UAC Dialog, shown for an executable with a verified signature, after expanding the Details option.

When the executable is verified, we see a relatively straightforward request. Expanding the dialog, as shown in the second image, provides access to the application path; There is no way, within the UAC dialog, to inspect the publisher’s certificate- that needs to be checked via other means.

Interestingly, once we start looking at unverified executables, however, we see quite a different presentation:


Windows Vista UAC Dialog displayed for a Unverified executable.


Windows Vista UAC Dialog shown for an unverified executable, after expanding the details option.

Rather than the more subdued appearance seen when the application is verified, the dialog displayed for an unverified application is bolder; the options are presented as TaskDialog buttons, and the entire dialog has a very “Task Dialog” feel; additionally, the colour scheme uses a bolder yellow. Interestingly, expanding the “Details” really only adds the file location to the upper information region- kind of an odd choice, particularly since the UAC dialog will usually be on its own secure desktop, and thus screen real estate is not as valuable as it might otherwise be.

Windows 7

On Vista, elevation tended to be required more frequently, and thus UAC dialogs were rather common for standard Windows operations; users needed to give consent for many standard tasks, such as adjusting Windows settings. Windows 7 adjusted some of the default behaviour, and by default it does not present consent dialogs for many built-in Windows operations. The design of the UAC dialog was also adjusted slightly:


Windows 7 UAC dialog on a verified/signed executable.


Windows 7 UAC dialog on a verified executable, expanded.

For verified executables, the dialog is rather unchanged. The biggest change we see is in the title copy: “Windows needs your permission to continue” changes to asking whether the user gives permission to a particular program. The dialog now includes a hyperlink in the lower-right that takes you right to the UAC settings, and publisher certificate information is now available when the details are expanded.


Windows 7 UAC Dialog for an unverified Program.


Windows 7 UAC dialog for an unverified program, expanded

The unverified dialog is quite a departure from the Vista version. It takes its design largely from the “signed” version of the same dialog, perhaps for consistency. It dumps the “TaskDialog”-style presentation of the options, instead using standard dialog buttons, as with the “signed” appearance.


Windows 8


UAC dialog on Windows 8 for an unverified executable.


UAC Dialog on Windows 8 for an unverified executable, expanded.


UAC Dialog on Windows 8 for a Verified executable.


UAC Dialog on Windows 8 for a Verified executable, Expanded.



For the sake of completeness, I’ve presented the same dialogs as seen on Windows 8. There have been no changes that I can see since Windows 7, excepting of course that the Windows 8 window decorator is different.

Windows 10


UAC Dialog from the Windows 10 November Update, running an Unverified executable.


UAC Dialog from the Windows 10 November Update, running an unverified executable, showing details.


UAC Dialog running a Verified executable on the Windows 10 November Update.


UAC Dialog from the Windows 10 November Update, running a Verified executable, showing Details.


Yet again included for completeness: the UAC dialogs shown by Windows 10 in the November Update. These are again identical to the Windows 8 and Windows 7 versions, providing the same information.


This all leads into the reason I made this post- the Anniversary Update to Windows 10 modified the appearance of the User Account Control dialogs to better fit with UWP standards:



Windows 10 Anniversary Update UAC dialog for an Unverified Executable.


Windows 10 Anniversary Update UAC dialog for an unverified Executable, after pressing “Show Details”.


Windows 10 Anniversary Update UAC Dialog for a Verified application.


Windows 10 Anniversary Update UAC Dialog for a Verified Application, after pressing Show Details.


As we can see, the Windows 10 Anniversary Update significantly revised the UAC dialog. It appears that the intent was to better integrate the “Modern” user interface aesthetic present in Windows 10. However, the result is a bit of a mess: the hyperlink to display certificate information appears for unverified executables, but in that case clicking it literally does nothing; and the information is presented as a jumble with no text alignment, whereas previously the fields were well defined and laid out. I’m of the mind that updating the dialog to UWP should have brought forward more elements from the original, particularly the information layout. The “Details” hyperlink in particular should be more clearly designated as an expander, since as it is it violates both Win32 and UWP platform UI guidelines regarding link label controls. I find it unfortunate that parsing the information presented in the dialog has been made more difficult than it was previously, and hope that future updates can iterate on this design to not only meet the usability of the previous version, but exceed it.





Posted By: BC_Programming
Last Edit: 24 Aug 2016 @ 10:35 PM


 03 Aug 2016 @ 4:37 PM 

I wrote about this previously, finding that the Group Policy added to the Insider build of Windows 10 did not have any observable effect.

This is merely a quick note: I re-tested on the recently released Anniversary Update, and with the group policy set, in addition to the new “longPathAware” manifest setting, functions like CreateDirectoryW do in fact allow access to long path names beyond MAX_PATH without the special prefix. Basically, the group policy is now active.
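For reference, the manifest setting looks like this (it uses the SMI/2016 windowsSettings namespace, and only takes effect with the group policy also enabled):

```xml
<!-- app.manifest: opt the process into long path support (Windows 10
     Anniversary Update and later, with "Enable Win32 long paths" set). -->
<application xmlns="urn:schemas-microsoft-com:asm.v3">
  <windowsSettings xmlns:ws2="http://schemas.microsoft.com/SMI/2016/WindowsSettings">
    <ws2:longPathAware>true</ws2:longPathAware>
  </windowsSettings>
</application>
```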

Of course, this has no effect with the standard .NET File API, as it does its own checks that restrict the path length.

Posted By: BC_Programming
Last Edit: 03 Aug 2016 @ 04:37 PM

Categories: .NET, C#
