Personally, I have not found many of these ideals to hold merit; or where they do, they aren’t really absolutes, and the “buzzword” doesn’t always apply- but adherents will insist on applying it everywhere. I say “adherents” because discussing things with people who follow these “best practices” often feels like talking to a religious person. Instead of reviewing, evaluating, designing, and writing software, people refer to doctrine bibles written by consultants- sometimes by people who have no real experience in the industry beyond figuring out how to construct buzzword ideals to write books about.
I’m sure I could write a book myself on them; however, here I would like to discuss one specific “code smell”: “primitive obsession”. The general idea is to not use primitive types where another type makes more sense. I’ve seen the idea applied overzealously by adherents, to the point of suggesting, for example, that instead of a string, a password should be a special type, or that a file path should have its own special type.
The idea is that, for example, a signature like this:
public int Login(String username, String pass)
Should instead be:
public LoginResult Login(UserName username, Password pass)
There is merit here. Using an enum instead of an int for the result- or even using a class to represent the result of an operation, where appropriate- absolutely makes sense. And certain instances where the typed information gains additional features or capabilities are completely sensible: a phone number, for example, can be validated, have its parts extracted, be formatted, and so forth. However, I’m skeptical of examples like UserName or Password here, and think they are an obsession over removing primitive obsession rather than something of useful merit- the result of somebody following doctrine without actually understanding it. Types that simply wrap a string merely provide the illusion of strong typing. A Username cannot be directly validated as a username; a password cannot be validated as a password.
There’s no special formatting or operations unique to a Username or a Password independent of the environment they are for. It just doesn’t make sense to me to construct a class for this purpose. Furthermore, I’m not convinced it even solves the issue it intends to. In the example above, it would prevent client code calling the Login() method from passing a Password as the Username, or a Username as the Password. But that isn’t as helpful as it appears, because there’s nothing preventing the creation of a UserName containing a password, or a Password containing a username, due to the aforementioned lack of any useful validation logic that could verify the data (as would be possible, for instance, for a phone number). The constructor for these special types, or any static class that constructs them, by necessity needs to be “stringly typed” (as the zealots who treat programming books as a bible might say). To me this removes most of the benefit: you are assuming a Username is a Username no less than you would with a String variable named Username, so it’s unclear what advantage these wrappers actually confer. In fact, usernames and passwords aren’t even among the examples discussed in books like “Refactoring: Improving the Design of Existing Code”- it uses examples like phone numbers, currency, coordinates, and ranges, none of which are simply loose wrappers around a primitive type, and all of which have their own unique operations that make sense to encapsulate.
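To make the distinction concrete, here’s a minimal sketch (my own illustration- these type names aren’t from any of the books): a PhoneNumber can actually validate and operate on its contents, while a UserName wrapper has no choice but to take whatever string it is given.

using System;
using System.Linq;

// A type like this earns its keep: it can validate its input and has real
// operations. (Assumes 10-digit NANP numbers, purely for simplicity.)
public sealed class PhoneNumber
{
    public string AreaCode { get; }
    public string Subscriber { get; }
    public PhoneNumber(string raw)
    {
        var digits = new string(raw.Where(char.IsDigit).ToArray());
        if (digits.Length != 10)
            throw new ArgumentException("Not a valid 10-digit phone number.");
        AreaCode = digits.Substring(0, 3);
        Subscriber = digits.Substring(3);
    }
    public override string ToString() => $"({AreaCode}) {Subscriber.Insert(3, "-")}";
}

// This one merely wraps a string; its constructor is itself "stringly typed"
// and cannot verify anything.
public sealed class UserName
{
    public string Value { get; }
    public UserName(string value) => Value = value; // new UserName(somePassword) compiles just fine.
}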
Like a lot of these sorts of “rules” that get reduced to short, catchy phrases, they end up being applied indiscriminately by people who don’t really understand the underlying spirit of the rule- and who instead interpret it as “if I do this, I’ll write good source code”, which is of course not true.
I had always rather considered them a used clothing store, but the one near me does have electronics and other such odds and ends. I started visiting there and was surprised at the kind of stuff I found. It was convenient as well, as it was in a nearby plaza with a grocery store; so if I had to go buy something from the grocery store I’d pop in and see if there was anything fun.
Over time I have accumulated a collection of keyboards, mice, computers, and laptops from finding them there. I’ve got several Razer BlackWidow Ultimates (2013, 2013, 2016, and two RGB models); a Corsair K70, a Corsair Strafe, a Logitech K840 and two G810s, and a smattering of others from vendors like “Havit”; Razer DeathAdder and Mamba mice; a few Logitech mice, including a G502 HERO; a Kingston HyperX headset, complete in box; I could go on. The best find was a 7-dollar Acer Nitro 5 gaming laptop, though with the caveat that it doesn’t power on. I’ve been trying to sort that out for a while by poking around with a multimeter, but it’s slow going. It was worth it for the 512GB NVMe SSD and 8GB RAM stick, though. I thought, as a form of content, I could start documenting my finds here, since I’m not writing as much as I’d like.
I found this in a box for a Razer Pro Type Ultra, along with the magnetic wrist rest, which was still in its protective bag. Not sure why it was in there. For 19.99 I figured it was a good buy, so I picked it up. I also didn’t have any SteelSeries stuff, as far as I know. The LCD screen is a bit strange IMO, though I could see it having its uses. Mine just shows a “SteelSeries” logo; I’d expect the default behaviour to reflect the Caps/Num/Scroll Lock state. I didn’t dig too far into this one in terms of what sort of customizations are possible. One thing I did notice is that it appears to have ‘interactivity’ without the use of added software, which is actually something I’d have liked for my current K70 RGB. The iCUE software is a hog, so I usually don’t run it; the keyboard on its own can save lighting settings, but it won’t have any of the effects, which are actually done through software. The SteelSeries, at least, seems able to have a keypress change the colour and then fade back to the original, as that is what it was set up with when I tested it.
This is a neat little MP3 player. I still use both my Sony Walkman MP3 player as well as an iPod G5 I’d picked up previously. The Zen Style has 8GB of built-in memory and can also take an SD card, which is an interesting perk. The left arrow key either doesn’t work or doesn’t do anything- I can’t actually tell. The lens/faceplate has a lot of scratches, unfortunately, but since it’s for playing music I can’t imagine that’s that big of a deal.
Interestingly, this “inspired” me to finally fix “Recoder” here on GitHub. I had eventually discovered that there were no tags on some of my MP3 files, and finally figured out that they were the files I had transcoded using my tool. It transcodes successfully- it just didn’t copy the tags over. I had kind of shelved it because I wasn’t in a good state of mind at the time to try to figure out how to read and write tag formats I wasn’t familiar with in a hurry. I revisited it and disappointed myself by not actually learning about the tag formats, and instead just slapping a NuGet package in and using it like a glue programmer from the 90’s using VBX controls and pretending they can program. I basically made it copy the tags from the source file to the destination file after conversion. Then I let my tool run for a few hours, transcoding my entire music library into a full 320kbps MP3 copy of that folder. I copied some of that over to the Zen, though I opted not to utilize the SD card slot and just put a selection on its 8GB internal storage.
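I don’t show the fix here, but the “slap a NuGet package in” step looks roughly like this- assuming TagLib#, a common choice for this sort of thing (a sketch of the approach, not necessarily Recoder’s exact code):

static void CopyTags(string sourcePath, string destPath)
{
    // Read the tag from the original file and write it to the transcoded copy.
    using (var source = TagLib.File.Create(sourcePath))
    using (var dest = TagLib.File.Create(destPath))
    {
        source.Tag.CopyTo(dest.Tag, true); // true = overwrite existing values
        dest.Save();
    }
}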
This is the fourth of these slim PCs that I now have; I have another 8200, an 8300, and a 6200 already. The 8200 is a 2nd-generation Core series system. The one I already have is running Windows 11, and it runs it just fine. This one only had 4GB of RAM and an HDD (the 8200 and 8300 I got before actually already had SSDs and 8GB of memory installed, so were quite well kitted out for around 10 bucks).
Today, I got two motherboards. One had four 2GB DDR2 sticks, the other had two 4GB DDR3 sticks. They were 19.99 each, which is about what that sort of memory would cost on its own. The one with the DDR2 is an Intel system; the other is AMD.
The Intel board tried to be a mystery, as the BIOS merely noted the CPU was an “EM64T” processor- which covers, like, every Intel processor supporting 64-bit. When I plugged in my Ventoy USB to boot from, however, the port was burning hot. “Yep, this has to be a Pentium D”. It was. For those unaware, the Pentium D is basically a dual-corified Pentium 4, and uses the same somewhat problematic NetBurst architecture. The Pentium 4 was already a very hot chip, and it turns out sticking two Pentium 4 cores next to each other in a single package didn’t help, so they are notorious for running at high temperatures. The 8GB seemed high for a Pentium D machine; then I noticed it was ECC and the motherboard had PCI-X slots. I believe it might be a server board, which is interesting. Not something I think is directly useful, of course, but interesting.
The AMD board was smaller. It has an FM1 socket; honestly, I don’t think I’d heard of that, so I was hoping it was maybe some weird way of saying AM3+, with the F standing for “FX” or something. That would give me a reason to open the still-sealed FX 8320 Black Edition CPU I got about a year ago. No such luck- it was the socket used for AMD’s early APUs (I gather). This one had an AMD A4 3400. It would make a good ‘electron sipper’ system, but I’ve already got an Athlon 5320 build for that purpose.
The RAM of those two systems was the real value there. The DDR3 was something I could slap right into the recent 8200 SFF system to bring it up to 12GB, and it’s always nice to have old RAM sticks around. I didn’t have any ECC DDR2 either, so that’s a bonus.
Most technology news is, frankly, kind of boring. Occasionally something really neat shows up that can attract people to your failing news website, but otherwise you have to sort of glam it up. A little embellishment here, a paragraph there about some unrelated thing an involved company did a few years ago that made people mad…
An excellent example of this in action is this sensational article.
The article, titled “Microsoft Broke a Chrome Feature to Promote It’s Edge Browser” with a subheading of “Windows borked a feature that let you change your default browser, and some users saw popups every time they opened Chrome. it’s the 1990’s again for Microsoft”, is wildly misleading.
Fundamentally: about a year ago, Chrome added an experimental feature which would force Chrome in as the default browser directly. A recent update to Windows has broken this.
Now first, let’s discuss how the browser option works. Within the associations registry keys, there is a “UserChoice” key, which has two values: a Hash, and the ProgID of the associated handler. The intent behind the feature is that the Windows Shell itself is the only thing that “knows” how to generate the hash, and that hash has to match up with the contents of the ProgID value. Additionally, unless running as administrator, modifying the keys is not allowed by default either. The intent behind this is that only the user gets to set what is in the UserChoice key.
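You can inspect these values yourself. For the http protocol, for example, the UserChoice key lives under Explorer’s UrlAssociations; a quick read-only sketch:

using System;
using Microsoft.Win32;

class ShowUserChoice
{
    static void Main()
    {
        // UserChoice for the http protocol; file extensions have an equivalent
        // key under Software\...\Explorer\FileExts\<.ext>\UserChoice.
        const string keyPath = @"Software\Microsoft\Windows\CurrentVersion\Explorer\Shell\Associations\UrlAssociations\http\UserChoice";
        using var key = Registry.CurrentUser.OpenSubKey(keyPath);
        Console.WriteLine($"ProgId: {key?.GetValue("ProgId")}");
        Console.WriteLine($"Hash:   {key?.GetValue("Hash")}");
    }
}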
Now, on to the Chrome feature in question: an experimental feature called “Win10UnattendedDefault”.
The responsible function in the Chrome source code is "MakeChromeDefaultDirectly" in shell_util.cc.
The way this works is that they basically reverse-engineered the hashing code in the Shell, extracting a salt value from the shell32 library (GetShellUserChoiceSalt in shell_util.cc) and duplicating the hash algorithm and everything. It works around the key not being writable by deleting it; as a code comment puts it: "// Deleting the key works around the deny set value ACL on UserChoice."
This is where the update comes in. See, being able to delete the key is literally a security exploit: it means user-level applications can wipe out the UserChoice associations altogether and write new ones. The update fixes this flaw by adding a deny-delete ACL, and either changing the hash algorithm or how it is salted.
This caused Chrome to behave strangely when using the feature. Windows falls back to showing the Settings page for associations when the Hash doesn’t match; since the key can no longer be deleted, Chrome can write the ProgID but not a valid Hash, so the hash no longer matches the new value, and trying to use the association brings up the Settings screen because the key is invalid.
This happens partly because there is very little error handling- the code just assumes that things like deleting the key succeeded. It’s also possible the hash algorithm was altered, though that aspect is unclear. Basically, though, we’ve got a long article that turns an update fixing an obvious security flaw that Chrome was exploiting into a strange diatribe about Microsoft pushing Edge- even though Microsoft Edge isn’t even involved.
Better yet, the article contains outright lies.
Gizmodo was able to replicate the problem. In fact, we were able to circumvent the issue just by changing the name of the Chrome app on a Windows desktop. It seems that Microsoft threw up the roadblock specifically for Chrome, the main competitor to its Edge browser.
This is completely fabricated. The name of the Chrome program, or its shortcut, is completely unrelated to the problem. This is just made-up nonsense because they think their readers are morons. The Chrome source code is right there: you can read what it does, and see that the changes make it literally corrupt the involved settings. Not all readers, of course, are going to be experienced enough to review source code, but one would expect a technology-oriented news website to have the technical review capability to verify what they are saying- not just assert whatever the author has decided must be the case, complete with made-up “testing” they pulled right out of their ass.
Mozilla’s Firefox has its own one-click default button, which worked just fine throughout the ordeal.
This is also complete nonsense. Firefox has NEVER had a “one-click default button”. The Firefox option to set the default is “one button”, but it uses the standard, proper method, which loads the Windows Settings page so the user can set the association themselves.
This is the sort of journalistic integrity I expect from these sorts of publications, sadly. It seems the aim is not to inform, but to make people mad to drive engagement. The article also, for some reason, has a bunch of unrelated stuff about questionable MS practices which aren’t even relevant here. Chrome was exploiting a bug. Microsoft fixed it. The feature using the exploit broke- but, ya’know, now arbitrary applications can’t delete your UserChoice settings, so that specific thing seems like a good thing to me.
Since I had literally never written Rust code before this program, I don’t think any sort of performance review would be particularly helpful. I will include that information regardless, but bear in mind that a seasoned Rust developer, with their wider knowledge of the language, could no doubt make it run much better.
First, what is Rust? And yes, this is the equivalent part to recipe articles that blabber about how their grandmother made some recipe when they were a kid and stuff, but you can always scroll down if you don’t want to listen to this copy that I’ve added to make this post a bit better than just “here’s some Rust code, deal with it”.
The Oxford English dictionary defines rust as “a reddish- or yellowish-brown flaky coating of iron oxide that is formed on iron or steel by oxidation, especially in the presence of moisture.”. That’s not relevant here but smartypants people always put definitions when they write.
Rust is a programming language designed for systems-level programming as well as high-level programming. In some ways, that reminds me of D. However, the language is also designed with much stricter rules regarding type checking and even ownership of variables; that aspect is designed to prevent bugs before they happen.
My implementation of the anagrams program reads the dictionary file from a hard-coded location and performs its processing. I struggled a bit with the idea of mutable and immutable values, as well as “borrowing”; I’m not entirely convinced this program is “correct” in the sense of using those concepts properly, but it has the desired output, so it “counts” as part of the anagrams series, I say.
use std::collections::HashMap;
use std::fs::File;
use std::io::{self, BufRead};
use std::path::Path;
use std::iter::Iterator;
use std::iter::FromIterator;
use std::ops::Not;

fn main() {
    let mut anagram: HashMap<String, Vec<String>> = HashMap::new();
    if let Ok(lines) = read_lines("D:\\textfiles\\dict.txt") {
        for line in lines {
            if let Ok(word) = line {
                // Sort the word's characters; anagrams share the same sorted key.
                let s_slice: &str = &word[..];
                let mut chars: Vec<char> = s_slice.chars().collect();
                chars.sort_by(|a, b| b.cmp(a));
                let s = String::from_iter(chars);
                // Create the bucket for this key if it doesn't exist yet.
                if anagram.contains_key(&s).not() {
                    anagram.insert(s.clone(), vec![]);
                }
                let gramlist = anagram.get_mut(&s);
                gramlist.expect("key was just inserted").push(word.clone());
            }
        }
        // Print out the results: any bucket with more than one word is a set of anagrams.
        for (_key, value) in anagram {
            if value.len() > 1 {
                println!("{}", value.join(","));
            }
        }
    }
}

// Returns an iterator over the lines of the named file.
fn read_lines<P>(filename: P) -> io::Result<io::Lines<io::BufReader<File>>>
where P: AsRef<Path>, {
    let file = File::open(filename)?;
    Ok(io::BufReader::new(file).lines())
}
Unfortunately I now have to admit that I wrote this a few months ago, which makes it harder to describe what it does. Rust has a lot of unique syntax elements; that provides a lot of expressiveness, but at the cost of being a bit overwhelming. The implementation here really is more or less the same as the other examples: there is a HashMap that indexes a Vec of strings using a string as a key. By sorting each word’s letters, we get the key under which the word should be added; in that way we find anagrams by reviewing all entries where the list has more than one item.
Rust’s mutability and “borrowing” as well as variable ownership mechanics presented issues while I was putting this together. It definitely takes some getting used to, but I can see the benefits that such a design can provide; it results in compile errors instead of mysterious hard-to-trace issues at run-time for certain kinds of safety problems. That’s pretty much what the language was built for, so that tracks.
I originally created it because I wanted to create a sort of “Tetris all-stars” concept, where all the various implementations I was familiar with- Game Boy, NES, DS, SNES, etc.- could effectively mingle together their visuals and sounds.
Last night, for some reason, I was dwelling on the idea of “Pentris”- that is, Tetris, but with 5 blocks per piece instead of 4. In BASeTris, the Tetris tetrominoes are separate class instances that define the block positions appropriately. And while I could do the same for Pentris, I got the idea that it should realistically be entirely feasible to simply have the game generate every possible unique combination, given the number of blocks. Stemming from that, it should therefore be possible to have not only Pentris but even, say, “Duodectris”, and have the game handle everything from there.
The algorithm I ended up constructing works in two sections. First, it creates every single possible combination, and then it filters out duplicates.
The construction of all the possibilities effectively works by starting with a Duomino at (0,0) and (1,0). From there, using (1,0) as the “head”, it calls itself recursively to add blocks going left, right, and forward from that point, with the end case, of course, being when the constructed Nomino has the needed number of blocks. The implementation for that was thus:
public static IEnumerable<List<NominoPoint>> GetPieces(int BlockCount, List<NominoPoint> CurrentBuild = null)
{
    if (CurrentBuild == null)
    {
        CurrentBuild = new List<NominoPoint>() { new NominoPoint(0, 0), new NominoPoint(1, 0) }; //create first two blocks
        var subreturn = GetPieces(BlockCount - 2, CurrentBuild); //-2 since we added two blocks.
        foreach (var yieldit in subreturn) { yield return yieldit; }
    }
    else
    {
        //determine forward direction. There should be at least two tuples in the list.
        var Last = CurrentBuild[CurrentBuild.Count - 1];
        var NextToLast = CurrentBuild[CurrentBuild.Count - 2];
        var Direction = new NominoPoint(Last.X - NextToLast.X, Last.Y - NextToLast.Y);
        //Create three copies of the current List.
        List<NominoPoint>[] Copies = new List<NominoPoint>[] { new List<NominoPoint>(), new List<NominoPoint>(), new List<NominoPoint>() };
        for (int i = 0; i < CurrentBuild.Count; i++)
        {
            for (int a = 0; a < 3; a++)
            {
                Copies[a].Add(CurrentBuild[i]);
            }
        }
        //copies established. index zero is left (-y,x), 1 is forward (x,y) and 2 is right (y,-x)
        List<NominoPoint> LeftwardList = Copies[0];
        List<NominoPoint> ForwardList = Copies[1];
        List<NominoPoint> RightwardList = Copies[2];
        //what is the coordinate if we move leftward (-Y,X)
        NominoPoint LeftMove = new NominoPoint(Last.X - Direction.Y, Last.Y + Direction.X);
        NominoPoint ForwardMove = new NominoPoint(Last.X + Direction.X, Last.Y + Direction.Y);
        NominoPoint RightwardMove = new NominoPoint(Last.X + Direction.Y, Last.Y - Direction.X);
        if (!LeftwardList.Contains(LeftMove))
        {
            LeftwardList.Add(LeftMove);
            if (BlockCount - 1 > 0)
            {
                var LeftResult = GetPieces(BlockCount - 1, LeftwardList);
                foreach (var iterate in LeftResult) { yield return iterate; }
            }
            else { yield return LeftwardList; }
        }
        if (!ForwardList.Contains(ForwardMove))
        {
            ForwardList.Add(ForwardMove);
            if (BlockCount - 1 > 0)
            {
                var ForwardResult = GetPieces(BlockCount - 1, ForwardList);
                foreach (var iterate in ForwardResult) { yield return iterate; }
            }
            else { yield return ForwardList; }
        }
        if (!RightwardList.Contains(RightwardMove))
        {
            RightwardList.Add(RightwardMove);
            if (BlockCount - 1 > 0)
            {
                var RightResult = GetPieces(BlockCount - 1, RightwardList);
                foreach (var iterate in RightResult) { yield return iterate; }
            }
            else { yield return RightwardList; }
        }
    }
}
(Note: NominoPoint is basically just “yet another point class” and doesn’t have much special capability.)
This, of course, still did not yield (pun intended…) the correct set of Nominoes; it would yield each possible rotation as a separate Nomino, for example, and in fact it was quite possible for a different “path” to produce an identical Nomino. For that reason, it is necessary to filter the pieces. So how, exactly, do we remove duplicates? Well, if we “rebase” the coordinates so that they align the same way regardless of their generated position, identical arrangements will match- provided we can come up with a safe, unique “hash” for each, of course. We can then also rotate each Nomino by 90, 180, and 270 degrees, and detect whether a rotated version of the candidate Nomino is already in the result set.
Which leads to the question of what to use for a “unique” value. At first, I tried to create a hash for the set of points by applying a bijective pairing function to all the points, sorted by x and y coordinate. This resulted in hash collisions, however, so some Nominoes that should have been present were not.
For debugging, I added some helper functions to create a string representation using hash marks. It occurred to me that that string representation would itself be perfectly sufficient as a unique hash: the same arrangement of blocks created a different way would still give the same text, and a string can be safely used as a HashSet key.
public static String StringRepresentation(List<NominoPoint> Points)
{
    int MinX = int.MaxValue, MaxX = int.MinValue;
    int MinY = int.MaxValue, MaxY = int.MinValue;
    foreach (var iteratepoint in Points)
    {
        if (iteratepoint.X < MinX) MinX = iteratepoint.X;
        if (iteratepoint.X > MaxX) MaxX = iteratepoint.X;
        if (iteratepoint.Y < MinY) MinY = iteratepoint.Y;
        if (iteratepoint.Y > MaxY) MaxY = iteratepoint.Y;
    }
    StringBuilder sb = new StringBuilder();
    for (int ycoord = MaxY; ycoord >= MinY; ycoord--)
    {
        for (int xcoord = MinX; xcoord <= MaxX; xcoord++)
        {
            if (Points.Any((p) => (p.X == xcoord && p.Y == ycoord)))
                sb.Append("#");
            else
                sb.Append(" ");
        }
        sb.AppendLine("");
    }
    return sb.ToString();
}
With this in hand, the FilterPieces function could now, uh, filter pieces. The general idea is simple: we maintain a HashSet of all the previously returned pieces, and as we go through each piece, we check whether it matches one in the HashSet. If it doesn’t, we add the string representation of the piece and its three rotations to the HashSet and yield it via the enumeration. Otherwise, it is filtered out as already having been returned.
public static IEnumerable<List<NominoPoint>> FilterPieces(IEnumerable<List<NominoPoint>> Input)
{
    HashSet<String> PreviouslyProcessed = new HashSet<string>();
    foreach (var iteratefilter in Input)
    {
        String sHash = StringRepresentation(iteratefilter);
        if (PreviouslyProcessed.Contains(sHash)) continue;
        //Record this piece and its three rotations, so duplicates get skipped.
        var CW1 = RotateCW(iteratefilter);
        var CW2 = RotateCW(CW1);
        var CW3 = RotateCW(CW2);
        PreviouslyProcessed.Add(sHash);
        PreviouslyProcessed.Add(StringRepresentation(CW1));
        PreviouslyProcessed.Add(StringRepresentation(CW2));
        PreviouslyProcessed.Add(StringRepresentation(CW3));
        yield return iteratefilter;
    }
}
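FilterPieces depends on a RotateCW helper I haven’t shown here; a minimal sketch of what it presumably looks like, using the same (y, -x) clockwise convention as the comments in GetPieces:

//Sketch: rotate every point 90 degrees clockwise. Since StringRepresentation
//rebases against the min/max extents, no translation back to the origin is needed.
public static List<NominoPoint> RotateCW(List<NominoPoint> Points)
{
    List<NominoPoint> result = new List<NominoPoint>();
    foreach (var p in Points)
        result.Add(new NominoPoint(p.Y, -p.X)); //(x,y) -> (y,-x)
    return result;
}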
With this in hand, I could see how many combinations specific block counts provided. Pentominoes- Pentris, with 5 blocks- yields 12 possibilities. With 6 blocks, there were 32; 7 had 81, 8 had 219, and so on. With 13 blocks, that’s 29,663 possible… Tridecominoes (?).
Now, for actual gameplay, the idea is that with more blocks in each, you would, of course, have a larger playfield. Thankfully, the TetrisField class that implements the GameField of BASeTris has supported this more or less since the beginning, and simply defaults to the standard size.
One aspect that occurred to me is that while some parts of the game do require knowing all possible Nominoes, it really isn’t necessary for all parts of gameplay. The “Choosers” need to know all Nominoes, since they are responsible for how pieces are randomized; but one could make contingencies for an imagined “N-block” “Xtris” implementation.
The first thing to address is to allow for non-deterministic generation of the Nominoes. That is actually surprisingly easy- with the recursive algorithm, it is effectively a matter of randomizing the order in which the directions are chosen. It was at this time I refactored the routine and made it shorter, too:
public static IEnumerable<List<NominoPoint>> GetPieces(int BlockCount, List<NominoPoint> CurrentBuild = null, NominoPieceGenerationFlags GenerationFlags = NominoPieceGenerationFlags.Flag_None)
{
    if (CurrentBuild == null)
    {
        CurrentBuild = new List<NominoPoint>() { new NominoPoint(0, 0), new NominoPoint(1, 0) }; //create first two blocks
        var subreturn = GetPieces(BlockCount - 2, CurrentBuild, GenerationFlags); //-2 since we added two blocks.
        foreach (var yieldit in subreturn) { yield return yieldit; }
    }
    else
    {
        //determine forward direction. There should be at least two tuples in the list.
        var Last = CurrentBuild[CurrentBuild.Count - 1];
        var NextToLast = CurrentBuild[CurrentBuild.Count - 2];
        var Direction = new NominoPoint(Last.X - NextToLast.X, Last.Y - NextToLast.Y);
        //Create three copies of the current List.
        List<NominoPoint>[] DirectionLists = new List<NominoPoint>[] { new List<NominoPoint>(), new List<NominoPoint>(), new List<NominoPoint>() };
        for (int i = 0; i < CurrentBuild.Count; i++)
        {
            for (int a = 0; a < 3; a++)
            {
                DirectionLists[a].Add(CurrentBuild[i]);
            }
        }
        //copies established. index zero is left (-y,x), 1 is forward (x,y) and 2 is right (y,-x)
        List<NominoPoint> LeftwardList = DirectionLists[0];
        List<NominoPoint> ForwardList = DirectionLists[1];
        List<NominoPoint> RightwardList = DirectionLists[2];
        //what is the coordinate if we move leftward (-Y,X)
        NominoPoint LeftMove = new NominoPoint(Last.X - Direction.Y, Last.Y + Direction.X);
        NominoPoint ForwardMove = new NominoPoint(Last.X + Direction.X, Last.Y + Direction.Y);
        NominoPoint RightwardMove = new NominoPoint(Last.X + Direction.Y, Last.Y - Direction.X);
        List<NominoPoint> MoveList = new List<NominoPoint>() { LeftMove, ForwardMove, RightwardMove };
        int[] ArrayOrder = new int[] { 0, 1, 2 };
        if (GenerationFlags.HasFlag(NominoPieceGenerationFlags.Flag_Randomize))
            ArrayOrder = TetrisGame.Shuffle(ArrayOrder).ToArray();
        foreach (int index in ArrayOrder)
        {
            if (!DirectionLists[index].Contains(MoveList[index]))
            {
                DirectionLists[index].Add(MoveList[index]);
                if (BlockCount - 1 > 0)
                {
                    var Currresult = GetPieces(BlockCount - 1, DirectionLists[index], GenerationFlags);
                    foreach (var iterate in Currresult) yield return iterate;
                }
                else
                {
                    yield return DirectionLists[index];
                }
            }
        }
    }
}
Instead of three almost identical blocks, one per direction, the directions are now handled in a loop, and the order of that loop is determined by the ArrayOrder array; furthermore, if the Randomize flag is set, the order of the array is shuffled. This relatively straightforward change means that while it will still go through every possibility, it will do so in a non-deterministic order when the flag is passed.
By using an iterator method, this also allows for a rather interesting usage which simply wants one random piece:
return GetPieces(BlockCount, null, NominoPieceGenerationFlags.Flag_Randomize).FirstOrDefault();
For implementation, there are still issues. I quickly hacked the game such that the routine generating the next Nomino simply called the above. Results were a bit mixed.
Unfortunately, there are some architectural issues with this specific change- though there are good and bad indicators here. In particular, many things internally use the specific “Type” of the Nomino as a key into some dictionary or other cache; for example, the images used for the Next queue are indexed by the theme and the Nomino type. Since all the “Pentominoes” here are just instances of Nomino, they share the same type, so all the Pentominoes share the image of the first one generated. The Nomino counts are of course not useful here either, since I didn’t change them- although they work properly in the sense that none of the standard Tetrominoes show as used.
Other than the Type being used as a key in some of the underlying handling, it actually handles the Pentominoes reasonably well; while there is just the one image, it did draw the Pentomino correctly, and it handles them correctly within the game field, with some exceptions. A standard Tetris field has 2 “hidden rows”, which are basically used for generating the Tetrominoes off screen. All Tetrominoes fit there. Pentominoes, however, do not; so some generated Pentominoes are placed “off-screen” but too high up, with some blocks outside the actual game field, hidden rows included. The game doesn’t handle this well. If the top row is outside, it won’t let you move side to side until the piece falls one block. It’s worse if a generated Nomino is two blocks above: as soon as it falls one block, the game detects that it cannot move down (because the block still outside the valid area is considered invalid) and decides to “set” the piece in place- and since it’s at the very top, that sets off the Game Over detection. Of course, the number of hidden rows is not hard-coded in my implementation, and Nominoes could arguably be spawned more intelligently, only as high as necessary, instead of being forced above the visible area.
In any case, my next task appears to be refactoring BASeTris such that the parts using Nomino class types as keys for HashSets, Dictionaries, and such instead use the string representation, which should address the Next-queue issue. From there, I need to devise a proper way for these extended “NTris” implementations to show, for example, the Nomino counts. My first thought is that the list of Nominoes would simply expand with each new unique one generated, and when there are too many to fit vertically, it would try to create some sort of grid arrangement, with the Nomino image underneath and the count on top, or something to that effect.
It’s a bit funny that I felt like doing this, and meanwhile the “Tetris 2” implementation is still largely unfinished and untested!
I feel like desktop user interface standards have been pretty well set in stone for decades, and there seems to be this weird attempt to mix Mobile App UX design elements into them- and they simply don’t mesh.
Basically, they’ve removed the Tabstrip, with its clear, obvious labels, and replaced it with a “sidebar” thing. Each “tab” has a monotone, somewhat confusing icon, with no label. Labels can be displayed by clicking the “hamburger button” at the top, or seen via a tooltip. Clicking a “tab” shows no visual connection between the tab and the actual content to the right, the way a Tabstrip does (where the selected tab is “connected” to the tab’s client area).
I feel there is absolutely nothing good about this design; it doesn’t make any sense to me. The “tab” buttons are now more difficult to identify, requiring one to either memorize where they are and what their incredibly basic, non-descript icons look like, or to waste time hovering over the tabs to see what they are, or clicking around to figure out where things live. I’ve had to use it a few times, and every single time I think I clicked on every single tab just to find the one I was looking for. It’s frankly silly.
“You’ll get used to it” people will say. Well yeah. If you live in an outhouse, eventually you can no longer smell it because you get used to the smell. Doesn’t mean you aren’t surrounded by shit, though.
The odd thing is that it doesn’t really save space; it just wastes users’ time. The tabs at the top of the older Task Manager were a single, always-visible strip with clear labels.
But there’s no reason for them to be hidden now. It’s not even saving any space, because the new design has header text and a weird new sort of Explorer-Bar thing that provides shortcuts to some tasks which I doubt are used very often; this is taller than the TabStrip in previous implementations ever was!
It seems like this rework was not done with a goal of actually making Task Manager easier to use or more user friendly, but with the goal of simply making Task Manager “look modern”; where, of course, Modern is defined largely through design trends that have migrated from mobile operating systems.
The sidebar hiding labels makes sense on, say, Android. There isn’t space to show the labels: the *resolution* of the displays tends to be higher, but the physical size means that text has to be larger, so those items cannot be visible all the time or they will take up too much space.
But the metrics and the user experience on a desktop or laptop monitor are completely different. It doesn’t make- and has never made- any sense, in my opinion, for design stylings that arose in response to the limitations of mobile device screens to appear in desktop operating systems. Even on systems with a touch screen, the fact that the displays are physically much larger generally makes the “trade-off” that many of those user interface elements embody either entirely moot or sometimes deleterious, with “new” Apps using more physical screen space while actually presenting less information than a “traditional” desktop application design.
Traditional desktop design builds on several decades of very careful UX research and very precise decisions about making a user interface easy to use. There were even debates regarding Apple’s Mac OS and the later introduction of hierarchical pull-down menus, over whether they were good or bad user interface design.
I’ve yet to see or hear of those sorts of discussions or research surrounding these new “Modern” app designs on desktop software. I have a feeling that as more and more people grew accustomed to the design stylings of mobile apps, that familiarity is what is driving this unusual and arguably insane push for desktop applications to follow the same design trends, despite them not being applicable.
It actually kind of reminds me of when I first got my smartphone. I had a hell of a time coming to grips with the UX and was constantly annoyed and just wanted it to work “how I expect”, which in many cases involved some aspect of desktop UI design that simply would not work well on a phone. I think desktop applications- or rather “apps”, and the design stylings that come with them- are the result of the same desires in the opposite direction: people used to mobile device user interfaces who are out of their element in traditional desktop applications and therefore want desktop software to mimic how mobile devices work.
Back to C# being “bloated”. The typical argument seems to be that because there are so many ways of doing the same thing, and because there is one “proper” way, the language is bloated with “old ways”.
Some examples often cited as evidence of this bloat are things like pattern matching, switch expressions, and default interface implementation.
C#’s pattern matching in particular gets criticized, because it is now the “proper” way to check for null:
if(value is null){}
Often when this is raised, it is compared to the old equality comparison against null; the claim is, oh, those old ways of null checking are “old code” and not the proper way anymore. But this argument actually illustrates a flaw in its own reasoning- those “old ways” were never the “proper” way of checking for null even going back to C# 1.0, nor are they equivalent to the pattern matching approach. A second “pattern matching” implementation shown as the “proper way” to check that an item is not null, in a similar fashion, uses the empty property pattern like this:
if(value is {}){}
This is sometimes held up as an example of “OMG language so bloated”. But fundamentally this is just reusing an expressive language feature and turning its versatility into some kind of disadvantage, by abusing it to perform a common task in this way. Consider that you can do something similar by abusing some other new operators added in previous language versions:
if (String.IsNullOrEmpty(value?.ToString() ?? "")) {}
The ability to misuse expressive features in this manner to create “lots of ways of doing the same thing” is held up as a detriment to the language, but really it just means the language is getting more expressive.
Basically, “is” in the original case avoids issues from overloaded equality or inequality operators, which is why it’s generally pushed as preferred. Thing is, we’ve had Object.ReferenceEquals- which sidesteps equality/inequality operators in exactly the same way, and which was indeed pushed as a “proper” way to check for null- since C# 1.0. The fact that many of the writeups about “C# bloat” aren’t aware of this suggests that the “proper” way doesn’t actually mean much anyway; “newbies” aren’t going to learn “bad ways” by using equality/inequality. This is all notwithstanding the rather cogent argument that if you have an operator overload that messes up null checking, the overload is the bug, not the “wrong” null check that trips over it; though perhaps that’s a matter of opinion.
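To see why, consider an (intentionally buggy) equality overload; == goes through the overload, while both is null and Object.ReferenceEquals bypass it:

class Sneaky
{
    // A deliberately buggy overload that claims nothing is ever null.
    public static bool operator ==(Sneaky a, Sneaky b) => false;
    public static bool operator !=(Sneaky a, Sneaky b) => true;
    public override bool Equals(object o) => false;
    public override int GetHashCode() => 0;
}

class Demo
{
    static void Main()
    {
        Sneaky value = null;
        System.Console.WriteLine(value == null);                       // False - the overload runs
        System.Console.WriteLine(value is null);                       // True - bypasses the overload
        System.Console.WriteLine(object.ReferenceEquals(value, null)); // True - same guarantee, since C# 1.0
    }
}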
Switch expressions lead to frustration that there are “too many” ways of writing branching code, or something of that sort. I mean, we’ve got if and switch and now switch expressions? Whoa, too many. But hold the phone, son- what about loops? You’ve got while, do…while, for, foreach… all of which are just loops. do…while is the “proper” way to ensure that a loop runs through at least once, but most people just hack it and still use a while loop. Again, using different constructs for the same task doesn’t mean you aren’t doing “proper” programming. You don’t *have* to use do…while if you want to ensure a loop runs through at least once; there’s nothing wrong with other approaches that ensure that.
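For what it’s worth, the “redundant” switch expression is just a terser, value-producing form of the same branching:

// Classic switch statement:
static string Describe(int n)
{
    switch (n)
    {
        case 0: return "zero";
        case 1: return "one";
        default: return "many";
    }
}

// Switch expression (C# 8+): the same branches, as a single expression.
static string DescribeExpr(int n) => n switch
{
    0 => "zero",
    1 => "one",
    _ => "many"
};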
Default interface implementation is often considered redundant- the claim being that it doesn’t really serve a different purpose from members in an abstract base class. Like do…while versus while, they are similar constructs that operate slightly differently, which can be a benefit when designing a system architecture. Default interface implementations are stateless and have no inheritance association, and they don’t operate polymorphically through the implementing class: if a class inherits an interface with a default implementation and does not itself implement that member, then- similar to explicit interface implementation- variables of the class type cannot call the member; it can only be called through the interface type itself. Fundamentally, the feature effectively provides the benefit of “traits” as seen in many other languages. Use it, or don’t. It doesn’t matter. But at least we should try to understand it before proclaiming it to be “bloat”.
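A quick sketch of that behaviour (C# 8 and .NET Core 3.0 or later)- the default implementation is only reachable through the interface type:

interface ILogger
{
    // Default implementation; stateless, lives on the interface itself.
    void Log(string message) => System.Console.WriteLine($"[log] {message}");
}

class ConsoleLogger : ILogger { } // inherits the default; does not re-implement it

class DimDemo
{
    static void Main()
    {
        var logger = new ConsoleLogger();
        // logger.Log("hi");       // compile error: Log is not on the class type
        ILogger viaInterface = logger;
        viaInterface.Log("hi");    // fine: called through the interface
    }
}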
My gut reaction to many new features is often a somewhat similar “bah humbug”, and from that perspective I can almost see where the argument for “C# bloat” comes from. But I think it is actually emotionally driven, not logically so.
I’d argue that in some sense what is actually happening is that we’ve got some more grizzled C# devs who have been working with the language for a while and, for the first time in a long time, new features appear and they are lost. How can this be? They are “experts”! They’ve been doing this for years! Instead of accepting that they must either do the constant work of keeping up with the programming language they use, or at least be comfortable knowing they are not experts in the language in its current iteration, they decide, in a manner similar to the Principal Skinner meme: “Am I out of touch with C#? No, it is the language that is wrong.” Don’t know how to use a newer feature? Oh, that’s not because you are lazy or haven’t put the effort in, no! That’s because the feature is *bloat*, and therefore you don’t have to learn it.
I actually remember following a similar process with the introduction of LINQ… this is language bloat, nobody asked for this, it does what we can already do, I don’t need to learn this, etc. But after digging into it and understanding the feature, it of course became second nature, and I feel far more expressive than I did before it existed.
When I see people complaining about new language features, calling them “bloat” and saying that a new language needs to be designed to “remove the bloat” because there are “too many ways to do the same thing”, I see a person who is basically saying, “Why do we need a hammer? I can smash nails down just fine with the butt of the screwdriver we got in the previous version; we don’t need two tools that do the same thing.” Or possibly even demonstrating their issue with hammers- “Look! It doesn’t even work any better!”- because they are trying to smash the nail in with the handle of the hammer, the same way they were using the screwdriver.
I think this is a terrible feature. Making it the default? Even worse. Frankly, it would have had no place in a language in 1998, let alone 2022. It’s one of the worst and most ill-thought-out features ever added to C#, in my opinion.
The feature is called “top-level statements”. But what is a top-level statement, anyway? For the most part, supporters praise the feature as being better for beginners, often referencing how it makes C# more like BASIC, a language designed for beginners. However, I feel this incorrectly assumes BASIC has top-level statements as part of its design as a beginner’s language; the more likely reason is simply that that’s pretty much how all programming languages worked at the time- particularly the biggest professional languages, COBOL and FORTRAN.
And why did COBOL and FORTRAN work that way? Because they were designed for punch cards.
So “Top level statements” is really “Punch-card format”. I’m going to start calling it that conversationally because it will really annoy the people who think top-level statements in C# are a great new feature.
So top-level statements are fundamentally an ancient “language feature”, more an artifact of the original media used to program computers than any inherent design based on traits beneficial to the programmer. So the question becomes: why was such an anachronistic feature added to C# in 2020 with the introduction of C# 9? It’s difficult, of course, to know the specifics, but I can guess.
The feature is described, in official discussions of its design, as being intended partly for “beginning” programming and to allow simple programs to be created more easily. This suggests the designers may be looking back on the “punch-card era” with some rose-tinted goggles; I’ve seen comparisons drawn by some veterans about how easy it was to start programming BASIC at age 7 on, say, a Commodore 64, and that therefore, to work better for beginners, C# should emulate that style. As I described previously, this is something of a misapprehension about that aspect of BASIC’s design: BASIC doesn’t work that way because it is better for beginners, but simply because that’s basically how all programming languages worked at the time. Emulating that behaviour now is not better for beginners, precisely because programming languages seldom work that way now.
Newer developers- those with no experience of the aforementioned “punch-card style” languages- see this feature as brand new. It’s absent from most of the mainstream compiled languages they grew up with, making the feature seem completely new and even unique. The positioning as being better for beginners seems accurate from that perspective because it “removes the annoying boilerplate”, allowing programmers to learn by writing code instead of being told to “ignore this for now”.
I’m not convinced that makes it better for beginner programmers, though. The C# language feature is “syntax sugar” in the sense that it basically compiles to an implied Main routine anyway, since one is in fact required. This differs from those older language designs, where “top-level statements” were a first-class, required part of the language. In fact, for a number of years BASIC dialects mimicked entry points by declaring a Main() sub or function and calling it at the top level, to adhere to the newer structured programming style, which had eschewed the relics of punch-card programming in favour of the entry-point concept.
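To show what I mean by syntax sugar, these two programs compile to essentially the same thing; the compiler synthesizes the entry point for the first:

// Program 1: C# 9+ "top-level statements" (punch-card format, if you will):
System.Console.WriteLine("Hello, world!");

// Program 2: what the compiler effectively generates- the traditional form:
class Program
{
    static void Main(string[] args)
    {
        System.Console.WriteLine("Hello, world!");
    }
}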
Of course, I’m not a beginner anymore, so it’s difficult for me to judge whether this does or does not actually make the language easier to learn. Thing is, that would seem to apply to all the people involved in its conception and implementation, too. There is a good chance this is one of those cases where experienced developers make somewhat uncharitable assumptions about beginner programmers, and then implement features to allow those imagined beginners to learn more easily. It’s the skill equivalent of the Arrested Development joke about “How much could a banana cost, $10?”. We are rich in skill and experience enough that we no longer have any idea what actually helps a developer learn. This feature feels like some older developers thought back to how they learned programming, recalled things like C64 BASIC, and decided that “top-level statements” were the language feature that allowed them to learn, so C# should have them. Younger devs typically love to follow and adapt to “new stuff” without thinking too much about whether it is actually good, which is probably why some of the counter-criticism I have received for disliking this feature takes the classic form “You just don’t like it because it’s new and different!”- even though the reason I dislike it is that it is neither.
As I’ve previously written, my first Unicomp keyboard was a beige Ultra Classic. That failed, and for some reason I gave Unicomp a second go with a black model of the same keyboard. Eventually, as written in my previous post on the subject, that started to exhibit the same problems and symptoms that preceded the failure of my original Ultra Classic, and I nipped the problem in the bud by buying a Corsair K70.
Only a few hours ago, I stumbled upon the Black Ultra Classic and, on a lark, decided to plug it in and remind myself how broken it was. To my surprise, it worked perfectly fine! I have a secondary computer setup using an Acer Z22 AIO, which also plays host to the display of several other computers, with the USB peripherals being swapped as necessary by moving the USB Hub. It had a “RedDragon” mechanical keyboard attached, and so I did a drop-in replacement on the USB Hub.
Imagine my surprise when the keyboard was now completely non-functional. After some testing I found it worked when plugged in directly, but not through the hub. Eventually I realized the power supply for the hub was disconnected, and plugging it back in did indeed restore the keyboard’s functionality. Curious, I used a USB tester to check the power draw. The keyboard draws extra power during USB identification, which the unpowered hub evidently couldn’t provide, since all of its ports were sharing the power budget of a single USB port on the host machine. Oddly, the tester also read as much as 5.25V instead of a flat 5V. I threw out the old keyboard when I moved, mostly because I had gutted it and sort of ruined it trying to fix the issues- but now I wonder if it was simply a case of it trying to draw too much power. I don’t think so, but it’s certainly possible.
As an aside, I am in fact still using the same Corsair K70 keyboard I described in the linked post, which at the time I had purchased recently but I’ve now had about 5 years. It’s held up quite well. Now, I used the same colour scheme from when I wrote that until a few days ago, when I decided to change it up. Because I had used the same colours for so long (and never turn the backlight off) the LEDs have degraded such that now when I set the colour to white, I get a colour scheme somewhat “complementary” to the colours I had before; for example, keys that were previously blue, will now have a yellow tinge even when set to white, because the Blue LED, having been worked so hard for the last 5 years, just isn’t as bright as the relatively unused Red and Green. It’s an interesting aging effect that I’ve not seen anybody discuss before, but it’s certainly something worth considering in terms of using and maintaining a keyboard; In my case, since my computer is on 24/7 I probably should have been turning the backlight off with the keyboard button when I was going to bed or otherwise not using the keyboard for some time.
It has not been a perfect solution. For one, being my two largest drives, the data they contain isn’t really backed up- some of it that is important is burned to Blu-ray, and most of the rest I could get again, but it would be incredibly annoying. Secondly, none of my Linux machines are able to access it- I suspect because I have disabled parts of the SMB protocol on the system. Third, transfers are often very slow, to the point that I can sneakernet large files faster.
I also like the idea of having a system serving as dedicated mass/archival storage that I can just upgrade over time by adding hard drives to it.
This has been a long-term idea, but I’ve never been able to justify the cost of building a new NAS. All my existing machines had their own roles, and those that were most fitting I don’t feel comfortable running 24/7 (A QX6600 Quad Core for example).
That was, until I found a cheaply priced i3 540 machine at the thrift store, with 4GB of RAM. It was a no-POST system, but that turned out to be an extra standoff installed behind the motherboard. It was a perfect candidate for a NAS: reasonably low power usage, and while the case is pretty cheap (it’s the exact same Rosewill case I used for a cheap Pentium 4 build a few years ago), it has room for 5 hard disks, which is a good start (if I need more I could always transplant everything to a new case). The idea sat on a “mental shelf” for a while, until the other week when I finally ordered the only things I needed to make it a reality: a 120GB SSD for the OS, and two 6TB drives. They are SMR drives, both because those are cheaper and because read speeds are probably more important for most of my uses- the data won’t be changing much, and I won’t be rewriting files frequently. I’m thinking of it more as an “archive”. I’m also fairly tolerant of stuff being slow, seemingly more than a lot of people are. If it takes 5 hours to copy, say, 2TB of data, that’s fine with me.
Once the SSD arrived, I couldn’t wait, though. I have two spare 4TB Drives in a cupboard as emergency replacements if I detect any sign of trouble from the two in my machine, and I unsealed the WD Red drive and decided to run up the NAS.
For the OS, I’ve opted for OpenMediaVault. I was able to get plugins installed for mergerfs, which allows multiple drives’ filesystems to effectively be merged and seen as one, and SnapRAID, which I can use to add parity drives. With the two 6TB drives added I’ll have a total of 16TB of storage, which should be plenty for my needs for the foreseeable future. I’ll add a third 6TB drive as parity for my own peace of mind (and set it up to properly recalculate parity every few days, I think). Getting a 16TB NAS for <$500 total is a pretty good deal, particularly since that’s half the price of many dedicated NAS devices!
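As a sketch of where the SnapRAID side is headed once the parity drive arrives (the paths here are illustrative, not my actual mount points), the configuration is just a few lines of snapraid.conf:

# Illustrative snapraid.conf - device paths are made up for the example
parity /srv/dev-disk-parity/snapraid.parity
content /srv/dev-disk-data1/snapraid.content
content /srv/dev-disk-data2/snapraid.content
data d1 /srv/dev-disk-data1
data d2 /srv/dev-disk-data2
exclude *.tmp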
So far with the 4TB Drive and acting as an SMB and NFS server, it’s exceeded my expectations. Copying files over the network doesn’t feel any slower than copying files from one internal drive to another, and the i3 540 may be rather old, but it looks like it will be perfectly capable of keeping up with the demands of the NAS.
One thing a lot of people use a NAS for is “cloud storage” in that they make it accessible from outside their LAN, but I’m not interested in that application- it will live solely on my home network.