Most application frameworks and languages provide access to the command-line parameters passed to your application, generally as either a single string or an array of strings. What you do not get automatically is the functionality to parse out switches.
Command-line parameters used to be the only way to communicate with a program; fundamentally, the command line was your UI into the program. Different platforms took different approaches. On Unix-like systems, the shell typically takes the "invasive" route: it expands wildcards itself and passes the resulting command line to the application. This means you don't have to do any wildcard expansion (as it is known) yourself, but you do have to account for the fact that your command line could include a lot of files. It's a trade-off, really. Either way, I figured for the purposes of this library we could stick to the platform behaviour: if the program is run with a wildcard, you'll see the wildcard on Windows, but it will have been expanded if you run the same program on Linux. It might be worth adding an option to "auto-expand" wildcards, just for consistency's sake, but that seems like a post for another day.
Either way, most applications also accept flags and switches. This is more a de facto standard that has cropped up; there is no hard and fast rulebook about what flags and switches are or how you are supposed to pass arguments, which can cause no end of confusion when it comes to reading application documentation. .NET just gives you the string and leaves it up to you to decide how to interpret it. Some languages, such as Python, provide library functionality to parse the command line appropriately. C# doesn't come with such a class, so let's make one!
First we need to determine what exactly can exist in a command line. My method allows for two things: switches and arguments. A switch can include an argument, separated from the switch with a colon. For example:
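The original example seems to have been lost; a hypothetical invocation matching the description that follows (three switches, the first two carrying arguments, quotes allowed) might look like:

```
myprogram /switch:"some value" -sw:other --doall "a loose argument" readme.txt
```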
In this case, we have three switches: switch, sw, and doall. The first two include an argument. My "syntax" allows for quotes in the arguments of switches as well as in the "loose" arguments. We will evidently need a class to represent and parse arguments, and another one for switches. The parsing can be done sequentially. Although it's not a recommended best practice, I chose to use by-reference parameters in the class constructors. In order to keep things generic and accessible, both switches and arguments will derive from a CommandLineElement abstract class, which will force each derived class to implement ToString(). The ArgumentItem class will be used for parsing both "loose" arguments and arguments found after a switch.
Arguments are simple: if the first character at the current position is a quote, we look for the next quote that isn't doubled up; otherwise, we look for either the next whitespace or the end of the string. Each argument only needs to store the actual argument value.
The constructor is where the important stuff happens. The by-reference parameter is used to define the starting position, and we update it before the constructor returns so that it points at the character after the argument. The class also defines some static operators for implicit conversions to and from a string.
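The original listing isn't shown here, but a sketch matching the description might look like this. The class names come from the prose above; everything else (member names, the exact quote handling) is an assumption, and the from-string conversion is omitted for brevity:

```csharp
using System;
using System.Text;

public abstract class CommandLineElement
{
    public abstract override string ToString();
}

public class ArgumentItem : CommandLineElement
{
    public string Value { get; private set; }

    // position is passed by reference: on entry it is the index to start
    // parsing at; on return it points at the character after the argument.
    public ArgumentItem(string commandLine, ref int position)
    {
        var sb = new StringBuilder();
        if (commandLine[position] == '"')
        {
            position++; // skip the opening quote
            while (position < commandLine.Length)
            {
                if (commandLine[position] == '"')
                {
                    // a doubled quote ("") is an escaped quote character
                    if (position + 1 < commandLine.Length && commandLine[position + 1] == '"')
                    {
                        sb.Append('"');
                        position += 2;
                        continue;
                    }
                    position++; // skip the closing quote
                    break;
                }
                sb.Append(commandLine[position++]);
            }
        }
        else
        {
            // unquoted: read to the next whitespace or the end of the string
            while (position < commandLine.Length && !char.IsWhiteSpace(commandLine[position]))
                sb.Append(commandLine[position++]);
        }
        Value = sb.ToString();
    }

    public override string ToString() => Value;

    public static implicit operator string(ArgumentItem a) => a.Value;
}
```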
Now that we have the Argument class, we can define the Switch class. The actual syntax of switches often depends on the application, but it also seems to depend on the platform. For example, Linux tools favour a single hyphen for single-letter flags and a double hyphen for multi-character flags (switches are also called flags); the forward slash is not generally used as a switch indicator there. Windows platforms prefer the forward slash but generally allow single hyphens as well. We aim to support all three syntaxes, so the client application doesn't have to worry about which one is used. We also add support for arguments: a switch can be specified as such:
The element after the colon will be parsed as an argument and attached to the switch itself. But enough waffling; on to the Switch:
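The original Switch listing isn't shown either; a self-contained sketch of the behaviour described (accepting `/name`, `-name`, and `--name`, with an optional `:argument`) might look like the following. In the full design the argument after the colon would be parsed by the ArgumentItem class described above; here the quote handling is simplified (no doubled-quote escapes), and all names are assumptions:

```csharp
using System;

public class Switch // would derive from CommandLineElement in the full design
{
    public string Name { get; private set; }
    public string Argument { get; private set; } // null when no :argument was given

    public Switch(string commandLine, ref int position)
    {
        // consume the prefix: a single forward slash, or one or two hyphens
        if (commandLine[position] == '/') position++;
        else while (position < commandLine.Length && commandLine[position] == '-') position++;

        // the name runs to the next colon, whitespace, or the end of the string
        int start = position;
        while (position < commandLine.Length &&
               commandLine[position] != ':' &&
               !char.IsWhiteSpace(commandLine[position]))
            position++;
        Name = commandLine.Substring(start, position - start);

        // a colon introduces the switch's argument
        if (position < commandLine.Length && commandLine[position] == ':')
        {
            position++;
            if (position < commandLine.Length && commandLine[position] == '"')
            {
                int argStart = ++position; // skip the opening quote
                while (position < commandLine.Length && commandLine[position] != '"') position++;
                Argument = commandLine.Substring(argStart, position - argStart);
                if (position < commandLine.Length) position++; // skip the closing quote
            }
            else
            {
                int argStart = position;
                while (position < commandLine.Length && !char.IsWhiteSpace(commandLine[position]))
                    position++;
                Argument = commandLine.Substring(argStart, position - argStart);
            }
        }
    }

    public override string ToString() =>
        Argument == null ? "/" + Name : "/" + Name + ":" + Argument;
}
```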
With the basic parsing logic completed, we need to consider how we want this to be used. The best way is to think of how we would like to use it:
Some basic takeaways from this: first, the core parser object needs to provide an indexer. In the above example, we see it accessing switches by passing in the switch name. Other possibilities include using direct numeric indexes to refer to any argument, much like you would access elements in the framework-provided args string array. Another possibility is to have the Argument of a switch auto-populate, rather than be null, when accessed:
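The usage example appears to have been lost. A minimal, hypothetical facade illustrating the indexer idea might look like this; the class name (CoreParser) and its members are assumptions, and the naive whitespace split stands in for the real sequential parsing with ArgumentItem and Switch described above:

```csharp
using System;
using System.Collections.Generic;

public class CoreParser
{
    private readonly Dictionary<string, string> switches =
        new Dictionary<string, string>(StringComparer.OrdinalIgnoreCase);
    public List<string> LooseArguments { get; } = new List<string>();

    public CoreParser(string commandLine)
    {
        // naive whitespace split; quoted loose arguments would need the
        // ArgumentItem logic described earlier
        foreach (var token in commandLine.Split(new[] { ' ' }, StringSplitOptions.RemoveEmptyEntries))
        {
            if (token.StartsWith("/") || token.StartsWith("-"))
            {
                var body = token.TrimStart('/', '-');
                int colon = body.IndexOf(':');
                if (colon >= 0)
                    switches[body.Substring(0, colon)] = body.Substring(colon + 1);
                else
                    switches[body] = ""; // present, but with no argument
            }
            else
            {
                LooseArguments.Add(token);
            }
        }
    }

    // indexer keyed by switch name: returns the switch's argument
    // ("" for a bare switch), or null when the switch is absent
    public string this[string name] =>
        switches.TryGetValue(name, out var arg) ? arg : null;
}
```

Client code can then test for a switch with `if (parser["doall"] != null)` and read its argument directly, without caring which prefix syntax the user typed.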
That's right. The latest version of BASeBlock now adds a working Polygon block. There was a lot of reengineering to be done, but it works realistically, which is to say a ball will bounce at the proper angle.
Having support for arbitrary polygons is something of a pipe dream I had; every attempt failed. What made it possible was, in fact, my adding ellipse blocks, in which I unwittingly added support for polygon collisions, since the Ellipse Block was in fact just a polygon and used Euclidean geometry to detect and adjust ball collisions. After getting that working, I realized it would make sense for EllipseBlock (and possibly other kinds of blocks) to simply derive from an abstract PolygonBlock that did the work of dealing with the details of the polygon itself, while the derived class pretty much just handles its own fields and creates the polygon to be used. The math itself actually uses some of the same code that was being used for ball collisions: it takes two polygons and a speed (for the first polygon) and returns a result structure that says whether they currently intersect, whether they will intersect, and how to adjust the latter so it is no longer touching the former. I use that last item to create a collision normal, and reflect the speed vector of the ball across a vector perpendicular to it.
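The reflection step at the end can be sketched with the standard formula: reflecting a velocity v off a surface with unit normal n gives v' = v - 2(v·n)n. This is a generic illustration, not BASeBlock's actual code, and the type name is an assumption:

```csharp
using System;

public struct Vector2D
{
    public double X, Y;
    public Vector2D(double x, double y) { X = x; Y = y; }

    public static double Dot(Vector2D a, Vector2D b) => a.X * b.X + a.Y * b.Y;

    // Reflect velocity v off a surface whose unit-length normal is n:
    // v' = v - 2(v . n) n
    public static Vector2D Reflect(Vector2D v, Vector2D n)
    {
        double d = 2 * Dot(v, n);
        return new Vector2D(v.X - d * n.X, v.Y - d * n.Y);
    }
}
```

For example, a ball falling diagonally onto a flat floor (normal pointing straight up) has its vertical component flipped while its horizontal component is preserved.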
A lot of other code needed to be changed to streamline support for it. iEditorBlockExtensions, an interface used for adding editor-oriented support, had to gain a method allowing the selection "pulse" drawing, which at the time only drew the rectangle, to be overridden. A lot of other code that assumed the BlockRectangle was the definitive source of the block's shape had to be changed. This actually ended up adding another virtual method to the base Block class, GetPoly(), which in the default implementation returns the polygon of the rectangle; PolygonBlock, not surprisingly, overrides this and returns its own polygon. The base implementation of EditorDraw() fills the polygon dictated by "FillPoly", so this works out just fine. The editor now allows selection of polygon-shaped blocks by actually clicking on them (rather than anywhere in their rectangle), as well as highlighting only the polygon (again, as opposed to their rectangle).
Currently it only supports convex polygons, but that is not planned to change, since a concave polygon can easily be simulated with several convex polygons anyway. (And I haven't actually tested it with non-convex polygons, so I'm really just assuming for the moment that it doesn't work for them.)
It will still need some touching up, and I've been making a few other minor cosmetic and UI changes (allowing drag-and-drop of files onto the editor, better dirty-file prompts for unsaved documents, and so forth). The only area that needs significant work and/or rework is probably the path editing.
In summary, the next version of BASeBlock is going to be a significant upgrade, with a completely new domain of blocks to use and explore. I still need to make some sort of "standard" LevelSet that isn't the crappy LevelBuilders included with the game.
Oftentimes, when comparing software products in the same market, you’ll see comparisons made where one product has a “pro” over another based entirely on the fact that it doesn’t cost money.
I've never understood this; it doesn't make any sense when you think about it. Sure, if the two products are extremely similar in form and function, then the comparison is valid, because all other things are equal within a margin. But the problem is that when it comes to free software, it typically doesn't stack up to commercial 'evil' proprietary applications.
For me, I learned this by way of text editors. This is a very simple type of application, and one would assume that out of the bajillions of free offerings, one of them would also be easy to use, and meet my needs. This was the case, but I was stymied by what I found in a lot of them.
For example, I have often seen free, Open Source applications such as emacs and vi touted as "the de facto text editor" and held up as some kind of standard.
I have to be brutally honest here- if those are some sort of standard, then that is a pretty damned low bar.
How is this even something to consider for everyday text editing? It's about on par with WordStar in terms of finger-contorting shortcuts, and it reminds me of edlin, except that it is powerful; that much I can see. But when I need to become a god-damned apprentice to an ancient vi master to learn how to use the application in a way that fits my needs, and need to "train" myself even longer in order to do so adeptly, I've lost. My time is not valueless. It doesn't matter if the software was free, or if I can edit the source code. For one thing, I don't judge software as if being Open Source automatically makes it sit on some sort of moral high ground above others. I want my software to work and do what I need. That is it. Software should be judged on its own merits, not on its license.
One thing I noticed with free text editors was that they all seemed to have too many features, poorly organized.
notepad++ is a fine application, but its menus are an absolute mess. Being Open Source doesn't mean you can ignore basic UI design guidelines. The same goes for the graphical versions of Vi/Vim (gvim) and for emacs. I have no doubt these are powerful tools, leveraged by plenty of people worldwide, but I cannot personally justify the time investment it would take to learn these applications when there are plenty of others that provide exactly the functionality I want in an easy-to-use package. Also, from what I've seen, becoming proficient in either emacs or vi turns you into a condescending douchebag. gedit is a fine application: it's free, it has syntax highlighting, and its menus actually make some bloody sense, and yet time and time again I see Linux veterans saying it is only for "noobs". I want to edit my text; that's what it does. The problem they have is that they invested so much time in learning an application with an overall badly designed UI, and now need to justify that time investment by putting down those people who avoided it in the interest of getting things done.
The free software I found that met my needs at that time was Editpad Classic. This program was (and is) a closed source application. I didn’t, and still do not care. It did what I needed. Then, when my needs grew, I found the same vendor had a product called “Editpad Lite”, and I found that to be sufficient as well.
When I started this website, I needed an efficient way to upload and edit files to and from my webhost. Upon reading the feature descriptions for that same company's paid offering (Editpad Pro), I found it seemed to fit my needs perfectly.
Ever thrifty, however, I decided to prowl the web for free software with similar features. Notepad++ had an FTP plugin, but it was unwieldy, stubborn, and finicky, and no other FTP capability seemed to match. So I purchased Editpad Pro, and I am still using it (still version 6) to this day for this very capability. Being able to make quick changes to my news page and edit the PHP code of any file on my host at the touch of a single GUI button is something that I value. Again, I value my time more than some self-inflated sense of pride. Sure, I could:
But I cannot see any reason to do that. I can do all of those things, but why are they disparate tasks? All I want to do is edit a file on my webhost. Why is it that an editor cannot edit a file simply because it happens to reside on a remote server? Why do I have to go through an arcane ritual of download->edit->return to sender just to edit a single file? And why do people seem to think this is in any way superior to the time-saving method of simply using an application that does this properly?
Returning to Open Source: it's fine! I have no problem with it at all. I plan to release BASeBlock's source code under the BSD/MIT license. But I don't feel that an application being Open Source gives it value. The fact that an application is Open Source, in fact, means absolutely nothing to me. I only care about whether it does what I want. I don't care if it has the potential to do what I want if I make changes and recompile the program, because that means it's not free at all; it costs me time I could spend not editing somebody else's program. People often tout the "Open Source" label as if it matters. It truly does not. In the majority of circumstances, you don't need, and do not sanely want, to view and edit the source code. How many people can look at the source code for perl and make sense of it? Only the maintainers. That is their job; even if they volunteer, that's what they like to do. But why would your average user want to recompile the perl interpreter? I can't think of a good reason. The same goes for nearly any other Open Source application. I've never downloaded and used an Open Source application and thought, "Hmm, it's missing this feature; I know, I'll waste the next two weeks staying up until 2 AM adding it." No, what I think is, "Hmm, this application is missing this feature; I know, I'll stop wasting my time using it and find something that does."
I think the best summary I can come up with is this: being Free or Open Source does not excuse sub-par design and implementation, and, at least in my opinion, I don't see a reason to use an application based entirely on its license or distribution method. It's almost as arbitrary as choosing one application over another because one of them is written by a Catholic and the other by a Jew. It's arbitrary and irrelevant to the meat of the matter, which is whether the software does what you need it to.
So I was bored and decided to update my Flash plugin, a chore that I recollect stopping in its tracks previously, for reasons I couldn't recall. The main reason this time was that my Flash plugin had been almost constantly crashing on certain sites, mostly due to the ubiquitous use of Flash for advertisements, which seems to be one of the dominant uses of the technology.
So, I visit adobe.com and go to download the player. First, they try to shove a McAfee scan down my throat. You know the drill: they know we just want to get the hell away from them, so they decide to helpfully fill out the "default" options for us, which just so happen to correspond with the options that give them the most revenue.
So I finally manage to get past that brigade of crap, and then it asks to install software. Fair enough; that is what I was doing.
Much to my chagrin, however, it isn't installing Flash; it wants to install Adobe DLM. DLM, I assume, stands for Download Manager, although it could very well stand for Dingo-Llama-Mammoth for all I care.
Let's analyze the sequence of events so far:
Every single fucking program I download wants to install a god damned download manager! How many bloody download managers do I need? Am I going to need a download manager manager to manage all the download managers that each manage only the specific downloads from that specific company? Is there something wrong with the concept of downloading a program, I don't know, using the conventional browser method? You know, like any other sane person? No, Adobe has decided to decide for me. "We won't install Flash like you wanted, but we will install a download manager that will consume resources indefinitely for this one-time installation of Flash. Then it will sit in the background and make sure you're updated, because god forbid your version gets out of date!"
Which brings me to another rant: versioning. I mean, I totally understand why you might want to have the latest version of an application; it fixes bugs, adds features, and so forth. And being notified, and even having the opportunity to update with a few clicks, is very convenient. I have no beef with the concept.
What I disagree with is this whole "OMG, if you aren't updated to the latest version you will get haxored!" attitude; there are people who say this about every bloody program. It's understandable for browsers and for a number of browser- and web-based technologies, as well as things like the .NET framework and of course the core of Windows itself. But, seriously, the main reason you update a program is to fix bugs and add features, in the hope that the bugs and security concerns a new version introduces (and they always do, unless the change is extremely minor) don't outweigh the benefit of eliminating the known vulnerabilities and existing bugs.
Additionally, this very mantra is applied to applications that have little relevance to web technologies. I mean, Microsoft Word has been relatively unchanged since version 6, with of course downlevel changes (which I'm sure took a lot of effort; I'm not downplaying that). But the fact is, the entire purpose of the program is to be a word processor. The fact that it now represents a bloody programming platform should be some indication that they might have sort of lost their focus on what the program is supposed to do. It's supposed to make it easy to edit documents, not make it easy to program spam e-mail merge programs, or to be a platform from which to launch your own applications.
I don't mean to pick on Word or Microsoft by any means; this seems to be a problem on a global scale. It's a complex with versioning: if somebody has a problem and they don't have the latest version, that is automatically assumed to be the cause. Truly, this attitude, or more precisely the logic behind it, continues to elude me. They don't understand the various downlevel changes, and half the time the release notes and changelog for said program mention nothing even remotely relevant to the issues the person might be having.
Going almost hand-in-hand with the "download manager" syndrome is the "background updater". Each company seems to have its own: you've got the Adobe one, the one from, say, Google, Apple, and so forth, and every single one of them is sitting in the background making sure I'm "up to date". The problem here is that they all have the same goal, but they all have very different UIs, act entirely differently, and essentially follow different paradigms. This is something where Linux has the right idea: the package manager can update any package you install, either through the GUI package manager or through an apt-get command in the terminal. The thing is, the environment is different; Linux programmers have no problem submitting their updates and new packages to the essentially neutral repository folks. With Windows, the best solution, which is to integrate this all into Windows Update, is owned by MS, which many of the companies that would have their software in it are competing with. That seems a bit like a conflict of interest: who knows if MS will "accidentally" forget to update users of competing products?
Back to the various “update” managers, they don’t simply update the programs you already have from their company; they also inform you of “updates” to their other products. The Apple update software makes sure you know when a new version of Safari is available, even if you only have iTunes; Google’s updater makes sure that you’re fully aware of when a new version of Picasa is released. And so on.
In conclusion, suffice it to say that current update and download managers are wholly unnecessary (especially the latter) and a huge pain in the ass for everybody.