So I recently got to thinking about why the heck we men get facial hair. I mean, what purpose does it serve? If I let my beard and moustache grow, it would basically just mean I had a giant pillow stuck to my face, with cookie crumbs and tangles in it. Eating becomes a game of trying to thread a needle by sticking a fork into a specially crafted mouth hole in the web of hair.
Of course, nobody lets their beard grow. Except hippies. But they don’t count. Those that do keep beards and moustaches do so more for fashion and appearance than for any functional purpose. Like nose rings, I’ve always thought of a full beard or moustache as putting one at a disadvantage, because it just gives people something to pull on.
Some may wonder how this doesn’t also apply to a head of hair. Well, the hair on our heads actually evolved to protect the environment: see, if everybody were bald and had a shiny head, the amount of energy being reflected off our chromium domes would add to the greenhouse effect, or something equally badly thought out. More practically, it keeps our heads warm. Moustaches don’t really do anything, aside from get in the way. Beards don’t keep our chins warm. Besides, why would we need our chins warm, anyway? Surely we could go for furry hands or furry feet instead.
This brings us back to my original idea: at this point it’s just an appearance thing. I guess it’s just a vestigial result of our evolution. (Or, for theologians, I guess God decided men should have hair on their faces for no particular reason.) It’s interesting because you can look totally different when you shave off a full beard or moustache, to the point where people might not recognize you. The reverse is true too, but only if somebody hasn’t seen you since you started to grow it; if you deal with somebody daily, you won’t notice the slowly growing beard and/or moustache. Facial hair can also be used to determine whether somebody is evil. A “Hitler-style” moustache is not really socially acceptable, which I always thought was a bit weird. Are people with the same colour eyes as him also shunned? Well, not really. But if I were to give myself some goofy beard and/or moustache appearance, and take over the world gruesomely, then aside from being an epic reign of terror, surely the future would shun people wearing my funky beard and/or moustache style.
Anyway, I forget where I was going with this. I wonder if maybe people with big moustaches have some kind of extrasensory perception, like cat whiskers, so they know that if they are going head first into a narrow area, their entire body will fit because their whiskers do. I doubt that, though. Our hair is critical to our sense of touch everywhere but the palms of our hands, really.
One could argue that our fingernails aren’t useful either, but they serve as a brace against which the fleshy ends of our succulent fingers grip objects. Without the nail, most of our digits would just have flabby, shapeless blobs on the ends.
That’s right. The latest version of BASeBlock now adds a working polygon block. There was a lot of reengineering to be done, but it works realistically, which is to say, a ball will bounce off it at the proper angle.
Support for arbitrary polygons had long been something of a pipe dream of mine; every previous attempt failed. What made it possible was, in fact, me adding ellipse blocks, in which I unwittingly added support for polygon collisions, since the ellipse block was really just a polygon and used euclidean geometry to detect ball collisions and make adjustments. After getting that working, I realized it would make sense for EllipseBlock (and possibly other kinds of blocks) to simply derive from an abstract PolygonBlock that did the work of dealing with the details of the polygon itself, while the derived class pretty much just handles its own fields and creates the polygon to be used. The math itself actually uses some of the same code that was being used for ball collisions: it takes two polygons and a speed (for the first polygon), and returns a result structure that says whether they currently intersect, whether they will intersect, and how to adjust the latter so it no longer touches the former. I use that last item to create a collision normal, and reflect the speed vector of the ball across a vector perpendicular to it.
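That reflection step is just vector math: reflecting a velocity v off a surface with unit collision normal n gives v' = v - 2(v·n)n. As a sketch (the Vector2 type and the Collision class here are illustrative stand-ins, not BASeBlock’s actual classes):

```csharp
using System;

// Minimal sketch of reflecting a ball's velocity across a collision
// normal, as happens after the polygon intersection test. These types
// are illustrative, not the actual BASeBlock classes.
struct Vector2
{
    public double X, Y;
    public Vector2(double x, double y) { X = x; Y = y; }
    public static double Dot(Vector2 a, Vector2 b) => a.X * b.X + a.Y * b.Y;
    public static Vector2 operator -(Vector2 a, Vector2 b) => new Vector2(a.X - b.X, a.Y - b.Y);
    public static Vector2 operator *(double s, Vector2 v) => new Vector2(s * v.X, s * v.Y);
}

static class Collision
{
    // Reflect velocity v off a surface whose unit normal is n:
    // v' = v - 2 (v · n) n
    public static Vector2 Reflect(Vector2 v, Vector2 n)
        => v - 2 * Vector2.Dot(v, n) * n;
}
```

A ball falling straight down onto a flat top surface (normal pointing up) comes back up, and a diagonal hit bounces at the matching angle, which is the “proper angle” behaviour described above.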
A lot of other code needed to be changed to streamline support for it. iEditorBlockExtensions, an interface used for adding editor-oriented support, needed a new method to allow for overriding the selection “pulse” drawing, which at the time only drew the rect. A lot of other code that assumed the BlockRectangle was the definitive source of the block’s shape had to be changed as well. This resulted in the addition of another virtual method on the base Block class, “GetPoly()”, which in the default implementation returns the polygon of the rectangle; PolygonBlock, not surprisingly, overrides this and returns its own poly. The base implementation of EditorDraw() fills the polygon dictated by “FillPoly”, so this works out just fine; the editor now allows selection of polygon-shaped blocks by actually clicking on them (rather than on their rectangle) as well as highlighting only the poly (again, as opposed to their rectangle).
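In sketch form, the arrangement described above looks something like this (the names mirror the description, but the types and fields are illustrative; BASeBlock’s actual classes differ):

```csharp
using System.Collections.Generic;

// Sketch of the virtual GetPoly() arrangement described above: the base
// Block returns its rectangle's corners, and PolygonBlock overrides it
// to return its own poly. Types/fields here are illustrative.
class Block
{
    public double X, Y, Width, Height; // the BlockRectangle

    // Default: the block's shape is just its rectangle's four corners.
    public virtual IList<(double X, double Y)> GetPoly()
        => new List<(double, double)>
        {
            (X, Y), (X + Width, Y), (X + Width, Y + Height), (X, Y + Height)
        };
}

class PolygonBlock : Block
{
    private readonly IList<(double X, double Y)> _poly;
    public PolygonBlock(IList<(double X, double Y)> poly) { _poly = poly; }

    // Polygon blocks ignore the rectangle and report their own shape.
    public override IList<(double X, double Y)> GetPoly() => _poly;
}
```

Code that previously consulted the BlockRectangle directly (hit testing, the editor’s selection highlight, and so on) can then call GetPoly() and work for both kinds of block without caring which it has.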
Currently it only supports convex polygons, but that is not planned to change, since a concave polygon can easily be simulated with multiple convex polys anyway. (And I haven’t actually tested it with non-convex polys, so I’m really just assuming for the moment that it doesn’t work for them.)
It will still need some touching up, and I’ve been making a few other minor cosmetic and UI changes (allowing drag-drop of files on the editor, better dirty-file prompts (for unsaved documents) and so forth). The only area that needs significant work and/or rework is probably the path editing.
In summary, the Next Version of BASeBlock is going to be a significant upgrade, with a completely new domain of blocks to use and explore. I still need to make some sort of “standard” LevelSet that isn’t the crappy LevelBuilders included with the game.
A frequent- and aggravating- problem that I’ve had with games on my Windows machine is that many of them simply would not start. Inspecting Task Manager would show a rundll32 process consuming the CPU and doing nothing useful; terminating that would terminate the game process as well. This had become aggravating enough that when it happened an hour ago, I got sick of putting up with it and decided to figure out what the heck was going on.
First, what do we know about it? The rundll32 process seems to invoke something within GameUX.dll. GameUX.dll is part of the Windows Game Explorer library. Looking up online what exactly that does revealed that it is for managing, organizing, and keeping games up to date. That last one stuck out for me. See, my Windows machine has not had an Internet connection for almost a year, and a LOT of software just assumes it can access the internet, without proper error-handling. To test my suspicions, I deregistered GameUX.dll using regsvr32 /u, launched a game- and same problem. Interesting. After a bit more flopping around like a fish, I realized that I don’t have one GameUX.dll, I have two, since I’m running 64-bit Windows. So I unregistered the one in SysWOW64 as well. Started the game… and…
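For reference, unregistering both copies from an elevated command prompt looks something like this (assuming default Windows paths):

```
regsvr32 /u C:\Windows\System32\GameUX.dll
regsvr32 /u C:\Windows\SysWOW64\GameUX.dll
```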
It worked, as did all my other old games that had stopped working mysteriously. It seems the cause was simply that I no longer had a net connection. That seems a bit stupid, since these aren’t online games. Bit of a “whoops” on the part of the Game Explorer. I imagine a quick fix could be added to actually handle the situation where there is no internet connection, rather than sit in a loop waiting for one to magically appear while keeping the game from starting.
I made a new wallpaper. This one is simpler than some of my others, but I think that suits it well.
The tricky part was the logo. All I could find were the relatively small versions visible in some of the existing logos and headers.
I did find a slightly larger version in a tutorial of some sort, being used as an example; I was able to take that small image, delete the white portion, apply a 100% black colour overlay, and use Illustrator to “trace bitmap”; I then saved that as an EPS, opened it back up in Photoshop at a high resolution, and used that image in a two-layered approach with different bevel settings to approximate what the logo might look like at a higher resolution. It looked a bit bland, so I also used a mask to block out part of the text of 1s and 0s.
Since I’m here, may as well plonk out a few other wallpapers I made.
This one was made for spaceandscience.co.uk:
This one was a gigantic pain. I managed to get a nice asteroid mesh by doing some funky noise and smoothing and stuff on a sphere with a bajillion (give or take a few) different faces. The comet trail is sort of a cheat, though. I rendered the 3ds max image twice: once with only the asteroid visible, and once with everything else. I took the asteroid-only render, applied a colour overlay of white, merged it with an empty layer to rasterize the overlay, and then used a motion blur in the desired direction. I plopped that over top of the original asteroid image, and then the “everything but the asteroid” render, and there it was. (I did some additional tweaking, such as making a third layer with the motion blur and messing with dissolve to try to make “ice particles”, or the appearance thereof.) I had to add the text too, but that was pretty much the easy part. Another piece of trivia that might not be immediately obvious: the Earth is showing only the UK. Thought that was pretty cool.
And of course I had to make one for myself at some point!
Not much to say for this one. I did have to re-create the mountain logo at a higher resolution, which I managed to do the same way I did for the CH wallpaper (although I did mine first, having only made the CH one a few hours ago…). Basically: trace in Illustrator, save as EPS, import, and profit. I used the binary 1s and 0s here, too. (Side note: I didn’t type the 1s and 0s; I wrote a quick C# program to do it for me:
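It was something along these lines- this is a sketch, and the const values here are placeholders rather than the ones actually used:

```csharp
using System;
using System.Text;

// Quick one-off: prints rows of random 1s and 0s, to be redirected to
// a file and pasted into a Photoshop text object. A sketch of the
// idea; the const values are placeholders.
class genbinary
{
    const int Columns = 80;
    const int Rows = 50;

    public static string Generate(int rows, int columns, Random rnd)
    {
        var sb = new StringBuilder();
        for (int row = 0; row < rows; row++)
        {
            for (int col = 0; col < columns; col++)
                sb.Append(rnd.Next(2)); // appends a random 0 or 1
            sb.AppendLine();
        }
        return sb.ToString();
    }

    static void Main()
    {
        // usage: genbinary > bin.txt
        Console.Write(Generate(Rows, Columns, new Random()));
    }
}
```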
I compiled that straight at the command line, ran genbinary > bin.txt, opened bin.txt in EditPad 7 (a program I should possibly write a review of), and then pasted the result into the text object in Photoshop. The const values are what I used for the CH 1s and 0s; I used smaller values for the BC wallpaper.)
I have made over 100 other wallpapers, but they are mostly MLP (there are a few Star Trek ones, too). If I were to write out the details of how I made them all, I’d be typing for days! So I chose this subset.
I also made a wallpaper for GlitchPC.net but that’s another bag of oysters…
Call me old fashioned, or possibly slow, but for some reason I never seem to be using the latest version of a piece of software. Until recently I was doing all my .NET work with Visual Studio 2008; this was because VS2010, bless its heart, felt sluggish to me.
With the pending release of Visual Studio 2012, which as I write this is available for free download as a Release Candidate, I decided I’d bite the bullet and start switching. This was also because I wanted to dip into XNA, and as near as I could tell the latest version only worked in conjunction with VS2010. I had to reinstall ReSharper to get proper VS2010 support, since I had installed ReSharper before I installed VS2010, and after applying my own preferences to both Visual Studio and ReSharper, I was able to get back into coding. (Am I the only person who hates the preference IDEs have for automatically completing parentheses and braces and such? I always find myself typing the ending parenthesis, ending up with doubles, so I delete the extras, then I forget where I was in the nesting; and if you get used to that behaviour, suddenly you find yourself not typing ending parentheses in plain-text editors. You can’t win! I’m not a big fan of that sort of autocomplete; actually, I don’t really like any form of autocomplete, but that sounds like material for another post altogether.)
The end result is BCDodgerX, which is available on my main downloads page. It is essentially a rewrite of BCDodger, with an unimaginative X added onto the end that means pretty much nothing.
Overall, VS2010 is actually quite good. Call it a belated review; I almost purposely fall several versions behind for some reason. I cannot say I’m overly fond of the use of 3-D Acceleration within a desktop application, but at the same time all the Controls still have the Windows Look and Feel (which is my main beef with Java’s Swing libraries, which have a look and feel all their own), and the desktop itself is accelerated with Aero anyway so I suppose it’s only a natural progression. (Besides, I don’t play games very often and this 9800GT should get some use…).
The tricky question now is when I should start migrating my VS2008 projects to 2010, and whether I should switch to the latest framework. I can switch to VS2010 without using the latest framework, of course, but I wonder what benefits I would see. One day I’m sure I’ll just say “screw it” and open, say, BASeBlock in VS2010 and jump in; I’m falling behind, after all (what with the aforementioned release of 2012 on the horizon). And VS2010 is definitely an improvement both tool- and functionality-wise over 2008, so there isn’t really a good reason not to switch now. No doubt I’ll keep making excuses for myself. Oh well.
At first, I thought I hated XNA; but now I know that what I actually hate is 3D programming. I imagine this is mostly because I got extremely rusty at it; additionally, I had never truly done 3-D programming, at least in the context of a game. My experience at that point was pretty much limited to the addition of 3-D graphic capabilities to a graphing application that I wrote (and never posted on my site because it hasn’t worked in ages, is old, and uses libraries/classes/modules I have updated that are no longer source compatible etc.). Of course that didn’t have constantly changing meshes, used DirectX7, and it was shortly after I had finished that feature that I abandoned the project, for whatever reason. I had never dealt with 3-D in a gaming capacity.
The purpose of XNA is to simplify the task of creating games- both 3D and 2D- for Windows as well as Xbox 360. And it definitely does this; however, you can really only simplify it so much, particularly when dealing with 3D programming. My first quick XNA program was basically just to create a bunch of cubes stacked on one another. This is a very common theme given the popularity of games like Minecraft, but my goal was to eventually create a sorta 3-D version of Breakout (or, rather, BASeBlock 3D).
I was able to get the blocks visible, after a lot of cajoling and doing the work on paper (visualizing 3-D space and coordinates is not my forte). But it ran at 10fps! This was because I was adding every single block’s vertices to the VertexBuffer; for a set of blocks in a “standard” arrangement of around 1920 blocks (which is probably a number that would make the 2-D version go around 10fps, to be absolutely fair here), that is over 11,520 faces, each of which actually consists of a triangle list of 6 points (I tried a triangle fan, but it didn’t seem to even exist (?), oh well), meaning that I was loading the VertexBuffer with over 69,120 texture-mapped vertices. That’s a lot to process. The big issue here is hidden surface removal: obviously, if we have a cube of blocks like this, we don’t need to add the vertices of blocks that aren’t visible. I’ll admit this is the part where I sort of gave up on that project for the time being; it would involve quite a bit of matrix math to determine which faces were visible on each block, which ones needed to be added, and so on, based on the camera position, and I like to understand what I’m doing. I, quite honestly, don’t have a good grasp of how exactly matrices are used in 3-D math, or dot products (at least in 3-D), and I prefer not to fly blind. So I’ve been reading a few 3-D programming books that cover all the basics; one book I believe goes through the creation of a full 3-D rasterization engine and has a lot of in-depth material on the mathematics required; this, paired with concepts from Michael Abrash’s “Graphics Programming Black Book”, should give me the tools to properly determine which blocks and faces should be added or omitted.
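For what it’s worth, there is a much cheaper first pass than full camera-based hidden surface removal: a face shared between two solid blocks can never be visible from anywhere, so it can be skipped before any matrix math enters into it. A sketch of that idea (the grid representation here is illustrative, not BASeBlock’s):

```csharp
// Cheap first pass at the problem described above: skip every face
// that is shared between two solid blocks, since such a face can
// never be seen. In a solid slab of blocks this removes the vast
// majority of the vertices before any camera math is needed.
static class FaceCulling
{
    // grid[x, y, z] == true means a block occupies that cell.
    public static int CountExposedFaces(bool[,,] grid)
    {
        int exposed = 0;
        int nx = grid.GetLength(0), ny = grid.GetLength(1), nz = grid.GetLength(2);
        var dirs = new (int dx, int dy, int dz)[]
            { (1, 0, 0), (-1, 0, 0), (0, 1, 0), (0, -1, 0), (0, 0, 1), (0, 0, -1) };
        for (int x = 0; x < nx; x++)
        for (int y = 0; y < ny; y++)
        for (int z = 0; z < nz; z++)
        {
            if (!grid[x, y, z]) continue;
            foreach (var (dx, dy, dz) in dirs)
            {
                int ax = x + dx, ay = y + dy, az = z + dz;
                bool neighborSolid = ax >= 0 && ax < nx && ay >= 0 && ay < ny
                                  && az >= 0 && az < nz && grid[ax, ay, az];
                if (!neighborSolid) exposed++; // only this face needs vertices
            }
        }
        return exposed;
    }
}
```

Only the exposed faces need to go into the VertexBuffer; per-camera back-face culling can then be layered on top of this if it still isn’t fast enough.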
Anyway, scrapping that project for the time being, I decided to make something 2-D; but since I was more or less trying to learn some of the XNA basics, I didn’t want too much of the concepts of the game itself getting in the way, so I chose something simple- I just re-implemented BCDodger. I added features, and it runs much better this way, but the core concept is the same.
XNA is quite powerful- I have no doubt about that. Most of my issues with it are minor. One example is that XACT doesn’t seem to support anything other than WAV files, which is a bit of a pain; this is why BCDodgerX’s installer is over twice the size of BASeBlock’s, despite having far less content. Another minor peeve is that there is no real way to draw lines or geometry; everything has to be a sprite. You can fake lines by stretching a 1×1 pixel as needed, but that just feels hacky to me. On the other hand, it’s probably pretty easy to wrap some of that up into a class or set of classes to handle “vector” drawing, so it’s probably just me being used to GDI+’s lush set of 2-D graphics capabilities. Another big problem I had was with keyboard input- that is, getting text entry “properly”, without constant repeats and so forth. Normally, you would check whether a key was down in Update(), and act accordingly. This didn’t work for text input for whatever reason, and when it did it was constrained to certain characters. I ended up overriding the Window Procedure and handling the key messages myself to get at key input data as needed, then hooked those to actual events and managed the input that way.
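The 1×1-pixel trick, wrapped up as suggested, could look something like this (a sketch against XNA 4.0’s SpriteBatch.Draw overload; the pixel argument is assumed to be a 1×1 white Texture2D created elsewhere, and this is not code from BCDodgerX itself):

```csharp
using Microsoft.Xna.Framework;
using Microsoft.Xna.Framework.Graphics;

static class LineDrawer
{
    // Draw a line by rotating and stretching a 1x1 white pixel texture:
    // scale.X stretches it to the line's length, scale.Y to its thickness,
    // and the rotation points it from 'from' toward 'to'.
    public static void DrawLine(SpriteBatch spriteBatch, Texture2D pixel,
                                Vector2 from, Vector2 to, Color color,
                                float thickness = 1f)
    {
        Vector2 delta = to - from;
        float length = delta.Length();
        float rotation = (float)System.Math.Atan2(delta.Y, delta.X);
        spriteBatch.Draw(pixel, from, null, color, rotation,
                         Vector2.Zero, new Vector2(length, thickness),
                         SpriteEffects.None, 0f);
    }
}
```

Still hacky underneath, but at least the hack is in one place.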
Overall, I have to conclude that XNA is actually quite good. There are some intrinsic drawbacks- for example, it isn’t cross-platform (to Linux or OS X)- and there are the aforementioned basic problems I had, which were probably just me adjusting my mindset. It’s obviously easier than using Managed DirectX yourself, or DirectX directly (if you’ll pardon the alliteration), and it is designed for rapid creation of games. With the exception of the high-score entry (which took a bit for me to get implemented properly), BCDodgerX was a single evening of work.
First, a warning:
Now that that is out of the way…
One of the nice things about later versions of Windows is that you don’t automatically have full control over everything. Some people say this is bad, because it is their computer so they should be able to do what they want, but the point they are missing is that the changes to the default security settings are not there to prevent them from doing things, but to prevent nasty programs from being able to do anything they want. By definition, the settings for a user really control what the programs running under that account can do; they only restrict the user themselves by virtue of the fact that a user cannot really do anything except through a program. (If no program can delete a file, that user cannot delete it either.)
Sometimes, however, this can get in the way. Stubborn files, for example, might refuse to be deleted. Usually, running a program as administrator clears this up, but sometimes even this doesn’t work.
In particular, a failed Windows update, or an update that doesn’t clean up properly, can leave a mess of files around. Usually these are weirdly named folders in the root of the system drive. A quick Google search for names like that reveals that this is not an uncommon problem. The problem is that nothing can delete these files- you cannot run as an administrator to delete them, and tools like Unlocker and Deleter don’t work either. The cause is that the files weren’t even created by the administrator, but rather by the LocalSystem account under which Windows Update runs. (This is required so that the update can replace DLL files and other files that are in use, which usually requires a reboot for a myriad of reasons that I won’t get into.) The files are supposed to be deleted afterwards- they are simply temporary files- but an unexpected power loss or an error can prevent proper cleanup. And since they are owned by LocalSystem, nobody else can delete them.
So the question is- how the heck do we clean-up the files?
Well, if the only way to delete them is to become LocalSystem- let’s try that. After some experimenting, one of the most reliable ways I found was to create a service. You can do this by starting an elevated command prompt and entering the following command:
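It goes something like this (note that sc requires a space after each equals sign):

```
sc create runcmd binPath= "cmd /K start" type= own type= interact
```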
This creates a new service called runcmd. The /K start is necessary because the Service Control Manager expects the programs you run as services to be… well, services. cmd is not a service, so it won’t register itself with the SCM, and the SCM will kill the process after a timeout. By using /K start, we force that first spawned cmd to immediately start another one; since killing the parent process does not kill child processes, that second cmd remains alive.
Running it is simple. Just enter this command:
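That is, something like:

```
sc start runcmd
```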
So, CMD was running. I switched to the Interactive desktop, and was greeted (after this weird switch thing) with this:
Success! cmd was running under the LocalSystem account. This is a good way to clean up files left about by services. However, while I was able to get explorer running as the start menu, I wasn’t able to get explorer running as a file manager. So I cheated: I opened notepad and used its file dialog. This method can be used to delete odious files that refuse to be deleted in other ways.
Obviously, this should only be used when needed and the applications you run should be kept to a minimum, and you surely shouldn’t run browsers this way!
It is a frequent point of debate in many web communities that contain programmers- or back-seat programmers, as it were- whether such and such language is better than another, or whether “if it was written in this language it would be faster”. A lot of the time this comes from people calling themselves “real programmers”, or at least putting on the air that “real” programmers don’t use managed languages like Java or C#; instead, they seem to think that “real” programmers use C. And arguably, any programmer should at least be able to read C. However, a good programmer is defined not by what language they use, but by their product. A program that works is a program that works, regardless of the language it was written in.
Anyway, the idea that C- or C++- is superior to Java- or, more generally, that any language is better than another for speed- comes with the implicit assumption that there is some natural stopping point. Because you know what can probably produce faster code than C++? C. And faster than that? Hand-tuned Assembly. Saying C++ is superior is basically saying that it’s worth it to dump the language-based advantages that Java has over C++ for the speed improvement, but somehow it isn’t worth it to make nearly the same gains by switching to C, or from C to Assembly.
Thing is, unless you choose Assembly language, there will always be a language that can make your program faster. But the thing is that we use programming languages to try to abstract away the details. Instead of writing a series of direct instructions to the Floating Point Unit, placing values on the FPU and CPU stacks to perform operations in Assembly, we simply use C or a higher-level language and give it an infix expression, as we are used to. Can you sometimes make such code run faster in Assembly? Probably, if you take advantage of U and V pipelining, make sure to reduce wait states and memory accesses, and so forth.
The bigger question is, “Is it worth it?” And largely, no, it isn’t. In fact, it very seldom is.
Another point is that the primary thing you are doing with a programming language now is interfacing with libraries. C++, C, and Assembly don’t make the libraries run any faster. On Windows, if you allocate memory- whether by creating objects in Java or C#, using new in C++, or using malloc, alloc, or whatever in C- it all boils down to OS-specific functions; in Windows’ case, all of those eventually become calls to LocalAlloc (or GlobalAlloc). But whether you make that call from Java or C doesn’t make that function execute faster.
Sure, you can argue that Java or C# probably has more overhead; from new Object() to the actual allocation there is probably some garbage-collection housekeeping and allocation of the various fields. But the fact is that in C you will usually be doing that housekeeping yourself anyway; depending on the nature of the memory allocation and what it is for, you will probably be making a lot of calls to malloc() and free(), and every single one is a tango with the evil poltergeist known as memory allocation problems. Accidentally forget to free a block of memory and then reassign the pointer that stored it: leak. Accidentally call free twice on the same block of memory: double-free. All that extra code adds up, and while I don’t think it quite equals the time penalties associated with Java- which might accrue to about a tenth of a second over a year of constant execution- it certainly takes a toll on anybody trying to work on the code. Having to remember “I have to allocate that structure on 16-byte boundaries, and I need to make sure all the char arrays are packed before I call this function,” and so on.
And even then, you could easily eke out a similar performance gain over a C or C++ implementation by completely retooling the entire program in Assembly.
For any number of projects written in Java or C#- particularly games- you can usually find a number of posts on forums calling for, or at least implying, that the game should be re-written in C or C++ “for speed”. But why stop at C or C++? Why not call for the entire thing to be re-written in Assembly? Because it’s not worth the effort. But the thing is, it’s not worth the effort to rewrite it in C or C++ either; by the time any such rewrite was completed, computers would have gained enough speed to make the improvement moot. The reason that Assembly language isn’t used is because it is no longer necessary.
Programs used to pretty much have to be written in Assembly to be reasonably fast. QDOS/MS-DOS was originally coded in Assembly language. Same with every early Operating System. But those Operating Systems were dead simple by comparison. Now, C is the main language used for Operating Systems. Did C code get faster? Not really- at least not in comparison to hand-tuned Assembly. But the fact is that if writing twice as much code could make your program 10% faster, it was only worth it if that 10% speed difference actually mattered. With the 386, it did, and could often mean your program showing a chart a full second before your competition. Now, your program running 10% faster often means it shows a chart imperceptibly sooner, which is hardly justification for tripling the amount of code, making it difficult to maintain with low-level code, and discarding numerous useful abstractions.
That last word sort of touches on what a programming language truly is- just a set of abstractions. Let’s take a simple language construct that exists in nearly every language: the humble if. In Assembly, the equivalent is to use a compare instruction, and then a jump instruction (for example, JNE- Jump if Not Equal) to jump to a specific address. Most Assemblers also add features that don’t directly translate to machine code, such as macros, making some of this a little easier. Your typical C if statement might translate into quite a few such compares and jumps, and writing them by hand certainly takes more work. Can you make that if run faster? Well, probably, if you are a skilled Assembly programmer. But most people aren’t, and even those who are would only do this in a time-critical inner loop.
Nowadays, inner loop speed is not as critical as it used to be. While an Assembly rewrite of a critical section of code might have made that code 200% faster on a 386, that doesn’t mean it would have the same effect on a modern machine, because modern machines have multiple processors, some of which need to access the cache concurrently; there are also numerous pipelining issues to consider. For the most part, optimizing code with Assembly on a modern processor is not a feasible solution.
It used to be that Assembly had to be used for everything. If you used C, you were a loser and your code was too slow for general use. Then it was that C was used for everything. If you used C++, you were a loser and your code was too slow for general use. Now, it feels like C++ is used “for everything”. If you use anything else, your code is too slow for general use.
But the fact is that people sticking to these older- if arguably faster- languages are sticking also to the same languages that have made possible almost every single security vulnerability in every modern Operating System. C code just begs to be buffer-overflowed, and the simplest oversight can topple a company’s entire network, pass along trade secrets, or otherwise be a gigantic pain in the ass. In today’s networked world, raw speed is not something that is required of programming languages, both because of its myriad trade-offs, such as costing more programmer time and anguish, and because you can always buy a faster computer later. Obviously, as long as the algorithms used are sound, you aren’t going to get gigantic gains in your implementations just by switching to another language. Sometimes it can even make things slower. Managed languages are a good idea for application code- and games- simply because they are for applications. Games don’t need to be close to the hardware because, like a wife after a long marriage, the hardware hasn’t even been touched in years. All “hardware” interaction today is done either through OpenGL or DirectX, which themselves delegate through software-based drivers anyway.
Computer time is cheap. Programmer time is not.
Freelancing is essentially being a short-term hired contractor for a well-defined piece of work. Effective freelancing, by extension, requires a good rapport with your customers, a strong work ethic, and a strong sense of pride in one’s work. The first is needed because keeping the lines of communication friendly and open, rather than confrontational, makes the experience smoother for everybody. A strong work ethic is needed to keep yourself on-task and disciplined, as well as honest with your customers. A strong sense of pride is required simply so you can make the best product you can for the customer, without cutting corners. With freelancer.com, you can do all this without leaving your chair. I do, frequently, in fact. (My username on freelancer.com is BCProgramming.)
Freelancing can seem like an “easy way” to make money to some people. However, it is anything but! Freelancing can be just as demanding, if not more so, than a standard 9-5 job- but without the assured job security. Juggling a number of projects simultaneously in a manner that makes it possible to make a living- and having the discipline to do the work- can be quite a task. This applies just as much to website design, graphic design, and programming freelance jobs as it would to freelance work in any other industry.
The internet is rife with articles and postings that try to reveal the secrets of effective freelancing. I recommend anyone interested in freelancing as a graphic artist, programmer, or similar vocation read those as well. This post will instead focus on some things to avoid in order to facilitate effective freelancing, particularly on freelancer.com.
In order to provide an easier reading experience, I have boiled down my reasons into a numeric sequence. Numbers are inherently soothing, like calamine lotion, as long as you stick to base-10. Each of these rules of course has exceptions which I note. Most of these are “don’ts”, but some of them are additional points that I thought were important.
A lot of developers stuck in other jobs or doing crappy retail jobs during the day, while working on cutting-edge technologies and helping people with their AP CS college projects at night, may think they have what it takes to instantly jump into the world of freelance development and make a living that way. Usually, this is not the case. A freelance reputation needs to be built first, and you cannot rush that sort of thing. Another problem is that the type of person who thinks it will be easy is often also easily discouraged, or lacks the self-discipline or self-confidence to juggle multiple projects at the same time for fear of "drowning" in a sea of work and making promises they cannot keep. A corollary to this rule is that freelancing can replace a day job- just not overnight. In the worst-case scenario, a competent programmer, web designer, or other practicing freelancer can easily make some extra money from their skills, in addition to providing quality products to employers.
This rule has perhaps the biggest exception, which may or may not apply, depending on the person. For a first project, some might be tempted to low-ball a bid, and perhaps undercut their own estimate of how long the work will take. This can work both for and against you. If you apply yourself and manage to meet your ridiculously low-barred requirements, you'll find that you worked harder for less pay. At the same time, if you gave your client a good experience, they are unlikely to forget it soon. If you aren't able to meet that low-balled estimate, what results depends on your client's understanding of the industry and of how estimates so frequently fall far short of the actual work done- particularly in a case like this, where in order to eke out another bid one might shave off another day or another hundred dollars, or both. As a general rule, however- and this mixes in a bit with the previous point- if you want to make a living from this, you need to make a living off it. I myself enjoy programming, but to be perfectly honest I could just as easily work on many of my own projects as on a boring database program or something to that effect. As a result, when I come up with a bid I usually consider the various requirements, decide on what I hope is a realistic estimate, and then consider how much value I could add to my own products if I applied myself to them for that amount of time. I then usually munge that value, often halving it, to account for the fact that I will most likely gain skills from a project I take on that I wouldn't otherwise- skills I can then apply to my own projects. Unfortunately, this is not the best way to do it, because it always undercuts the amount I receive, making it unsustainable without a ridiculous amount of effort. Working on a single project for 3 months, with a total investment of hundreds of hours, and making only a few hundred dollars is inherently demoralizing.
This ties in with the previous point, and it applies equally to freelancers and employers. As a freelancer, you need to know your limits and make sure everything feels fair. Your clients aren't looking out for your interests; they are looking out for their own. Sometimes you will find they try to pile on as much work as they can get away with, for as cheap as possible. The main reason I chop my own estimates in half is that I don't want to seem greedy, but one needs to make a living from this, and sometimes that sort of attitude can be detrimental. Conversely, employers have to feel they are getting their money's worth too. These two requirements have to mesh. I've had excellent relationships with my clients on more than one occasion where they were so pleased with my work that they actually raised the price! The client is looking out for themselves first, which is reasonable; the client wants the best value, as any customer does. The freelancer needs to watch out for themselves, too, to get the best return on their investment of time and effort. As long as both sides consider these requirements in a fair and equitable manner, things go smoothly.
This goes for both freelancers and employers. It is part of the human condition to try to get things "free". A freelancer may try to up the price a bit for some otherwise simple piece of functionality. An employer may want that one teensy-tiny feature added for a negligible fee. I've found that the best approach is to be perfectly honest and open about it. If a feature is simple, I will often add it. On more than one occasion, requested features were ones I had already added to the program, having thought of them previously. Some other requests required a rearchitecture of the entire application, which I document fully- along with the trade-offs, which are usually inevitable- and I let the client choose which branch to follow. I don't increase my own prices during the project. Instead, my goal is to make the price as accurate as possible from the get-go. As long as the employer gives me no hidden surprises, I have no hidden surprises either. The takeaway for this rule is not that you should always be a pushover, but rather that you should be fair. This ties in with point number two (don't undersell yourself).
When you first join freelancer.com and fill out some of your abilities, you are inundated with projects to peruse. In order to select a project and make both yourself and the employer happy (which one could argue is critical to effective freelancing), you should have a good idea of how capable you are of doing it. It's a bad idea to take on a project with very little expertise in the subject area and hope to catch up with a day or two of Google searches. For example, if you have only ever dealt with PHP and MySQL, it is unlikely to be a good idea to take part in a project that requires the use of MVC, SQL Server, and C#! That's common sense. Effective freelancing means knowing when something is beyond your capabilities. Taking on tasks that are beyond your skillset results in a stressful experience for both yourself and the employer, and generally ends on a negative note for both parties.
At the same time, however, aim high on your "limits". Don't just stick with what you are exceptional at. Take on projects that either require or could benefit from technologies that you are merely good at. A good example of this was one project that I took on. It consisted of a few components. At the time I was very familiar with Windows Forms, but hadn't used WPF. There was no requirement that anything use WPF, but the nature of one of the client programs lent itself well to some of the things I did know about WPF- resolution independence being one of them, along with very powerful drawing capabilities for customizing standard controls. I dove into that part of the project and was amazed at how functional and overall elegant the result was. I had, in effect, underestimated my own capabilities with WPF. It took a lot of research, but it was very fulfilling: not only was the customer happy with the product, but I learned a great deal about WPF, which I will no doubt apply in future projects, both personal and professional alike. By expanding your skillset, you increase the "surface area" of projects you can work on, meaning you can gain even more skills, and so on. The converse- and the point of this paragraph- is that you should not overestimate your abilities. If the project had required WPF experience, I might have been less likely to take it, simply because without that qualifier I knew I could fall back on my Windows Forms capabilities if I needed to.
For many people, the idea of becoming an "affiliate" is loaded down with the meaning of "sellout". Personally, even using Google AdSense makes me feel like a cog in Google's machine. However, I've come to regard freelancer.com somewhat differently. After all, I've made a fair amount on it, and rather than consider my placing of any links or "advertisements" as selling out, I consider it a "testimonial" of sorts. I base everything here on personal experience.
It may seem natural to consider things such as the bidding process to be a "contest". I think a more apt description is an analogy to a short job interview. In some ways, that makes it a contest. However, one might take this to its logical conclusion and figure that it would be better to have fewer freelancers around. I think the opposite. A healthy freelancing community needs a lot of employers and a lot of freelancers. A community with too few freelancers doesn't mean those freelancers get more jobs; it means that employers will go elsewhere. So if you have found a great community like freelancer.com, encourage others to join- as either freelancers or employers- and support it that way.
Since I haven't actually used any other freelancing service, I am graciously saved from having to point out inherent flaws specific to them, since I have no idea what they are. (On the flip side, this also means I cannot point out the plus side compared to other services. That said, I have tried a few other similarly themed websites and found them to be unwieldy, bogged down in ridiculous policies, or just plain scams.) Instead, I can discuss some of the distinct negatives not of freelancer.com itself as a service (since I don't know any) but of freelancing in general, as well as of employing freelancers. This should not be taken as dissuasion from becoming a freelancer, hiring freelancers, or outsourcing, but rather as something to keep in mind.
To summarize some of my points above: you cannot just jump into the market, even with a lot of skills, and expect to instantly make a living. You need to not only be good at providing your service, but also be good at marketing yourself. That is, you need at least a bit of dual capability. If you are used to a more corporate environment, you might not be used to that. I certainly wasn't. You also need to be somewhat careful in your selection of projects. Freelancer.com does an excellent job of making sure that only legitimate employers can post projects, but even the best communities are going to have a few bad apples. A quick Google search reveals that a few of these bad apples gave some freelancers a negative impression on their first project, and sullied the otherwise good name of the site. Many of them paint the site as a "scam", but this is not the case. Ironically, the measures that make people erroneously believe it to be a scam are in place to prevent just that type of thing: employers running away with work, or freelancers not delivering what they promised. I think the disappointment from some expectant patrons comes from the belief that being a freelancer basically means "being your own boss" and magically making a living doing what you love. But what some of those naysayers discovered was that being your own boss means self-discipline, and doing what you love isn't necessarily easy, nor always fun. Pair this with the fact that a lot of potential freelancers don't realize you also have to be able to "market" yourself, and it's no surprise some came to the erroneous conclusion that the entire site is a scam. Freelancing is not easy, but freelancer.com- and its community- makes it easier. Before it was recommended to me, for example, all I had for "freelancing" was a contact page and resume on my own site.
The problem is that isn’t really “marketing”, since very few potential employers in that capacity would somehow find themselves on my site anyway. Freelancer provides a valuable service- consider it matchmaking even- where employers find their “match” in a freelancer that can do what they need done, and do it well.
For employers, there are two things to keep in mind. The first is that, in the long run, you get what you pay for. Second, language barriers always seem taller than they really are, but they are something to take into account, because communication is the most important thing in making sure you get work you are happy with. Basically, regardless of the region a prospective freelancer is from, the portion of freelancers who are, shall we say, less than stellar is approximately the same. Be sure to review the work history of prospective freelancers, as well as the reviews from any previous employers. This also means that you shouldn't avoid a prospective hire based on region. There is a common viewpoint that workers from India and other Eastern countries are sub-par, but the truth is that there are simply a lot more of them on the market. The actual ratio of competence is still pretty much the same, and the rule that you "only hear about bad news" works overtime here. People are quick to speak up when they have an awful experience with a foreign freelancer, but they aren't as likely to speak up when that work is done well. This works equally, but in the opposite direction, for local freelancers: employers may be more likely to find positives in the experience when they know the freelancer has the same citizenship as them. This is not purposeful deceit, of course, just part of the human condition.
Now, the above faint praise for foreign freelancers might not be surprising coming from a foreign freelancer. In some respects, I guess I could be called foreign to the U.S. market (since I'm Canadian), but since I live in a westernized country I would be more likely to gain from disparagement of foreign freelancers. Naturally, you have to be careful and consider your choices thoroughly. They are by and large just as capable as any other freelancer, but there will sometimes be communication issues- though those can arise regardless of region.
The basic concept of freelancer.com is to make it as easy as possible for a person who needs something done and the person with the capabilities to do it to find each other. You could call it a non-exclusive club, which is an interesting way to put it. As with any club, there are going to be members who don't fit in with the overall philosophy. Equally, as with any site where people look for and offer work, there are going to be people trying to make a quick buck at everyone else's expense. One of the nice things about freelancer.com in particular is the capability for scoring your relationships. Bad work should get a bad review, and a poor customer who makes constant changes to the spec or demands huge features for the same price should likewise get a bad review. The idea is not that you are in business for yourself, but rather that you are in business as part of a larger community, and the ideal outcome is that those "bad apples" either start taking it seriously or stop altogether. By posting frank reviews you help other freelancers and employers find responsible employers and competent freelancers respectively, while pointing out those who make a mockery of the system.
I haven't dealt with XNA heavily as a game platform. My language of choice is C#, but I just never really liked XNA. At any rate, I decided to give it a serious go. My initial dislike stems mostly from the fact that for 2-D you can't really draw lines or shapes yourself; instead, you basically just draw and rotate sprites. It's not awful by any measure. My project in this case is called "BCDodgerX" and is just a re-implementation of BCDodger, the C#/GDI+ implementation of which can be found on my downloads page.
After using it for a bit (enough to get a playable basic version of the game down, with score display, etc.), I have to say I still don't really like XNA, at least for 2D games. It works, no doubt, but there is something… unnerving… about it. I can't put my finger on it.
This version will try to be more “complete” than the C#/GDI+ implementation, which was basically just me messing around with a dual-threaded model (the gameproc() routine looping continuously on one thread, and using Invoke to get the paint routine to draw). BASeBlock further refines the same model, too.
And this is a very short post. It's a NEW SHORTNESS RECORD!
The currently released version of BASeBlock is 2.3.0. Since then, I have made a lot of changes to the game- added a few blocks, abilities, and other fun stuff, and refactored various parts of the code to make things work better. One of the biggest new features is "framerate independence". 2.3.0 and earlier versions handled velocity naively: every game tick, each object's velocity was simply added straight to its location.
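In sketch form, that naive per-tick movement looks like this (member names such as `VelX` are stand-ins for illustration, not BASeBlock's actual members):

```csharp
public class NaiveMover
{
    public double X, Y;        // position
    public double VelX, VelY;  // velocity, in pixels per tick

    // Called once per game tick: object speed is tied directly to the
    // tick rate, so a faster game loop makes everything visibly faster.
    public void PerformFrame()
    {
        X += VelX;
        Y += VelY;
    }
}
```

Run the loop at 120 ticks per second instead of 60 and every object covers twice the distance per real second, which is exactly the problem.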
However, the faster the game loop ran, the more times this would run- and typically, the higher the FPS, the faster the game loop ran too. This meant that the speed of objects could be the same internally, yet visibly the objects seemed to move at wildly different speeds. The "fix" is relatively simple: instead of simply adding the velocity to the location, we need to take some other factors into account. First, we analyze the problem. What do we want to achieve? The quick answer is "we want the movement of objects to remain equal regardless of how fast the game ticks go". The best way I've found is to choose a given framerate as the "ideal" framerate: if the game runs at this FPS, the velocity is added verbatim; if the framerate is lower, we add "more" to compensate. For example, a framerate of 30 in this case would double all speed additions, and a framerate of 120 would halve them.
BASeBlock already tracks the FPS, so the solution was three-fold: first, create a routine that retrieves the appropriate multiplier based on the current and desired framerates; second, create a routine to simplify incrementing a location by a velocity, taking that multiplier into account; and third, change all the code that simply adds the two to use the new routine.
Implementing this in BASeBlock was something I had been meaning to do for quite some time; it seemed a lot more involved than it really was. Eventually I just decided to try; if things went sour I could always roll back to a previous SVN commit anyway.
First, I added the routine for getting the game multiplier. This required the current FPS of the game. Since that seemed like something best dealt with in the presentation layer (and since the main game form was already tracking FPS for the FPS counter), I simply added a property to the IClientObject interface, which is designed to let the form and the game logic communicate without either explicitly requiring knowledge of what it is communicating with. With that property in place, I implemented the multiplier routine as a basic division: the desired FPS divided by the current FPS. (There is an exception for the case where the retrieved FPS is 0, where it returns 1 for the multiplier.) One very interesting side effect of this is that I could, if I wanted, "fake" slow motion by munging around with the CurrentFPS returned by the clientObject, though that is probably not a good use of this design.
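As a sketch- assuming a desired FPS of 60, which the post never actually states, and with hypothetical names- the multiplier routine comes out to roughly:

```csharp
public static class FrameRateHelper
{
    // The "ideal" framerate at which velocities apply verbatim.
    // 60 is an assumption here; BASeBlock's real value isn't stated.
    public const double DesiredFPS = 60.0;

    // Desired FPS divided by current FPS: at 30 FPS the multiplier is 2,
    // at 120 FPS it is 0.5. If the FPS reading is 0 (e.g. before the
    // first frame is measured), return 1 rather than dividing by zero.
    public static double GetMultiplier(double currentFPS)
    {
        if (currentFPS == 0) return 1.0;
        return DesiredFPS / currentFPS;
    }
}
```

Feeding this a deliberately inflated "current FPS" is the slow-motion trick mentioned above: a reported FPS above the ideal shrinks every movement.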
I then implemented a simple routine for incrementing the location; not surprisingly, I called it "IncrementLocation". It adds the velocity, but multiplies it by the multiplier derived from the current FPS and desired FPS.
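A minimal sketch of that helper (the signature is my guess, not the actual BASeBlock code; `PointF` fits since the game uses GDI+):

```csharp
using System.Drawing;

public static class MovementHelper
{
    // Replaces the old "location += velocity" pattern: the velocity is
    // scaled by the framerate-derived multiplier before being applied,
    // so per-second movement stays constant across tick rates.
    public static PointF IncrementLocation(PointF location, PointF velocity, float multiplier)
    {
        return new PointF(location.X + velocity.X * multiplier,
                          location.Y + velocity.Y * multiplier);
    }
}
```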
This worked rather well, once I found and replaced all the old direct-addition code with calls to this routine. However, there were still some odd behaviours, mostly related to velocity decay. Some objects- particles, the gamecharacter's jumping, some items falling, and whatnot- would reduce or increase their speed by multiplying components of that speed by a set factor. For example, a particle might "slow down" after it spawned by multiplying its X and Y speed by 0.98 each frame. I needed to make similar adjustments to the multiplication factors there, in much the same manner as for the additions.
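The post doesn't spell out the exact adjustment, but for multiplicative decay the framerate-consistent version raises the per-frame factor to the power of the multiplier- applying 0.98 twice is 0.98², after all- so something along these lines (names again illustrative):

```csharp
using System;

public static class DecayHelper
{
    // A per-frame factor (e.g. 0.98) applied N times is factor^N, so when
    // one tick represents "multiplier" ideal frames, we apply
    // factor^multiplier instead of the flat factor. At the ideal framerate
    // the multiplier is 1 and the behaviour is unchanged.
    public static double DecaySpeed(double speed, double perFrameFactor, double multiplier)
    {
        return speed * Math.Pow(perFrameFactor, multiplier);
    }
}
```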
I still encounter minor issues that are a direct result of the changes to a “managed” framerate concept; a nice benefit compared to 2.3.0 is that I was able to remove the silly Thread.Sleep() call that slept for 5 or 50 milliseconds (I forget specifically) so the framerate is typically higher; on the “Spartan” Level set builder, the framerate is usually close to 200, which is pretty good for GDI+, and that’s the debug build, too, which is slower than release.
After this, I tried to improve the platforming elements of the game a bit more. I added some new powers, fixed a few minor issues with some of the powerup management code, and added a new interface for the editor that allows blocks to draw something "special" when shown in the editor; this is used by the powerup block to show its contents as well as modify the tooltip shown. Another change was "block tracking" at the level of the PlatformObject. This also sounded a lot more complex than it was. The idea was simple: when the character- or anything else- is on a block, we want it to move with the block. This was done by having the platform object track any block it is on, then, each frame, adding the distance the block moved, if any, to its own location as well. This has worked spectacularly. I also added an interface for blocks so they can receive notifications from a platformobject when they are stood on.
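In sketch form, the tracking idea is just this (class and member names are illustrative, not BASeBlock's real ones):

```csharp
public class TrackedBlock
{
    // How far the block moved this frame; a stationary block reports (0, 0).
    public double DeltaX, DeltaY;
}

public class PlatformRider
{
    public double X, Y;
    public TrackedBlock StandingOn; // the block we're currently on, or null

    // Each frame, inherit whatever movement the block underneath made,
    // so riders travel along with moving platforms.
    public void TrackBlockMovement()
    {
        if (StandingOn == null) return;
        X += StandingOn.DeltaX;
        Y += StandingOn.DeltaY;
    }
}
```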
There is a bit of a downside to this idea, though, based on how I implemented some other "moving" block features for performance reasons. I have a few blocks that give the illusion of moving when hit, but in fact destroy themselves and spawn another object in their place that looks the same. These include BlockShotBlock, BallDirectShotBlock, and the "magnetAttractor" block; the first gives the appearance of shooting upwards when hit, breaking all blocks in its path; the second goes in the direction the ball that hit it was going; and the third works in tandem with another instance of a magnetAttractor block to create the illusion of the two blocks flying towards each other and exploding, or flying apart. These rely on GameObjects to control their behaviours after they are hit, allowing themselves to be destroyed and the rest of their "action" to be governed by those objects- most specifically, the "BoxDestructor", which is used to create a block-shaped projectile that can destroy other blocks. The magnetAttractor creates two such blocks when necessary, and controls them with yet another gameobject that handles their velocity changes and detects when they meet, creating the requisite explosion. I did it this way because my animatedBlock "architecture" is terrible and annoying to work with- or at least it was at the time. This means that a gamecharacter cannot stand on such a block and be "fired" along with it, which would have been an awesome gameplay mechanic for level design. I did create a movingplatform block that opens up some neat possibilities, though- and causes some really goofy gameplay when I replace all the blocks in a level with them.
My next endeavour was related to the editor. With the new platforming component, I had made it possible to create a platform-oriented level, with or without a paddle, by adding the appropriate triggers and components to a level. I forgot to add some of these more than once; in fact, in the second level of the "testplatforming5.blf" levelset included with 2.6, I forgot to set the autorespawn field of one of the spawner blocks, meaning that once you die, you cannot beat the level, since only the paddle respawns, not the character. To help alleviate this, I decided to create "templates". This means that when adding a new level, as well as being able to add a blank level, one can create a new level copied from a template. This really added richness to the editor. Templates are loaded from the templates directory, and can be shown either in a categorized drop-down or in a categorized dialog; the "category" design derives from the template concept used by tools such as Visual Studio itself or VB6, which separate their templates into categories. This should make the creation of custom levels, particularly platforming levels, far easier. Templates can also add sounds or images to the loaded set. (A possible revision might be to warn when a template object conflicts with an existing resource, rather than replacing it.)
I also fixed a myriad of other bugs and UI issues that I encountered while working on other features. The newer version is really shaping up to be a great update.