10 Jun 2018 @ 10:23 AM 

It suddenly occurred to me in the last week that I don’t really have a proper system in place for software downloads here on my website, nor for build integration with source control so that projects are rebuilt as needed when commits are made. Having set up a Jenkins build environment for the software I work on at my Job, I thought it reasonable that I make the same demands of myself.

One big reason to do this, IMO, is that it can actually encourage me to create new projects. The effort of packaging up the result and making it easily accessible or usable is often a demotivator, I find, for starting new projects. Having an established “system” in place whereby I can make changes on GitHub and have, say, installer files “appear” properly on my website as needed can be a motivator- I don’t have to build the program, copy files, run installation scripts, etc. manually every time. I just need to configure it all once and it all “works” by itself.

To that end, I’ve set up Jenkins appropriately on one of my “backup” computers. It’s rather tame in its capabilities- only 4GB of RAM and an AMD 5350- but it should get the job done, I think. I would use my QX6700-based system, but the AMD system uses far less power. I also considered putting Jenkins straight-up on my main system but thought that could get in the way and just be annoying. Besides- this gives that system a job to do.

With the implementation for work, there were so many interdependent projects- and we pretty much always want “everything”- that I just made it a single project which builds everything at once. This way everything is properly up to date. The alternative was fiddling with 50+ different projects and figuring out the appropriate dependencies so builds trigger when other projects are updated- something of a mess. Not to mention it’s all in one repository anyway, which goes against that idea as well.

In the case of my personal projects on GitHub, they are already separate repositories, so I will simply have them built as separate projects. With Jenkins itself understanding upstream/downstream relationships, I can use that as needed.

I’ve successfully configured the new Jenkins setup and it is now building BASeTris, a Tetris clone game I decided to write a while ago. It depends on BASeScores and Elementizer, so those two projects are in Jenkins as well.

BASeTris’s final artifact is an installer.

But of course, that installer isn’t much good just sitting on my CI Server! However, I also don’t want to expose the CI Server as a “public” page- there are security considerations, even if I disregard upload bandwidth issues. To that end, I constructed a small program which uploads files to my website using SSH. It runs once a day and is given a directory; it looks in all the immediate subdirectories of that directory, gets the most recent file from each, and uploads it to a corresponding remote directory if it hasn’t already been uploaded. I configured BASeTris to copy its final artifact there into an appropriate folder.
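The selection logic that uploader performs can be sketched roughly like this- a minimal Python sketch of the idea only, since the actual utility is a separate program and the names here are made up:

```python
from pathlib import Path

def select_new_uploads(root, already_uploaded):
    """For each immediate subdirectory of root, pick the most recently
    modified file; return those that haven't been uploaded yet.

    already_uploaded is a set of (subdirectory name, file name) pairs,
    standing in for whatever record the real tool keeps of past uploads.
    """
    picks = []
    for sub in sorted(p for p in Path(root).iterdir() if p.is_dir()):
        files = [f for f in sub.iterdir() if f.is_file()]
        if not files:
            continue
        # most recent file wins; older artifacts are ignored
        newest = max(files, key=lambda f: f.stat().st_mtime)
        if (sub.name, newest.name) not in already_uploaded:
            picks.append(newest)
    return picks
```

The actual SSH transfer step is omitted here; the point is just the “newest file per subdirectory, skip if already uploaded” rule.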

Alternatively, it is possible to configure each project to upload its artifacts via SSH as a post-build step. However, I opted not to do that, because I would rather not have a series of changes throughout the day result in a bunch of new uploads- those would consume space and not be particularly useful. Instead, I’ve opted to have all projects that I want published uploaded once a day, and only if there have been changes. This should help reduce redundancy (and space usage) of those uploads.

My “plan” is to have a proper PHP script or something that can enumerate the folders and provide a better interface for downloads. If nothing else I would like each CI project’s folder to have a “project_current.php” file which automatically sends the latest build- then I can simply link to that on blog download pages for each project and only update those pages to indicate new features or content.
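The real script would be PHP, but the core of what a “project_current.php” needs to do- find the newest artifact in the project’s folder and send it with download headers- looks something like this sketch (Python here purely for illustration; the function name is hypothetical):

```python
from pathlib import Path

def current_download(project_dir):
    """Pick the newest artifact in a project's CI folder and build the
    response headers a latest-build script would send before streaming
    the file. Returns None if the folder has no files yet."""
    files = [f for f in Path(project_dir).iterdir() if f.is_file()]
    if not files:
        return None
    newest = max(files, key=lambda f: f.stat().st_mtime)
    headers = {
        "Content-Type": "application/octet-stream",
        # prompts the browser to save the file under its real name
        "Content-Disposition": f'attachment; filename="{newest.name}"',
        "Content-Length": str(newest.stat().st_size),
    }
    return newest.name, headers
```

The blog download page then only ever links to the one stable URL, and the script resolves it to whatever build is current.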

As an example, http://bc-programming.com/downloads/CI/basetris/ is the location that will contain BASeTris version downloads.

There is still much work to do, however. The program(s) do have git hash metadata added to the project build, so they have access to their git commit hash, but currently they do not actually present that information. I think it should be displayed in the title bar, for example, alongside other build information such as the build date, if possible. I’ve tried to come up with a good way to have the version auto-increment, but I think I’ll just tweak that as the project(s) change.
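The two pieces involved- grabbing the short hash at build time via `git rev-parse --short HEAD`, and formatting it into a title-bar string- might look like this (a Python sketch; the actual projects are .NET, and both function names are mine):

```python
import subprocess

def git_short_hash(repo_dir="."):
    """Ask git for the abbreviated commit hash of HEAD. In the real
    setup this would run during the build and be baked into metadata,
    not queried at run time."""
    return subprocess.check_output(
        ["git", "rev-parse", "--short", "HEAD"], cwd=repo_dir
    ).decode().strip()

def title_with_build_info(app, commit, build_date):
    # e.g. shown in the title bar alongside the application name
    return f"{app} ({commit}, built {build_date})"
```

So the title bar would read something like “BASeTris (abc1234, built 2018-06-10)”.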

Heck- the SSH Uploader utility seems like a good candidate for yet another project to add to GitHub, if I can genericize it so it isn’t hard-coded for my site and purpose.

Posted By: BC_Programming
Last Edit: 10 Jun 2018 @ 10:23 AM

Comments Off on About Time I had a CI Server, Methinks
 03 Dec 2013 @ 8:19 PM 

Making Programs, Classes, and Source code available for free is a very helpful service to those who would wish to consume that content.

One has to select their audience, however- and know what to expect.

For quite some time I’ve helped with a Plugin called GriefPrevention. Initially, this was because I literally had nothing to do aside from my own personal projects, so I was able to dedicate a lot of time to making very significant changes to address a myriad of problems users of the plugin were having. In some ways, I started to treat it like “my job” (aside from my actual Job of, well, finding one). Most of my Time went into GriefPrevention. BASeCamp/SurvivalChests was abandoned; BASeBlock has been hardly edited since… etc.

A few months ago I got a quite spectacular Job; it involves working heavily with Postgres, C#, SQL, and Java, as well as some legacy libraries and stuff for good measure. Naturally that really only touches the surface of it, but the important take-away is that I think it’s awesome. Some work can be a PITA (like making the same slightly complex changes to a query in 19 Jasper Reports) but it beats the hell out of any other Job I’ve had.

I didn’t expect it to be much different, but this rather opened my eyes. The difference between my real work and working on GriefPrevention is quite interesting. Working on GriefPrevention is OK- fundamentally the plugin is relatively simple. It’s certainly less strenuous than page-long stored procedures used in Reports, or asynchronous update downloading and installation.

But what came as a shock was the fact that much of the time I *preferred* to work on work tasks rather than GriefPrevention. I would address WOs instead of GP tickets. To make matters worse, I only get paid for time I put into one of those- take a guess which. Discussions with my co-workers are frequent; meetings take place to plan future additions and changes. The only discussions I’ve had about GriefPrevention are effectively with my inner monologue, since for a number of reasons I’m the only “active” developer (and in this case, “active” means maybe a few hours a week). I typically work 8 to 12 hours for my Job- I basically go nuts until I’m mentally fatigued. This usually means the chance of any changes to any of my Open Source stuff is nil. It’s sad, because I actually play BASeBlock and use BCSearch, and could list off a number of bugs and issues I’ve had with both, but simply don’t have time for them (heck, I didn’t even have time for them when I was unemployed!).


Don’t get me wrong- another interesting part of GriefPrevention is its users. This is also one of the parts that makes me wonder occasionally why I bother. I reworked the code arduously over a period of a few weeks around the time I started to work on GP, to address some issues that many people had- per-world configurations and a more flexible configuration for controlling behaviour. Most people are reasonably happy with these changes. However, those that are not are like a shrill siren piercing a dark night. In this case, right now, the biggest problems seem to be that configurations are “complicated”, with any number of suggestions up to and including a complete rollback of the rules feature. I’ve been insulted, both passive-aggressively and otherwise, both via PM and through project comments. And then been called “unprofessional” when I respond in kind (well, duh- I’m not getting paid, so the requirement for professionalism is a bit less). I doubt I’d be particularly happy if a REAL customer cursed me out, but I’d still be obligated to help them in as professional a manner as possible- not only because I get paid for it but also because I put particular care into my products. Arguably, GriefPrevention is no exception; the difference is that it also represents a loss of my spare time.

Many folks are reasonably understanding of this situation- primarily, my GP contributions center around weekends, and most importantly around my own schedule. When an Open Source project starts to feel like more work than your actual Job, it starts to make you ask questions. Then, when some people running Minecraft servers act more entitled about Free Software than entire Marinas are about their Line of Business software, the disillusionment leaves me with no choice but to laugh.

Pull Requests

With Distributed Version Control Systems such as git through GitHub, anybody can fork a project, make changes, and request those changes be implemented into the main branch. My experience with GriefPrevention has not been promising. With stunning frequency, Pull Requests offer only basic functionality, make no changes to functionality at all, completely break functionality, perform basic code cleanup on the source, run IDE Wizards, remove dead code, add extraneous null checks, etc. Usually with any non-trivial PR I have to ask myself, “Are the changes in this PR more significant than what I will probably put into my next few commits?” Because merging a PR is typically a PITA: even if I only update once a week, sometimes PRs are based on a fork from several months ago and make no attempt to merge changes before the PR, leaving that instead to me.

What I’d love to see is “Fixes X bug that you’ve been tracking down for Months” in the PRs. YES. Thank you, that is great, no matter the cost of attempting to merge it (at the very least I can try to figure out the change and what caused the issue to begin with and fix it in the Working Copy, ascribing appropriate credit of course). But what do I see instead? “Ran Eclipse Wizards”, “Removed unused Variables”, “Organized Imports”, etc. Sometimes I can find an ACTUAL fix, but it requires poring over the File Changes in that pull or its commits searching for such significance. And then it comes to light that effectively the only useful changes are moving some lines of code so that they execute after some other portion designed to null check. Then I discover that an equal or greater “fix” has been applied to the master branch since, making those “fixes” useless- and even where they aren’t useless, I would have to include all those other changes for them to be added.

If anybody wanting to make PRs against GriefPrevention is reading this: DO NOT EVER run any IDE cleanup wizards on the code before issuing a PR. Your PR WILL need to be merged against a later version after more commits, and using a wizard WILL touch every file, meaning that you will force a conflict that requires resolution. I’d be all for resolving a merge conflict to fix some critical bug where recent commits made a change to the same file. Not so much when the PR touches every single file to sort the imports list at the top, thus forcing a merge conflict with every change that occurred since.

Posted By: BC_Programming
Last Edit: 03 Dec 2013 @ 08:19 PM

Comments Off on An Open Source Vent
