10 Jun 2018 @ 10:23 AM 

It suddenly occurred to me in the last week that I don't really have a proper system in place for software downloads here on my website, nor for build integration with source control so that projects get rebuilt as needed when commits are made. Having set up a Jenkins build environment for the software I work on at my job, I thought it reasonable to make the same demands of myself.

One big reason to do this, IMO, is that it can actually encourage me to create new projects. The work of packaging up the result and making it easily accessible or usable is often a demotivator, I find, for starting new projects. Having an established "system" in place whereby I can make changes on GitHub and have, say, installer files "appear" on my website as needed can be a motivator- I don't have to build the program, copy files, run installation scripts, and so on manually every time; I just need to configure it all once and it all "works" by itself.

To that end, I've set up Jenkins appropriately on one of my "backup" computers. It's rather tame in its capabilities- only 4GB of RAM and an AMD 5350- but it should get the job done, I think. I would use my QX6700-based system, but the AMD system uses far less power. I also considered putting Jenkins straight on my main system, but thought that could get in the way and just be annoying. Besides- this gives that system a job to do.

With the implementation for work, there were so many interdependent projects- and we pretty much always want "everything"- that I just made it a single job which builds everything at once. That way everything is properly up to date. The alternative was fiddling with 50+ different projects and figuring out the appropriate dependencies so each would build when the projects it depends on were updated- something of a mess. Not to mention it's all in one repository anyway, which goes against that idea as well.

In the case of my personal projects on GitHub, they are already separate repositories, so I will simply have them built as separate projects; Jenkins itself understands upstream/downstream relationships, so I can use that as needed.

I've successfully configured the new Jenkins setup and it is now building BASeTris, a Tetris clone game I decided to write a while ago. It depends on BASeScores and Elementizer, so those two projects are in Jenkins as well.

BASeTris’s final artifact is an installer.

But of course, that installer isn't much good just sitting on my CI server! However, I also don't want to expose the CI server as a "public" page- there are security considerations, even if I disregard upload bandwidth issues. To that end, I constructed a small program which uploads files to my website over SSH. It runs once a day and is given a directory; it looks in all the immediate subdirectories of that directory, gets the most recent file in each, and uploads it to a corresponding remote directory if it hasn't already been uploaded. I configured BASeTris to copy its final artifact into an appropriate folder there.
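The uploader itself is just a small console program; a minimal sketch of the logic described above- assuming the SSH.NET (Renci.SshNet) library, with the host, credentials, and paths as placeholders- might look something like this:

```csharp
using System;
using System.IO;
using System.Linq;
using Renci.SshNet;

class ArtifactUploader
{
    static void Main()
    {
        // Each immediate subdirectory of this folder holds one project's artifacts.
        string localRoot = @"C:\CI\Artifacts";
        using (var sftp = new SftpClient("example.com", "username", "password"))
        {
            sftp.Connect();
            foreach (string dir in Directory.GetDirectories(localRoot))
            {
                // Find the most recent file in this project's folder.
                FileInfo newest = new DirectoryInfo(dir).GetFiles()
                    .OrderByDescending(f => f.LastWriteTimeUtc).FirstOrDefault();
                if (newest == null) continue;
                string remotePath = "/downloads/CI/" + Path.GetFileName(dir).ToLower()
                    + "/" + newest.Name;
                // Skip files that have already been uploaded.
                if (sftp.Exists(remotePath)) continue;
                using (FileStream source = newest.OpenRead())
                    sftp.UploadFile(source, remotePath);
                Console.WriteLine("Uploaded " + remotePath);
            }
            sftp.Disconnect();
        }
    }
}
```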

Alternatively, it is possible to configure each project to upload its artifacts via SSH as a post-build step. However, I opted not to do that, because I would rather a series of changes throughout the day not result in a bunch of new uploads- those would consume space and not be particularly useful. Instead, I've opted to have all the projects I want uploaded handled once a day, and only if there have been changes. This should help reduce the redundancy (and space usage) of those uploads.

My "plan" is to have a proper PHP script or something that can enumerate the folders and provide a better interface for downloads. If nothing else, I would like each CI project's folder to have a "project_current.php" file which automatically sends the latest build- then I can simply link to that on the blog download page for each project, and only update the page itself to describe new features or content.

As an example, http://bc-programming.com/downloads/CI/basetris/ is the location that will contain BASeTris version downloads.

There is still much work to do, however. The program(s) do have git hash metadata added to the project build, so they have access to their git commit hash, but currently they do not actually present that information. I think it should be displayed in the title bar, for example, alongside other build information such as the build date, if possible. I've tried to come up with a good way to have the version auto-increment, but I think I'll just tweak that as the project(s) change.

Heck- the SSH uploader utility seems like a good candidate for yet another project to add to GitHub, if I can genericize it so it isn't hard-coded for my site and purpose.

Posted By: BC_Programming
Last Edit: 10 Jun 2018 @ 10:23 AM

 03 Jun 2017 @ 3:00 AM 

One of the fun parts of personal projects is, well, you can do whatever you want. Come up with a silly or even dumb idea and you can implement it if you want. That is effectively how I've approached BASeBlock. It's sort of depressing to play now- held back by older technologies like Windows Forms and GDI+, and higher-resolution screens make it look quite awful too. Even so, when I fire it up I can't help but be happy with what I did. Anyway, I had a lot of pretty crazy ideas for things to add to BASeBlock. Some fit and were even rather fun to play- like a "Snake" boss that was effectively made out of bricks- others were sort of, well, strange, like my Pac-Man boss which attempts to eat the ball. At some point, I decided that the paddle being able to shoot lightning Palpatine-style wasn't totally ridiculous.

Which naturally led to the question: how can we implement lightning in a way that sort of kind of looks believable in a mostly low-resolution way, such that if you squint at the right angle you go "yeah, I can sort of see that possibly being lightning"? For that, I basically considered the recursive "tree drawing" concept. One of the common examples of recursion is drawing a tree: first you draw the trunk, then you draw some branches coming out of the trunk, and then branches from those branches, and so on. For lightning, I adopted the same idea. The essential algorithm I came up with was thus:

  1. From the starting point, draw a line in the specified direction at the specified "velocity".
  2. From that end point, choose a random number of forks. For each fork, pick an angle up to 45 degrees away from the angle of the line just drawn, and take the specified velocity and randomly add or subtract up to 25% of it.
  3. If any of the generated forks now have a velocity of 0 or less, ignore them.
  4. Otherwise, recursively call this same routine to start another "lightning" branch from the fork position at the new velocity.
  5. Proceed until there are no forks to draw or a specified maximum number of recursions has been reached.

Of course, as I mentioned, this is a very crude approximation; lightning doesn't just randomly strike and stop short of the ground, and this doesn't really seek out a path to ground or anything along those lines. Again- a crude approximation to at least mimic lightning. The result in BASeBlock looked something like this:

[Image: the lightning effect in action in BASeBlock]

Now, there are a number of other details in the actual implementation. First, it is written against the game engine, so it "draws" using the game's particle system; it also uses other engine features to, for example, stop short on blocks and do "damage" to blocks that are impacted; and there are short delays between each fork (which, again, is totally not how lightning works, but I'm taking creative license). The result does look far more like a tree when you look at it, but the animation and how quickly it disappears (paired with the sound effect) is enough, I think, to at least make it "passably" lightning.

But all this talk, and no code, huh? Well, since this starts from the somewhat typical "draw a shrub" concept applied recursively and with some randomization, let's just build that- the rest, as they say, will come on their own. And by that, I suppose they mean you can adjust and tweak it as needed until you get the desired effect. Or maybe you want to draw a shrubbery; I'm not judging. With that in mind, here's a quick little method that does this against a GDI+ Graphics object. Why a GDI+ Graphics object? Well, there isn't really any other way of doing standard bitmap drawing on a canvas-type object, as far as I know. Also, as usual, I just sort of threw this together, so I didn't have time to paint it and it might not be to scale or whatever.
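A minimal version of that method- the names and tuning constants here are my own guesses, and worth fiddling with:

```csharp
using System;
using System.Drawing;

static class ShrubDrawer
{
    private static readonly Random rnd = new Random();

    // Draws a line from pStart at the given angle (radians) and "velocity",
    // then recursively forks into 2-3 branches with jittered angle and velocity.
    public static void DrawBranch(Graphics g, Pen pUse, PointF pStart,
        double angle, double velocity, int recursionsLeft)
    {
        if (velocity <= 0 || recursionsLeft <= 0) return;
        PointF endPoint = new PointF(
            pStart.X + (float)(Math.Cos(angle) * velocity),
            pStart.Y + (float)(Math.Sin(angle) * velocity));
        g.DrawLine(pUse, pStart, endPoint);
        int forks = rnd.Next(2, 4);
        for (int i = 0; i < forks; i++)
        {
            // up to 45 degrees away from the current direction...
            double forkAngle = angle + (rnd.NextDouble() * 2 - 1) * (Math.PI / 4);
            // ...and up to 25% added to or removed from the velocity.
            double forkVelocity = velocity + (rnd.NextDouble() * 2 - 1) * (velocity * 0.25);
            DrawBranch(g, pUse, endPoint, forkAngle, forkVelocity, recursionsLeft - 1);
        }
    }
}
```

Calling it with something like DrawBranch(g, Pens.Blue, new PointF(200, 380), -Math.PI / 2, 50, 6)- pointed "up" from the bottom of a bitmap- gives the sort of result below.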

What amazingly beautiful output do we get from this? Whatever this is supposed to be:

[Image: the blue "tree" produced by the method above]

It does sort of look like a shrubbery, I suppose- aside from it being blue, that is. It looks nothing like lightning, mind you. Though in my defense, when electricity tunnels through certain materials it often leaves tree-like patterns like this. Yeah, so it's totally electricity-related.

 

This is all rather unfulfilling, so as a bonus- how about making lightning in Photoshop:

 

  1. Start with a gradient like so.

  2. Next, apply the "Difference Clouds" filter with white and black selected as the foreground and background colours.

  3. Invert the image colours, then adjust the levels to get a more pronounced "beam", as shown.

  4. Finally, add a layer on top and set it to the Overlay blend mode to add a vibrant hue- yellow, red, orange, pink, whatever. I'm not your mom. I made it cyan for some reason here.

Posted By: BC_Programming
Last Edit: 03 Jun 2017 @ 03:00 AM

Categories: .NET, C#, Programming
 26 Feb 2017 @ 12:21 PM 

BASeCamp Network Menu, which I wrote about previously, was a handy little tool for connecting to my VPN networks. It had one disadvantage, however- it was clearly out of place, with a style "outdated" for the OS theme:

[Screenshot: BCNetMenu displaying available VPN connections.]

As we can see above, the style it uses is more like Office 2003, Windows XP, etc. Since the program is intended for Windows 10, that was a bit of a style issue, I think. Since none of the other renderers really fit the bill, I set about writing my own "Win10"-style menu foldout ToolStripRenderer; since the intent was merely to provide for drawing this menu, I've skipped certain features to make it a bit easier.

Windows 10 uses an overwhelmingly "flat" style. This worked in my favour, since that makes it fairly easy to draw in that style. Windows Forms- and thus the ContextMenuStrip one attaches to the NotifyIcon- allows overriding the standard drawing logic with a ToolStripRenderer implementation, so the first step was to create a class derived from ToolStripSystemRenderer. It attempts to mimic the appearance of many Windows 10 foldouts by first drawing a dark background, then drawing a colour over top. However, the colour over top is where things were less clear: we want to use the accent colour defined in the Windows display properties. How do we find that?
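The skeleton of that renderer looks roughly like this- the base colour and alpha values here are my own guesses, and GetAccentColor is the helper constructed below:

```csharp
using System.Drawing;
using System.Windows.Forms;

// Sketch of the renderer concept: a dark base with the system accent colour over top.
public class Win10MenuRenderer : ToolStripSystemRenderer
{
    protected override void OnRenderToolStripBackground(ToolStripRenderEventArgs e)
    {
        using (var dark = new SolidBrush(Color.FromArgb(31, 31, 31)))
            e.Graphics.FillRectangle(dark, e.AffectedBounds);
        // GetAccentColor is defined further down.
        using (var accent = new SolidBrush(Color.FromArgb(96, GetAccentColor(true))))
            e.Graphics.FillRectangle(accent, e.AffectedBounds);
    }
}
```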

As it happens, dwmapi.dll has us covered. However, it bears warning that this is currently an undocumented function- we need to reference it by ordinal, and since it’s undocumented, it could be problematic when it comes to future compatibility. It’s very much a “use at your own risk” function:
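The declaration- assuming ordinal 127, which community references identify as DwmGetColorizationParameters- looks like so:

```csharp
using System.Runtime.InteropServices;

// Ordinal 127 of dwmapi.dll is commonly identified as DwmGetColorizationParameters.
// Undocumented- use at your own risk.
[DllImport("dwmapi.dll", EntryPoint = "#127")]
private static extern int DwmGetColorizationParameters(out DWMCOLORIZATIONPARAMS dwParameters);
```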

This function uses DWMCOLORIZATIONPARAMS, which we of course need to define:
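Something along these lines, going by community documentation of the structure's layout:

```csharp
using System.Runtime.InteropServices;

// Field names follow the community-documented layout; all values are 32-bit.
[StructLayout(LayoutKind.Sequential)]
public struct DWMCOLORIZATIONPARAMS
{
    public uint ColorizationColor;
    public uint ColorizationAfterglow;
    public uint ColorizationColorBalance;
    public uint ColorizationAfterglowBalance;
    public uint ColorizationBlurBalance;
    public uint ColorizationGlassReflectionIntensity;
    public uint ColorizationOpaqueBlend;
}
```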

Once defined, we can now create a helper method that will give us a straight-up color value:
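A sketch of that helper, matching the description that follows:

```csharp
using System.Drawing;

// Returns the DWM accent as a standard Color. The "Opaque" parameter substitutes
// full alpha when the caller doesn't want the blended alpha value.
public static Color GetAccentColor(bool Opaque = false)
{
    DWMCOLORIZATIONPARAMS dwmparams;
    DwmGetColorizationParameters(out dwmparams);
    uint colorvalue = dwmparams.ColorizationColor;
    // Split the 32-bit ARGB value into byte-sized components via shifts.
    byte alpha = Opaque ? (byte)255 : (byte)((colorvalue >> 24) & 0xFF);
    byte red = (byte)((colorvalue >> 16) & 0xFF);
    byte green = (byte)((colorvalue >> 8) & 0xFF);
    byte blue = (byte)(colorvalue & 0xFF);
    return Color.FromArgb(alpha, red, green, blue);
}
```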

We allow for an "Opaque" parameter to specify whether the caller wants the alpha value or not; of course, the caller could always do this itself, but the entire point of functions is to reduce code, so we may as well put it in this way. The method takes the 32-bit integer representing the colour, splits it into its byte-sized components with shift operators, and uses those to construct an appropriate Color to return.

Using this color to paint over an opaque dark background (the color used by the taskbar right-click menu, for example) gives the following menu, using the new Windows 10 renderer I created:

Not a bad representation, if I say so myself! Not perfect, mind you, but it certainly fits better than the Professional ToolStrip renderer, so I don't think calling it a success would be entirely out of band. A more interesting problem presents itself, however: when transparency effects are enabled in the display properties, the default Windows 10 network foldout has a "blur" effect. How can we do the same thing?

After unsuccessful experiments with DwmExtendFrameIntoClientArea and related functions, I eventually stumbled on SetWindowCompositionAttribute(). This can be used to set an accent on a window directly- including setting blur-behind. Of course, as with any P/Invoke, one needs to prepare for the magical journey with some declarations:
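These are undocumented as well, so the definitions below follow community reverse-engineering; WCA_ACCENT_POLICY (19) and ACCENT_ENABLE_BLURBEHIND (3) are the relevant magic numbers:

```csharp
using System;
using System.Runtime.InteropServices;

[DllImport("user32.dll")]
private static extern int SetWindowCompositionAttribute(IntPtr hwnd,
    ref WindowCompositionAttributeData data);

private enum AccentState
{
    ACCENT_DISABLED = 0,
    ACCENT_ENABLE_BLURBEHIND = 3
}

[StructLayout(LayoutKind.Sequential)]
private struct AccentPolicy
{
    public AccentState AccentState;
    public int AccentFlags;
    public int GradientColor;
    public int AnimationId;
}

[StructLayout(LayoutKind.Sequential)]
private struct WindowCompositionAttributeData
{
    public int Attribute; // 19 = WCA_ACCENT_POLICY
    public IntPtr Data;
    public int SizeOfData;
}

// Enables or disables blur-behind on the given window handle.
private static void EnableBlur(IntPtr handle, bool enable)
{
    var accent = new AccentPolicy
    {
        AccentState = enable ? AccentState.ACCENT_ENABLE_BLURBEHIND
                             : AccentState.ACCENT_DISABLED
    };
    int size = Marshal.SizeOf(accent);
    IntPtr pAccent = Marshal.AllocHGlobal(size);
    try
    {
        Marshal.StructureToPtr(accent, pAccent, false);
        var data = new WindowCompositionAttributeData
        {
            Attribute = 19, // WCA_ACCENT_POLICY
            Data = pAccent,
            SizeOfData = size
        };
        SetWindowCompositionAttribute(handle, ref data);
    }
    finally
    {
        Marshal.FreeHGlobal(pAccent);
    }
}
```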

If the blur setting is enabled, the EnableBlur function is called to enable blur; otherwise, it is called to disable it. In both cases, it passes in the handle of the ToolStrip that is opening- which, apparently, is the window handle of the actual menu's "window"- so it works as intended:

I also found that darker colours being drawn seemed to be "more" transparent. The best I could determine was that there is some kind of translucency key: the closer to black, the more "clear" the glass appears. References I found suggest that SetLayeredWindowAttributes() could be used to adjust the colour key, but I wasn't able to get it to work as I intended. Since the main effect is that the "Disabled" text, which is gray, appears like more "clear" glass within the coloured, blurred menu, I found it to be fine.

It would still be ideal to write additional custom draw routines to make checked/selected items in the listing more apparent. As it stands, the default "check" draw routine appears more like an overlay on the top left of the icon, and it's easy to miss; it would be better to custom-draw the items entirely and, instead of a checkmark, perhaps highlight the icon in some fashion to indicate selection.

Posted By: BC_Programming
Last Edit: 26 Feb 2017 @ 12:21 PM

 27 Oct 2016 @ 12:39 PM 

This is part of a series of posts covering new C# 6 features. Currently there are posts covering the following:
String Interpolation
Expression-bodied Members
Improved Overload resolution
The Null Conditional operator
Auto-Property Initializers

Yet another new feature introduced in C# 6 is "Dictionary Initializers". These are another "syntax sugar" feature that shortens code and makes it more readable- or, arguably, less readable if you aren't familiar with the feature.

Let’s say we have a Dictionary of countries, indexed by an abbreviation. We might create it like so:
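With the collection-initializer syntax available in earlier C# versions, that looks something like this (the entries here are my own):

```csharp
var countries = new Dictionary<string, string>()
{
    {"CA", "Canada"},
    {"US", "United States"},
    {"DE", "Germany"}
};
```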

This is the standard approach to initializing dictionaries as used in previous versions- at least, when you want to populate them at the point of declaration. C# 6 adds "dictionary initializers", which attempt to simplify this:
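The index-initializer equivalent of the same dictionary:

```csharp
var countries = new Dictionary<string, string>()
{
    ["CA"] = "Canada",
    ["US"] = "United States",
    ["DE"] = "Germany"
};
```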

Here we see what is effectively a series of assignments through the standard this[] indexer. It's usually called a Dictionary Initializer, but realistically it can be used to initialize any class that has an indexed property like this. For example, it can be used to construct "sparse" collections which have many empty entries, without a bunch of commas:
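For instance, a Dictionary keyed by int can act as a sparse list- no placeholder entries needed between the indices. (This illustration is my own; note that List&lt;T&gt; itself requires an index to already exist before its this[] setter can be used, so a true List can't be "sparsified" this way.)

```csharp
var sparse = new Dictionary<int, string>()
{
    [0] = "first",
    [100] = "one hundredth",
    [100000] = "one hundred thousandth"
};
```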

The "Dictionary Initializer"- which seems more aptly referred to as the Indexing Initializer- is a very useful and delicious syntax sugar that can help make code easier to understand and read.

Posted By: BC_Programming
Last Edit: 27 Oct 2016 @ 12:40 PM

Categories: .NET, C#, Programming
 23 Oct 2015 @ 12:25 AM 

It should not come as any sort of surprise that verifying that software works properly relies quite heavily on testing. While in an academic environment it can be possible to prove that an algorithm will always work in a certain way, it is much more difficult to prove the same assertions about a piece of code implementing that algorithm- even more so when we consider the possibility of user interaction, which adds an arbitrary variable to the equation.

When dealing with large or complicated projects, testing needs to be done in layers. In addition to testing through the same interface the user sees and ensuring it all acts as expected, it is also reasonable to add code-level tests. These tests run functions, methods, or classes within the program(s) and verify that they work as intended. This can be used quite fruitfully in combination with normal testing to identify regressions more easily.

Some languages, such as D, include built-in, compiler/language-level support for unit tests. In D you can define a unit test within the block that you want to test; usually it is tied to a class, as shown in the documentation examples.

[Image: The D Programming Language logo, which I've inserted here for no particularly salient reason.]

Different languages, platforms, and frameworks will tend to do tests differently. Most languages and platforms don’t have the sort of Unit Test functionality we see in D, but as the industry has matured, and particularly with design concepts like test-driven development being pushed to the forefront, most development platforms have either had Unit Test functionality added to them by third parties, or had it added as an integral component.

Unit Test Projects

In Visual Studio, Unit Test Projects can be added to a solution. Test classes and test methods are added to that project, and they can reference your other projects in order to test them. This is a particularly good approach for testing libraries, such as BASeParser. BASeParser is a relatively straightforward recursive-descent expression parser/evaluator- one of my first C# projects, based on a VB6 library of the same function and name. This is actually a very straightforward thing to test: we can have a test method take a set of expressions and expected results, and verify that evaluating each expression provides the expected value:
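A test along those lines, using MSTest- the expression cases here are illustrative, and Evaluate() is a stand-in for the actual BASeParser entry point:

```csharp
using System;
using System.Collections.Generic;
using Microsoft.VisualStudio.TestTools.UnitTesting;

[TestClass]
public class ExpressionTests
{
    [TestMethod]
    public void EvaluationTest()
    {
        // Expression => expected result.
        var cases = new Dictionary<string, double>()
        {
            ["5+3"] = 8,
            ["2*(3+4)"] = 14,
            ["10/4"] = 2.5
        };
        foreach (var kvp in cases)
        {
            double result = Evaluate(kvp.Key);
            Assert.AreEqual(kvp.Value, result,
                "Expression '" + kvp.Key + "' gave the wrong result.");
        }
    }

    // Stand-in: wire this up to BASeParser's actual evaluate method.
    private static double Evaluate(string expression)
    {
        throw new NotImplementedException("call into BASeParser here");
    }
}
```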

By running a test- or integrating tests into your build process- you can have the new version of the program verified to provide correct results. If you rewrite a critical part of the program to improve performance, you don't want to make the results wrong- having these sorts of checks and balances in place can let you know right away if a commit you made has just caused regressions or broken expected behaviour. You can run these tests directly from the IDE, which is a very useful feature: a small marker appears next to the method header, and you can click it to get test options- run the test, run all tests in a class, debug tests, and so on. The debug capability is particularly valuable, as it can help you step in and determine why something is failing. For example, the test shown above helped me trace down a problem with operator precedence and function evaluation. And these are very basic tests, as well.

I’m also working on Unit Test related features and capabilities, which includes integration into Jenkins, but for obvious reasons I can’t be going and showing you that, now can I? 😉

Posted By: BC_Programming
Last Edit: 23 Oct 2015 @ 12:25 AM

 26 Sep 2015 @ 9:58 PM 

Comparing Objects

C# has a number of useful comparison interfaces that we can use and implement, so this would seem to be a redundant post, wouldn't it? We can use IComparable, IComparable<T>, or even the .Equals() method and compare two objects relatively sufficiently.

However, in the context of something like a unit test, if we want to compare or assert that two objects are equal, we would ideally be able to output what is different between them. If we have two objects with many properties and merely compare them, the developer will have to debug to figure out which properties actually differed, whereas we pretty much have access to that information in the unit test. At the same time, we don't want to have to write sophisticated comparison routines for every object type. Instead, it might be reasonable to try a more generic approach: to compare two instances, we could merely compare their public, readable properties. While this won't catch everything, it means we can record the actual differences, which can be expressed as part of debugging output or the output of a unit test.

An Array of issues

The major issue I encountered was the handling of arrays. I previously wrote about the task of serializing arrays, and the tricky part of dealing with arrays is the same here: how we manage the rank. Another issue in this situation is how to compare arrays in a meaningful way if they have different ranks or dimensions. I cannot really think of a good way to do so, so what I've done is merely ignore properties that are arrays. This means that, technically, the solution is wrong (since the arrays between two instances of an object may differ), but since I'm actually going to be using this code in a unit test context and would rather not spend dozens of hours merely working on a way to compare objects, I think it will work to get started.

Effectively, we can use reflection to grab each property, then compare the values of that property in the two instances being compared. If they are different, we add the property to a list of strings to return, indicating the properties that differ; otherwise, we don't. Once returned, we can use another helper function to turn that list into a more useful description of the differences between the two elements.
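A minimal sketch of that routine, assuming we only care about public instance properties:

```csharp
using System;
using System.Collections.Generic;
using System.Reflection;

public static class ObjectComparer
{
    // Returns the names (and values) of public readable properties that differ
    // between the two given instances. Arrays are skipped, as discussed above.
    public static List<string> CompareProperties<T>(T first, T second)
    {
        var differences = new List<string>();
        foreach (PropertyInfo prop in
            typeof(T).GetProperties(BindingFlags.Public | BindingFlags.Instance))
        {
            if (!prop.CanRead || prop.GetIndexParameters().Length > 0) continue;
            if (prop.PropertyType.IsArray) continue; // ignore arrays entirely
            object firstValue = prop.GetValue(first, null);
            object secondValue = prop.GetValue(second, null);
            if (!Object.Equals(firstValue, secondValue))
                differences.Add(prop.Name + ": \"" + firstValue + "\" vs \"" + secondValue + "\"");
        }
        return differences;
    }
}
```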

With this, we can now use a test program to show off the results:
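For instance, with a hypothetical Widget class standing in for the original example:

```csharp
using System;

public class Widget
{
    public string Name { get; set; }
    public int Size { get; set; }
    public bool Enabled { get; set; }
}

class Program
{
    static void Main()
    {
        var first = new Widget { Name = "Sprocket", Size = 5, Enabled = true };
        var second = new Widget { Name = "Sprocket", Size = 8, Enabled = false };
        foreach (string diff in ObjectComparer.CompareProperties(first, second))
            Console.WriteLine(diff);
        // Expected output:
        //   Size: "5" vs "8"
        //   Enabled: "True" vs "False"
    }
}
```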

The output lists each property whose value differs between the two instances.

Posted By: BC_Programming
Last Edit: 26 Sep 2015 @ 09:58 PM

 21 Sep 2015 @ 10:56 PM 

Unit Testing. Books have been written about it. Many, many books. About Unit testing; about Testing methodologies, about Unit Testing approaches, about Unit Test Frameworks, etc. I won’t attempt to duplicate that work here, as I haven’t the expertise to approach the subject to anywhere near that depth. But I felt like writing about it. Perhaps because I’m writing Unit Test code.

Currently, what I am doing is writing unit tests, as mentioned. These are written in C# (since they are testing C#, it makes sense), and they make use of the built-in Visual Studio/.NET unit testing features.

In my mind there are some important ‘cornerstones’ of Unit Tests which make them useful. Again, I’m no expert with Unit tests but this is simply what I’ve reasoned as important through my reading and interpretations on the subject so far.

Coverage

Making sure tests engage as much of your codebase as possible applies to any type of testing, and unit tests are no exception to that rule. It is important to make sure that unit tests execute as much of the code being tested as possible, and- as with any test- they should make efforts to cover corner cases as well.

Maintainability

When it comes to large software projects, one of the more difficult things to do is encourage the updating and creation of new unit tests so that coverage remains high. With looming deadlines, required quick fixes, and frequent "emergency fixes", it is entirely possible for unit test code to quickly get out of date. This can cause working code to fail a unit test, or failing code to pass one, because of changing requirements or redesigns- or because code simply isn't being tested at all.

Automation

While this partly fits into the maintainability aspect, it pertains more to the automation of build processes and the like. In particular, with a Continuous Integration product such as Jenkins or TeamCity, any change to source control can trigger a build, and could even deploy the software automatically into a testing environment. Such a Continuous Integration product can also run unit tests on the source code or resulting executables and verify operation, marking the build as a failure if tests fail- a jumping-off point for investigating which recent changes caused the problem. This encourages maintenance (if a code change causes a failure, then either that code is wrong or the unit test is wrong and needs to be updated) and is certainly valuable for finding defects sooner rather than later, to minimize damage in terms of customer data and particularly in terms of customer trust (and company politics, I suppose).

I repeat once more that I am no unit test magician. I haven't even created a completely working unit test framework that follows these principles yet, but I'm in the process of creating one. There are a number of books about unit tests- and many books covering software development processes will include sections or numerous mentions of unit test methodologies- which will likely be from much more qualified individuals than myself. I just wanted to write something, and couldn't write as much about velociraptors as I originally thought.

Posted By: BC_Programming
Last Edit: 21 Sep 2015 @ 10:56 PM

 15 Aug 2015 @ 4:18 PM 

Windows 10 has been out for a few weeks now. I’ve upgraded one of my backup computers from 8.1 to Windows 10 (the Build I discussed a while back).

I've tried to like it. Really, I have. But I don't, and I don't see myself switching my main systems any time soon. Most of the new features I don't like, and many of those new features cannot be shut off very easily. Others are quality-of-life impacts. Not being able to customize my title bar colour- and the severe removal of customization options in general- I cannot get behind. I am not a fan of the Start Menu, nor do I like how they changed the Start Screen to mimic the new Start Menu. I understand why these changes were made- primarily due to all the Windows 8 naysayers- but that doesn't mean I need to like them.

Windows 10 also introduces the new Universal App framework. This is designed to allow the creation of programs that run across Windows platforms- "Universal Windows Application" referring to the application being able to run on any system that is running Windows 10.

If I said "I really love the Windows 8 Modern UI design and application design", I would be lying, because I don't. This is likely because I dislike mobile apps in general, and that style of application was not only brought to the desktop but brought along the same type of limitations I find distasteful. I tried to create a quick Windows 8-style program based on one of our existing Windows Forms programs, but hit a brick wall: I would have had to extract all of our libraries into a web service and have it running in the background of the program itself. I wasn't able to find a way to say "I want a Windows 8 style with XAML, but I want the same access as a desktop program". I hoped this had been rectified with the Windows 10 framework, since it is possible to target a Universal app and make it, erm, not universal by setting it to be a desktop application- though I have so far been unable to determine whether that helps, and it is looking more and more like it doesn't. This makes my use case- providing a Modern UI "app" that makes use of my company's established .NET class libraries- impossible, because for security reasons you cannot reference standard .NET assemblies that are not in the GAC. I thought they might work if they were signed in some fashion, but I wasn't able to find anything to indicate that is the case.

The basic model, as I understand it, mimics how typical smartphone "apps" work. Typically they have restricted local access and will call remote web services in order to perform more advanced features. This is fairly sensible, since most smartphone apps are based on web services. Of course, the issue is that this means porting any libraries that use those sorts of features to portable libraries which access a web service for the required task. (For a desktop program, I imagine you could have the service running locally.)

I'm more partial to desktop development. Heck, right now my work involves Windows Forms (it beats the mainframe terminal systems the software replaces!), and even moving to WPF would be a significant engineering effort, so I keep my work with WPF and new Modern UI applications "off the clock".

Regardless of my feelings about smartphone "apps", or how the desktop seems to be taking a backseat or even being replaced (it's not; it's just not headline-worthy), Microsoft has continued to provide excellent SDKs, developer tools, and documentation, and is always working to improve them. And even if there is a focus on the Universal framework and Universal applications, the current development tools still provide for the creation of Windows Forms applications, allowing the use of the latest C# features for those who might not have the luxury of simply jumping to new frameworks and technologies willy-nilly.

For those interested in keeping up to date who also have the luxury of time to do so (sigh!), the new Windows development tools are available for free. One can also read about what's new in the Windows development ecosystem with Windows 10, and there are online courses regarding Windows 10 at the Microsoft Virtual Academy, as well as videos and tutorials on Channel 9.

Posted By: BC_Programming
Last Edit: 30 Dec 2015 @ 08:01 PM

 02 Aug 2015 @ 8:55 AM 

I’ve got a problem.

Computer and programming books- I have a shelf full of them. When I was a teenager my computer time was limited, so I would often read the computer books I had or borrowed. Recently I decided to expand my collection and add some classics: I bought "Clean Code" and "Code Complete", both well-regarded books about software development practices. I've barely cracked open either of them, sadly. The only time I can think of doing so was when I had a power outage!

Following this proud tradition, I decided to add more books to my collection. The biggest addition was the entire four-volume set of Donald Knuth's The Art of Computer Programming; I also got "Programming F# 3.0" and "Programming C# 5.0". And the overwhelming question to me is "why"... When will I read them? I've hardly even touched the two I bought over two years ago!

Books versus Internet

Programming and computer books have rather hefty competition in the form of the Internet. I think there is an argument to be made for books, however, even compared to eBook devices such as Kindles. There is an ineffable quality to a good book, and its being physical makes reading it more "personal" in some way.

The aforementioned titles have since arrived, and I've tried to get started with them. I've pushed through the first volume of The Art of Computer Programming as best I can. I can see why it has become something of a programmer's bible, and why Knuth is so well regarded in the industry.

One of the biggest advantages of Donald Knuth's The Art of Computer Programming is that its content is effectively timeless- he mentions therein that it is designed to be accessible, and from what I've read so far (not much, admittedly!) he holds true to that very well. This is in contrast to what could be called more timely titles. For example, "Programming F# 3.0" will eventually be "outdated" in the sense that there will be future versions of F# released; the same goes for "Programming C# 5.0". Both are excellent titles, but will they still be useful in fifty years? Arguably, it doesn't really matter, since there will be updated versions, and a good software developer tries to keep up to date on the most recent technologies and platforms. But there is something, to me, unsavoury about content- like books- that eventually becomes out of date.

On the other hand, an argument could be made that, regardless of content, books have a "timeless" quality to them. For example, I still have books covering Windows 3.1 and Visual Basic 2.0, as well as an ancient college textbook covering BASIC. I find these books far more valuable for information about the topics they cover than even the Internet; what they cover may be somewhat outdated, but information about older software is much more difficult to find online than up-to-date information, so there is certainly some value there.

Posted By: BC_Programming
Last Edit: 07 Aug 2015 @ 08:44 PM

 19 Jul 2015 @ 5:54 PM 

For some time I have been working, on and off, on an attempt to create a useful, powerful, and easy-to-use library to help with serializing and deserializing data instances to and from XML. Without repeating too much, the core concept is to have an IXmlPersistable interface implemented by classes, or an IXmlPersistableProvider implementation made available that the library can associate with a type, to save/load instances to and from an XElement node.
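As a rough sketch, the two core concepts look something like this- the actual definitions in the library may differ in naming and signatures:

```csharp
using System.Xml.Linq;

// Implemented by classes that know how to persist themselves.
public interface IXmlPersistable
{
    XElement SaveElement(string pName);   // serialize this instance into a named XElement
    void ReadElement(XElement pSource);   // populate this instance from an XElement
}

// Implemented separately for types that can't implement IXmlPersistable themselves.
public interface IXmlPersistableProvider<T>
{
    XElement SaveElement(T pItem, string pName);
    T ReadElement(XElement pSource);
}
```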

For the most part, it has gone quite well. However, I hit some interesting snags with regard to generic types- particularly, generic types that I want to write provider implementations for. For example, let's say we want to support serializing and deserializing a List<T>. We obviously cannot implement every single permutation of IXmlPersistableProvider<List<T>>, since implementing an interface requires a concrete type definition. What this means is that if I wanted to support saving/reading a List via a PersistableProvider, I would need an implementation of IXmlPersistableProvider<List<String>> and so on for each possible type- which means, of course, that types I don't know about at compile time could never be included. Unfortunately, the logic is too complex to embody via generics- really, I'd want an implementation of IXmlPersistableProvider<List<T>> where T was any type that itself either implemented IXmlPersistable or for which there was an available IXmlPersistableProvider<T>. So clearly I needed to find an alternative approach.

My first consideration/implementation was to not have any sort of Provider at all; instead, I created a SaveList<T> method and a corresponding ReadList<T> method, which would attempt to save each element T of the List by using the generic SaveElement<T> method I created:
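Roughly like so- SaveElement<T> and ReadElement<T> being the library's existing generic save/load dispatch methods, not reproduced here:

```csharp
using System.Collections.Generic;
using System.Linq;
using System.Xml.Linq;

public partial class XmlPersistence
{
    // Saves each element of the list as a child "Item" node under pName.
    public XElement SaveList<T>(List<T> pList, string pName)
    {
        return new XElement(pName,
            from item in pList select SaveElement(item, "Item"));
    }

    // Rebuilds a List<T> by reading each child "Item" node back out.
    public List<T> ReadList<T>(XElement pSource)
    {
        return (from element in pSource.Elements("Item")
                select ReadElement<T>(element)).ToList();
    }
}
```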

However, I soon discovered that there was a bit of a caveat to that approach- in my case, I discovered it after expanding to create a similar construct for dictionaries. If there was a Dictionary where either the key or the value type was a List, then it wouldn't work properly, as there is no implementation of the providers or interface to save/read a List! So it was back to the drawing board. It was then that I decided that while I couldn't implement an all-encompassing IXmlPersistableProvider for the List<T> type, I could have implementations that cover IList and IDictionary, which are interfaces implemented by the List and Dictionary generic types respectively. There is still a caveat in that saving a Dictionary<String,List<String>> will work properly, but calling ReadElement will instead return a Dictionary<String,IList>; for the moment, I cannot determine a reasonable way to do otherwise within the framework I have created. For now, I'm thinking that can be a sort of advisory for the actual persisting code to keep in mind if it ever needs to save/load nested generic types.

Posted By: BC_Programming
Last Edit: 19 Jul 2015 @ 05:54 PM

Categories: .NET, API, C#, Windows
