Naturally, as we write programs we create a small set of useful functions. BASeBlock has been no exception. I’ve created quite a few functions that offer generic functionality that could be used elsewhere. Here, I share some of them.
Value Clamping
Clamping values is a very common activity. Repeating the same code to make sure a given value was within a range started to get on my nerves, so I conceived of a generic function that could be used with any IComparable.
[code]
public static T ClampValue<T>(T Value, T min, T max) where T : IComparable
{
    //cast to IComparable
    IComparable cvalue = (IComparable)Value;
    IComparable cmin = (IComparable)min;
    IComparable cmax = (IComparable)max;
    //return (T)(cvalue.CompareTo(cmin) < 0 ? cmin : cvalue.CompareTo(cmax) > 0 ? max : Value);
    if (cvalue.CompareTo(cmin) < 0)
    {
        return min;
    }
    else if (cvalue.CompareTo(cmax) > 0)
    {
        return max;
    }
    return Value;
}
[/code]
The basic idea is fairly simple. In order to clamp the value we need to be able to compare it against the bounds, so the function is constrained to accept only types T that implement IComparable. The first step is casting each value to an IComparable; then we use those variables to compare and return the appropriate value: if the value is larger than max, max is returned; if it is smaller than min, min is returned; otherwise, the value is returned unchanged. This function is most useful for numbers, but it can also have interesting usage cases for other comparable classes, even strings.
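For example, some hypothetical calls (here "Utils" just stands in for whatever class the method actually lives in):
[code]
//clamp an integer into the range 0..100
int health = Utils.ClampValue(150, 0, 100);                //returns 100
//clamp a double into 0..1
double alpha = Utils.ClampValue(-0.25, 0.0, 1.0);          //returns 0.0
//strings are IComparable too, so they clamp lexicographically
string word = Utils.ClampValue("Zebra", "Apple", "Mango"); //returns "Mango"
[/code]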
Choosing N items from an Enumerable list of T
This also came up quite a lot: some parts of the game needed to randomly choose a set of values from a larger set of values. Naturally, this gave birth to another generic routine for the purpose:
[code]
public static T[] Choose<T>(IEnumerable<T> ChooseArray, int numselect)
{
    Random rgen = new Random();
    T[] returnarray = new T[numselect];
    SortedList<double, T> sorttest = new SortedList<double, T>();
    foreach (T loopvalue in ChooseArray)
        sorttest.Add(rgen.NextDouble(), loopvalue);
    var usearray = sorttest.ToArray();
    for (int i = 0; i < numselect; i++)
    {
        returnarray[i] = usearray[i].Value;
    }
    return returnarray;
}
[/code]
The idea is simple: build a SortedList that sorts the given listing using a random value as the key for each element, then take the first numselect items off the top. This code will not work properly if numselect is larger than the size of the enumeration, but using Count() to clamp the size of the array would enumerate the sequence twice.
It’s probably possible to make this faster - possibly much faster - since we only need numselect elements. The core idea here is to shuffle the input and take the first numselect elements. One flawed approach to shuffling an array is to repeatedly choose a random index and swap it with another random index, but this has myriad problems, since it doesn’t really guarantee that everything is shuffled, and the result could very well still have runs of the original order.
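A better-known fix is a partial Fisher-Yates shuffle, which only performs numselect swaps and, as a bonus, lets us clamp numselect after materializing the input once. A rough sketch (ChooseFaster is my name for it, not something in BASeBlock):
[code]
//partial Fisher-Yates: only numselect swaps are performed, and the input
//is enumerated exactly once into a List so numselect can be safely clamped.
public static T[] ChooseFaster<T>(IEnumerable<T> source, int numselect, Random rgen)
{
    List<T> pool = new List<T>(source);
    if (numselect > pool.Count) numselect = pool.Count;
    T[] result = new T[numselect];
    for (int i = 0; i < numselect; i++)
    {
        //swap a random element from the unchosen tail into position i
        int pick = i + rgen.Next(pool.Count - i);
        T temp = pool[i];
        pool[i] = pool[pick];
        pool[pick] = temp;
        result[i] = pool[i];
    }
    return result;
}
[/code]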
Now, what if we had three objects we wanted to randomly choose from, and we wanted one of them to be chosen more frequently? One way of doing this is to use the above Choose function and add duplicate entries, but that could get tricky with odd requirements. This is where the Select<T> function comes in. This function is designed to accept an array and a corresponding array of probability weightings; if all the weightings are equal, the result should be similar to what we get from Choose. Select accomplishes this by keeping it simple: each array element is essentially assigned a range within the total of all weightings, and a random number is generated within that total. For example, if we had the following elements:
# | Name | Weight |
---|---|---|
1 | Billy | 15 |
2 | Thomas | 35 |
3 | Jack | 70 |
4 | Selmac | 40 |
5 | Patrick | 80 |
We can see that the weights total 240; if we generate a value between 0 and 240, then a value between 0 and 15 chooses Billy, between 15 and 50 chooses Thomas, between 50 and 120 chooses Jack, between 120 and 160 chooses Selmac, and between 160 and 240 chooses Patrick.
Here is the code for the Select Function:
[code]
public static T Select<T>(T[] items, float[] Probabilities)
{
    return Select(items, Probabilities, new Random());
}

public static T Select<T>(T[] items, float[] Probabilities, Random rgen)
{
    //first, sum all the probabilities.
    //we do this manually because we will also build a corresponding list of the sums up to that element.
    float getsum = 0;
    float[] sumulations = new float[Probabilities.Length + 1];
    for (int i = 0; i < Probabilities.Length; i++)
    {
        sumulations[i] = getsum;
        getsum += Probabilities[i];
    }
    sumulations[sumulations.Length - 1] = getsum; //add this last value in...
    //get a percentage using NextDouble. We use doubles in case the probabilities array uses rather large numbers,
    //to attempt to prevent aberrations as a result of floating point errors.
    double usepercentage = rgen.NextDouble();
    //convert this percentage into a value we can use, that corresponds to the sum of float values:
    float searchtotal = (float)(usepercentage * getsum);
    //now find the corresponding index and return the corresponding value in the items array.
    //the lower bound is inclusive so that a value landing exactly on a boundary still selects an element.
    for (int i = 0; i < Probabilities.Length; i++)
    {
        if (searchtotal >= sumulations[i] && searchtotal < sumulations[i + 1])
            return items[i];
    }
    return default(T);
}
[/code]
A short test Main() routine that can be used to… test it:
[code]
static void Main(string[] args)
{
    int totalcount = 50000;
    String[] names = new string[] { "Bill", "Tom", "Dick" };
    float[] probability = new float[] { 50, 20, 30 };
    Dictionary<string, int> countrunner = new Dictionary<string, int>();
    Console.WriteLine("running simulations...");
    for (int i = 0; i < totalcount; i++)
    {
        String Selected = Select(names, probability);
        if (!countrunner.ContainsKey(Selected))
            countrunner.Add(Selected, 0);
        countrunner[Selected]++;
    }
    Console.WriteLine("Completed. Results:");
    foreach (var iterate in countrunner)
    {
        Console.WriteLine(iterate.Key + "\t\t\t" + iterate.Value.ToString() + " " + ((float)iterate.Value) / (float)totalcount);
    }
    Console.ReadKey();
}
[/code]
The test is created in such a way that the values can be thought of directly as percentages. The resulting output shows that after a run the occurrences of each name lie around the given percentages: Bill appears about 50 percent of the time, Tom 20 percent, and Dick 30 percent. Obviously there is no requirement that the values add up to 100; they are treated as proportions of the total either way (choosing the values 100, 40, and 60 for the probability array gives similar results).
This particular method has a bit of a “problem”: what if we run it repeatedly on the same array? Then we are constantly recreating the sumulations array and recalculating the totals. How can we cache it? Easy- use a Dictionary and keep weak references to the given arrays.
But all is not that simple! This is a generic method, and the type T could easily change between calls- so what do we do? Well, it is possible to create a static object that contains a Dictionary indexed by Type whose value is a KeyValuePair<WeakReference, List<T>>, then inspect the Dictionary for the appropriate values, make sure the cache doesn’t get too big, dispose of the WeakReferences that point to arrays that have since been destroyed, blah blah, tricky business. We don’t want that, because for one thing it will be a pain to write- and for another, it will probably be slower overall to begin with. Instead, how about absolving the method itself of that responsibility and adding a ref parameter that can accept a previously calculated sum array?
For me, the above testing Main() routine took 200ms to execute on average (I placed calls to System.Diagnostics.Stopwatch members before and after the loop). With only three items, caching made no appreciable difference in speed; thankfully, however, the extra logic did not slow it down either.
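The timing harness was nothing fancier than something along these lines (a rough sketch, not the exact code I used):
[code]
//wrap the simulation loop from the test Main() above with a Stopwatch.
System.Diagnostics.Stopwatch timer = new System.Diagnostics.Stopwatch();
timer.Start();
for (int i = 0; i < totalcount; i++)
{
    String Selected = Select(names, probability);
    //...tally into countrunner as before...
}
timer.Stop();
Console.WriteLine("Elapsed: " + timer.ElapsedMilliseconds + "ms");
[/code]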
The speed improvement can be seen when we have a lot more members. With 5000 members in the probability and value arrays, and executing a Select on them 50,000 times, the average time was around 5-6 seconds. When the Main function instead passed a float[] by ref as an extra parameter, the average time dropped to one second. Here is the code for the revised version of the procedure, which also adds some overloads:
[code]
public static T Select<T>(T[] items, float[] Probabilities)
{
    return Select(items, Probabilities, new Random());
}

public static T Select<T>(T[] items, float[] Probabilities, Random rgen)
{
    float[] sumulator = null;
    return Select(items, Probabilities, rgen, ref sumulator);
}

public static T Select<T>(T[] items, float[] Probabilities, ref float[] sumulations)
{
    return Select(items, Probabilities, new Random(), ref sumulations);
}

public static T Select<T>(T[] items, float[] Probabilities, Random rgen, ref float[] sumulations)
{
    //first, sum all the probabilities; unless a cached value is being given to us.
    //we do this manually because we will also build a corresponding list of the sums up to that element.
    float getsum = 0;
    if (sumulations == null)
    {
        sumulations = new float[Probabilities.Length + 1];
        for (int i = 0; i < Probabilities.Length; i++)
        {
            sumulations[i] = getsum;
            getsum += Probabilities[i];
        }
        sumulations[sumulations.Length - 1] = getsum; //add this last value in...
    }
    else
    {
        //cached: the grand total is the last element of the sumulations array.
        getsum = sumulations[sumulations.Length - 1];
    }
    //get a percentage using NextDouble. We use doubles in case the probabilities array uses rather large numbers,
    //to attempt to prevent aberrations as a result of floating point errors.
    double usepercentage = rgen.NextDouble();
    //convert this percentage into a value we can use, that corresponds to the sum of float values:
    float searchtotal = (float)(usepercentage * getsum);
    //now find the corresponding index and return the corresponding value in the items array.
    //the lower bound is inclusive so that a value landing exactly on a boundary still selects an element.
    for (int i = 0; i < Probabilities.Length; i++)
    {
        if (searchtotal >= sumulations[i] && searchtotal < sumulations[i + 1])
            return items[i];
    }
    return default(T);
}
[/code]
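To take advantage of the cache, the caller simply holds on to the sum array between calls; a hypothetical caller, reusing the variables from the test Main(), might look like this:
[code]
//hypothetical caller: cachedsums starts out null, is filled in by the first call,
//and is reused (rather than recomputed) by every call after that.
float[] cachedsums = null;
Random rgen = new Random();
for (int i = 0; i < totalcount; i++)
{
    String Selected = Select(names, probability, rgen, ref cachedsums);
    //...tally Selected as in the test Main()...
}
[/code]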
A lot of the above could even be implemented as extension methods on the appropriate types, making their use seamless.
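For instance, ClampValue reads quite naturally as an extension method; a quick sketch (the class name BCBlockExtensions is invented for the example):
[code]
//sketch of an extension-method wrapper around the same clamping logic.
public static class BCBlockExtensions
{
    public static T Clamp<T>(this T value, T min, T max) where T : IComparable
    {
        if (value.CompareTo(min) < 0) return min;
        if (value.CompareTo(max) > 0) return max;
        return value;
    }
}
//usage: int lives = (currentlives + 1).Clamp(0, 99);
[/code]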
Since I am working on a game, dealing with vectors and speeds and whatnot is common. One frequent requirement is for items to move at a random angle and a random speed within a given range. The obvious base case here is creating a vector given an angle and a magnitude:
[code]
public static PointF GetVelocity(double speed, double angle)
{
    return new PointF((float)(Math.Cos(angle) * speed), (float)(Math.Sin(angle) * speed));
}
[/code]
This uses standard trigonometry to calculate the X and Y legs of a right triangle whose hypotenuse is the given magnitude in the given direction. Extending from this, we simply create a few extra routines that perform the randomizations:
[code]
public static PointF GetRandomVelocity(double minspeed, double maxspeed, double angle)
{
    return GetRandomVelocity(minspeed, maxspeed, angle, new Random());
}

public static PointF GetRandomVelocity(double minspeed, double maxspeed, double angle, Random rgen)
{
    return GetVelocity(minspeed + (rgen.NextDouble() * (maxspeed - minspeed)), angle);
}

public static PointF GetRandomVelocity(double minspeed, double maxspeed, Random rgen)
{
    return GetRandomVelocity(minspeed, maxspeed, Math.PI * 2 * rgen.NextDouble(), rgen);
}
[/code]
easy as 3.1415926!
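In practice a call ends up being a one-liner; for example (hypothetical usage, not code from BASeBlock):
[code]
//launch something in a completely random direction at 2 to 5 units per tick...
Random rgen = new Random();
PointF velocity = GetRandomVelocity(2.0, 5.0, rgen);
//...or at a fixed 45-degree angle with only the speed randomized.
PointF velocity45 = GetRandomVelocity(2.0, 5.0, Math.PI / 4, rgen);
[/code]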
Another interesting endeavour is simplifying the otherwise messy world of Object serialization, such as easily converting a Serializable Object to and from a Stream:
[code]
public static void ObjectToStream<T>(T saveme, Stream outstream) where T : ISerializable
{
BinaryFormatter bf = getFormatter();
using(GZipStream gz = new GZipStream(outstream,CompressionMode.Compress))
{
bf.Serialize(gz, saveme);
}
}
public static T StreamToObject<T>(Stream instream) where T : ISerializable
{
BinaryFormatter bf = getFormatter();
using(GZipStream gz = new GZipStream(instream,CompressionMode.Decompress))
{
return (T)bf.Deserialize(gz);
}
}
[/code]
This also plonks in a bit of compression through the use of a GZipStream.
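Round-tripping an object through a file (or a MemoryStream) then becomes a couple of using blocks; a hypothetical example, where HighScoreList stands in for any type that is serializable and implements ISerializable as the constraint requires:
[code]
//hypothetical usage; HighScoreList is a stand-in for a real serializable type.
HighScoreList scores = new HighScoreList();
using (FileStream fs = new FileStream("scores.dat", FileMode.Create))
{
    ObjectToStream(scores, fs);
}
HighScoreList loaded;
using (FileStream fs = new FileStream("scores.dat", FileMode.Open))
{
    loaded = StreamToObject<HighScoreList>(fs);
}
[/code]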
Anyway, that’s a quick sampling of a few snippets of possible usefulness 🙂