Tim's Blog

Another code blog

  • Shadow Codex Released!

    Shadow Codex, a turn-based RPG that uses a word-game combat mechanic, has been released on the popular indie games site itch.io!  It is available for Windows or macOS.  Alternatively, you can purchase it on the Mac App Store.

    Watch the Trailer:

  • Shadow Codex Available For Pre-Order!

    My latest game project is finally done:  Shadow Codex, a unique word-game/RPG hybrid.  It is available now for pre-order on the App Store for iPhone, iPad, and iPod Touch, and is also available on the Mac App Store for macOS.

    Here is a video showing the first few minutes of gameplay:

      

    From the App Store Description:

    Battle for good against an evil force in this unique word/RPG hybrid game!  Face off against countless enemies by mastering words, spells, weapons, and items.  Explore an ever-unfolding map with characters to meet, stories to tell, and quests to embark upon.  Earn gold and visit shops to upgrade your equipment and buy items.  Earn experience to level up and improve your character's fighting stats in combat.

     

    In Shadow Codex, turn-based combat is accomplished by spelling words on a shared game board of letters.  Pick a letter and chain it to adjacent letters to form a valid word.  Each word you spell earns a score, which is added to your "action points".  If you reach a certain amount, as defined by your equipped weapon, you gain a weapon attack to use.  Rare letters and longer words give big scores that can earn you extra turns or multiple attacks!  Gems on letters will double the score!

     

    In addition to word-spelling, you can also cast powerful spells.  These spells can change letters on the game board, heal you, damage your enemy, and can be learned by defeating foes and completing quests.

     

    EXPERIENCE COMBAT

    • Face off against over 45 characters and monsters w/ unique abilities!
    • Earn gold to purchase new weapons with unique stats and abilities.
    • Earn XP to level-up and improve your stats and magic power.
    • Cast spells to gain an advantage or aid your word-spelling ability.
    • Use items to heal yourself and win the battle!

     

    PLAY THE MINI-GAMES

    • 3 extra mini-games help you earn gold and experience!
    • Test your spelling speed in the Endurance Trial game.
    • Unlock chests by playing a word search-type game.  You never know what you'll find!
    • Collect or harvest items quickly to aid your fellow citizens and earn rewards!

    Categories: Games

  • RavenDB Survival Tip #3: Handling Polymorphism

    If you want to store objects in your database that are implementations of interfaces or base classes, you can do this pretty easily by altering Raven’s JSON serializer settings so that when Raven serializes your object, it includes a special “$type” property in the JSON that records the full type name of the object.

    The documentation for RavenDB actually mentions this, but there’s a small change I make to be a bit more efficient.  The docs say to use a TypeNameHandling convention of “All”, but that emits the $type property on every single object, which wastes space and creates a lot of clutter.  You should instead use TypeNameHandling.Auto.  This setting only includes the $type property when the declared type of the property does not match the actual type of the object.

    Here’s how you’d set this up (I’m assuming that you have a DocumentStore instance available).  You’d perform this setup only when initially creating the DocumentStore (once per app):

    store.Conventions.CustomizeJsonSerializer = serializer =>
    {
        serializer.TypeNameHandling = 
            Raven.Imports.Newtonsoft.Json.TypeNameHandling.Auto;
    };
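
    To see what this buys you, here’s a quick sketch with hypothetical types (not from any real project) that stores an object through an interface-typed property.  With Auto, only that property gets the $type metadata:

    public interface IWeapon { string Name { get; set; } }
    public class Sword : IWeapon { public string Name { get; set; } }

    public class Character
    {
        public string Id { get; set; }
        public IWeapon EquippedWeapon { get; set; }  // declared as the interface
    }

    // somewhere after the DocumentStore has been initialized:
    using (var session = store.OpenSession())
    {
        session.Store(new Character { EquippedWeapon = new Sword { Name = "Longsword" } });
        session.SaveChanges();
    }
    // The stored JSON carries "$type" only on the interface-typed property, roughly:
    //   "EquippedWeapon": { "$type": "MyApp.Sword, MyApp", "Name": "Longsword" }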
    

    Categories: RavenDB

  • Parsing a Time String with JavaScript

    Let’s say you are building a UI where you’d like a user to enter a time into a text box.  The value they enter might come in a number of different formats, but we’d like all of them to end up as the same parsed time:

    • 1pm
    • 1:00pm
    • 1:00p
    • 13:00

    Here’s a little JavaScript function to do this work.  Most of the interesting bits are in the regular expression, which handles these various formats.  The function takes a string representation of a time and attempts to parse it, setting the time on the date object passed as the second parameter.  If you do not provide a second parameter, the current date is used.

    function parseTime(timeStr, dt) {
        if (!dt) {
            dt = new Date();
        }
    
        // hours, an optional ":minutes", optional whitespace, and an optional "p" (p/pm)
        var time = timeStr.match(/(\d+)(?::(\d\d))?\s*(p?)/i);
        if (!time) {
            return NaN;
        }
        var hours = parseInt(time[1], 10);
        if (hours == 12 && !time[3]) {
            // a bare "12" with no pm marker is treated as midnight
            hours = 0;
        }
        else {
            // add 12 for pm times (but not for "12pm" or 24-hour input)
            hours += (hours < 12 && time[3]) ? 12 : 0;
        }
    
        dt.setHours(hours);
        dt.setMinutes(parseInt(time[2], 10) || 0);
        dt.setSeconds(0, 0);
        return dt;
    }

    This function will return NaN if it can’t parse the input at all.  The logic immediately following the match() call handles the noon/midnight case correctly.
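
    For example, here’s a quick sketch of the return values (the date portion depends on the day you run it):

    var lunch = parseTime("1:30pm");      // today's date with hours = 13, minutes = 30
    var sameTime = parseTime("13:30");    // the same result from 24-hour input
    var bogus = parseTime("not a time");  // NaN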

    Here’s a jsFiddle of this in action:

    Categories: JavaScript

    Tags: Algorithms

  • Fun with Statistics Calculations

    A while back I was working on a system where we scored work items to measure their risk of being audited.  Higher numbers would most likely result in an audit while lower numbers would pass.  The exact mechanism of measuring the risk is immaterial for this post, so we’ll treat it as a black-box number.  Furthermore, we calculate the risk on all work items but only update our statistics (as described below) from work items that actually did get audited.

    I wanted to know whether the audit score for a particular work item was very far above the mean or very far below it.  If it was low, the audit risk should be low, and vice versa.  What we are looking for here is a “sigma” level – a number that indicates how far away from the mean something is.  If something has a sigma level of zero, it is equal to the mean.  If it has a sigma level of 1, it is 1 standard deviation above the mean; -1 means it is one standard deviation below the mean.  Lower sigma levels are generally better than higher ones in this system.  In normally distributed data, we’d expect over two-thirds of the work items to score within +/- 1 sigma.  A sigma of 6 or higher means the score is a very large outlier.

    To calculate this sigma value, we need two primary pieces of data – the mean and the standard deviation of the population or sample (i.e. the audit risk scores).  I did not want to recalculate these values over the entire set of data each time I computed a sigma level – I just wanted to fold each new data point into the previous mean and standard deviation to keep calculations really fast.

    Let’s start with the mean.  If we save the number of data points used (n) and the previous mean calculated (ca), we can derive the new mean given a new data point (x) with the following formula:

    new_mean = (x + n * ca) / (n + 1)

    Or in C#:

    public static double CalculateNewCumulativeAverage(int n, int x, double ca)
    {
        return (x + n * ca) / (n + 1);
    }
    

     

    The standard deviation calculation is a little harder.  The Wikipedia article at http://en.wikipedia.org/wiki/Standard_deviation#Rapid_calculation_methods describes a rapid calculation method that requires only the following variables to compute a new standard deviation given a new data point (x): n – the number of previous data points, s1 – the sum of all previous x’s, and s2 – the sum of all previous x’s squared.  Here’s the formula in C# (this version computes the sample standard deviation):

    public static double CalculateNewStandardDeviation(int n, int x, int s1, long s2)
    {
        if (n == 0)
            return double.NaN;   // a sample standard deviation needs at least two points
        s1 += x;                 // running sum, now including the new point
        s2 += (long)x * x;       // running sum of squares, now including the new point
        double num = (n + 1) * s2 - ((long)s1 * s1);   // the casts guard against int overflow
        double denom = (n + 1) * n;                    // N * (N - 1), with N = n + 1
        return Math.Sqrt(num / denom);
    }
    

     

    This will be a very fast way of calculating standard deviation because you simply don’t have to go over all data points (which also means not reading values out of a database).

    The sigma value I talked about earlier can then be calculated given the data point (x), cumulative mean (ca) and standard deviation (s):

    public static double CalculateSigma(int x, double ca, double s)
    {
        return (x - ca) / s;
    }
    

     

    So all you need to store in your database are the following scalar values to calculate these stats:

    1. Total number of data points (n).
    2. Sum of all data points (s1).
    3. Sum of the squares of all data points (s2).
    4. Cumulative average or mean (ca).
    5. Current standard deviation (s).

    To add a new data point (x) and update all variables to new values:

    public static void AddDataPoint(int x, ref int n, ref int s1, ref long s2, ref double ca, ref double s)
    {
        // the helpers are called with the totals from *before* this data point was added
        ca = CalculateNewCumulativeAverage(n, x, ca);
        s = CalculateNewStandardDeviation(n, x, s1, s2);
        n += 1;
        s1 += x;
        s2 += (long)x * x;
    }
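
    As a quick illustration (made-up scores, with the running totals starting at zero):

    int n = 0, s1 = 0;
    long s2 = 0;
    double ca = 0, s = 0;
    
    AddDataPoint(10, ref n, ref s1, ref s2, ref ca, ref s);  // ca = 10, s = NaN (only one point)
    AddDataPoint(20, ref n, ref s1, ref s2, ref ca, ref s);  // ca = 15, s ≈ 7.07
    AddDataPoint(30, ref n, ref s1, ref s2, ref ca, ref s);  // ca = 20, s = 10
    
    double sigma = CalculateSigma(35, ca, s);                // (35 - 20) / 10 = 1.5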
    

    Categories: C#

    Tags: Algorithms

  • Improving on KoLite

    In my posts on creating a dynamic menu system, I used KoLite commands to represent what gets invoked when clicking on a menu item.  KoLite adds a command abstraction to Knockout that cleanly encapsulates the execution of a command and its status (with the canExecute() computed, you can disable UI widgets for commands that cannot execute in the current state).

    There was just one thing missing from the command that I wanted: the ability to tell if a command is toggled or latched.  Latching a command means that the command is “on” and all UI elements bound to this can update their state (for example, a menu item in a drop down menu typically has a checkmark or dot placed next to the item to convey that the item is on or toggled).

    The concept of latching is very simple to implement.  In knockout.command.js, I added the following code to the ko.command constructor function:

    ko.command = function (options) {
        var self = ko.observable(),
            canExecuteDelegate = options.canExecute,
            executeDelegate = options.execute,
            latchedDelegate = options.isLatched; // new
    
        // new computed; everything else is the same
        self.isLatched = ko.computed(function () {
            return latchedDelegate ? latchedDelegate() : false;
        });
    };
    

    This is really simple – it’s just adding a function that delegates to another function you supply when you set up the command.  The function you supply contains the logic required to tell whether the command is in a latched state or not.

    Here’s an example of how this could be used:

    var showStatusBarCmd = ko.command({
        execute: function () {
            // read the observable, then write back the flipped value
            showStatusBar(!showStatusBar());
        },
        isLatched: function () {
            return showStatusBar() === true;
        }
    });
    

     

    In this example, there’s an observable in the outer scope called showStatusBar that determines whether the status bar in the app is visible.  If the user clicks on a menu item bound to this command, the execute() handler will fire, toggling the showStatusBar observable.  The status bar’s visibility is then bound to this value.  As for latching, the isLatched handler tests whether showStatusBar is true and returns true if it is.

    Now, in the menu system you would probably wire up an element that would show a checkmark or a dot next to the menu item if the command’s latched state is on.  Note that you could have two different UI elements bound to the same command (like a menu item and a toolbar button) and both would be synchronized automatically to changes in the app’s state.
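
    As a sketch of that wiring (hypothetical markup and CSS class; it assumes the command is exposed on the bound view model and, as in knockout.command.js, that its execute handler hangs off the command object):

    <li data-bind="css: { checked: showStatusBarCmd.isLatched }">
        <a href="#" data-bind="click: showStatusBarCmd.execute">Status Bar</a>
    </li>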

    Categories: JavaScript

    Tags: Knockoutjs, KoLite

  • Getting Compile-on-Save to work for TypeScript in VS2012

    I’m poking around with TypeScript, and the word on the street was that a recent addition to the VS2012 plugin (v. 0.8.x) added the ability to compile TypeScript to JavaScript on save.  I discovered it doesn’t actually work quite right out of the box.  There are two things you need to do:

    1. Enable compile on save in the Tools –> Options menu.
    2. Add a <PropertyGroup> to your .csproj file to configure the TypeScript compiler.

    To enable compile-on-save, do the following:

    1. Navigate to Tools –> Options.
    2. On the left side, expand Text Editor and find TypeScript, and then the Project sub-node.
    3. Check the “Automatically compile TypeScript files which are part of a project” option.
    4. Click OK to save changes.

    Next, you need to update your project file to contain the appropriate properties to configure the build target for the TypeScript compiler.  Add this XML to your .csproj (first unload the project, then edit the .csproj file manually):

    <PropertyGroup Condition="'$(Configuration)' == 'Debug'">
      <TypeScriptTarget>ES5</TypeScriptTarget>
      <TypeScriptIncludeComments>true</TypeScriptIncludeComments>
      <TypeScriptSourceMap>true</TypeScriptSourceMap>
      <TypeScriptModuleKind>amd</TypeScriptModuleKind>
    </PropertyGroup>
    <PropertyGroup Condition="'$(Configuration)' == 'Release'">
      <TypeScriptTarget>ES5</TypeScriptTarget>
      <TypeScriptIncludeComments>false</TypeScriptIncludeComments>
      <TypeScriptSourceMap>false</TypeScriptSourceMap>
      <TypeScriptModuleKind>amd</TypeScriptModuleKind>
    </PropertyGroup>
    

     

    Notice that you can set various compiler options here that you would normally pass as command-line parameters to the tsc compiler.  In this case, I added the <TypeScriptModuleKind> element, which tells the compiler how to generate the module-loading code.  Here, I’ve set mine up to use AMD rather than CommonJS (the default).  My target is also “ES5”, so it will target ECMAScript 5 rather than the default ECMAScript 3.
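
    For reference, the Debug configuration above maps to roughly the following tsc invocation (flag spellings have shifted between compiler versions, so treat this as an illustrative sketch and check tsc -h for your version; app.ts is just a placeholder file name):

    tsc --target ES5 --module amd --sourcemap --comments app.ts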

    Categories: Visual Studio

    Tags: TypeScript

  • Using C# Implicit Type Conversion

    I was recently required to connect to a SOAP-based web service that had a very nasty WSDL.  It contained tons of complex type definitions for objects that really didn’t need to exist (strings, decimals, and integers would have been sufficient).  For example, here’s the proxy generated for one of the complex types defined in the WSDL that I needed to consume:

    public partial class TOTAL_AMT_TypeShape : object, INotifyPropertyChanged 
    {
        private decimal valueField;
    
        public decimal Value
        {
            get { return this.valueField; }
            set 
            {
                this.valueField = value;
                this.RaisePropertyChanged("Value");
            }
        }
    
        // snip...
    }
    
    

    This class is created by Visual Studio and is the proxy for the TOTAL_AMT_TypeShape type.  As you can probably surmise, it is simply a complex type wrapping a number (a C# decimal, to be exact).  The name is awful, and the whole premise of requiring a complex type for a simple number (a dollar amount in this case) makes using this type really awkward:

    decimal amount = 100000.0m;
    TOTAL_AMT_TypeShape totalAmt = new TOTAL_AMT_TypeShape() { Value = amount };
    

     

    Now imagine this times 100.  It can get really ugly, really fast.

    My solution was to rely on partial classes and implicit type conversion.  Implicit type conversion is a C# feature that allows you to convert between data types automatically by telling the compiler how to perform the conversion.  The conversion happens automatically and should only be used where it will not result in any data loss or possibly throw an exception (if either of those scenarios exists, you should use an explicit cast conversion instead).  An example of an implicit conversion built into C# is the int to long conversion: since an int always fits inside a long, we can assign an int value to a long variable without any special effort.  The opposite is not true, however, and we’d need an explicit cast.
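
    Concretely, that built-in conversion looks like this:

    int small = 42;
    long big = small;      // implicit: an int always fits in a long
    int back = (int)big;   // going the other way requires an explicit cast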

    Here’s my partial class with the implicit conversion operator added:

    public partial class TOTAL_AMT_TypeShape
    {
        public static implicit operator TOTAL_AMT_TypeShape(decimal value) 
        {
            return new TOTAL_AMT_TypeShape() { Value = value };
        }
    }
    

     

    The implicit conversion operator overload is defined for converting a decimal to a TOTAL_AMT_TypeShape (the target type is always the name of the operator method).  We could also go the other way (convert a TOTAL_AMT_TypeShape into a decimal), but I didn’t need to in my case.  And because C# allows partial class definitions, if our proxy object definition changes because of a WSDL refresh, our partial stays intact and the code for the implicit conversion won’t be overwritten.
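
    If you ever did need to go the other way, a second operator in the same partial class would do it – a sketch, with the null handling left as a choice for you:

    public partial class TOTAL_AMT_TypeShape
    {
        public static implicit operator decimal(TOTAL_AMT_TypeShape totalAmt)
        {
            // hypothetical reverse conversion; decide what a null wrapper should mean
            return totalAmt != null ? totalAmt.Value : 0m;
        }
    }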

    Here’s how we’d use it now:

    TOTAL_AMT_TypeShape totalAmt = 100000.0m;
    

     

    Nice and neat.

    Categories: C#

  • Distributing Monetary Amounts in C#

    Many times in financial applications, you will be tasked with distributing or allocating monetary amounts across a set of ratios (say, a 50/50 split or maybe a 30/70 split).  Getting the rounding right so that no pennies are lost in the allocation can be tricky.

    Here’s an example:  split $0.05 in a 30/70 ratio.  The 30% amount becomes $0.015 and the 70% will be $0.035.  Now in this case, they both add up to the original amount (which is an absolute requirement) but they must have a half penny each to accomplish this.  We can’t have half pennies in this situation, so something has to be done with the extra penny.

    Now, the specifics of where you allocate the extra penny are up to your business requirements, so the solution I present below may not be exactly what you need.  It should, however, give you an idea of how to do this split without losing the extra penny.  Here’s a static method that takes a single monetary amount (as a C# decimal) and an array of ratios.  These ratios are your allocation percentages.  They could be the set [0.30, 0.70] or even [30, 70].  They don’t even need to sum to 1 or 100; it doesn’t matter:

    public static decimal[] DistributeAmount(decimal amount, decimal[] ratios)
    {
        decimal total = 0;
        decimal[] results = new decimal[ratios.Length];
    
        // find the total of the ratios
        for (int index = 0; index < ratios.Length; index++)
            total += ratios[index];
    
        // convert amount to a fixed-point value (no fractional portion)
        amount = amount * 100;
        decimal remainder = amount;
        for (int index = 0; index < results.Length; index++)
        {
            results[index] = Math.Floor(amount * ratios[index] / total);
            remainder -= results[index];
        }
    
        // allocate remainder across all amounts
        for (int index = 0; index < remainder; index++)
            results[index]++;
    
        // convert back to decimal portion
        for (int index = 0; index < results.Length; index++)
            results[index] = results[index] / 100;
    
        return results;
    }
    
    

    (Another note: I am assuming dollars and pennies here, or at least a currency system that divides its unit of currency into hundredths – notice the multiplication by 100.  You can change that factor to support other currency systems.)

    This code works by converting the amount to a fixed-point value with no fractional part.  So in our example, $0.05 turns into 5.  We then iterate over each ratio and compute the amount to distribute using a simple division.  The trick here is the Math.Floor(): each allocation is rounded down, and whatever is rounded off stays behind in the remainder variable.

    At the end of the distribution, we then hand out the leftover pennies that have built up, one at a time, across the distributed amounts.  If there are no remaining pennies to distribute, the loop simply doesn’t run.  In this implementation, the first ratios in the set tend to get the extra pennies and the last ones lose out.  You can change this behavior to almost anything you like (such as random, even-odd, or something else).

    At the very end, we convert back to a decimal portion by dividing each amount by 100.

    The final results for $0.05 would be { 0.02, 0.03 }, which adds up to 0.05.
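
    In code, that worked example looks like this:

    decimal[] split = DistributeAmount(0.05m, new decimal[] { 30m, 70m });
    // split[0] == 0.02m, split[1] == 0.03m – still sums to 0.05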

    Categories: C#

    Tags: Algorithms

  • Revisiting War of Words: Postmortem

    A postmortem usually happens shortly after release.  Well, I waited 3.5 years, lol.  Anyway, this is my take on what went right and what went wrong with the design and development.

    What Went Right

    1.  Word Graphs and AI characters

    Implementing a word graph to store the word list was a great idea because it gave me so much flexibility in solving problems.  I was able to search the graph with the AI engine and find words very quickly.  I loved that the Game Studio Content Pipeline allowed me to create processors that could take lists of words and build a word graph structure out of them.  I saved this structure to disk and loaded it very quickly at game startup.  I played other word games on the Xbox Indie Games platform and many had long load times (probably because they were processing giant XML files or something).

    The AI was also a pretty good implementation IMO.  It looked very natural, and it scaled up and down in difficulty nicely.  It wasn’t perfect, but extending it and tweaking it was pretty simple.

    2.  Overall Game Concept

    The RPG/word game concept is a good idea and I think I executed it well enough when it came to the core game play features.  I’m pleased with it and would use it as a template for a sequel if I wanted to.

    3.  Getting an Artist to do the Artwork

    Obviously this is a no-brainer if you want something to look good.  I simply don’t have the artistic talent.  The takeaway here is that if you want something to look professional, you need to put up the money for an artist.

    What Went Wrong

    1.  Some Graphics were not done by the Artist

    I decided to do some of the graphics myself, which was stupid, and I think it led to the game looking a little unprofessional at times.  I also think the box art could have been better, but I didn’t do anything about it.  A lot of people judge a game by its box art.

    2.  The story made no sense and was bad

    There’s not much to say here.  It wasn’t good.  It wasn’t interesting.  Maybe it was even laughable.  I’m not a writer and I don’t pretend to be one.  In the end, a lot of reviewers pointed this out, but many would then note approvingly that it didn’t matter that the story sucked because the gameplay was good.  The presentation of the story was low-tech and uninteresting too.

    3.  The map was not implemented well and was not interesting

    The map needed to be a little better drawn and more interesting IMO.  The controls on the map were not done very well.  I should’ve used a model where the cursor was more freely controllable by the player.  The icon stand-in for the player was stupid.

    4.  Random encounters were confusing

    When you moved between locations on a map, you might randomly be attacked by an enemy.  At that point, you can either flee or fight.  If you fled, you incurred some HP damage.  If you fought, you could usually win the battle but it took too long.  This whole process was just not done very well and needed to be re-thought.

    5.  Shields were a dumb concept

    In combat, you could earn shields by scoring big words.  The AI character also had shields.  If you were about to get attacked, you could tap a button and raise the shields for 5 seconds or so of 50% or better armor.  The problem was, however, that human players couldn’t quickly raise the shields and ended up getting hit and then raising them.  I bet this made them feel like an idiot.  The AI of course perfectly handled shields and it was very unfair.  The shield concept should go!  You already wear armor so I don’t know what I was thinking.

    6.  Quests were not interesting

    A “Quest” in War of Words was just a scripted encounter (a single battle).  There were no other variations on this theme.  I should have gone for more complicated quests that involved multiple encounters, travel, other game types, etc.  I have a lot of new ideas, but they didn’t make it into the original, and it got boring after a while.

    7.  Battles lasted too long

    Sometimes you’d spend 10 minutes or more on one character.  This isn’t good.  I did try to make this better at one point, but it still didn’t turn out as well as I had hoped.  If battles were shorter, we could have had more of them or they could’ve been more interesting.  Another aspect of this is that the game might have been too difficult to beat.  If you got wasted at the last minute, you had to repeat the (long) encounter all over again.  I don’t know – I didn’t want the game to be too easy, though.  It is really hard to judge the difficulty of your own game.

     

    There are probably a lot more “wrongs” to write about, but I don’t want to beat myself up too much here :).  I think the overall theme is that polish was lacking in many areas, and polish is what makes a game great.

    Categories: Games

    Tags: War of Words