Tim's Blog

Another code blog

  • Announcement: Shadow Codex 2

    I just wanted to announce that I've started work on the sequel to Shadow Codex, tentatively named Shadow Codex 2.  Here is a list of new things coming to the game:

    • User Interface update and improvement - the game is going to portrait mode with the puzzle board in the middle and one opponent on top and one on the bottom.  This gives it a much better experience on the phone and uses the space available in a more phone-centric way.  You will be able to play using only one hand.
    • New characters to fight with more advanced AI.
    • An entirely new adventure, new story, and new quests.
    • About half of the spells will be new offerings (with some updates to existing spells).  Spells have been renamed "skills", btw.
    • New tile power-ups: XP and MP bonuses.
    • Two new mini-games.
    • There will be some sort of crafting (details are still being designed).
    • Generally faster, sleeker, and more polished.

    Technical:

    • Better screen-size support for the iPhone X (all models) and iPad Pro
    • Possibly Android support

    I think the biggest change is going to be the move to portrait mode.  When I originally conceived the game, I had no expectations of releasing on phones.  That's why it has a 16:9 landscape orientation and doesn't fit quite as well on modern phones like the iPhone X.  With Shadow Codex 2, that will no longer be the case: all phone models will be supported, and the game will fill the screen and use as much space as it can, even on non-phone devices like the iPad.

    As for monetization, I believe it will continue to use the same model as the original and be a pay up-front game (price TBD).  There will probably be a discounted bundle if you buy both games.  I don't think free-to-play (F2P) is a model I can live with as I personally don't like those types of games. There will be no ads and no IAP.

    I'll probably be posting some screenshots soon when I get a bit farther along and do some phone builds of the game.

  • Shadow Codex Released!

    Shadow Codex, a turn-based RPG that uses a word-game combat mechanic, has been released on the popular indie games site itch.io!  It is available for Windows or macOS.  Alternatively, you can purchase it on the Mac App Store.

    Watch the Trailer:

  • Shadow Codex Available For Pre-Order!

    My latest game project is finally done:  Shadow Codex, a unique word-game/RPG hybrid.  It is available now for pre-order on the App Store for iPhone, iPad, and iPod Touch and is also available on the Mac App Store for macOS.

    Here is a video showing the first few minutes of gameplay:


    From the App Store Description:

    Battle for good against an evil force in this unique word/RPG hybrid game!  Face off against countless enemies by mastering words, spells, weapons, and items.  Explore an ever-unfolding map with characters to meet, stories to tell, and quests to embark upon.  Earn gold and visit shops to upgrade your equipment and buy items.  Earn experience to level up and improve your character's fighting stats in combat.


    In Shadow Codex, turn-based combat is accomplished by spelling words on a shared game board of letters.   Pick a letter and chain it to an adjacent letter to form a valid word.  Each word you spell will earn a certain score.  This score is added to your "action points".  If you reach a certain amount as defined by your equipped weapon, you will gain a weapon attack to use.  Rare letters and longer words give big scores where you can potentially gain extra turns or numerous attacks!  Gems on letters will double the score!


    In addition to word-spelling, you can also cast powerful spells.  These spells can change letters on the game board, heal you, damage your enemy, and can be learned by defeating foes and completing quests.


    EXPERIENCE COMBAT

    • Face off against over 45 characters and monsters with unique abilities!
    • Earn gold to purchase new weapons with unique stats and abilities.
    • Earn XP to level-up and improve your stats and magic power.
    • Cast spells to gain an advantage or aid your word-spelling ability.
    • Use items to heal yourself and win the battle!


    PLAY THE MINI-GAMES

    • 3 extra mini-games help you earn gold and experience!
    • Test your spelling speed in the Endurance Trial game.
    • Unlock chests by playing a word search-type game.  You never know what you'll find!
    • Collect or harvest items quickly to aid your fellow citizens and earn rewards!

    Categories: Games

  • RavenDB Survival Tip #3: Handling Polymorphism

    If you want to store objects in your database that are implementations of interfaces or base classes, you can do this pretty easily by altering Raven’s JSON serializer settings so that when Raven serializes your object, it includes a special “$type” property in the JSON that records the full type name of the object.

    The documentation for RavenDB actually mentions this, but there’s a small change I make to be a bit more efficient.  The docs say to use a TypeNameHandling convention of “All”, but this will emit the $type property on every single object.  This is a waste of space and creates a lot of clutter.  You should instead use TypeNameHandling.Auto.  This setting will only include the $type property when the declared type of the property does not match the actual type of the object.

    Here’s how you’d set this up (I’m assuming that you have a DocumentStore instance available).  You’d perform this setup only when initially creating the DocumentStore (once per app):

    store.Conventions.CustomizeJsonSerializer = serializer =>
    {
        serializer.TypeNameHandling = 
            Raven.Imports.Newtonsoft.Json.TypeNameHandling.Auto;
    };
    

    Categories: RavenDB

  • Parsing a Time String with JavaScript

    Let’s say you are building a UI where you’d like a user to enter a time into a text box.  The value they enter might come in a lot of different formats, but we’d like it to end up as the same parsed time regardless:

    • 1pm
    • 1:00pm
    • 1:00p
    • 13:00

    Here’s a little JavaScript function to do this work.  Most of the interesting bits are in the regular expression, which handles all of these various scenarios.  It takes a string representation of a time and attempts to parse it, setting the time on the date object passed as the second parameter.  If you do not provide a second parameter, the current date will be used.

    function parseTime(timeStr, dt) {
        if (!dt) {
            dt = new Date();
        }
    
        var time = timeStr.match(/(\d+)(?::(\d\d))?\s*(p?)/i);
        if (!time) {
            return NaN;
        }
        var hours = parseInt(time[1], 10);
        if (hours === 12 && !time[3]) {
            hours = 0;
        }
        else {
            hours += (hours < 12 && time[3]) ? 12 : 0;
        }
    
        dt.setHours(hours);
        dt.setMinutes(parseInt(time[2], 10) || 0);
        dt.setSeconds(0, 0);
        return dt;
    }

    This function will return NaN if it can’t parse the input at all.  The logic immediately following the match() call is to handle the noon/midnight case correctly. 
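    The behavior is easy to verify with the formats from the list above (the function is repeated here so the snippet runs standalone):

    ```javascript
    // parseTime repeated from above so this snippet runs standalone.
    function parseTime(timeStr, dt) {
        if (!dt) {
            dt = new Date();
        }

        var time = timeStr.match(/(\d+)(?::(\d\d))?\s*(p?)/i);
        if (!time) {
            return NaN;
        }
        var hours = parseInt(time[1], 10);
        if (hours === 12 && !time[3]) {
            hours = 0;
        }
        else {
            hours += (hours < 12 && time[3]) ? 12 : 0;
        }

        dt.setHours(hours);
        dt.setMinutes(parseInt(time[2], 10) || 0);
        dt.setSeconds(0, 0);
        return dt;
    }

    // All four formats from the list parse to the same time: 13:00.
    ["1pm", "1:00pm", "1:00p", "13:00"].forEach(function (s) {
        var d = parseTime(s, new Date(2020, 0, 1));
        console.log(s + " -> " + d.getHours() + ":" + d.getMinutes()); // 13:0
    });

    // The noon/midnight special case: a bare "12" is treated as midnight.
    console.log(parseTime("12", new Date(2020, 0, 1)).getHours()); // 0
    ```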

    Here’s a jsFiddle of this in action:

    Categories: JavaScript

    Tags: Algorithms

  • Fun with Statistics Calculations

    A while back I was working on a system where we would score work items to measure audit risk.  Higher scores would most likely result in an audit while lower scores would pass.  The exact mechanism of measuring the risk is immaterial for this post, so we’ll treat it as a black-box number.  Furthermore, we calculate the risk on all work items but only update our statistics (as described below) on work items that actually did get audited.

    I wanted to know whether the audit score for a particular work item was far above or far below the mean.  If it was low, the audit risk should be low, and vice versa.  What we are looking for here is a “sigma” level – a number that indicates how far away from the mean something is.  A sigma level of zero means the score equals the mean.  A sigma level of 1 means the score is one standard deviation above the mean; -1 means one standard deviation below.  Lower sigma levels are generally better than higher ones in this system.  For normally distributed data, we’d expect over two-thirds of the work items to score within +/- 1 sigma.  A sigma of 6 or higher would be a very large outlier.

    To calculate this sigma value, we need two primary pieces of data – the mean and the standard deviation of the population or sample (i.e. the audit risk scores).  I did not want to recalculate these values over the entire data set each time I computed a sigma level – I wanted to fold each new data point into the previous mean and standard deviation so the calculations stay really fast.

    Let’s start with the mean.  If we save the number of data points used (n) and the previous mean calculated (ca), we can derive the new mean given a new data point (x) with the following formula:

    new_mean = (x + n * ca) / (n + 1)

    Or in C#:

    public static double CalculateNewCumulativeAverage(int n, int x, double ca)
    {
        return (x + n * ca) / (n + 1);
    }
    


    The standard deviation calculation is a little harder.  The Wikipedia article at http://en.wikipedia.org/wiki/Standard_deviation#Rapid_calculation_methods describes a method for rapid calculation that requires you to only provide the following variables to compute a new standard deviation given a new data point (x): n – the number of previous data points used, s1 – sum of all previous x’s, and s2 – sum of all previous x^2 (squared).  Here’s the formula in C#:

    public static double CalculateNewStandardDeviation(int n, int x, int s1, long s2)
    {
        if (n == 0)
            return double.NaN;
        s1 += x;
        s2 += x * x;
        double num = (n + 1) * s2 - (s1 * s1);
        double denom = (n + 1) * n;
        return Math.Sqrt(num / denom);
    }
    


    This will be a very fast way of calculating standard deviation because you simply don’t have to go over all data points (which also means not reading values out of a database).

    The sigma value I talked about earlier can then be calculated given the data point (x), cumulative mean (ca) and standard deviation (s):

    public static double CalculateSigma(int x, double ca, double s)
    {
        return (x - ca) / s;
    }
    


    So all you will need to store in your database is the following scalar values to calculate these stats:

    1. Total number of data points (n).
    2. Sum of all data points (s1).
    3. Sum of the squares of all data points (s2).
    4. Cumulative average or mean (ca).
    5. Current standard deviation (s).

    To add a new data point (x) and update all variables to new values:

    public static void AddDataPoint(int x, ref int n, ref int s1, ref long s2, ref double ca, ref double s)
    {
        ca = CalculateNewCumulativeAverage(n, x, ca);
        s = CalculateNewStandardDeviation(n, x, s1, s2);
        n += 1;
        s1 += x;
        s2 += x * x;
    }
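    To sanity-check the incremental formulas, here’s a quick sketch in JavaScript (the C# above translates almost verbatim).  It feeds points in one at a time and compares the running values against a direct computation over the whole set:

    ```javascript
    // Incremental mean and standard deviation, ported from the C# above.
    function newMean(n, x, ca) {
        return (x + n * ca) / (n + 1);
    }

    function newStdDev(n, x, s1, s2) {
        if (n === 0) return NaN;
        s1 += x;
        s2 += x * x;
        return Math.sqrt(((n + 1) * s2 - s1 * s1) / ((n + 1) * n));
    }

    function sigma(x, ca, s) {
        return (x - ca) / s;
    }

    // Feed in data points one at a time, exactly as AddDataPoint does.
    var data = [4, 8, 6, 10, 2];
    var n = 0, s1 = 0, s2 = 0, ca = 0, s = NaN;
    data.forEach(function (x) {
        ca = newMean(n, x, ca);
        s = newStdDev(n, x, s1, s2);
        n += 1;
        s1 += x;
        s2 += x * x;
    });

    // Direct (sample) computation over the full set for comparison.
    var mean = data.reduce(function (a, b) { return a + b; }) / data.length;
    var variance = data.reduce(function (acc, x) {
        return acc + (x - mean) * (x - mean);
    }, 0) / (data.length - 1);

    console.log(ca, mean);               // 6 6
    console.log(s, Math.sqrt(variance)); // both ~3.1623
    console.log(sigma(10, ca, s));       // ~1.26 – a bit over one deviation above
    ```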
    

    Categories: C#

    Tags: Algorithms

  • Improving on KoLite

    In my posts on creating a dynamic menu system, I used KoLite commands to represent what gets invoked when clicking on a menu item.  KoLite adds a command abstraction to Knockout that cleanly encapsulates the execution of a command and the status of a command (with the canExecute() computed, you can disable UI widgets that cannot be executed in the current state).

    There was just one thing missing from the command that I wanted: the ability to tell if a command is toggled or latched.  Latching a command means that the command is “on” and all UI elements bound to this can update their state (for example, a menu item in a drop down menu typically has a checkmark or dot placed next to the item to convey that the item is on or toggled).

    The concept of latching is very simple to implement.  In knockout.command.js, I added the following code to the ko.command constructor function:

    ko.command = function (options) {
        var self = ko.observable(),
            canExecuteDelegate = options.canExecute,
            executeDelegate = options.execute,
            latchedDelegate = options.isLatched; // new
    
        // new computed; everything else is the same
        self.isLatched = ko.computed(function () {
            return latchedDelegate ? latchedDelegate() : false;
        });
    };
    

    This is really simple – it’s just adding a function that delegates to another function you supply when you setup the command.  This function you supply represents the logic required to tell if the command is in a latched state or not.

    Here’s an example of how this could be used:

    var showStatusBarCmd = ko.command({
        execute: function () {
            showStatusBar(!showStatusBar());
        },
        isLatched: function () {
            return showStatusBar() === true;
        }
    });
    


    In this example, there’s an observable in the outer scope called showStatusBar that determines whether the status bar in the app is visible.  If the user clicks on a menu item bound to this command, the execute() handler will fire, toggling the showStatusBar observable.  The status bar’s visibility is then bound to this value.  As for latching, the isLatched handler simply returns whether showStatusBar is true.

    Now, in the menu system you would probably wire up an element that would show a checkmark or a dot next to the menu item if the command’s latched state is on.  Note that you could have two different UI elements bound to the same command (like a menu item and a toolbar button) and both would be synchronized automatically to changes in the app’s state.
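    Stripped of Knockout, the latching pattern reduces to a command object that derives its "on" state from application state.  Here’s a plain-JavaScript sketch of the idea (makeCommand is a hypothetical stand-in, not KoLite’s actual API, and it loses the automatic UI updates that ko.computed provides):

    ```javascript
    // A hypothetical, Knockout-free sketch of a latched command.
    // Unlike ko.command, nothing here notifies the UI of state changes.
    function makeCommand(options) {
        return {
            execute: options.execute,
            isLatched: function () {
                return options.isLatched ? options.isLatched() : false;
            }
        };
    }

    var statusBarVisible = false;

    var showStatusBarCmd = makeCommand({
        execute: function () {
            statusBarVisible = !statusBarVisible;
        },
        isLatched: function () {
            return statusBarVisible === true;
        }
    });

    showStatusBarCmd.execute();
    console.log(showStatusBarCmd.isLatched()); // true – show the checkmark
    showStatusBarCmd.execute();
    console.log(showStatusBarCmd.isLatched()); // false – hide it
    ```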

    Categories: JavaScript

    Tags: Knockoutjs, KoLite

  • Getting Compile-on-Save to work for TypeScript in VS2012

    I’m poking around with TypeScript and the word on the street was that a recent addition to the plugin for VS2012 (v. 0.8.x) added the ability to do compilation from TypeScript to JavaScript when saving.  I discovered it doesn’t actually work quite right out of the box.  There are two things you need to do:

    1. Enable compile on save in the Tools –> Options menu.
    2. Add a <PropertyGroup> to your .csproj file to configure the TypeScript compiler.

    To enable compile-on-save, do the following:

    1. Navigate to Tools –> Options.
    2. On the left side, expand Text Editor and find TypeScript, and then the Project sub-node.
    3. Check the “Automatically compile TypeScript files which are part of a project” checkbox.
    4. Click OK to save changes.

    Next, you need to update the project file to contain the properties that configure the build target for the TypeScript compiler.  Add this XML to your .csproj (first unload the project, then edit the .csproj file manually):

    <PropertyGroup Condition="'$(Configuration)' == 'Debug'">
      <TypeScriptTarget>ES5</TypeScriptTarget>
      <TypeScriptIncludeComments>true</TypeScriptIncludeComments>
      <TypeScriptSourceMap>true</TypeScriptSourceMap>
      <TypeScriptModuleKind>amd</TypeScriptModuleKind>
    </PropertyGroup>
    <PropertyGroup Condition="'$(Configuration)' == 'Release'">
      <TypeScriptTarget>ES5</TypeScriptTarget>
      <TypeScriptIncludeComments>false</TypeScriptIncludeComments>
      <TypeScriptSourceMap>false</TypeScriptSourceMap>
      <TypeScriptModuleKind>amd</TypeScriptModuleKind>
    </PropertyGroup>
    


    Notice that you can set various compiler options here that you would normally pass as command-line parameters to the tsc compiler.  In this case, I added the <TypeScriptModuleKind> element, which tells the compiler how to generate the module-loading code.  Here, I’ve set mine up to use AMD rather than CommonJS (which is the default).  My target is also “ES5” so it will target ECMAScript 5 rather than the default ECMAScript 3.

    Categories: Visual Studio

    Tags: TypeScript

  • Using C# Implicit Type Conversion

    I was recently required to connect to a SOAP-based web service that had a very nasty WSDL.  It contained tons of complex type definitions for objects that really didn’t need to exist (strings and decimals and integers would have been sufficient).  For example, here’s the generated proxy class for one of the complex types defined in the WSDL that I needed to consume:

    public partial class TOTAL_AMT_TypeShape : object, INotifyPropertyChanged 
    {
        private decimal valueField;
    
        public decimal Value
        {
            get { return this.valueField; }
            set 
            {
                this.valueField = value;
                this.RaisePropertyChanged("Value");
            }
        }
    
        // snip...
    }
    
    

    This class is created by Visual Studio and is the proxy for the TOTAL_AMT_TypeShape object.  As you can probably surmise, this is simply a complex type wrapping a number (a C# decimal to be exact).  The name is awful, and the whole premise of requiring a complex type for a simple number amount (a dollar amount in this case) makes the use of this type really awkward:

    decimal amount = 100000.0m;
    TOTAL_AMT_TypeShape totalAmt = new TOTAL_AMT_TypeShape() { Value = amount };
    


    Now imagine this times 100.  It can get really ugly, really fast.

    My solution was to rely on partial classes and implicit type conversion.  Implicit type conversion is a C# feature that allows you to convert between data types automatically by informing the compiler how to perform the conversion.  The conversion happens automatically and should only be used where the conversion will not result in any data loss or possibly throw an exception (if either of those scenarios exists, you should use an explicit cast conversion instead).  An example of an implicit conversion built into C# would be int to long conversion.  Since an int always fits inside a long, we can assign an int to a long variable without any special effort.  The opposite is not true, however, and we’d need to explicitly cast.

    Here’s my partial class with the implicit conversion operator added:

    public partial class TOTAL_AMT_TypeShape
    {
        public static implicit operator TOTAL_AMT_TypeShape(decimal value) 
        {
            return new TOTAL_AMT_TypeShape() { Value = value };
        }
    }
    


    The implicit conversion operator overload is defined for converting a decimal to a TOTAL_AMT_TypeShape (the target type of the conversion is always the name of the operator method).  We could also go the other way (convert a TOTAL_AMT_TypeShape into a decimal), but I didn’t need to in my case.  And because C# allows partial class definitions, if our proxy object definition changes because of a WSDL refresh, our partial stays intact and the code for the implicit conversion won’t be overwritten.

    Here’s how we’d use it now:

    TOTAL_AMT_TypeShape totalAmt = 100000.0m;
    


    Nice and neat.

    Categories: C#

  • Distributing Monetary Amounts in C#

    Many times in financial applications, you will be tasked with distributing or allocating monetary amounts across a set of ratios (say, a 50/50 split or maybe a 30/70 split).  It can be tricky to get the rounding right so that no pennies are lost in the allocation.

    Here’s an example:  split $0.05 in a 30/70 ratio.  The 30% amount becomes $0.015 and the 70% amount becomes $0.035.  They both add up to the original amount (which is an absolute requirement), but each carries a half penny to accomplish this.  We can’t have half pennies, so something has to be done with the extra penny.

    Now the specifics on where you allocate the extra penny are up to your business requirements, so the solution I present below may not be exactly what you need.  It should, however, give you an idea on how to do this split without losing the extra penny.  Here’s a static method that will take a single monetary amount (as a C# decimal) and an array of ratios.  These ratios are your allocation percentages.  They could be the set [0.30, 0.70] or even [30, 70].  They don’t even need to sum to 1 or 100, it doesn’t matter:

    public static decimal[] DistributeAmount(decimal amount, decimal[] ratios)
    {
        decimal total = 0;
        decimal[] results = new decimal[ratios.Length];
    
        // find the total of the ratios
        for (int index = 0; index < ratios.Length; index++)
            total += ratios[index];
    
        // convert amount to a fixed point value (no mantissa portion)
        amount = amount * 100;
        decimal remainder = amount;
        for (int index = 0; index < results.Length; index++)
        {
            results[index] = Math.Floor(amount * ratios[index] / total);
            remainder -= results[index];
        }
    
        // allocate remainder across all amounts
        for (int index = 0; index < remainder; index++)
            results[index]++;
    
        // convert back to decimal portion
        for (int index = 0; index < results.Length; index++)
            results[index] = results[index] / 100;
    
        return results;
    }
    
    

    (Another thing here is that I am assuming pennies and dollars, or at least a currency system that divides its unit of currency into hundredths – notice the multiply by 100.  You can change that factor to support currencies with other subdivisions.)

    This code works by converting amounts to a fixed point value with no mantissa.  So in our example, $0.05 turns into 5.  We then iterate over each ratio and compute the amount to distribute using a simple division.  The trick here is the Math.Floor().  We round all half pennies down.  The half-penny will stay in the remainder variable. 

    At the end of the distribution, we then distribute all of the fractional pennies that have built up evenly across the distributed amounts.  If there are no remaining pennies to distribute, it simply ends.  So in this implementation, the first ratios in the set tend to get the extra pennies and the last one loses out.  You can change this behavior to be almost anything you like (such as random, even-odd, or something else).

    At the very end, we convert back to a decimal portion by dividing each amount by 100.

    The final results for $0.05 would be { 0.02, 0.03 }, which adds up to 0.05.
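    For comparison, here’s the same algorithm sketched in JavaScript.  One caveat: JavaScript numbers are binary floats rather than C#’s decimal, so the conversion to whole pennies uses Math.round to avoid artifacts like 0.05 * 100 producing 5.000000000000001:

    ```javascript
    // JavaScript sketch of DistributeAmount.  Math.round guards against
    // binary floating-point error when converting to whole pennies.
    function distributeAmount(amount, ratios) {
        var total = ratios.reduce(function (a, b) { return a + b; }, 0);
        var cents = Math.round(amount * 100); // fixed point: whole pennies

        // floor each share; fractional pennies build up in the remainder
        var results = ratios.map(function (r) {
            return Math.floor(cents * r / total);
        });
        var remainder = cents - results.reduce(function (a, b) { return a + b; }, 0);

        // hand the leftover pennies to the first ratios, as above
        for (var i = 0; i < remainder; i++) {
            results[i]++;
        }

        return results.map(function (c) { return c / 100; });
    }

    console.log(distributeAmount(0.05, [30, 70]));  // [ 0.02, 0.03 ]
    console.log(distributeAmount(1.00, [1, 1, 1])); // [ 0.34, 0.33, 0.33 ]
    ```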

    Categories: C#

    Tags: Algorithms