Tim's Blog

Another code blog

  • RavenDB Survival Tip #3: Handling Polymorphism

    If you want to store objects in your database that are implementations of interfaces or base classes, you can do this pretty easily by altering Raven’s JSON serializer settings so that when Raven serializes your object, it includes a special “$type” property in the JSON that records the full type name of the object.

    The documentation for RavenDB actually mentions this, but there’s a small change I make to be a little more efficient.  The docs say to use a TypeNameHandling setting of “All”, but this emits the $type property on every single object, which wastes space and creates a lot of clutter.  You should instead use TypeNameHandling.Auto.  This setting only includes the $type property when the declared type of a property does not match the actual type of the object.

    Here’s how you’d set this up (I’m assuming that you have a DocumentStore instance available).  You’d perform this setup only when initially creating the DocumentStore (once per app):

    store.Conventions.CustomizeJsonSerializer = serializer =>
    {
        serializer.TypeNameHandling = 
            Raven.Imports.Newtonsoft.Json.TypeNameHandling.Auto;
    };
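
    To illustrate (the IShape, Circle, and Drawing types below are hypothetical, just for this sketch): a property declared as an interface but holding a concrete implementation gets the $type annotation, while properties whose declared and actual types match do not:

    public interface IShape { }

    public class Circle : IShape
    {
        public double Radius { get; set; }
    }

    public class Drawing
    {
        public string Name { get; set; }
        public IShape Shape { get; set; } // declared as the interface
    }

    // With TypeNameHandling.Auto, the stored JSON looks roughly like:
    // {
    //   "Name": "My Drawing",
    //   "Shape": {
    //     "$type": "MyApp.Circle, MyApp",
    //     "Radius": 1.0
    //   }
    // }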
    

    Categories: RavenDB

  • Parsing a Time String with JavaScript

    Let’s say you are building a UI where you’d like a user to enter a time into a text box.  The value they enter might come in a lot of different formats, but we’d like each of these to end up as the same value:

    • 1pm
    • 1:00pm
    • 1:00p
    • 13:00

    Here’s a little JavaScript function to do this work.  Most of the interesting bits are in the regular expression, which handles all of these various scenarios.  The function takes a string representation of a time and attempts to parse it, setting the time on the date object passed as the second parameter.  If you do not provide a second parameter, the current date is used.

    function parseTime(timeStr, dt) {
        if (!dt) {
            dt = new Date();
        }
    
        var time = timeStr.match(/(\d+)(?::(\d\d))?\s*(p?)/i);
        if (!time) {
            return NaN;
        }
        var hours = parseInt(time[1], 10);
        if (hours == 12 && !time[3]) {
            hours = 0;
        }
        else {
            hours += (hours < 12 && time[3]) ? 12 : 0;
        }
    
        dt.setHours(hours);
        dt.setMinutes(parseInt(time[2], 10) || 0);
        dt.setSeconds(0, 0);
        return dt;
    }

    This function will return NaN if it can’t parse the input at all.  The logic immediately following the match() call is to handle the noon/midnight case correctly. 
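
    Here’s a quick sketch of the results you’d expect from the function above:

    parseTime("1pm").getHours();     // 13
    parseTime("1:30p").getMinutes(); // 30
    parseTime("13:00").getHours();   // 13
    parseTime("12am").getHours();    // 0 (midnight)
    parseTime("banana");             // NaN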

    Here’s a jsFiddle of this in action:

    Categories: JavaScript

    Tags: Algorithms

  • Fun with Statistics Calculations

    A while back I was working on a system where we scored work items to measure audit risk.  Higher scores would most likely result in an audit while lower scores would pass.  The exact mechanism for measuring the risk is immaterial for this post, so we’ll treat it as a black-box number.  Furthermore, we calculate the risk on all work items but only update our statistics (as described below) on work items that actually did get audited.

    I wanted to know whether the audit score for a particular work item was far above or far below the mean.  If it was low, the audit risk should be low, and vice versa.  What we are looking for here is a “sigma” level – a number that indicates how far away from the mean something is.  A sigma level of zero means the score equals the mean.  A sigma level of 1 means it is 1 standard deviation above the mean; -1 means it is one standard deviation below.  Lower sigma levels are generally better than higher ones in this system.  In normally distributed data, we’d expect a little over two-thirds of the work items to score within +/- 1 sigma.  A sigma of 6 or higher means the score is a very large outlier.

    To calculate this sigma value, we need two primary pieces of data – the mean and the standard deviation of the population or sample (i.e. the audit risk scores).  I did not want to recalculate these values over the entire data set each time I needed a sigma level – I just wanted to fold the new data point into the previous mean and standard deviation to make the calculation really fast.

    Let’s start with the mean.  If we save the number of data points used (n) and the previous mean calculated (ca), we can derive the new mean given a new data point (x) with the following formula:

    new_mean = (x + n * ca) / (n + 1)

    Or in C#:

    public static double CalculateNewCumulativeAverage(int n, int x, double ca)
    {
        return (x + n * ca) / (n + 1);
    }
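
    As a quick sanity check with made-up numbers: if 4 previous points average 10 and we add x = 20, the new mean is (20 + 4 * 10) / 5 = 12:

    double newMean = CalculateNewCumulativeAverage(4, 20, 10.0); // 12.0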
    


    The standard deviation calculation is a little harder.  The Wikipedia article at http://en.wikipedia.org/wiki/Standard_deviation#Rapid_calculation_methods describes a rapid-calculation method that only requires the following variables to compute a new standard deviation given a new data point (x): n – the number of previous data points, s1 – the sum of all previous x values, and s2 – the sum of all previous x^2 (squared) values.  Here’s the formula in C#:

    public static double CalculateNewStandardDeviation(int n, int x, int s1, long s2)
    {
        if (n == 0)
            return double.NaN; // undefined for fewer than two data points

        // fold the new data point into the running sums
        s1 += x;
        s2 += (long)x * x;

        // sample standard deviation over N = n + 1 points:
        // sqrt((N * s2 - s1^2) / (N * (N - 1)))
        double num = (n + 1) * s2 - ((long)s1 * s1);
        double denom = (double)(n + 1) * n;
        return Math.Sqrt(num / denom);
    }
    


    This will be a very fast way of calculating standard deviation because you simply don’t have to go over all data points (which also means not reading values out of a database).

    The sigma value I talked about earlier can then be calculated given the data point (x), cumulative mean (ca) and standard deviation (s):

    public static double CalculateSigma(int x, double ca, double s)
    {
        return (x - ca) / s;
    }
    


    So all you need to store in your database are the following scalar values to calculate these stats:

    1. Total number of data points (n).
    2. Sum of all data points (s1).
    3. Sum of the squares of all data points (s2).
    4. Cumulative average or mean (ca).
    5. Current standard deviation (s).

    To add a new data point (x) and update all variables to new values:

    public static void AddDataPoint(int x, ref int n, ref int s1, ref long s2, ref double ca, ref double s)
    {
        // compute the new statistics first – both calculations need the
        // previous n, s1, and s2 – then update the running values
        ca = CalculateNewCumulativeAverage(n, x, ca);
        s = CalculateNewStandardDeviation(n, x, s1, s2);
        n += 1;
        s1 += x;
        s2 += (long)x * x;
    }
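
    And here’s a usage sketch (made-up scores, assuming the methods above) that folds a few data points into the running statistics:

    int n = 0, s1 = 0;
    long s2 = 0;
    double ca = 0, s = 0;

    foreach (int score in new[] { 10, 12, 14, 20 })
    {
        AddDataPoint(score, ref n, ref s1, ref s2, ref ca, ref s);
    }

    // ca is now 14.0 (the mean of 10, 12, 14, 20) and s is the sample
    // standard deviation (about 4.32); a new score of 25 would sit at
    // roughly 2.5 sigma above the mean:
    double sigma = CalculateSigma(25, ca, s);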
    

    Categories: C#

    Tags: Algorithms

  • Improving on KoLite

    In my posts on creating a dynamic menu system, I used KoLite commands to represent what gets invoked when clicking on a menu item.  KoLite adds a command abstraction to Knockout that cleanly encapsulates the execution of a command and the status of a command (with the canExecute() computed, you can disable UI widgets whose command cannot execute in the current state).

    There was just one thing missing from the command that I wanted: the ability to tell if a command is toggled or latched.  Latching a command means that the command is “on” and all UI elements bound to this can update their state (for example, a menu item in a drop down menu typically has a checkmark or dot placed next to the item to convey that the item is on or toggled).

    The concept of latching is very simple to implement.  In knockout.command.js, I added the following code to the ko.command constructor function:

    ko.command = function (options) {
        var self = ko.observable(),
            canExecuteDelegate = options.canExecute,
            executeDelegate = options.execute,
            latchedDelegate = options.isLatched; // new
    
        // new computed; everything else is the same
        self.isLatched = ko.computed(function () {
            return latchedDelegate ? latchedDelegate() : false;
        });
    };
    

    This is really simple – it just adds a computed that delegates to a function you supply when you set up the command.  The function you supply contains the logic to determine whether the command is in a latched state or not.

    Here’s an example of how this could be used:

    var showStatusBarCmd = ko.command({
        execute: function () {
            showStatusBar(!showStatusBar()); // toggle the observable's value
        },
        isLatched: function () {
            return showStatusBar() === true;
        }
    });
    


    In this example, there’s an observable in the outer scope called showStatusBar that determines whether the status bar in the app is visible.  If the user clicks on a menu item bound to this command, the execute() handler fires, toggling the showStatusBar observable.  The status bar’s visibility is then bound to this value.  As for latching, the isLatched handler tests whether showStatusBar is true and returns true if it is.

    Now, in the menu system you would probably wire up an element that would show a checkmark or a dot next to the menu item if the command’s latched state is on.  Note that you could have two different UI elements bound to the same command (like a menu item and a toolbar button) and both would be synchronized automatically to changes in the app’s state.
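
    As a sketch of what that binding could look like (the markup and css class names here are hypothetical, not taken from the menu series):

    <li data-bind="css: { latched: showStatusBarCmd.isLatched }">
        <span class="check" data-bind="visible: showStatusBarCmd.isLatched"></span>
        <a href="#">Status Bar</a>
    </li>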

    Categories: JavaScript

    Tags: Knockoutjs, KoLite

  • Getting Compile-on-Save to work for TypeScript in VS2012

    I’m poking around with TypeScript, and the word on the street was that a recent addition to the plugin for VS2012 (v. 0.8.x) added the ability to compile TypeScript to JavaScript on save.  I discovered it doesn’t actually work quite right out of the box.  There are two things you need to do:

    1. Enable compile on save in the Tools –> Options menu.
    2. Add a <PropertyGroup> to your .csproj file to configure the TypeScript compiler.

    To enable compile-on-save, do the following:

    1. Navigate to Tools –> Options.
    2. On the left side, expand Text Editor and find TypeScript, and then the Project sub-node.
    3. Check the “Automatically compile TypeScript files which are part of a project” checkbox.
    4. Click OK to save changes.

    Next, you need to update your project file to contain the appropriate properties to configure the build target for the TypeScript compiler.  Add this XML to your .csproj (first unload the project, then edit the .csproj file manually):

    <PropertyGroup Condition="'$(Configuration)' == 'Debug'">
      <TypeScriptTarget>ES5</TypeScriptTarget>
      <TypeScriptIncludeComments>true</TypeScriptIncludeComments>
      <TypeScriptSourceMap>true</TypeScriptSourceMap>
      <TypeScriptModuleKind>amd</TypeScriptModuleKind>
    </PropertyGroup>
    <PropertyGroup Condition="'$(Configuration)' == 'Release'">
      <TypeScriptTarget>ES5</TypeScriptTarget>
      <TypeScriptIncludeComments>false</TypeScriptIncludeComments>
      <TypeScriptSourceMap>false</TypeScriptSourceMap>
      <TypeScriptModuleKind>amd</TypeScriptModuleKind>
    </PropertyGroup>
    


    Notice that you can set various compiler options here that you would normally pass as command-line parameters to the tsc compiler.  In this case, I added the <TypeScriptModuleKind> element, which tells the compiler how to generate the module-loading code.  Here, I’ve set mine up to use AMD rather than CommonJS (the default).  My target is also “ES5”, so it will target ECMAScript 5 rather than the default ECMAScript 3.

    Categories: Visual Studio

    Tags: TypeScript

  • Using C# Implicit Type Conversion

    I was recently required to connect to a SOAP-based web service that had a very nasty WSDL.  It contained tons of complex type definitions for objects that really didn’t need to exist (strings and decimals and integers would have been sufficient).  For example, here’s the generated proxy for one of the complex types defined in the WSDL I needed to consume:

    public partial class TOTAL_AMT_TypeShape : object, INotifyPropertyChanged 
    {
        private decimal valueField;
    
        public decimal Value
        {
            get { return this.valueField; }
            set 
            {
                this.valueField = value;
                this.RaisePropertyChanged("Value");
            }
        }
    
        // snip...
    }
    
    

    This class is created by Visual Studio and is the proxy for the TOTAL_AMT_TypeShape object.  As you can probably surmise, this is simply a complex type wrapping a number (a C# decimal to be exact).  The name is awful, and the whole premise of requiring a complex type for a simple number amount (a dollar amount in this case) makes the use of this type really awkward:

    decimal amount = 100000.0m;
    TOTAL_AMT_TypeShape totalAmt = new TOTAL_AMT_TypeShape() { Value = amount };
    


    Now imagine this times 100.  It can get really ugly, really fast.

    My solution was to rely on partial classes and implicit type conversion.  Implicit type conversion is a C# feature that allows you to convert between data types automatically by informing the compiler how to perform the conversion.  The conversion happens automatically and should only be used where it cannot result in data loss or throw an exception (if either of those scenarios exists, you should use an explicit cast conversion instead).  An example of an implicit conversion built into C# is the int-to-long conversion.  Since an int always fits inside a long, we can assign an int to a long variable without any special effort.  The opposite is not true, however, and we’d need to cast explicitly.

    Here’s my partial class with the implicit conversion operator added:

    public partial class TOTAL_AMT_TypeShape
    {
        public static implicit operator TOTAL_AMT_TypeShape(decimal value) 
        {
            return new TOTAL_AMT_TypeShape() { Value = value };
        }
    }
    


    The implicit conversion operator overload is defined for converting a decimal to a TOTAL_AMT_TypeShape (the target type is always given by the name of the operator method).  We could also go the other way (convert a TOTAL_AMT_TypeShape into a decimal), but I didn’t need to in my case.  And because C# allows partial class definitions, if our proxy object definition changes because of a WSDL refresh, our partial stays intact and the code for the implicit conversion won’t be overwritten.
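
    If you did need the reverse direction, it would look something like this (just a sketch, since I didn’t use it myself):

    public partial class TOTAL_AMT_TypeShape
    {
        // unwraps the Value; note that a null instance would throw here,
        // so by the guideline above an explicit operator may be safer
        public static implicit operator decimal(TOTAL_AMT_TypeShape amount)
        {
            return amount.Value;
        }
    }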

    Here’s how we’d use it now:

    TOTAL_AMT_TypeShape totalAmt = 100000.0m;
    


    Nice and neat.

    Categories: C#

  • Distributing Monetary Amounts in C#

    Many times in financial applications, you will be tasked with distributing or allocating monetary amounts across a set of ratios (say, a 50/50 split or maybe a 30/70 split).  Getting the rounding right so that no pennies are lost in the allocation can be tricky.

    Here’s an example: split $0.05 in a 30/70 ratio.  The 30% portion comes to $0.015 and the 70% portion to $0.035.  In this case they add up to the original amount (which is an absolute requirement), but each carries half a penny.  We can’t have half pennies, so something has to be done with the extra penny.

    Now, the specifics of where you allocate the extra penny are up to your business requirements, so the solution I present below may not be exactly what you need.  It should, however, give you an idea of how to do this split without losing the extra penny.  Here’s a static method that takes a single monetary amount (as a C# decimal) and an array of ratios.  These ratios are your allocation percentages.  They could be the set [0.30, 0.70] or even [30, 70].  They don’t even need to sum to 1 or 100; it doesn’t matter:

    public static decimal[] DistributeAmount(decimal amount, decimal[] ratios)
    {
        decimal total = 0;
        decimal[] results = new decimal[ratios.Length];
    
        // find the total of the ratios
        for (int index = 0; index < ratios.Length; index++)
            total += ratios[index];
    
        // convert amount to a fixed-point value (no fractional portion)
        amount = amount * 100;
        decimal remainder = amount;
        for (int index = 0; index < results.Length; index++)
        {
            results[index] = Math.Floor(amount * ratios[index] / total);
            remainder -= results[index];
        }
    
        // allocate remainder across all amounts
        for (int index = 0; index < remainder; index++)
            results[index]++;
    
        // convert each amount back to a decimal currency amount
        for (int index = 0; index < results.Length; index++)
            results[index] = results[index] / 100;
    
        return results;
    }
    
    

    (Another thing here is that I am assuming pennies and dollars, or at least a currency system that divides its unit of currency into hundredths – notice the multiplication by 100.  You can change that to support other currency systems.)

    This code works by converting amounts to a fixed-point value with no fractional portion.  So in our example, $0.05 becomes 5.  We then iterate over each ratio and compute the amount to distribute using a simple division.  The trick here is the Math.Floor(): we round all fractional pennies down, and whatever is rounded off accumulates in the remainder variable.

    At the end of the distribution, we then hand out the whole pennies that have built up in the remainder, one each, starting from the first amount.  If there are no remaining pennies to distribute, the loop simply doesn’t run.  So in this implementation, the first ratios in the set tend to get the extra pennies and the last ones lose out.  You can change this behavior to almost anything you like (such as random, even-odd, or something else).

    At the very end, we convert back to a decimal portion by dividing each amount by 100.

    The final results for $0.05 would be { 0.02, 0.03 }, which adds up to 0.05.
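
    And the example worked through in code (assuming the method above):

    decimal[] results = DistributeAmount(0.05m, new decimal[] { 30, 70 });
    // results[0] == 0.02m  (floor of 1.5 pennies, plus the leftover penny)
    // results[1] == 0.03m  (floor of 3.5 pennies)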

    Categories: C#

    Tags: Algorithms

  • Revisiting War of Words: Postmortem

    A postmortem usually happens within a small timeframe after release.  Well, I waited 3.5 years, lol.  Anyway, this is my take on what went right with design and development and what went wrong.

    What Went Right

    1.  Word Graphs and AI characters

    Implementing a word graph to store the word list was a great idea because it gave me so much flexibility in solving problems.  I was able to search the graph with the AI engine and find words very quickly.  I loved that the Game Studio Content Pipeline allowed me to create processors that could take lists of words and create a word graph structure out of them.  I saved off this structure to disk and loaded it very quickly at game startup.  I played other word games on the XBox Indie Games platform and many had long load times (probably because they were processing giant XML files or something).

    The AI was also a pretty good implementation IMO.  It looked very natural, and it scaled up and down in difficulty nicely.  It wasn’t perfect, but extending it and tweaking it was pretty simple.

    2.  Overall Game Concept

    The RPG/word game concept is a good idea and I think I executed it well enough when it came to the core game play features.  I’m pleased with it and would use it as a template for a sequel if I wanted to.

    3.  Getting an Artist to do the Artwork

    Obviously this is a no-brainer if you want something to look good.  I simply don’t have the artistic talent.  The take away here is that if you want something to be professional, you need to put the money up to get an artist.

    What Went Wrong

    1.  Some Graphics were not done by the Artist

    I decided to do some of the graphics myself, which was stupid, and I think it led to the game looking a little unprofessional at times.  I also think the box art could have been better, but I didn’t do anything about it.  A lot of people judge a game by its box art.

    2.  The story made no sense and was bad

    There’s not much to say here.  It wasn’t good.  It wasn’t interesting.  Maybe it was even laughable.  I’m not a writer and I don’t pretend to be one.  In the end, a lot of reviewers pointed this out but many would then positively point out that it didn’t matter if the story sucked because the game play was good.  The presentation of the story was low tech and uninteresting too.

    3.  The map was not implemented well and was not interesting

    The map needed to be a little better drawn and more interesting IMO.  The controls on the map were not done very well.  I should’ve used a model where the cursor was more freely controllable by the player.  The icon stand-in for the player was stupid.

    4.  Random encounters were confusing

    When you moved between locations on a map, you might randomly be attacked by an enemy.  At that point, you can either flee or fight.  If you fled, you incurred some HP damage.  If you fought, you could usually win the battle but it took too long.  This whole process was just not done very well and needed to be re-thought.

    5.  Shields were a dumb concept

    In combat, you could earn shields by scoring big words.  The AI character also had shields.  If you were about to get attacked, you could tap a button and raise the shields for 5 seconds or so of 50% or better armor.  The problem was, however, that human players couldn’t quickly raise the shields and ended up getting hit and then raising them.  I bet this made them feel like an idiot.  The AI of course perfectly handled shields and it was very unfair.  The shield concept should go!  You already wear armor so I don’t know what I was thinking.

    6.  Quests were not interesting

    A “Quest” in War of Words was just a scripted encounter (a single battle).  There were no other variations on this theme.  I should have gone for more complicated quests that involved multiple encounters, travel, other game types, etc.  I have a lot of new ideas, but they didn’t make it into the original, and it got boring after a while.

    7.  Battles lasted too long

    Sometimes you’d spend 10 minutes or more on one character.  This isn’t good.  I did try to make this better at one point, but it still didn’t turn out as well as I had hoped.  If battles were shorter, we could have had more of them, or they could’ve been more interesting.  Another aspect of this is that the game might have been too difficult.  If you got wasted at the last minute, you had to repeat the (long) encounter all over again.  I don’t know – I didn’t want the game to be too easy, though.  It is really hard to judge the difficulty of your own game.


    There are probably a lot more “wrongs” to write about, but I don’t want to beat up on myself too much here :).  I think the overall theme here is that polish was lacking in many areas, and polish is what makes a game great.

    Categories: Games

    Tags: War of Words

  • Building a Dynamic Application Menu with Durandal.js, Knockout, and Bootstrap (Pt. 3)

    In the last two posts of this series, we built a dynamic menu system.  Now it is time to wrap it up with a discussion on how to actually populate and use these menus.

    One idea is to create the concept of a workspace which represents the UI that the user sees for the application.  The workspace is like a top-level window in a desktop application.  The following module defines a workspace that contains a list of menus and defines a routine to take arbitrary menu layout objects and convert them to Menu and MenuItem instances:

    define(function (require) {
        var Menu = require('ui/menu'),
            MenuItem = require('ui/menuItem'),
            menus = ko.observableArray([]);
    
        function setupWorkspace(cmds) {
            menus([]);
    
        var definitions = { // local name must not shadow the menus observable above
                "File": [
                    { text: "New", command: cmds.new },
                    { text: "Open", command: cmds.open },
                    { divider: true },
                    { text: "Save", command: cmds.save },
                    { text: "Save As", command: cmds.saveas },
                    { divider: true },
                    { text: "Sign out", command: cmds.signout }
                ],
                "Edit": [
                    { text: "Cut", command: cmds.cut },
                    { text: "Copy", command: cmds.copy },
                    { text: "Paste", command: cmds.paste }
                ],
                "View": [
                    { text: "View Mode", subItems: [
                        { text: "Simple", command: cmds.toggleSimpleView },
                        { text: "Advanced" command: cmds.toggleAdvancedView }
                    ]}
                ],
                "Help": [
                    { text: "Contents", command: cmds.helpcontents },
                    { divider: true },
                    { text: "About", command: cmds.about }
                ]
            };
    
        loadMenus(definitions);
        }
    
        function loadMenus(menuDefinitions) {
            var menuText, menu;
            for (menuText in menuDefinitions) {
                menu = addMenu(menuText);
                addMenuItems(menu, menuDefinitions[menuText]);
            }
        }
    
        function addMenuItems(menuOrMenuItem, itemDefinitions) {
            for (var i = 0; i < itemDefinitions.length; i++) {
                var definitionItem = itemDefinitions[i];
                if (definitionItem.hasOwnProperty("divider")) {
                    menuOrMenuItem.addDivider();
                }
                else {
                    var menuItem = new MenuItem(definitionItem.text, definitionItem.command);
                    menuOrMenuItem.addMenuItem(menuItem);
                    if (definitionItem.hasOwnProperty("subItems")) {
                        addMenuItems(menuItem, definitionItem.subItems);
                    }
                }
            }
        }
    
        function addMenu(text, position) {
            var menu = new Menu(text);
            if (position) {
                menus.splice(position, 0, menu);
            }
            else {
                menus.push(menu);
            }
    
            return menu;
        }
    
        var workspace = {
            menus: menus,
            addMenu: addMenu,
            setupWorkspace: setupWorkspace
        };
    
        return workspace;
    });
    
    


    The main application shell should call the workspace singleton’s setupWorkspace() function and pass in an object that contains references to the desired ko.commands that will get attached to the menu items.  It can also use the menus property in its data-binding to automatically create the UI (as seen in part 2 of this series).
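
    A shell sketch might look like this (the module paths and the commands module here are hypothetical):

    define(function (require) {
        var workspace = require('ui/workspace'),
            commands = require('app/commands'); // exposes the ko.commands (new, open, save, ...)

        return {
            menus: workspace.menus, // the shell view data-binds to this (see part 2)
            activate: function () {
                workspace.setupWorkspace(commands);
            }
        };
    });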

    The setupWorkspace() function creates a menu definition which is just an inline object literal.  The source for this could actually come from the server as JSON, or be in another file, or loaded by a plugin.  The point is that there is a definition format that gets fed into the loadMenus() function that builds the menus by converting the definition into real Menu and MenuItem instances and adding them to the collection.

    The workspace module also exports the addMenu() function, which allows someone to add a menu to the menu bar after the initial setup has taken place.  More functions (like remove) could be added if you really want to make menu configuration robust (I’m just demoing this to illustrate a point).  And obviously, the commands aren’t built here and this is very demo-specific, but you can swap that out for whatever you want.  You could even send the menu definitions to the setupWorkspace() function instead of embedding them directly in the function.

    You can view a live demo of this series at: http://tblabonne.github.io/DynamicMenus/

    The complete source to the demo can be found at: http://github.com/tblabonne/DynamicMenus

    Categories: JavaScript

    Tags: Bootstrap, Durandal, Knockoutjs, KoLite

  • Revisiting War of Words

    Back in March 2010, I released a game called War of Words on the XBox 360 platform (Indie Games).  This game was a hybrid RPG/word game that used word spelling as the principal combat mechanism in the encounters.  It was very similar to Puzzle Quest in spirit, although the core game play mechanisms were quite different.

    I had a lot of fun working on that game (and a lot of frustrations too).  I had hoped that it would have done better in the marketplace than it did, but Indie Games was not promoted much by Microsoft, and I think the game also lacked some polish that would have made it feel more professional, like an Arcade title.  For example, some graphics weren’t so great (as they were created by me) and the storyline was not very interesting (my fault again, as I am not a good writer either).  I do think the game was better than the average Indie game.  It is currently rated 3.5/5 stars on the XBox marketplace with 232 votes (hardly any Indie games score over 3 stars, and many Arcade and AAA titles struggle to get over 4 stars).

    Economics

    This game cost me about $600 USD to make.  About $250 of it was for a few hours of an artist’s time to draw the majority of the graphics.  Another $200 was spent on audio/music licensing.  I also had a domain name and website (which has been taken down) which cost $100 for a year.  I bought a few misc. things like video capture software to take video for promotions.

    As far as revenue, I can’t say exactly how much it made because the history of payment is long gone (most of the profits were made in the first 3 months of release, and Microsoft does not keep more than about 18 months of history).  I started out selling the game for 400 points ($5 USD) but later dropped it to 240 points ($3 USD).  I make about 70% of that amount per sale.  I do know that dropping the price increased the purchase-to-trial ratio to almost 25%, which is quite excellent.  I think before the price drop, the ratio was between 10-12%, which is pretty good too.

    I can tell you that I did not become rich with this game, obviously.  The real reason was downloads.  If people downloaded the game, you could be pretty certain that at least 1 in 10 would buy it.  If 100,000 people downloaded it, you’d make a decent amount ($30,000 - $50,000).  But I didn’t get download numbers like that.  I pretty much blame this on the fact that Indie Games is not a great service if you want to get noticed.  There are too many bad games and demos that squeeze out good titles.  Also, you have to think of the audience, and I think a word game on a console is probably not optimal.  If your game wasn’t about farting or beer drinking, it would not make it to the top of the list.  If you got on the top downloaded or top rated lists, you would actually get noticed in the dashboard because of the promotion you received.  If you didn’t get on these lists (I was in the recently released list for about 1-2 weeks and then gone forever), you got buried in the dashboard.  Frankly, I’m surprised that anyone finds it today (there are still a few purchases a week).  I know I didn’t really market it much, but how was I supposed to do that exactly?

    I also could not translate the game into multiple languages (the fact that it is an English word game makes this far more challenging than it would be for, say, a simple shooter).  I sold it in all XBox markets in the hope that English speakers there would play it.  Foreign sales were actually pretty strong compared to what I expected.

    Sequel Plans

    I originally was very optimistic about a sequel even with low sales numbers.  I knew that I could take all of the money I made and the code/content I had and make a better sequel that would probably have made more money and taken less time to create.  But I lost interest.

    I basically quit Indie Games because I felt that the peer review concept was not optimal.  In fact, I was pretty much right, as Microsoft has basically dumped the technology behind Indie Games (XNA) and has focused instead on DirectX 11 for Windows 8 and whatever the XBox One has.  Their stance on Indie publishing on the XBox One makes it likely that Indie Games won’t even run on it, let alone Game Studio being ported to it.  You could tell something was up when guys like Shawn Hargreaves started to leave the team.

    I have a lot of design documents and ideas in my head for a sequel (including turning it into a turn-based game), but it will never happen with XNA, which is a real bummer to be honest.  I liked the platform and Game Studio was really cool.

    I toyed with putting it on Windows Phone 7 for a while and even had a very small prototype, but in the end it just didn’t feel right on a phone (without major changes to its design), and WP7 has so few users that the market is very small.  I could try iOS, but it’s flooded, I don’t like Apple, and I’m not versed in any of their programming languages/platforms.

    I think turning it into an HTML5 game would be interesting.  This type of game is better suited to touch or mouse clicking than to a controller, I think.  But I’m too busy doing real paying work and being with family to spend time on this kind of thing.

    More to Come

    I’m planning on blogging about this game in more detail, especially with technical stuff like building directed acyclic word graphs (DAWGs), searching word graphs, building AIs that play games, character and RPG stats systems, etc.

    Categories: C#, Games

    Tags: War of Words