
Friday, June 3, 2016

Had fun with my new tools....



From time to time I love to work on 3D objects, and as I received my new painting tool (Mari) I decided to play a bit with the different pieces of my pipeline, and for once make something a bit darker. Sure, I'm no master of horror, but I had some fun and the result is not all that bad (IMO).

Tuesday, May 17, 2016

SignalR as network protocol

SignalR is the web socket implementation written by Microsoft (it can also fall back to other transports when web sockets are not available). It is actually quite a good implementation and lets you develop using only a .NET language on the server side and jQuery on the client side.

You can call C# functions from JS, as well as call JS functions from C#. Objects are serialized / deserialized for you as well.

Creating a chat with SignalR is really not complex, and a good starting point can be found here:
http://www.asp.net/signalr/overview/getting-started/tutorial-getting-started-with-signalr

However, I use it for other purposes as well: it allows me to contact a server even from behind a firewall and have real-time, bi-directional communication. To do so I simply use the SignalR Client package (NuGet); from C# you can then call a SignalR server and communicate with it.

Sure, if it's just a pure "call a function" scenario, SOAP / REST could be a better option, but they will never offer the bidirectional, fast reaction web sockets offer.

A good starting point could be:
http://www.asp.net/signalr/overview/guide-to-the-api/hubs-api-guide-net-client

Friday, May 13, 2016

Stat & Skills => the first editor is actually a viewer

While creating a game maker, or actually any software, you always need to think about who will use it, and design it so that your software can actually be used by your audience.

Some software is quite simple to use and just requires a well-done interface; other software may require a "newcomer" interface while also allowing advanced users to access more powerful tools.

For example, in many 3D graphics packages you will find the option to develop your own tools or automate tasks, either by writing scripts or by developing full plugins. While this is certainly not something every newcomer will start with, it's something bigger companies may need in order to integrate the different tools of their pipeline and, in the end, save time / money.

For our game maker the situation is a bit similar: we must offer a relatively simple "click to create" option where everything needed is provided by default, with options to tweak some rules. Yet if we really want to offer a good game engine, we must also offer a more advanced way for developers to develop.

Therefore the current choice is to offer both a scripting language with a syntax close to Javascript, and at the same time "wizards" to tweak parameters without having to edit the code. The concept is currently fully implemented and it seems to work quite well. To show the "wizards" editor, I made a first "stats" and "skills" editor which actually only allows you to view the defined stats & skills.



At the top of the page you see the tweakable values, while below you see the original code. Most likely the code will not be shown by default, to avoid scaring newcomers; for now, however, I thought it was fun to see it all at once.

Offering an incomplete implementation of the tools allows discussing the matter with an example at hand, and therefore lets others fully understand a concept which could otherwise seem overly complex or completely unclear.

Thursday, May 12, 2016

JS Profiling

In order to make your game fluid / fast, you should ideally reach around 60 FPS. Holding a steady 60 FPS also helps ensure that all players play at the same pace.

The first step in this direction is to see how well your game behaves, and for this a little "gadget" can be handy:

http://darsa.in/fpsmeter/

This little tool shows a graph and a few values so you can see if your game manages to keep the needed speed.

Now that you have a view of your current speed, you may, as I did, find that the game is not optimized enough. However, if your code is more than just a few lines, you may start to wonder what takes the time.

In the old days the solution was to put a "stop watch" around some critical areas and see how long they took. Today browsers have really good debugging / profiling tools built in. I will discuss only Chrome here, but for the most part you can use any other browser (maybe with some additional plugins).
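For completeness, the old "stop watch" approach is still handy when you just want a rough number. A minimal sketch (the helper name and the summing loop are my own illustration, not part of any framework):

```javascript
// Minimal "stop watch" helper around a suspected hot spot.
function timeIt(label, fn) {
    var start = Date.now();          // performance.now() in browsers is more precise
    var result = fn();
    var elapsed = Date.now() - start;
    console.log(label + " took " + elapsed + " ms");
    return result;
}

// Usage: wrap the code you suspect is slow.
var sum = timeIt("sum loop", function () {
    var s = 0;
    for (var i = 0; i < 1e6; i++) s += i;
    return s;
});
```

The drawback, as the profiler section shows, is that you have to guess where to put the stop watch in the first place.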

To start the debugger, simply press F12:


You will see the debugger contains loads of "tabs", and each of them is specialized in a different task. I usually use the "Sources" tab a lot, as it allows me to see my JS / TS code, set breakpoints and step through my code.

In this case, as we want to profile the performance of the game, we will use the "Profiles" tab. Once you click on the "Profiles" tab, you are greeted with a choice of the type of profiling you want to run. To see which functions take the most time, choose "Collect JS CPU Profile" and click the "Start" button.

At this point, use your game as you usually do, but don't spend too much time on it: about 10-20 seconds should be enough to collect all the data you need. Once you decide you have collected enough, click the "Stop" button of the profiler, and here the magic happens:


At the top of the result page you will see a gray bar with a few options. The first button lets you choose between different views of the collected data; I personally find "Tree (Top Down)" more intuitive to read, as it mirrors the call structure of your code. In my case, before I started optimizing, you can see that 67.9% of the time is spent inside Play.GameLoop; expanding it, I see that 49% of the total time is within WorldRender.Render, and digging further I find WorldArea.GetObjects taking 31%.

Clearly, I should work on this GetObjects function and see if I can speed up my software. After nearly a day of work, I did indeed manage to speed up my engine a lot:


At first glance you may think: "Wait! We still have about 62% inside Play.GameLoop!" Yes, that's true. However, if I dig down, you see that inside WorldRender.Render most of the time is now consumed by "drawImage", which is nothing other than the Canvas 2D drawImage function itself. This function cannot be optimized directly, as it's an API call, so I did indeed speed up my software.

Before optimization I was at about 20-30 FPS and now I run at about 58-60 FPS, which is nearly a 2x improvement.

As you see, using the debugging / profiling tools offered by browsers can indeed help you pinpoint where your software is eating up resources / time. Just a quick note: if you see that all the time is eaten up by a single function but you don't get more detail than that, simply split the function into smaller pieces and the profiler will then be able to tell you in which of those pieces the time is spent.

Wednesday, May 11, 2016

Action timer

Game balance is one of the hardest parts of game design, in my opinion. I'm not saying that a render engine is easy to code, or that AIs are always a piece of cake; however, for me, balancing power / difficulty / fun factor is really not easy.

With this game engine, I thought I would let the game owner decide which skills the players will have, and each skill will be able to do whatever the game owner wants... or nearly. As said, I will offer some "default skills", and to balance their power each skill will have a sort of "cool down" timer.

A quick action bar will be visible at the bottom of the screen, allowing you to place up to 10 skills / items which you can switch between / use with the 0-9 number keys. Switching to another active skill will then trigger a different action when clicking on a monster.

So my default skills will (hopefully) be balanced for an average game, but game owners will be able to change the whole game balance just by changing the cool down timers, tweaking further, or even changing the logic code. That allows game owners to make easier or harder games, and play with the rules as much as they want.
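As a rough illustration, such a cool down could be tracked per skill with a timestamp. This is only a sketch under my own naming, not the engine's actual API:

```javascript
// Hypothetical cool down bookkeeping; the clock can be injected for testing.
function SkillCooldowns(now) {
    this.now = now || function () { return Date.now(); };
    this.readyAt = {};   // skill name -> timestamp when it becomes usable again
}

// Returns true and starts the cool down if the skill is ready, false otherwise.
SkillCooldowns.prototype.tryUse = function (skill, cooldownMs) {
    var t = this.now();
    if ((this.readyAt[skill] || 0) > t)
        return false;                 // still cooling down
    this.readyAt[skill] = t + cooldownMs;
    return true;
};
```

Balancing then becomes a matter of tuning one number per skill, which is exactly the kind of value a "wizard" form can expose.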

Just one word of warning: once the game starts to be used, tweaking the rules too much is not good, as it usually annoys the players a lot. So even if the balance is not perfect, once they start to play, keep the rules stable as much as possible.

Monday, May 9, 2016

Skill script starts to work

Now that the parser is up and running (at least in its infancy), it's time to go back to the game engine itself and implement the stats & skills.

The first skill I wanted to try is the "attack" skill, which should hurt an enemy when you click on it and the enemy is nearby. Ideally the attack should have a timer, and while you remain in range the game will attack every time the timer expires.

All that should be implemented at the script level, so that every game owner is then free to fully change the rules for his/her game. Of course we will provide some default rules to start with. Those rules will quite certainly also have a set of values you will be able to change via a simpler interface (without needing to go down to the code level directly).

Anyhow that's all the "future". Currently I wrote one of the most plain attack skill possible:
// Default attack skill. Will be invoked while clicking on a monster.
function MonsterClick(monster)
{
    // Kill the monster if it's nearby
    if(Monster.DistanceToPlayer(monster) < 32)
        Monster.Kill(monster);
}
This code runs every time you click on a monster; it checks if the monster is within a range of 32 pixels and, if so, kills it. Of course that's not how it should be: we should reduce the monster's life, and only kill it once its life reaches 0 or below. However, it's a good start to see that the engine is able to mix JS code (the engine's own code) and our own little scripting language.
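As a sketch of where this could go, here is a damage-based variant. The Monster stub, the Damage helper and the 10 damage value are all assumptions for illustration, not the engine's real API:

```javascript
// Fake engine API, just for this sketch; the real Monster.* calls live in the engine.
var Monster = {
    DistanceToPlayer: function (m) { return m.distance; },
    Damage: function (m, amount) { m.hp -= amount; },
    Kill: function (m) { m.dead = true; }
};

// Damage-based variant of the attack skill (10 damage is a placeholder value).
function MonsterClick(monster)
{
    if (Monster.DistanceToPlayer(monster) >= 32)
        return;                      // out of melee range
    Monster.Damage(monster, 10);     // reduce the monster's life instead of killing outright
    if (monster.hp <= 0)
        Monster.Kill(monster);
}
```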

If multiple skills implement the "MonsterClick" function, then currently all of those skill functions will be invoked.

Finally, to make writing code a bit easier for the game owners, we decided to make the language case insensitive, which means lower and upper case don't change the meaning of the code.

Wednesday, May 4, 2016

Execution of the AST

Now that the parser is finished, I worked on the AST execution. The simplest (and quite certainly not the smartest or most optimized) way to do it is simply to have an "Execute" function on each of your statement nodes. For example, our "2", "+", "1" tree would have the "+" on top; if I call its Execute function, it will then check the 2 children of the node and execute them as well. As in this case the children are static numbers, they simply return their value and my main node then sums the 2 values.
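The "2 + 1" example can be sketched like this (the node names are my own, not the real parser's):

```javascript
// A leaf node: a static number simply returns its value.
function NumberNode(value) { this.value = value; }
NumberNode.prototype.Execute = function () { return this.value; };

// An inner node: "+" asks both children for their value and combines them.
function AddNode(left, right) { this.left = left; this.right = right; }
AddNode.prototype.Execute = function () {
    return this.left.Execute() + this.right.Execute();
};

// The tree for "2 + 1": "+" on top, the two numbers as children.
var tree = new AddNode(new NumberNode(2), new NumberNode(1));
var result = tree.Execute(); // 3
```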

The same concept works for much more complex trees, with function calls, variables or whatever else you can put inside your AST.

But what if you want to execute multiple "statements", like lines of code? Simple: you have an array of statements and you go through it from first to last.

For a procedural language (as in our case), we have a first "pass" of execution which stores the functions in the "known function list"; they are then run when needed.

Variable scope is another issue to deal with when you develop a language. A variable defined in a block (like in a function) should not be accessible outside of that block. A simple way to implement this is to have an "execution environment" which contains all the variables in the scope; once you enter a new scope, you switch to a new "execution environment".
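A minimal sketch of such an environment, with lookup falling back to the parent scope (all names here are illustrative):

```javascript
// An "execution environment": the variables of one scope, chained to its parent.
function Environment(parent) {
    this.parent = parent || null;
    this.vars = {};
}
Environment.prototype.get = function (name) {
    if (name in this.vars) return this.vars[name];
    if (this.parent) return this.parent.get(name);   // walk up the scope chain
    throw new Error("Unknown variable: " + name);
};
Environment.prototype.set = function (name, value) {
    this.vars[name] = value;
};

var globalEnv = new Environment();
globalEnv.set("x", 1);
var functionEnv = new Environment(globalEnv);  // entering a new scope
functionEnv.set("y", 2);
// The inner scope sees both variables, the outer one only its own:
var seen = functionEnv.get("x") + functionEnv.get("y"); // 3
```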

All this is good, but what if we would like to gain speed? Multiple ways are open to optimize your script execution (only a few here, but you may find many more):

  1. AST optimization, which basically tries to simplify your AST, for example by pre-calculating constant operations. If you write "2+3*2" it doesn't make sense to run the whole subtree every time; it can be replaced with its value: "8".
  2. Loop unrolling: transform a for loop into multiple lines of code, which avoids running checks and bookkeeping that simply cost time.
  3. Dead code removal: remove nodes which can never be called.
  4. JIT-like operations: you may transform your AST into something faster. In my case, that could mean transforming my AST directly into Javascript and eval'ing it, gaining the speed of Javascript without the overhead of my AST on top.
All those can be done one after the other, each gaining a bit more speed. Of course, the old statement that optimization tends to make code harder to read holds true here as well: if you implement AST optimization and a JIT, you will make your parser a lot more complex and harder to debug. So be sure it's really needed before going down this road.
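Point 1, constant folding, can be sketched on a tiny { op, left, right } / { num } node shape (my own made-up AST layout, not the real one):

```javascript
// Constant-folding pass: collapse subtrees whose children are both constants.
function fold(node) {
    if (node.op) {
        var l = fold(node.left);
        var r = fold(node.right);
        if (l.num !== undefined && r.num !== undefined) {
            // Both sides are constants: replace the whole subtree with one number node.
            var v = node.op === "+" ? l.num + r.num : l.num * r.num;
            return { num: v };
        }
        return { op: node.op, left: l, right: r };
    }
    return node;   // already a leaf
}

// "2 + 3 * 2" as a tree (the * bound tighter than the +):
var ast = { op: "+", left: { num: 2 },
            right: { op: "*", left: { num: 3 }, right: { num: 2 } } };
var folded = fold(ast); // collapses to { num: 8 }
```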

Tuesday, May 3, 2016

3 step parsed language

The work on the parser is going forward, and it's actually working quite well. It will be a 3-step parsed language:

  • Tokenize the source
  • Parse statements
  • Execute the AST
The first step is to recognize the different parts of the source string and split them into pieces. For example, "1+2" would be 3 tokens: [Number:"1", Operator:"+", Number:"2"].
Tokens are responsible only for holding their content and knowing the type of data; at this stage we don't yet know whether it's valid to have a given type after another.
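A naive tokenizer for numbers and single-character operators could look like this (just a sketch, not the actual parser code):

```javascript
// Splits a source string into { type, value } tokens.
function tokenize(source) {
    var tokens = [];
    var i = 0;
    while (i < source.length) {
        var c = source[i];
        if (c === " ") { i++; continue; }                // skip whitespace
        if (c >= "0" && c <= "9") {
            var num = "";                                 // consume the whole number
            while (i < source.length && source[i] >= "0" && source[i] <= "9")
                num += source[i++];
            tokens.push({ type: "Number", value: num });
        } else if ("+-*/()".indexOf(c) !== -1) {
            tokens.push({ type: "Operator", value: c });
            i++;
        } else {
            throw new Error("Unexpected character: " + c);
        }
    }
    return tokens;
}

var toks = tokenize("1+2");
// [{type:"Number",value:"1"}, {type:"Operator",value:"+"}, {type:"Number",value:"2"}]
```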

Parsing statements uses the tokens produced by the first step, and basically cascades the parsing from "expression", "and operator", "or operator" down to the "base statement". The order of the cascade actually defines the order of precedence of the different pieces; for example, multiplications must be done before additions: 3*2+4 must be evaluated as (3*2)+4 and not 3*(2+4), as the result is not the same. In this step the parser can detect syntax errors, for example a missing parenthesis. At the end of this process we have an AST (https://en.wikipedia.org/wiki/Abstract_syntax_tree) which can then be run.
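The precedence cascade can be sketched for just + and * with a small recursive-descent parser over { type, value } tokens (again my own simplified shapes, not the real implementation):

```javascript
function Parser(tokens) { this.tokens = tokens; this.pos = 0; }
Parser.prototype.peek = function () { return this.tokens[this.pos]; };
Parser.prototype.next = function () { return this.tokens[this.pos++]; };

// Lowest precedence first: an expression is terms joined by "+"...
Parser.prototype.parseExpression = function () {
    var node = this.parseTerm();
    while (this.peek() && this.peek().value === "+") {
        this.next();
        node = { op: "+", left: node, right: this.parseTerm() };
    }
    return node;
};
// ...and a term is base statements (numbers here) joined by "*".
Parser.prototype.parseTerm = function () {
    var node = this.parseNumber();
    while (this.peek() && this.peek().value === "*") {
        this.next();
        node = { op: "*", left: node, right: this.parseNumber() };
    }
    return node;
};
Parser.prototype.parseNumber = function () {
    var t = this.next();
    if (!t || t.type !== "Number") throw new Error("Number expected");
    return { num: parseInt(t.value, 10) };
};

// "3*2+4" parses as (3*2)+4 because parseExpression delegates to parseTerm first.
var tokens = [{ type: "Number", value: "3" }, { type: "Operator", value: "*" },
              { type: "Number", value: "2" }, { type: "Operator", value: "+" },
              { type: "Number", value: "4" }];
var ast = new Parser(tokens).parseExpression();
```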

Executing the AST is then quite simple: each node knows how to execute its own operation and nothing more. If the AST is well built, executing it should be fast. Ideally, AST optimizations would be a step before the execution, and an AST could even be transformed into binary code directly run by the CPU (which would amount to a JIT if done at run-time).

Now you may wonder why I need to write all that, as it's not a small piece of software. Yes, there are existing parsers, like the Javascript function "eval", and 3rd party libs which would implement a scripting language for you. The answer is pretty simple: eval would be easy to use and cheap for the developer (me), but it has a major issue: security. Any valid code would be able to run, including nasty operations. 3rd party libs, on the other hand, may be much bigger than what we ideally want, and may be harder to integrate or debug in case of issues.

Stay tuned to see my parser in action for the first time!

Friday, April 29, 2016

Further thinking about stats and skills

We are still thinking about how we want to handle the customization. On one side, the option to give full customization is tempting, as it would really allow game owners/designers to change the rules as they want. On the other side, it will surely make the system harder for newcomers to approach, as they may need to write some code to change the rules.

This can of course be mitigated by the fact that we offer some default content, documentation and tutorials, but it may still scare some people.

Does it then make sense to take this road, or would it be better to offer mainly a form that needs to be tweaked?

Let's take a possible example: our "Life" or "HP" stat:
// Example of the script behind the HP stat.
// Function returning the currently maximum allowed value.
function MaxValue()
{
    return API.GetStat('Level')*20;
}
// Function returning the currently minimum allowed value.
function MinValue()
{
    return 0;
}
// Function run every time the value is modified
function ValueChanged(newValue)
{
    if(newValue < 1)
    {
        API.Teleport(0,0,0,0);
        API.SetStat('HP',1);
    }
    if(newValue > MaxValue())
    {
        API.SetStat('HP',MaxValue());
    }
    UI.UpdateStatBar();
}
In this example, API and UI are objects offered by the engine, while the functions defined in the stat code will be called by the engine at certain points. As you see, this could open the door to multiple features, like a "berserk" skill which triggers once you reach 10% of your life.

To implement this, I will first have to develop the full parser / runtime of this language. However, if I do it now, it will be possible to reuse it later for plug-in development, which would mean a single language used in multiple places.

Such a language could also be used on the objects themselves: for example, a potion could restore a stat or kill you.

Again, it's a question of balancing flexibility against user friendliness (ease of use), and here I'm a bit unsure which road we should take.

Thursday, April 28, 2016

Stats points, skills, make the engine flexible

To make the engine a real engine, and not just a fixed game offering little flexibility, it should let you define the stat points and the skills.

As we want to let the game owners create the games they want, be it about vampires, gangs or medieval times, the engine must let you create whatever stat points you want as well as define any skill you want.

Stats would be your current value of something: money, life, or experience. Stats could recover by themselves, have triggers when reaching some value (you die if you reach 0 life, for example), or have a maximum level (energy, for example). All of that is just a matter of letting you define a name and some fields. But what if a stat's maximum depends on a "level", or on what you wear? Here things start to get a bit more complex and require some sort of logic built on top of the stats, which must be defined by the game owner.

Of course, we must offer standard stats with standard logic, so that newcomers don't spend 10 years learning our tool but can start with something and tweak it later on.

Building all that will also require tools to view / modify this information, and to store / retrieve it from the database. The rules must be defined by the owner and stored per game, and for each player a set of current stats must be stored as well.

All this requires quite some work which will not be directly visible, but it's a must to make the engine an engine.

Initially I plan to offer really few default stats; we may increase the default set later on:

  • HP / Life
  • Money
  • Experience
  • Level

As skills, I want to offer these:

  • Attack
  • Defense

As you see it's really, really limited, but if we manage to create a good system, adding further ones to the default setup will be a piece of cake.

Hopefully we will not already need a full scripting language to support those stats / skills, but it may end up being nearly the same, with complex formulas and actions to trigger when the values change.

Wednesday, April 27, 2016

Coordinate transformations

Any graphical game will at some point transform coordinates from one kind to another. For example, for a grid map, you will need to transform map to screen coordinates, which at first is really easy: screenX = mapX * tileWidth, and the same for the Y coordinate.
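In its simplest form, the two directions look like this (the tile sizes are placeholder values):

```javascript
var tileWidth = 32, tileHeight = 32;

// Map cell -> top-left pixel of that cell on screen.
function mapToScreen(mapX, mapY) {
    return { x: mapX * tileWidth, y: mapY * tileHeight };
}

// The reverse: which cell was clicked?
function screenToMap(screenX, screenY) {
    return { x: Math.floor(screenX / tileWidth),
             y: Math.floor(screenY / tileHeight) };
}
```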

On the other side, what if you want to center the screen around your player? Already there are a few more transformations, with some offsetX and offsetY for the top left corner, for example. What if your maps are split into different areas? Say an area is 100x100, as in my case: as soon as you go out of this area you need to change the area index and restart the X,Y coordinates on the map.

You slowly see where I'm heading? The more features, the more complexity in your coordinate system. And guess what? Whatever you do in one direction you will most likely need in the other: what if I click on the screen and need to know which cell of my map I'm on? There you are, with the reverse of the previous calculation.

Therefore, it's smarter and certainly safer to have those transformations stored in two functions, and then always call those functions. That allows you to debug only once and then be assured it will always work (or at least it should).

An example of my screen-to-map coordinate transformation:
public ScreenToMap(x: number, y: number): RenderScreenCoordinate
{
    var pos = $("#gameCanvas").position();
    var tileWidth = game.World.tileSetDefinition.background.width;
    var tileHeight = game.World.tileSetDefinition.background.height;
    var orx = Math.abs(this.offsetX) % tileWidth * (this.offsetX < 0 ? -1 : 1);
    var ory = Math.abs(this.offsetY) % tileHeight * (this.offsetY < 0 ? -1 : 1);
    x = (x - pos.left) + this.offsetX;
    y = (y - pos.top) + this.offsetY;
    var ox = x % tileWidth;
    var oy = y % tileHeight;
    x = Math.floor(x / tileWidth);
    y = Math.floor(y / tileHeight);
    var cx = this.areaX + Math.floor(x / this.world.areaWidth);
    var cy = this.areaY + Math.floor(y / this.world.areaHeight);
    var tx = x;
    var ty = y;
    if (tx < 0)
        tx = (this.world.areaWidth - 1) - (Math.abs(tx + 1) % this.world.areaWidth);
    else
        tx %= this.world.areaWidth;
    if (ty < 0)
        ty = (this.world.areaHeight - 1) - (Math.abs(ty + 1) % this.world.areaHeight);
    else
        ty %= this.world.areaHeight;
    var rx = tx + (cx - this.areaX) * (this.world.areaWidth - ((cx - this.areaX) < 0 ? 1 : 0));
    var ry = ty + (cy - this.areaY) * (this.world.areaHeight - ((cy - this.areaY) < 0 ? 1 : 0));
    return { TileX: tx, TileY: ty, AreaX: cx, AreaY: cy, RelativeX: rx, RelativeY: ry, OffsetX: ox, OffsetY: oy };
}
As you see, it's not really a 2-line function. Yes, some comments would also help to explain what's going on, but this is more an example than anything else.

Tuesday, April 26, 2016

Path solving

Path solving is something you will use in most games at some point. This kind of algorithm lets you find, ideally, the shortest / quickest way between 2 points, either by using connections between points or cells on a grid map.

The best known algorithm, and the best one to use in most cases, is called A*.

A* is nothing else than code which tries the possible routes and uses the shortest one when it finds it. At the start it goes in all the possible directions, adds those as new starting points, and repeats. As an optimization, you can sort the list of paths to try, testing the one nearest to the goal first. And of course, don't test the same node or cell of the map twice.
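A compact sketch of this on a small grid (0 = walkable, 1 = blocked). The bookkeeping is deliberately naive; a real implementation would use a priority queue instead of re-sorting an array:

```javascript
function findPath(grid, start, goal) {
    var key = function (p) { return p.x + "," + p.y; };
    // Manhattan distance as the "nearest to the goal" estimate.
    var h = function (p) { return Math.abs(p.x - goal.x) + Math.abs(p.y - goal.y); };
    var open = [{ x: start.x, y: start.y, g: 0, path: [start] }];
    var visited = {};
    while (open.length > 0) {
        // Try the route that looks cheapest (cost so far + estimate) first.
        open.sort(function (a, b) { return (a.g + h(a)) - (b.g + h(b)); });
        var cur = open.shift();
        if (cur.x === goal.x && cur.y === goal.y) return cur.path;
        if (visited[key(cur)]) continue;             // never test the same cell twice
        visited[key(cur)] = true;
        var dirs = [[1, 0], [-1, 0], [0, 1], [0, -1]];
        for (var i = 0; i < dirs.length; i++) {
            var nx = cur.x + dirs[i][0], ny = cur.y + dirs[i][1];
            if (ny < 0 || ny >= grid.length || nx < 0 || nx >= grid[0].length) continue;
            if (grid[ny][nx] === 1) continue;        // blocked tile
            open.push({ x: nx, y: ny, g: cur.g + 1,
                        path: cur.path.concat([{ x: nx, y: ny }]) });
        }
    }
    return null; // no route exists
}
```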

Path solving has some drawbacks. It may take quite some time to solve, so you may want to avoid running it every single time. Second, if you work at the pixel level it may end up doing WAY too many checks; using a coarser grid on top could make things faster. And if blocking elements move, you may need to recompute your path; to avoid doing this all the time, you may do it only when you actually get blocked while traveling along the path.

As a possible example of a running A* algorithm:
http://bgrins.github.io/javascript-astar/demo/

The same kind of algorithm works in mazes or with weighted travel costs (for example, swimming could be slower than walking).

On my side, I have a couple more issues to solve; for example, as my maps are split into "areas", crossing an area changes the way the path needs to be handled.

Another issue is that maybe I don't want players to be able to solve a maze simply by clicking on the goal, as that would spoil the game. To solve this, I may simply introduce a limit on the number of steps my path solver will try to go through.

Having a good / smart path solver will actually improve your game experience a lot, so don't spend too little time on it.

Monday, April 25, 2016

Async callback nightmare

One of the main complaints I have with node.js is the way it handles "asynchronous" calls. Basically, some calls take time to run, for example querying a database or connecting to a remote host. Instead of blocking the single thread on which your node.js code runs, those functions call you back once the operation is completed.

At first you may think: great, I no longer have blocking calls, and at the same time I don't need to deal with multi-threading and possible locks / semaphores.

Indeed, this model, where every part of your code runs within one thread and you are called when there is something for you, is great for avoiding multi-threading issues. It is also great that you don't have to deal with shared variables, or hope that some code is not interrupted in some nasty area.

Yet, many tasks do require a cascade of events like:


  1. Connect to a database
  2. Execute a select
  3. For each element of the result, update a value
  4. Close the connection & free up resources
  5. Return the values
As you see, there is no way you can run these tasks in parallel; you really need the result of the previous step to do the next one.

In node.js, such code could be written like this (it's more pseudo-code than a real API):

db.connect(function (err, conn)
{
    conn.executeQuery("select * from users", function (err2, results)
    {
        for (var i = 0; i < results.length; i++)
        {
            conn.executeQuery("update users set gold=gold+10 where id=" + results[i].id, function (err3, results2)
            {
            });
        }
    });
});
Oh wait! There is a bug! You can't call executeQuery for the update within the loop, as the queries will all be sent in parallel, which may actually be an issue if there is a limit on the number of queries that can run at the same time. It would be better to run them one after the other. Also, this style is by far not readable once you end up in the 10th nested callback.

So how can we solve that?
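One way to do it by hand is a small "next" function that only fires the following query once the previous one has called back. Everything below, the fake db included, is pseudo-API for illustration, not a real node.js driver:

```javascript
// Fake connection object standing in for a real driver; each query
// calls back asynchronously with no rows.
var db = {
    executeQuery: function (sql, callback) {
        process.nextTick(function () { callback(null, []); });
    }
};
var log = [];   // records the order the updates are sent in (for the example)

// Runs one update at a time; the next query is only sent
// once the previous one has called back.
function updateAllUsers(conn, users, done) {
    var i = 0;
    function next(err) {
        if (err) return done(err);
        if (i >= users.length) return done(null);   // all rows processed
        var user = users[i++];
        log.push(user.id);
        conn.executeQuery("update users set gold=gold+10 where id=" + user.id, next);
    }
    next(null);
}
```

This keeps the queries strictly sequential, but you still end up hand-rolling control flow that the language could express directly.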

There are some workarounds to be found on the net, for example: https://github.com/yortus/asyncawait
I haven't tested them yet, but this one should solve exactly the kind of situation where you need to do the operations in sequence (and believe me, that's more frequent than running them in parallel). This library doesn't actually block node.js after your await call; instead, it calls you back transparently once the function you awaited has completed.

As said, all that needs to be tested to further understand how it works and whether it works well, but hopefully I can clean up the mess created by what I find a poorly thought-out framework. Why am I so aggressive against node.js? Because those problems should be solved at the language level and not via some 3rd party library. And if so many developers have the same issues as me, it means it really should be a problem solved at the root.

Friday, April 22, 2016

Bugs... stupid bugs...

When you start developing, you fight mainly with the language and the framework. As your knowledge grows you still fight, but usually you fight with an algorithm... or stupid bugs which resist you.

Today I fought for about 3 hours to find a single little number which was wrong:
area.actors.splice(i, 0);
Instead of
area.actors.splice(i, 1);
The result? The first line does... nothing, while the second actually removes an item from the array.
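The difference is splice's second argument, the number of elements to delete:

```javascript
var actors = ["rat", "bat", "slime"];

// splice(i, 0) deletes zero elements: the array is untouched.
actors.splice(1, 0);
var afterNoop = actors.length;    // still 3

// splice(i, 1) deletes one element starting at index i.
actors.splice(1, 1);
var afterRemove = actors.length;  // now 2, "bat" is gone
```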

You would think that this kind of bug can be solved in no time; sadly, the bigger the code base, the harder it is to find such bugs. I had to isolate the area which produced this error, make sure it wasn't somewhere else, and at the same time I discovered a few other things which didn't really make sense either. As said... 3 hours for this.

Anyhow now I have rats walking around my world:


The movement of the rat is mainly random; it doesn't head toward the player yet. However, I implemented collisions with background tiles, so, for example, water is not walkable.

I will have to implement collisions with objects as well, yet for this part I still need to think about how I want it. Will it be a circular area around the ground position? Or could I choose the collision shape? All open questions.

I will also need to create monster spawners, which will let the game owner place monsters where he/she wants, and not simply scattered randomly around the map.

Thursday, April 21, 2016

Monster handling

Code design (or game design) is sometimes more subtle than you may at first think. In this case, I'm talking about where to place a piece of code such that it is both logical and can handle the situation in a smart way.

My current problem is making the monsters walk around the map, and under certain conditions toward the player, so that they can attack him later on. The first idea you would get is that a monster is an object by itself, and therefore there is a "monster" class which holds all the logic. But isn't the logic of the monster somewhat shared by the player as well? Won't the monster and the player somehow walk around the same world? You see already that maybe both monsters and the player will have a parent class, let's say "actor", which may contain the x,y position and some other information. Yet the monster will have different logic from the player: the player is handled entirely by the guy/girl behind the screen, while the monsters should walk on their own. So some parts will be specific for sure.

Also, how many monsters will we deal with? The whole infinite world? Or shall we keep monsters only for the areas we currently have in memory? Quite certainly the second case: monsters should be kept with their area, and when we destroy an area we stop handling its monsters as well. Freshly loaded or generated areas should bring new monsters, so we will handle only a limited set of monsters at any given time. Handling an infinite number of objects is simply not possible, so that seems a good option... yet what if a monster crosses an area border? Then it needs to be placed in the new area or... killed, if it goes out of the currently handled areas.

It also remains to think about the rendering; ideally, the render part of the game engine should deal with players, monsters and NPCs in basically the same way.

All that makes the whole idea a bit fuzzy / complex, and it therefore may need some further refinement.

What if we introduce "pets" or "friendly monsters"? What if we want monsters acting as a group? And so on... I clearly don't have all the answers yet, and will try to clarify my mind while working on it.

Wednesday, April 20, 2016

New phone received => not much work done

Today I received a new phone and therefore wasn't really very productive. I must also say I didn't sleep well, and that doesn't help either.

Anyhow, to make a short story really short, I spent my time moving stuff from my old phone to the new one, configuring the new one, and selling the old one to somebody. Really productive, isn't it?

What's good is that with all the "cloud" hosting of your settings, most of the migration is actually pretty automatic, while the remaining parts are handled by a nice little touch from the phone maker: Sony, in this case. I ended up having all my data on the new phone without having to re-type anything. Cool!

Tuesday, April 19, 2016

JSON Schema

Today, JSON files are slowly but surely replacing XML files. JSON is certainly more compact and easier to work with in JS, as JSON documents are nothing else than JS objects (JavaScript Object Notation).

However, even if JSON files are convenient in many ways, being smaller and faster to write, they had one huge drawback in my opinion compared to XML: the lack of a schema definition which would allow testing whether a JSON file is valid, and which would make the IDE aware of what we can write within it.

Yet some IDEs are starting to support JSON Schema definitions ( http://json-schema.org/ ), for example Visual Studio (which is actually the one that interests me).

That finally closes the issue I had with JSON, and lets me enforce a bit more strictness in my JSON configuration files.

Let's start with a small JSON file:
{
  "stores": [
    {
      "name": "My Little Book Shop",
      "owner": "Me Myself",
      "location": {
        "address": "Av. Somewhere 29",
        "city": "Someplace",
        "postal_code": "5000",
        "country": "MyCountry"
      }
    }
  ],
  "books": {
    "How To Code for Dummies": {
      "isbn": "12389298732",
      "price": 53.12
    },
    "I love ponies": {
      "isbn": "98727356128",
      "price": 23.99
    }
  }
}
This file contains book shops and a list of books we can find.

So far nothing special; however, how do we ensure that the content of this JSON file is correct, and that the next time we type something in we don't make mistakes? Quick answer: by using a JSON Schema!

To do so in our JSON file we need to add a property (usually the first one): "$schema": "book.schema.json"

This tells the IDE that for this JSON file we will use the file book.schema.json as schema. Of course this file is now missing so let's create it:
{
  "$schema": "http://json-schema.org/draft-04/schema",
  "properties": {
    "$schema": { },
    "stores": { },
    "books": { }
  },
  "additionalProperties": false
}
This small schema file defines that the JSON file can contain ONLY 3 properties: $schema, stores and books. Everything else would be prohibited. That's already a first step, but let's define what can be placed as a value in stores:
{
  "$schema": "http://json-schema.org/draft-04/schema",
  "properties": {
    "$schema": { "type": "string" },
    "stores": {
      "type": "array",
      "items": {
        "type": "object",
        "properties": {
          "name": { "type": "string" },
          "owner": { "type": "string" },
          "location": {
            "type": "object",
            "properties": {
              "address": { "type": "string" },
              "city": { "type": "string" },
              "postal_code": { "type": "string" },
              "country": { "type": "string" }
            },
            "additionalProperties": false
          }
        },
        "additionalProperties": false
      }
    },
    "books": { }
  },
  "additionalProperties": false
}
The same can be done for the "books" property, yet this one is a key / value map and therefore needs to be defined via "additionalProperties" while defining the type stored:
{
  "$schema": "http://json-schema.org/draft-04/schema",
  "properties": {
    "$schema": { "type": "string" },
    "stores": {
      "type": "array",
      "items": {
        "type": "object",
        "properties": {
          "name": { "type": "string" },
          "owner": { "type": "string" },
          "location": {
            "type": "object",
            "properties": {
              "address": { "type": "string" },
              "city": { "type": "string" },
              "postal_code": { "type": "string" },
              "country": { "type": "string" }
            },
            "additionalProperties": false
          }
        },
        "additionalProperties": false
      }
    },
    "books": {
      "type": "object",
      "additionalProperties": {
        "type": "object",
        "properties": {
          "isbn": { "type": "string" },
          "price": { "type": "number" }
        },
        "additionalProperties": false
      }
    }
  },
  "additionalProperties": false
}
The schema is now complete for our JSON file. You can then check whether a document is valid or not, and the IDE should also propose what kind of properties a given object can have. To further improve the schema, I would strongly suggest providing a description for the properties as well, which further helps while entering data in the JSON file:
{
  "$schema": "http://json-schema.org/draft-04/schema",
  "properties": {
    "$schema": { "type": "string" },
    "stores": {
      "type": "array",
      "description": "Lists all the shops in the franchise.",
      "items": {
        "type": "object",
        "properties": {
          "name": {
            "type": "string",
            "description": "Name of the shop."
          },
          "owner": {
            "type": "string",
            "description": "Name of the owner of the shop."
          },
          "location": {
            "type": "object",
            "description": "Location of the shop.",
            "properties": {
              "address": {
                "type": "string",
                "description": "Address of the shop."
              },
              "city": {
                "type": "string",
                "description": "City where the shop is."
              },
              "postal_code": {
                "type": "string",
                "description": "Postal code."
              },
              "country": {
                "type": "string",
                "description": "Country of the shop."
              }
            },
            "additionalProperties": false
          }
        },
        "additionalProperties": false
      }
    },
    "books": {
      "type": "object",
      "description": "List of all the books the shops can sell.",
      "additionalProperties": {
        "type": "object",
        "properties": {
          "isbn": {
            "type": "string",
            "description": "Unique ID linked to a book. The ISBN is usually printed on the book itself."
          },
          "price": {
            "type": "number",
            "description": "Usual price the book will be sold at."
          }
        },
        "additionalProperties": false
      }
    }
  },
  "additionalProperties": false
}
Finally you can test the schema and the JSON provided here on an online validator like:
http://www.jsonschemavalidator.net/

The validator will not show the descriptions while you type, but it will ensure, for example, that the properties are acceptable.

After all this work, if you have a good IDE, you should have a result like this while working on your JSON file:


Monday, April 18, 2016

Map compression

Work on the map saving & restore went further, and in order to decrease the amount of data stored on the server I decided to compress the map while keeping the data as a string (and therefore avoid issues with JS). If I were willing to keep the data as binary, I could use GZip or something similar.

The first step is to store numbers in another format to reduce their size. Single digits 0-9 cannot be reduced if you want to keep them as separate characters rather than merging them into a single byte. However a bigger number like 2112 (just a number picked randomly) takes 4 bytes to write as a string, yet could be stored in a more compact form: either in hex, or with a bigger alphabet, for example using a-zA-Z as base-52 digits (0 being "a", 1 "b" and so on).

Let's write a small JS function for that:
var numberCompressionPossibleChars = "abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ";
// Numbers must be positive!
function NumberToString(source, nbChar)
{
    var result = "";
    var rest = source;
    for (var i = 0; i < nbChar; i++)
    {
        result += numberCompressionPossibleChars.charAt(rest % numberCompressionPossibleChars.length);
        rest = Math.floor(rest / numberCompressionPossibleChars.length);
    }
    return result;
}
 A JS Fiddle has been created for it: https://jsfiddle.net/68konLxw/

To transform this back into a number:
function StringToNumber(source, position, nbChar)
{
    var result = 0;
    for (var i = 0; i < nbChar; i++)
    {
        var c = source.charAt(i + position);
        result += numberCompressionPossibleChars.indexOf(c) * Math.pow(numberCompressionPossibleChars.length, i);
    }
    return result;
}
And this JS Fiddle lets you go from one form to the other and back: https://jsfiddle.net/gLwtwo73/

Now that's just the encoding of a single number, and we don't gain much unless the numbers are big. But what can we do with an array of numbers? Well, the next step is to implement Run Length Encoding, which counts the number of times the same value repeats.
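The array-compression function itself lives in the Fiddle; here is a sketch of what it could look like (my reconstruction, the exact Fiddle code may differ). It writes run lengths in plain decimal digits followed by the value encoded with NumberToString, which is repeated here so the snippet is self-contained; digits 0-9 never appear in the a-zA-Z alphabet, so the two never clash:

```javascript
var numberCompressionPossibleChars = "abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ";

// Numbers must be positive!
function NumberToString(source, nbChar)
{
    var result = "";
    var rest = source;
    for (var i = 0; i < nbChar; i++)
    {
        result += numberCompressionPossibleChars.charAt(rest % numberCompressionPossibleChars.length);
        rest = Math.floor(rest / numberCompressionPossibleChars.length);
    }
    return result;
}

// Sketch: RLE-compress an array of positive numbers.
function ArrayToString(source)
{
    // How many characters are needed to encode the biggest value?
    var max = Math.max.apply(null, source);
    var nbChar = 1;
    while (Math.pow(numberCompressionPossibleChars.length, nbChar) <= max)
        nbChar++;
    var result = nbChar + "-";
    var i = 0;
    while (i < source.length)
    {
        // Measure the run of identical values starting at i.
        var j = i;
        while (j < source.length && source[j] == source[i])
            j++;
        if (j - i > 1)
            result += (j - i); // run length in decimal, only when it saves space
        result += NumberToString(source[i], nbChar);
        i = j;
    }
    return result;
}
```

For example `ArrayToString([5, 5, 5, 9])` gives `"1-3fj"`: one character per value, the run of three 5s written as `3f`, then `j` for the single 9.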
JS Fiddle of the array compression: https://jsfiddle.net/gfpyq7qo/

The first few characters of the compressed string contain the number of characters used to encode each value, followed by a "-" separator.

And the last piece of our work: decompressing this array.
function StringToArray(source)
{
    var result = [];
    var strNb = "";
    var i = 0;
    // Read the number of characters per encoded value, up to the "-" separator.
    for (; i < source.length; i++)
    {
        var c = source.charAt(i);
        if (c == "-")
            break;
        strNb += c;
    }
    i++;
    var nbChar = parseInt(strNb);
    strNb = "";
    for (; i < source.length; i++)
    {
        var k = source.charCodeAt(i);
        if (k >= 48 && k <= 57) // digits 0-9 accumulate the run length
            strNb += source.charAt(i);
        else
        {
            var nb = StringToNumber(source, i, nbChar);
            i += nbChar - 1;
            if (strNb == "")
                result.push(nb);
            else
            {
                var n = parseInt(strNb);
                for (var j = 0; j < n; j++)
                    result.push(nb);
                strNb = "";
            }
        }
    }
    return result;
}
And the last Fiddle of this post: https://jsfiddle.net/p3o0rems/

Friday, April 15, 2016

Load & Save

Persistence in a game is something mandatory, but it often doesn't get enough thought. At least in my opinion.

Also, persistence is split between player state and world persistence. In many games the player has little to no influence on the world, or at least the influence is only temporary. For example you break a lamp, save, quit the game and reload, and the lamp will be as new. What you wear and your stats, on the other hand, are usually saved and kept.

Nothing wrong with this; I mean, you have to think about how much it would take to store every little change you make to the world.

So how could persistence be implemented for a web game? First you need to think about what you want to do with the data. For example, if, like me, you work with a 2D grid of tiles, the server may not even need to understand them, and I quite certainly don't want to run odd queries on those data. Therefore a straightforward serialization like JSON.stringify will do the trick.
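For the tile-grid case that is literally one call each way (a tiny sketch; the shape of `map` here is made up for the example):

```javascript
// Straightforward world persistence: the server stores the string blindly,
// and the client later gets back exactly the structure it saved.
var map = { width: 2, height: 1, tiles: [3, 7] }; // hypothetical map shape
var saved = JSON.stringify(map);   // send / store this string as-is
var restored = JSON.parse(saved);  // later: back to a usable object
```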

On the other side, if you would like to make reports on some data, for example find the best players, or those who have the most money in the bank, serializing the player class and storing it "as is" is not a good idea. Better then to store the values in a format you can query properly.

Another thing to think about is whether you want to compress your data somehow, or you accept using all the space needed. Compression can either be done with libraries like ZIP or GZIP, or simply with run-length encoding (RLE for short). RLE is a simple method which counts how many times the same thing repeats, and then stores the count plus the thing. For example the string "AAAAABBBBCAAAD" can be written "5A4B1C3A1D", reducing it from 14 characters to 10, which means roughly 28% less space. Sometimes it works, sometimes it doesn't, especially if each character appears only once or very few times.
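The "AAAAABBBBCAAAD" example can be reproduced in a few lines of JS (a sketch of plain character-level RLE, not the exact scheme used for the maps):

```javascript
// Run-length encode a string: each run of identical characters
// becomes "<count><char>".
function rleEncode(str)
{
    var result = "";
    var i = 0;
    while (i < str.length)
    {
        // Find where the current run of identical characters ends.
        var j = i;
        while (j < str.length && str.charAt(j) == str.charAt(i))
            j++;
        result += (j - i) + str.charAt(i);
        i = j;
    }
    return result;
}
// rleEncode("AAAAABBBBCAAAD") gives "5A4B1C3A1D": 10 characters instead of 14.
```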

The compression could be done on the client, so that the data also takes less space over the network and is ideally faster, or it could be done on the server if it is too complex for the client side and all you want is to save space on your server disk.

Thursday, April 14, 2016

Backend of my hobby project => Node.JS

My hobby project is progressing, but before I can really go on, I need to start working a bit on the back-end, allowing me to store data, for example maps, as well as having a bit of access control.

As you may have seen from my previous posts, I currently decided to use Node.JS simply because it allows me to more easily host my project on an existing Linux VPS I already rent.

Node.JS has many advantages like being able to develop using a single language between the front-end and the back-end, having cool features like socket.io which handles the web sockets for you, and much more.

However Node.JS does have drawbacks (at least in my opinion). All (or most) work happens through asynchronous calls; for example, you connect to a database (async) and make a query (yet another async). You easily end up with loads of nested anonymous functions just to handle such cases.

This of course makes the code harder to read and to write. But you also have issues with error handling. If an exception is fired, you may simply fail to catch it, which normally means your server dies and requires a restart (sure, it can be automated). This means whatever you had in memory is lost.

Let me show you an example:
app.post('/backend/OwnerExists', function (req, res, next)
{
    if (!req.body.user)
    {
        res.writeHead(500, { 'Content-Type': 'text/json' });
        res.write(JSON.stringify({ error: "parameter 'user' is missing." }));
        res.end();
        return;
    }
    var connection = getConnection();
    if (!connection)
    {
        res.writeHead(500, { 'Content-Type': 'text/json' });
        res.write(JSON.stringify({ error: "connection failed." }));
        res.end();
        return;
    }
    connection.connect(function (err)
    {
        if (err != null)
        {
            connection.end();
            console.log(err);
            res.writeHead(500, { 'Content-Type': 'text/json' });
            res.write(JSON.stringify({ error: "error with database." }));
            res.end();
            return;
        }
        connection.query('select id from game_owners where name = ?', [req.body.user], function (err1, r1)
        {
            connection.end();
            if (err1 != null)
            {
                console.log(err1);
                res.writeHead(500, { 'Content-Type': 'text/json' });
                res.write(JSON.stringify({ error: "error with database." }));
                res.end();
                return;
            }
            // Not yet registered
            if (r1.length == 0)
            {
                res.writeHead(200, { 'Content-Type': 'text/json' });
                res.write(JSON.stringify({ result: false }));
                res.end();
                return;
            }
            else
            {
                res.writeHead(200, { 'Content-Type': 'text/json' });
                res.write(JSON.stringify({ result: true }));
                res.end();
                return;
            }
        });
    });
});
This connects to a MySQL database and makes a single query. You can already see how many nested functions I have.

Sure, there are ways to improve the situation using 3rd party libs or by working a bit differently; still, at the end of the day the same thing happens: each connection / query requires a new function callback.

I really wonder how you can handle big projects written with such a framework.
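For what it's worth, one way to flatten the pyramid is to wrap each callback-style call in a Promise once, and then chain instead of nesting (a sketch; `queryAsync` is a name I made up, not part of the mysql driver):

```javascript
// Wrap a node-style (error-first) callback API into a Promise, once.
function queryAsync(connection, sql, params)
{
    return new Promise(function (resolve, reject)
    {
        connection.query(sql, params, function (err, rows)
        {
            if (err)
                reject(err);  // surfaces in a single .catch instead of per-level checks
            else
                resolve(rows);
        });
    });
}

// Usage: the nesting becomes a flat chain with one error handler.
// queryAsync(connection, 'select id from game_owners where name = ?', [user])
//     .then(function (rows) { /* send the response */ })
//     .catch(function (err) { console.log(err); /* send the 500 */ });
```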

Wednesday, April 13, 2016

Map editor is progressing

The map editor of my hobby project is progressing. Besides finally having plants and trees placed on the grass by the world generator, the map editor now displays a grid by default to make it a bit easier to see where the tiles are. You can disable it if needed.

Also I worked on the first "functions" of the map editor, which now lets you choose between painting tiles and placing / removing objects.



To make sure the render engine is not slowed down by searching for the objects to render, a grid cache has been implemented, which means objects are indexed in a grid of the same size as the tiles.

The drawback of such an optimization is that each time you change the objects or change the position of an object, you need to update the grid cache.

To make the look more natural, objects are not placed on a grid either; they are free to be placed anywhere you want on the map.
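A grid cache of that kind can be sketched in a few lines (names and object shapes here are hypothetical, not the editor's actual code):

```javascript
// Bucket freely-placed objects by the tile cell they stand on, so the renderer
// only scans the buckets of visible cells instead of the whole object list.
function buildGridCache(objects, tileSize)
{
    var cache = {};
    for (var i = 0; i < objects.length; i++)
    {
        var o = objects[i];
        // Objects are free-floating; floor to find the owning cell.
        var key = Math.floor(o.x / tileSize) + "," + Math.floor(o.y / tileSize);
        if (!cache[key])
            cache[key] = [];
        cache[key].push(o);
    }
    return cache;
}
// Any add / move / remove of an object invalidates the affected buckets,
// so the cache must be rebuilt (or patched) on each edit.
```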

Finally, to make my life easier when dealing with the JSON art definition, I improved the TypeScript interfaces, which let me directly access the JSON from within TypeScript:

interface TilesetInformation
{
    background: TilesetMap;
    objects: TilesetObject;
}
interface TilesetMap
{
    file: string;
    width: number;
    height: number;
    types: TilesetType;
    mainType: string;
    transitions?: TilesetTransition[];
    levels?: TilesetLevels[];
}
interface TilesetType
{
    [s: string]: number[];
}
interface TilesetTransition
{
    from: string;
    to: string;
    transition: number[];
}
interface TilesetLevels
{
    type: string;
    maxLevel: number;
}
interface TilesetObject
{
    [s: string]: TilsetObjectDetails;
}
interface TilsetObjectDetails
{
    file: string;
    width: number;
    height: number;
    groundX?: number;
    groundY?: number;
    x: number;
    y: number;
    frequency?: number;
    placeOn?: string[];
}
Having a well defined JSON file ensures I don't mess around while trying to access information.

Tuesday, April 12, 2016

Perlin Noise and world generation

In the video game world, there is one function which is used a lot when trying to generate procedural worlds: Perlin noise.

Perlin noise is basically a function which returns a value between -1 and 1 for a given X,Y coordinate. Whatever coordinates you give it, it will always return a value, and if you call it again later it will return the same value for the same set of coordinates.

This function allows you to generate terrains, where for example values from -1 to 0 would be the sea, and whatever is above 0 gives the terrain height.

It works for 3D worlds, but for 2D worlds it works just as well, as you can change the tile based on the value the Perlin noise returns.
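The contract is easy to illustrate. The following is NOT real Perlin noise (no gradients, no interpolation), just a deterministic coordinate hash with made-up constants, showing the "same coordinates, same value in [-1, 1]" behavior:

```javascript
// Deterministic pseudo-random value for an (x, y) pair, mapped to [-1, 1].
// Calling it again with the same coordinates always returns the same value.
function noise2d(x, y)
{
    var n = (x * 374761393 + y * 668265263) | 0; // arbitrary large primes
    n = ((n ^ (n >> 13)) * 1274126177) | 0;      // mix the bits
    n = n ^ (n >> 16);
    return ((n & 0x7fffffff) / 0x7fffffff) * 2 - 1;
}

// A 2D tile map could then pick the tile from the value, e.g.:
// noise2d(x, y) < 0 ? "water" : "grass"
```

Real Perlin noise has the extra property of varying smoothly between neighboring coordinates, which is what makes the terrains look natural; the article linked below covers that part.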

I'm surprised how few developers I know actually know this function. That doesn't mean it is unknown; actually it's one of the most used functions in game dev, and you can find loads of tutorials and articles, like this nice one here:

http://devmag.org.za/2009/04/25/perlin-noise/

Guess what? I will use Perlin noise as well for my hobby project. Stay tuned to see the results ;)

Node.JS and Visual Studio

With Visual Studio 2015 came the "official" support of Node.JS within Visual Studio. Great news for me, as I already work mainly with Visual Studio, and switching from one IDE to another is always a pain, be it for key bindings or just general productivity.

Keep in mind I'm not selling Visual Studio, and I would not pretend it's the best solution for everyone. It's simply convenient for me to have yet another possibility within VS.

For my hobby project, having it written in .NET doesn't seem all that good a choice, especially given the cost of hosting. Therefore I tried this weekend to create a new project using Node.JS; the result is actually far from bad, and after some hard work (to understand a few things, import some definitions and so on), I have it mostly how I want it.

Basically I don't want to code directly in JavaScript; instead I develop fully in TypeScript, which really adds the few tiny bits that make life easier for developers. I therefore develop both the back-end and the front-end in TypeScript, all within the IDE I use every day.

One piece is missing for me: deployment. Too bad there is no SFTP integrated within Visual Studio, and I didn't find any extension doing it. It's not a show stopper, but it's again something which could improve productivity.

Let's hope this support gets even more developed and improved in the future. Let's hope we could then have native LESS compilation. Let's hope the TypeScript definitions could be downloaded like NPM or NuGet packages...

We shall see how it goes. So far I'm already quite happy.

Friday, April 8, 2016

MVC and separation of concerns

There are a couple of "hype" words in the development world which somehow hurt my old-monkey-developer feelings. One is "Test Driven Development" and the other is the "MVC" framework.

https://en.wikipedia.org/wiki/Model%E2%80%93view%E2%80%93controller

MVC is an old (1970s) design pattern which splits the responsibilities of a process into 3 pieces. This design pattern has evolved a lot since then, and I would say it's only in the last 3-5 years that we really started to hear about it for web development.

In an ideal implementation the MVC pattern should really be split into 3 parts: the model contains just the data (and is maybe responsible for storing / retrieving it from the DB), the controller receives the input of the user and applies the modifications to the model, and finally the view presents the data from the model to the user.

The MVC pattern as implemented by Microsoft somehow mixes the parts together: the controller also serves the view, and the view has more than a little bit of logic inside.

Overall this can be a design choice and can make sense, especially in web development; however I find it difficult to call those implementations true MVC.

MVC for me has a few points which make it interesting:
- Separation of concerns (each piece of code works only on its main task and does not mix in the tasks of another piece).
- The possibility to change the view (and therefore have multiple views) without having to change either the controller or the model, for example to offer multiple interfaces to the same data.

However with those standard implementations you hardly achieve those 2 goals, due to the mixing of the concerns.

Thanks to newer web techniques like AJAX, which let you build a Single Page Application, you can actually move the view to the client while keeping the controller and the model on the server. This then allows multiple different views to share the same back-end, and even offers a full 3rd party API.

I would personally push others to really think a bit more outside the box and see what their needs are before jumping on the latest framework. Even if you develop with .NET as I do, it doesn't mean you MUST follow everything Microsoft offers. You can certainly use some of the ideas or tools, but use them in whatever way fits your project best.

Wednesday, April 6, 2016

User Interface (UI) testing

While unit tests and integration tests usually focus on the back-end of your code, what your customer actually sees is the user interface, at least for most of our developments.

Therefore having tests on the interface is not useless, even if they are usually quite hard to implement.

As I develop mainly web applications, I will concentrate here on this particular kind of UI.

Depending on how you developed your software, if it is a single page application (SPA) you could actually write the tests entirely in JavaScript. It's by far not as crazy as it seems; the issue is that you will need a full browser to run them, and you can't really fire one from a build server.

The other option is independent of how your page is developed and can (and will) run on a build server: WebDriver, Selenium & PhantomJS

If you are, like me, developing in .NET, simply grab PhantomJS and Selenium.WebDriver; that gives you a headless WebKit browser, with all the bindings to pilot it from your test functions.

Initializing the web driver is a matter of:
DriverService = PhantomJSDriverService.CreateDefaultService();
DriverService.HideCommandPromptWindow = true;
DriverService.IgnoreSslErrors = true;
browser = new PhantomJSDriver(DriverService, new PhantomJSOptions());
Now that you have your "browser", you can issue commands like:
browser.Navigate().GoToUrl("http://myurl.com");
Finding an element on the page is done via:
browser.FindElement(By.Id(id))
With the IWebElement received you can check the content (.Text), "click" it (.Click()) or type into it (.SendKeys("xxx")).

All that lets you build your automated web UI tests, and finally check those interfaces to see that a change doesn't break everything.

Don't be fooled however: really testing your UI will be work, but I'm sure that once your UI is covered with a good number of tests you will see how useful it is.

I made myself a smallish framework to help write those tests, and in the end a test can be written like:
[TestMethod]
public void QuickSearch_LabelSearch()
{
    Url("/#action=ListBootPc");
    WaitElementStartsWith("contentArea", "Boot PC");
    WaitElement("quickSearch").SendKeys("CR12");
    WaitVisible("quickSearchResult");
    Assert.AreEqual(10, WaitElement("quickSearchResult").FindElements(By.TagName("a")).Count());
}
As you see, it's seen from .NET as a normal unit test and can therefore be understood by TFS and run, say, once a day.

Tuesday, April 5, 2016

Team Foundation Server Update 2

I'm actually astonished by the speed at which the development team at Microsoft is working; it may actually be harder to get the news than to get an update.

Today while I was browsing one of the MS blogs a bit, I found this:
https://www.visualstudio.com/en-us/news/tfs2015-update2-vs.aspx

Of course I wanted to see it in action, and since our Team Foundation Server (TFS for short) is now running in a standard way, the server being part of the domain as well as the DB, I thought it should not be such a big deal to update. And indeed it wasn't.

I still took the precaution of backing up the DB and taking a snapshot of the VM running our TFS installation, as last time I installed an update I lost quite some time trying to recover from a bad situation.

Anyhow, this time none of the precautions were needed, which is even better. I installed the update, and after all its automated work I got TFS running again with the brand new shiny features. Great job Microsoft!

What's clear is that we must dig a bit more into the Release Management part of TFS to see how we could use it.