Thursday, March 31, 2016

Finished the migration to code first

Small step after small step, I managed to migrate my code from one design type to the other. I know, I know, some of you may wonder why I stuck to this idea of converting from "database first" to "code first", but the point is: keep the project maintainable. Having all the logic in C# code, rather than in a mix of C# and XML with the C# being generated, helps us keep track of the changes.

The migration was a lot more painful than planned, and I'm sure the story is not over, but at least from what I can see it seems to work as well as before.

I can't be happy enough about our integration tests, which helped me iron out a huge number of issues; I would not even have tried this change without them. Why? It's a question of confidence and consequences. Just because code compiles doesn't mean it is correct, and as the software grows, so (usually) does the complexity. In my case, testing everything by myself is impossible, and having tests that check a good chunk of the functionality helps.

I can now continue to work on adding more features and squashing older open issues.

For those willing to take the same road as me:
- Modify the TT file (the T4 template which generates your entity classes) so that it adds the needed attributes.
- Use the Visual Studio debugger embedded for T4 files => place a break-point where you want in the template, then right click on the TT file and choose debug.
- You will need the key, key generator, foreign key and table name attributes.
- Much of the information requires reading the EDMX file as XML, with XPath queries to dig the values out of it.
- Finally, configure the many-to-many and one-to-one relationships inside the OnModelCreating function of your DbContext class (see the sketch below).
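
To give an idea of that last point, here is a minimal sketch of what such relationship configuration can look like with the EF6 fluent API. The User / Role / Profile entities, the join table and the key columns are hypothetical; adapt them to whatever your EDMX defined:

protected override void OnModelCreating(DbModelBuilder modelBuilder)
{
    // Hypothetical many-to-many between User and Role, reproducing the EDMX mapping
    modelBuilder.Entity<User>()
        .HasMany(u => u.Roles)
        .WithMany(r => r.Users)
        .Map(m =>
        {
            m.ToTable("UserRoles");   // the join table the EDMX used
            m.MapLeftKey("UserId");
            m.MapRightKey("RoleId");
        });

    // Hypothetical one-to-one: a User has an optional Profile
    modelBuilder.Entity<User>()
        .HasOptional(u => u.Profile)
        .WithRequired(p => p.User);
}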

The road is possible, but that doesn't mean it is always simple. If your project has only a few tables, it certainly makes things easier; if, like me, you have over 120 tables, it takes more time to ensure everything works and is correctly done.

Tuesday, March 29, 2016

Still fighting with my code...

I'm currently trying to convert a "Database First" design with Entity Framework to "Code First"; sadly, things don't go as smoothly as I would like.

I thought I could simply use the TT template to generate the classes with the required changes, ending up with code which would work without the EDMX file (which is an XML file containing the diagram and the mapping).

The first problem is that the T4 format (TT file) somehow gets neither code completion nor coloring by default. I wonder why, as that's what Microsoft itself uses. I had to download a plugin to get the coloring, but the code completion doesn't work (unless you pay for a license). Understanding how T4 works already takes quite some time, with included files which come from the VS directory itself.

Solving, for example, the addition of the table name took me more than half a day, but let's say this part is over.

Now blame me for my lack of knowledge, but I don't know how the two-way binding between tables works, and currently it doesn't work as expected. Sadly, documentation on the net is messy: a mix of older versions of Entity Framework and of people having tons of different issues which never seem to relate to mine.

Not really a good day, and this is already not the first one invested in this trial path. I still have hopes to succeed at some point, but this is by far something which could be made a lot more straightforward (a right click on the EDMX, for example?).

Thursday, March 24, 2016

A small self-written LINQ as an example

I talk a lot about LINQ, as it's one of the main features I love in .NET, but if you are not in the .NET world, or even if you are a .NET developer, you may not know how it works.

I will not dig into LINQ to SQL, because it is a bit more complex and requires more work; however, a simple LINQ over an in-memory list is quite easy to implement. That's what I will present here.

As a basis, let's start with some data and a class to work with:

using System;
using System.Collections.Generic;

namespace SmallLinq
{
    class MyData
    {
        public string Name { get; set; }
        public int Score { get; set; }
    }

    class Program
    {
        static void Main(string[] args)
        {
            var myData = new List<MyData> {
                new MyData { Name = "Roberto", Score = 5 },
                new MyData { Name = "Chantal", Score = 10 },
                new MyData { Name = "Ivan", Score = 2 },
                new MyData { Name = "Ingrid", Score = 4 },
                new MyData { Name = "Florian", Score = 20 },
                new MyData { Name = "Stephan", Score = 1 }
            };
        }
    }
}
Now let's start with a simple "where" filter in which we can define what we want to extract. To do so, I would like to make it like LINQ does: an extension method which works with virtually any IEnumerable:

using System;
using System.Collections.Generic;

namespace SmallLinq
{
    class MyData
    {
        public string Name { get; set; }
        public int Score { get; set; }
    }

    class Program
    {
        static void Main(string[] args)
        {
            var myData = new List<MyData> {
                new MyData { Name = "Roberto", Score = 5 },
                new MyData { Name = "Chantal", Score = 10 },
                new MyData { Name = "Ivan", Score = 2 },
                new MyData { Name = "Ingrid", Score = 4 },
                new MyData { Name = "Florian", Score = 20 },
                new MyData { Name = "Stephan", Score = 1 }
            };
            foreach (var i in myData.MiniWhere(row => row.Name.StartsWith("I")))
            {
                Console.WriteLine(i.Name + " " + i.Score);
            }
            Console.ReadKey();
        }
    }

    public static class MiniLinq
    {
        public static IEnumerable<TType> MiniWhere<TType>(this IEnumerable<TType> source, Func<TType, bool> predicate)
        {
            foreach (var i in source)
            {
                if (predicate(i))
                    yield return i;
            }
        }
    }
}
What I added here is the "MiniLinq" static class, with a single static function inside. The <TType> parameter lets me work with basically any type, and simply instructs the compiler to return an enumerable of the same type it received. The first parameter has the keyword "this", and you can see in my call in the Main function that the method can then be applied to the List class, which is not defined by me; this would really work with any enumerable class. Finally, the last parameter of my function is a "predicate", or if you prefer, a callback which returns a Boolean based on the value it receives. In the call I use a lambda expression, but that's nothing else than a function created on the fly for me. The syntax is really compact, yet lets you do whatever filtering you want. In the example it simply shows whoever has a name starting with a capital I.

Let's continue and add some sorting functions as well:

    class Program
    {
...
            foreach (var i in myData.MiniOrderDescending(row => row.Score))
            {
                Console.WriteLine(i.Name + " " + i.Score);
            }
            Console.ReadKey();
        }
    }
    public static class MiniLinq
    {
...
        public static IEnumerable<TType> MiniOrder<TType, TKey>(this IEnumerable<TType> source, Func<TType, TKey> keySelector) where TKey : IComparable
        {
            var list = new List<TType>(source);
            list.Sort((a, b) => keySelector(a).CompareTo(keySelector(b)));
            return list;
        }
        public static IEnumerable<TType> MiniOrderDescending<TType, TKey>(this IEnumerable<TType> source, Func<TType, TKey> keySelector) where TKey : IComparable
        {
            var list = new List<TType>(source);
            list.Sort((a, b) => keySelector(b).CompareTo(keySelector(a)));
            return list;
        }
    }
To reduce the clutter I removed the parts which are the same (at the end of the post you will find the whole code).

The sorting implementation uses a constraint requiring the key to be IComparable; other than that, how to sort is still left mostly to the caller.

One really useful part of LINQ is the select, which lets you transform a source into something else. Let's create it as well:

using System;
using System.Collections.Generic;

namespace SmallLinq
{
    class MyData
    {
        public string Name { get; set; }
        public int Score { get; set; }
    }

    class Program
    {
        static void Main(string[] args)
        {
            var myData = new List<MyData> {
                new MyData { Name = "Roberto", Score = 5 },
                new MyData { Name = "Chantal", Score = 10 },
                new MyData { Name = "Ivan", Score = 2 },
                new MyData { Name = "Ingrid", Score = 4 },
                new MyData { Name = "Florian", Score = 20 },
                new MyData { Name = "Stephan", Score = 1 }
            };
            foreach (var i in myData.MiniOrderDescending(row => row.Score).MiniSelect(row => row.Name))
            {
                Console.WriteLine(i);
            }
            Console.ReadKey();
        }
    }

    public static class MiniLinq
    {
        public static IEnumerable<TType> MiniWhere<TType>(this IEnumerable<TType> source, Func<TType, bool> predicate)
        {
            foreach (var i in source)
            {
                if (predicate(i))
                    yield return i;
            }
        }
        public static IEnumerable<TType> MiniOrder<TType, TKey>(this IEnumerable<TType> source, Func<TType, TKey> keySelector) where TKey : IComparable
        {
            var list = new List<TType>(source);
            list.Sort((a, b) => keySelector(a).CompareTo(keySelector(b)));
            return list;
        }
        public static IEnumerable<TType> MiniOrderDescending<TType, TKey>(this IEnumerable<TType> source, Func<TType, TKey> keySelector) where TKey : IComparable
        {
            var list = new List<TType>(source);
            list.Sort((a, b) => keySelector(b).CompareTo(keySelector(a)));
            return list;
        }
        public static IEnumerable<TResult> MiniSelect<TType, TResult>(this IEnumerable<TType> source, Func<TType, TResult> selector)
        {
            foreach (var i in source)
            {
                yield return selector(i);
            }
        }
    }
}

The select is surprisingly simple to implement, as you can see. This time I gave the whole source so that you can check it by yourself.
I hope this small tour of how LINQ could be implemented opens the door to more tricks in your code and makes things clearer in your mind. It is of course not meant to replace what .NET already offers.

Wednesday, March 23, 2016

Entity Framework Code First

.NET offers many ways to access your database data, from plain ADO.NET, which lets you connect to a database and execute queries, to LINQ to SQL or Entity Framework.

Entity Framework (EF for short) is the biggest, and maybe the most complex / complete, officially supported way to access data with .NET. Yes, there are other options, like the open source project NHibernate or commercial solutions like LinqConnect.

Anyhow, I would like to show here how simple it can be to create a mapping with EF.

Let's start from the beginning: add the NuGet package "EntityFramework".

Once this is done, let's create a context class:
using System.Data.Entity;
namespace TestEFCodeFirst
{
    public partial class DataContext : DbContext
    {
        public DataContext()
            : base("name=DataContext")
        {
        }
        protected override void OnModelCreating(DbModelBuilder modelBuilder)
        {
        }
    }
}
This is where your data will be visible.

Now let's create a "Table" class:
namespace TestEFCodeFirst
{
    public class User
    {
        public int Id { get; set; }
        public string Username { get; set; }
        public string FirstName { get; set; }
        public string LastName { get; set; }
    }
}
Finally add this class to the context:
using System.Data.Entity;
namespace TestEFCodeFirst
{
    public partial class DataContext : DbContext
    {
        public DataContext()
            : base("name=DataContext")
        {
        }
        protected override void OnModelCreating(DbModelBuilder modelBuilder)
        {
        }
        public virtual DbSet<User> Users { get; set; }
    }
}
Now we just need a database, and the application settings pointing to it (the catalog name here matches the project; use your own database name):
  <connectionStrings>
    <add name="DataContext"
         connectionString="Data Source=(localdb)\MSSQLLocalDB;Initial Catalog=TestEFCodeFirst;Integrated Security=True;"
         providerName="System.Data.SqlClient"/>
  </connectionStrings>
And create the table:
CREATE TABLE Users(
    Id INT NOT NULL IDENTITY PRIMARY KEY,
    Username VARCHAR(30) NOT NULL UNIQUE,
    FirstName VARCHAR(100),
    LastName VARCHAR(100)
);
Finally we can create the main class and test EF:
using System;
namespace TestEFCodeFirst
{
    class Program
    {
        static void Main(string[] args)
        {
            using (var ctx = new DataContext())
            {
                // Cleanup the table (if we run it multiple times)
                ctx.Users.RemoveRange(ctx.Users);

                // Insert 2 users
                ctx.Users.Add(new User
                {
                    Username = "Admin",
                    FirstName = "Administrator",
                    LastName = "The Guru"
                });
                ctx.Users.Add(new User
                {
                    Username = "User",
                    FirstName = "User",
                    LastName = "The simple guy"
                });

                // Commit the changes to the database
                ctx.SaveChanges();

                // Let's see what I have in the table
                foreach (var i in ctx.Users)
                {
                    Console.WriteLine(i.Id + " " + i.Username);
                }
            }
            Console.ReadKey();
        }
    }
}
As you see, we did a DELETE, an INSERT and a SELECT, all directly via code using our classes, and yet if you browse the database you will see that the data is actually stored in the table.

You can also query (with a where clause), grab elements, modify them, and then save.
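
For example, a minimal sketch of querying and updating with the same context (this needs "using System.Linq;" at the top of the file):

using (var ctx = new DataContext())
{
    // Find one user by a condition
    var admin = ctx.Users.FirstOrDefault(u => u.Username == "Admin");
    if (admin != null)
    {
        admin.LastName = "The Boss"; // modify the tracked entity
        ctx.SaveChanges();           // EF generates and executes the UPDATE for us
    }
}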

As a developer, it doesn't matter whether your data is in the database or in a list; it works the same way. SQL injection is nothing to worry about, and even better, your queries are (mostly) checked at compile time, which means you can forget about typos in your SQL statements that you would otherwise discover only by testing.

You may wonder how the mapping works. Well, there are many options letting you define the keys or the columns as you want, even giving them different names in the database than the ones you have in the code.
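
As a small sketch of those options, the data annotation attributes below map the class to a differently named table and columns; the database names here are made up for the example:

using System.ComponentModel.DataAnnotations;
using System.ComponentModel.DataAnnotations.Schema;

[Table("tbl_users")]          // class User, but table "tbl_users"
public class User
{
    [Key]
    [Column("user_id")]       // property Id, but column "user_id"
    public int Id { get; set; }

    [Column("user_name")]
    public string Username { get; set; }
}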

With such minimal work, being able to work on the database as you would with in-memory objects is for me something simply amazing. Don't get me wrong: for those who know what I'm talking about, other tools than EF will do more or less the same work. Still, it's great to have such tools available, and even more so when they are part of the main framework.

Tuesday, March 22, 2016

User interface needs to show the progress

Single Page Applications (SPAs for short) are great, as they can provide a faster user experience and more "intelligence", since you are suddenly no longer stateless. For example, you can easily check a form and provide information about what is missing while still keeping the data inside the form. Sure, you could do that without a Single Page Application, but it would be both slower for the user (as it requires a full reload) and more complex for you, as you would need to send back all the data the user provided.

However, one thing SPA developers forget is user feedback while an operation is running. For example, if I'm creating X items and the operation takes more than a second, the user cannot see that anything is going on. Therefore a small message, a spinning icon, or some other visual feedback is a must.

For very long operations it would be even wiser to display something like a progress bar or a percentage of completion; even if that slows down the overall operation, it will be perceived as faster and be less stressful for your users.

Sure, all of that increases the amount of code you need to write, but it will also greatly increase the quality of your software.

A good way to provide progress information is, instead of sending 4000 commands in one shot, to send them in smaller blocks of 20 or so, updating the interface each time to show how far along you are (as sketched below). Sure, if the user kills the window, the work will only be partially done. If you cannot afford that, you need to spawn a process on the server side when you send the 4000 commands, and have a way to query its progress to update the user interface. Depending on your language / framework this may or may not be easy to implement.
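
To sketch the first idea (in C# for consistency with the rest of this blog; the same structure applies in JavaScript), here is the batching loop. SendBatchAsync and UpdateProgress are hypothetical placeholders for your own server call and UI update:

// Send the commands in blocks of 20 and report progress after each block
// (needs "using System.Linq;" for Skip / Take).
async Task SendInBatchesAsync(IReadOnlyList<string> commands, int batchSize = 20)
{
    for (int offset = 0; offset < commands.Count; offset += batchSize)
    {
        var batch = commands.Skip(offset).Take(batchSize).ToList();
        await SendBatchAsync(batch);                                   // hypothetical server call
        UpdateProgress((offset + batch.Count) * 100 / commands.Count); // hypothetical UI hook (percent)
    }
}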

Monday, March 21, 2016

Uses of Excel for developers

Many developers don't even realize that they could get some help from an "old" tool we mostly all have: Excel (or another spreadsheet software).

How can Excel help us, you may ask. I shall try to give you some examples of where I use Excel, and maybe that will trigger some new ideas on how to use it yourself.

Create fake data in your tables
When you develop applications and need some data in your tables to see whether your software works, Excel can be your best friend. You simply fill an Excel sheet as you would any list, and the last column can be used to compose the SQL command. String concatenation in Excel is done via the "&" operator, and a double quote is escaped by placing another double quote in front of it (""). Something like:
="INSERT INTO tableABC(Id,Name) VALUES("&A1&",'"&B1&"');"
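
Assuming A1 contains 1 and B1 contains Roberto, that cell produces:
INSERT INTO tableABC(Id,Name) VALUES(1,'Roberto');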

Excel can also generate rows for you, be it random values or simple sequences. Random values can be produced with the Excel function RAND:
=ROUND(RAND()*100,0)

Sequences are simply a matter of dragging the small "+" sign in the corner of the cell, and Excel shall create the sequence for you.

Create or update data
Same as the previous example: you can export your data to Excel and update it in an easier-to-use interface, or create data (with formulas) and then generate your SQL statements.

Try some formula
Many times when you develop games, or any statistics software, you will need to tweak some formulas or actually create new ones. For example, to know how many experience points you need to level up, or how much damage each quality level of a sword will do. Excel can be used to see how those numbers progress. In the first column you have the "level", in the second you place your formula; you can then see how the two are related and how good your formula is.

Try curves
Excel does have a simple plotting tool which can be used to see the shape of simple (or not so simple) math functions. You can also compare your data against a formula; however, don't expect all the features of a statistics tool or a mathematical tool like Matlab. Yet Excel can even plot 3D surfaces, so you could see whether your terrain function seems OK or is simply garbage.

Reports
Instead of producing PDF or HTML reports, have you ever thought about producing formatted (or unformatted) Excel files? The advantage for your customers is that they can continue to work with the data you produced, and the advantage for you is that it may in the end save you the time of trying to produce all the kinds of reports they may want. I'm not talking about a full data dump here, but more about pre-digested data. In a previous post ( here ) I explained how to produce Excel files directly from your JavaScript; the same can be done on the server side (it doesn't matter whether your server runs on Linux or Windows, nor what language you use), as sketched below.
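
To sketch the server-side variant (here in C#; any language with string handling will do), the XML Spreadsheet format mentioned in that post can be produced with plain string building. The cell values are just an example:

using System.IO;
using System.Text;

class ExcelXmlReport
{
    static void Main()
    {
        var sb = new StringBuilder();
        sb.AppendLine("<?xml version=\"1.0\"?>");
        sb.AppendLine("<Workbook xmlns=\"urn:schemas-microsoft-com:office:spreadsheet\"");
        sb.AppendLine("          xmlns:ss=\"urn:schemas-microsoft-com:office:spreadsheet\">");
        sb.AppendLine(" <Worksheet ss:Name=\"Report\">");
        sb.AppendLine("  <Table>");
        // One row with a string cell and a number cell
        sb.AppendLine("   <Row><Cell><Data ss:Type=\"String\">Roberto</Data></Cell>" +
                      "<Cell><Data ss:Type=\"Number\">5</Data></Cell></Row>");
        sb.AppendLine("  </Table>");
        sb.AppendLine(" </Worksheet>");
        sb.AppendLine("</Workbook>");
        File.WriteAllText("report.xml", sb.ToString()); // Excel opens this file directly
    }
}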

Offline work
This one may require more work, but I used it for a software at work. Some people may want to work with data while not connected; one option is to write some macros and pull / push data between your application and Excel. The advantage is that you can work offline, and when you are back online you can push the modifications back. You can also lock some cells so that only certain fields can be updated.

Visual Studio offers something similar, using Excel to edit multiple TFS rows; so if even Microsoft uses Excel for such things, why shouldn't we?

Conclusion
As you can see, the possible usages of Excel are nearly infinite, and sometimes it can really be a life saver.

Friday, March 18, 2016

Some times take the time...

From time to time, stop working directly on your tasks and take a moment to think more in general. Why? Because if you focus too much on small tasks, you tend to lose the overview. Keeping an overview of your goal, and being able to judge your own work, requires taking a few steps back, and that can only happen if you take the time to do so.

Therefore, every once in a while, don't go through your to-do list; instead, try to have a look at the overall shape of your project.

Let's take an example: you are developing a web site. You start by checking the layout, then try to improve the CSS, maybe add some JavaScript to make this part more interactive or that one more dynamic; you write some content here and there, then look for pictures to fill some gaps, and so on. If you continue like that for a while, you will concentrate more and more on details without having an honest overview of the page. So leave the site alone for a couple of days and then open it again with fresh eyes, to see how it all works together and not just the small details by themselves.

For bigger projects it is also really important to keep an overview of the tasks to do, think about strategies, discuss with your customers and check whether the project is still on track. Many times, taking these few steps back may actually boost your productivity, as you may find tools to improve your work, be it a component library or a way to streamline your workflow.

Finally, I can say that taking a day off from time to time restores your mind and mood. At least for me. After a small break I'm even more eager to work, but again, that's me.

Thursday, March 17, 2016

iOS devices can be hacked.... without any interaction

This news just came to my attention:

http://thehackernews.com/2016/03/how-to-hack-iphone.html

If I understood it well, there is first a "man in the middle" attack; the hacker gets an auth code which can then be used to push software to iOS devices.

Now, it matters really little how this specific attack was done. If your OS allows remote installation of software, as Apple and Google offer, then for sure somebody can hack it in some way to send their own stuff.

Sure, it may be complex, it may require loads of work, and maybe it doesn't even really make sense to do, but what's clear is that it is doable.

When I was working for one of the major banks, one rule was the rule to follow: NEVER ever attach the internal network to the internet. The bank network was completely cut off from the internet, and if you wanted / needed internet access, then that particular PC would have just that, but no company network connection. The goal being: if there is no link, you can't attack. Or at least, you can't attack from outside without having somebody inside doing something.

Same for phones or anything connected: if you are connected, you can potentially be hacked. Don't trust ANYONE who tells you that this or that device is safe; that's a pure lie. It is safe until somebody discovers a way to hack it. Old phones which didn't allow any 3rd party software to run, and didn't have any real connection beside SMS / voice calls, could still suffer from some odd SMS or network attacks, even if it maybe wasn't possible to do much, since no 3rd party software would ever run. I still believe you could actually hack some of the features even there, but as the hardware was limited and the software near non-existent, so were the attack vectors.

Think about the following: you could own an old computer at home which you don't connect to anything else. Now, how could an attacker attack it? From outside your home? Impossible. Only by physically reaching the computer could they load some malware / virus onto it. This really limits the spread of any attack.

Of course we DO WANT to be connected, we DO WANT to have more and more software running on our gadgets, and yet we expect to be safe? No way. The more lines of code, the more bugs; and the more bugs, the bigger the chances we get hacked or get malware on our beloved gadget.

At the end of the day, if you don't have data connections, you don't have Bluetooth, and you use your phone only to take pictures and call people (with some SMS maybe), I can tell you that it will be much safer than even the latest release of any other gadget. Yet don't be fooled: even SMS can be an attack vector, as this article points out:

http://www.pcworld.com/article/246528/remote_sms_attack_can_force_mobile_phones_to_send_premiumrate_text_messages.html

(If you search for SMS attacks you will find loads of info)

To come back to the first article, where iOS devices can be hacked: what upsets me is that a company (in this case Apple) sells its devices as really secure, and others talk about how secure iOS is compared to Android:

https://www.sophos.com/en-us/security-news-trends/security-trends/malware-goes-mobile/why-ios-is-safer-than-android.aspx

That is all smoke! Any OS (and really any OS!) can be hacked. It is just a matter of how much money you put into trying to hack it. Don't try to tell me that this one is safer; I will not believe it. And security by obscurity (as Apple has played it all along) is one of the worst ways to make your product safe.

Windows Cluster

The "cluster" word may be seen by many as overkill or too complex, especially for small hobby projects where a VPS or a dedicated server is enough. However, in a professional environment where downtime is expensive or dangerous, a cluster is the first step to provide higher availability than a good single server.

What is a cluster exactly? A cluster is a group of machines which, for a set of services, reacts as a single machine from the client side. For example, with clustered web hosting, as a customer I can connect via my browser to one of the hosted sites without having to know which node of the cluster will handle the request. There are, however, some subtleties between clustering and load balancing. Load balancing splits the work between the nodes, while a more traditional cluster will not and may actually put all the load on a single node. Load balancing doesn't by itself ensure higher availability, but it can, depending on how the balancer is made.

I will not talk about load balancing here, but simply about how to set up a simple cluster with Windows Server.

Windows Server Standard edition can without trouble create a cluster of two or more nodes, two being the minimal set to make it work. While you can create a cluster with a so-called "share nothing" configuration, I will instead explain how to set up a cluster with shared disks. For that you will need at least 2 servers, plus external storage like a Synology NAS.


The NAS should be configured to serve ideally 2 iSCSI disks (one quite small for the quorum, and one bigger for the shared data). Once you have created your iSCSI disks and configured them to allow multiple concurrent connections, mount them on both machines via the iSCSI Initiator (Control Panel => Administrative Tools => iSCSI Initiator).

Once both servers have the same disks mounted, you can start building the cluster. For that you will need to add the Failover Clustering feature, and then create a new cluster via the cluster manager panel. To be able to create a cluster, Windows requires the machines to be part of an Active Directory domain; if your servers are not yet part of one, it is time to do so. You could run Active Directory on the cluster nodes as well, but it is better to have an external machine handling the domain.

During the creation of the cluster, Windows will check what is available and should add the possible shared disks. If not, you will need to add them yourself afterward.

Use the smaller disk for the quorum, while keeping the bigger disk for shared services like SQL Server, or simply for a clustered file system.

Be warned that by itself IIS is not really part of the cluster; however, you can (even if it requires some setup) run a shared IIS configuration and even a shared WWW directory on the shared file system. If well configured, your IIS will appear as a clustered service, since if you stop one node the other will take over the requests. However, the cluster manager doesn't monitor IIS for you automatically, and therefore either you write some scripts yourself or you will need to manually migrate the nodes if one is failing.

What is a cluster good for? For example, if you need to apply Windows updates with reboots and such, you will have basically no downtime, as everything will fail over to the remaining node. If one of the 2 servers breaks, you will not have to worry either, as you will still have one working node. But be prepared to work with a more complex environment than a standalone installation.

Wednesday, March 16, 2016

Excel export directly from Javascript

Modern HTML pages rely more and more on JavaScript (or other languages which compile to it). Personally I use TypeScript, but at the end of the day it's basically the same, as the code is run by the JavaScript engine, and it's the browser which must deal with it.

I saw that some of my users needed to copy results from one of the many lists my software produces and paste them into Excel. I thought: wouldn't it be cool if there was a button to do that automatically?

Sure, I could produce a CSV (comma separated values) file in a text area, which would then be easy to paste into anything else. However, I wanted something a bit more sophisticated. Why? Because what if I want to have some formatting, for example, and also reduce the number of manual steps?

So the first step for me was to investigate the file format Excel uses today. I worked long ago with the old Excel 2.0 binary format (which is somewhat easy to produce), and the previous version of my software was using Excel directly via COM+ (interop). I knew that the latest Office products work with XML and thought, well, I could produce the XML directly, shouldn't be soooo hard. My bad! The standard Excel format is indeed XML.... but multiple files scattered inside a directory structure and then zipped. Not something you really want to do in JavaScript.

After checking a bit more, I discovered a "pure" XML format which could be produced. This format is actually pretty straightforward:

https://en.wikipedia.org/wiki/Microsoft_Office_XML_formats#Excel_XML_Spreadsheet_example

Actually this can be further simplified by removing some of the tags which are not mandatory.

Anyhow, using this format I could produce an XML file out of my result list; there remains the question of how to send it back to the user. Again, sure, I could write it into a text area and let the user copy / paste it, but that's not really what I wanted.

So how to send the data back to the user? Well, there are actually 2 roads:

For Chrome:
var link = <any>document.createElement('a');
link.download = "my.xlm";
link.href = "data:application/vnd.ms-excel;filename=my.xlm;base64," + btoa(excel);
link.click();

For Firefox:
document.location.href = "data:application/vnd.ms-excel;base64," + btoa(excel);

Why two roads? Chrome could handle it the same way as Firefox, but then you cannot really control the file name (or at least I didn't succeed at it).

What's going on here? Basically you create a pseudo location, or a link to a pseudo location, which contains the data; as the mime type (application/vnd.ms-excel) triggers a download, the data will be downloaded. The data must be base64 encoded, and that's what btoa does.

I didn't succeed in making it work with IE, but at least 2 of the major browsers work.

Monday, March 14, 2016

The importance of the GUI

In the development world there is usually a clear cut between code developers and graphical designers. The former usually don't know much about what design means, or have no clue how to make something look good / usable. Sure, IT is so huge that you can't know it all (from security to frameworks, to server handling and design, just to name a few), but somehow if you design a product you must have an overview of the whole picture.

What counts for your users is not what counts for you. Sure, security is a must and cannot be left aside, and sure, you must have good performance and correct logic. However, what your users will see first is the interface. Also, a good interface speeds up the work, while a badly designed one will slow you down. It is therefore mandatory to invest a lot of time trying things out and seeing how you can improve it. Here a designer should help you, at least with the look of it, but could help with the usability side as well. You should also share the UI design with potential users to hear their feedback, but don't expect that they will provide you the best solution.

One way to work on a design is simply to use the old pen & paper to draw / sketch designs and discuss how things are placed. Don't worry about the tech side yet; at this point it is a lot more important to know how it may look than how to build it.

Some tools may help you try out the design, from plain "Photoshop" to things like http://www.invisionapp.com/ or https://moqups.com/.

However, clearly the only way to really test your UI in the end is within the software, once everything is nearly working. To be able to change the UI without changing the code means you must somehow separate the look & feel from the functionality. MVC designs, or simpler separations like templating, can be of huge help.

Friday, March 11, 2016

Continuous integration

For those not knowing what those words mean together:
https://en.wikipedia.org/wiki/Continuous_integration

Basically, it's simply the fact that you have automated tests which are run for you, either when you commit your code to a repository, or every now and then by some trigger (like once a day).

After the tests run, you will get a report of how they went.

If your code is well covered by those automated tests, then you can be somewhat sure that your last modification didn't break anything.

With the installation of TFS (Team Foundation Server) I started to dig more and more into this kind of development. While I really dislike test driven development (I shall explain why in another post), I actually like the feeling of having tests checking the integrity of my code. I don't have the time to test my whole soft every time I change something; therefore, having tests which cover even part of my code gives some sort of insurance against breaking changes.

In my case I have 2 sets of checks: "integration tests" that I wrote for all the modification functions of my back end, and automated user interface tests to check the GUI part of the code.
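
To make this concrete, here is a minimal sketch of what such an integration test can look like (MSTest syntax; UserService and its methods are hypothetical stand-ins for my real back end):

using Microsoft.VisualStudio.TestTools.UnitTesting;

[TestClass]
public class UserServiceTests
{
    [TestMethod]
    public void CreateUser_ThenGetUser_ReturnsSameName()
    {
        var service = new UserService();        // hypothetical back-end service
        var id = service.CreateUser("Roberto"); // one of the "modification functions"
        Assert.AreEqual("Roberto", service.GetUser(id).Name);
    }
}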

The integration tests run every time I do a commit (sure, it takes some time, but as they run on the TFS server they don't block me in any way, so having them run all the time is not a bad thing); the UI tests run once a day.

The main question is how many errors those tests will actually catch, and you may (or may not) be surprised that I first detected bugs while writing my tests, but I also got quite a few bug reports afterward while I was working on some new feature. So yes, all that cost me time, but it increased the quality of the software significantly. Don't be fooled, however: tests will not make your code bug free. They tend to reduce bugs, but will never show every possible bug. There is also a diminishing return the more tests you create. I mean, at first when you create your tests you will indeed catch more and more bugs, but once you reach 80% or more code coverage you will need more and more tests to discover fewer and fewer bugs. Even if you reach 100% code coverage, you will not actually discover 100% of the bugs. So don't lose yourself in such a game.

When would you need to set up such an infrastructure as I did? Well, if you write prototypes you don't need any of it. However, when you start developing for customers you should start to think about quality, as bugs are seen as something really negative by your customers. Try your best to reduce the number of bugs, and be prepared to deploy "hot fixes" quickly to fix any possible issue.

Having a "test instance" where you can deploy the work in progress, and a "production instance" where the product is used by your customers, will also give your customers a chance to test the software before it's actually released. Even if they don't test it, you can always argue that they could have done so.

Thursday, March 10, 2016

Adding new features doesn't mean it will not cost in the long term

Today we had a meeting where somebody asked for tablet and mobile device support. In principle you may think it's just a matter of tweaking the CSS a bit to make it work. It may look like that at first, but if you want to be sure that it will be usable from nearly all devices, it is actually harder. Also, if you modify the layout to fit your needs, any further feature you add later may either need to be tweaked to also support mobile devices, or may actually break the existing layout. Therefore it's not just a matter of "we shall do it now" and then hoping it will not cost anything overall; it will cost overall, and you must actually think about it.

While for some of you supporting mobile is a must, in my case it was a "nice to have" which somehow wanted to jump to the top of the to-do list. It's not because it may look cool to run around with your tablet that you must push this request. You could also use notebooks, which would actually solve other problems as well.

People simply fail to understand how much a request will actually cost in the long term, and think that nowadays such features should be pretty cheap, if not free. I even heard: "yeah, but you could simply serve a different CSS based on the resolution of the device". WRONG! You don't receive the device resolution by default on the server side. That would also mean multiple CSS files for multiple devices. What a nightmare! And who does the testing on all those possible platforms?

At the end of the day, what counts is really checking what is a "must" versus a "nice to have", and seeing whether the cost (even just as time to invest) is really well understood by the customer. If at the end of this evaluation the answer is "yes, we shall do it", then fine; but don't do it just because somebody thinks it would be cool to have.

Wednesday, March 9, 2016

Work on your framework before starting, or maybe not

Odd title, right? I will try to explain. Ideally you should plan your framework and code structure before starting to invest loads of time on more detailed code; however, many times it is difficult to come up with a good code structure / framework until you really know what you want to have and how you want to do it.

You have therefore a couple of choices:

  1. Use a pre-made structure / framework which will impose a way of working. This is good if your own background is not very strong overall, or if you want to work with something "standard" which allows 3rd parties to participate quickly in your project's development.
  2. Plan your framework carefully and spend quite some time designing it before actually starting to code the details. You may develop a couple of "modules" to see if your framework makes sense, but don't go too far into the real development until the framework is set.
  3. Start developing all around, and be prepared to build your framework once you know what you want to have and how it should work. However, be prepared to re-design your "modules" to fit your new vision of the framework. This last route may require multiple re-designs, but may offer the best solution if you agree to rewrite / refactor your code.
In any case, it's really important that you document your framework / design once you have decided how it will be. That will allow other people to jump on board, and will clarify your own mind as well. A well designed framework can be reused on multiple projects, so the time invested in it is never really lost.

Regarding the framework / design, for me even the directory structure of your project is important. Why? Because as the project grows, the number of files grows as well (better many smaller files than just a couple of big ones). So if your directory structure is not good, you will lose a lot of time searching for things.

Personally, I like to split features into "modules" and keep all the files related to a feature in one directory. For web development, I would have the template, back-end, TypeScript / JavaScript and CSS files related to a single feature all in a single directory. So if I need to work on that feature, I will find all the related files in one place.

Other people like to split things differently, like having the back-end in one directory, all the CSS in another, the templates in a third, and so on.

Which structure fits you best is your own design choice, and as in many cases, there is no single "best" or "right" solution.

Tuesday, March 8, 2016

SQL Server on linux

You may first wonder what SQL Server is, if you are not in the Windows development world. Just for a very short answer: it's the database developed by Microsoft. So far, SQL Server has required Windows Server (though you can run SQL Server Express on a desktop version of Windows).

The news is here:
https://blogs.microsoft.com/blog/2016/03/07/announcing-sql-server-on-linux/

So far, Microsoft has been really conservative with its products and the OSes they may run on. Besides having Office on Mac OS X (for clear reasons), nearly nothing worked outside of the Windows world.

So why would Microsoft change its mind and now allow its software to run on another platform? For me (and that's really just my personal opinion), it shows that Microsoft is no longer playing closed in its own sandbox and instead offers many different options, hoping that in the medium / long term developers will pick the "best option" once they have started to try the products, and go for an all-"Microsoft" world. That should in the end be more profitable for Microsoft than scaring developers away from the beginning.

This game already started with the latest Visual Studio offerings: a (limited, free) Community edition, and the Visual Studio Code editor which works on Linux and Mac OS X. Not only that, but the latest Visual Studio (2015) even allows working with Python, JavaScript (Node.js) and other languages and technologies which do not depend on the Microsoft world. Why? Again, to attract yet more developers and offer a single central point for all their needs, with the end goal (quite certainly) of converting them.

For me this is all positive. Sure, I'm already working in this Microsoft world, but having yet more developers using it can only push the boundaries and bring better tools.

Monday, March 7, 2016

Alpha and Beta phases are never enough

I have been developing software professionally (that means for work, and being paid for it) for a good number of years, and the software I write for work tends to be quite big and complex. Like any complex software, it will have more errors than simple software. To be able to debug it and improve the quality, alpha and beta tests are a must.

What are those? Alpha and beta tests are done by people outside of the development group, ideally a subset of (or all) your future customers. Alpha tests are run on an incomplete software, meaning not all the needed / planned features are implemented, while beta tests are run on a feature complete software which clearly still contains bugs and should not be used in production.

After those phases we may release "Release Candidates" (RC for short), which should be feature complete software which could be used in production.

Alpha and beta phases are what allow customers to give feedback before the software is actually finished, and that allows the developers to improve the software based on direct user feedback.

The sad story, however, is that in my environment, while I always try to give my customers an opportunity to test the software before it goes to production, they don't take the time to even check it. The result is that I deliver the software at a given date and afterward start to get reports of non-functioning parts, with remarks like "didn't you test the software before release?". Tests made by the developer himself are not as useful as those made by an external user, because the developer tends to do things the way he has them in mind, without trying the side roads. Also, I cannot guess which feature is a must for a customer if they don't communicate with me.

The lesson to learn? Deliver something quickly, and be ready to react quickly when your customers start to use your tool.

Sunday, March 6, 2016

Why I dislike REST

From time to time, "PR" words come to life and suddenly become the "must use" for every developer. We got AJAX, we got Web 2.0, we got MVC, and now lately we got REST. Maybe the order is not exactly as I gave it in this list; still, those are the "keywords" people used a lot in the web development world. No matter which language / platform / framework you used, if your soft wasn't using those keywords, then it wouldn't be considered.

Here I'm already against those words. Why? Because at the end of the day, what counts is the functionality your soft offers, not how it's implemented. It is more important that your software does what it is expected to do than that it "looks good on paper".

Let's concentrate on the REST word. REST is nothing else than an "idea" of having simple data access via HTTP. Think about a database: a select would be done via GET, an update via POST, a delete via DELETE, and an insert via PUT. So basically we are all covered, we can do all the CRUD operations via HTTP! Great! Yet do you really want to offer full DB access via the web? Well, for most the answer is no! At the very least we should authenticate the request, and already here there is no "standard". Sure, you can send the auth via a cookie or via a token passed on the URL. But what if I would like to add a "where" clause to my "select"? Oddly enough, again there is no "standard", and it's actually not well covered either, beyond grabbing a single item by its id. Come on, what if my interface offers more than single-id selection? Well, out of luck: either you develop your own solution or... you do the selection on the GUI side while transferring the whole answer. Now don't get me wrong, some REST frameworks do offer solutions for all those problems, but there is no "standard", since REST itself is not a standard.

But didn't we have things like XML-RPC before, which offered ways to call functions with whatever parameters we would like? Or SOAP, which even offers a self-describing API? Don't tell me those are complex to use, or verbose (due to XML); honestly, in most cases it's not an issue. For the complexity, most frameworks do offer pre-prepared SOAP solutions (Java, C#, PHP, etc...), and the verbosity is a non-issue for most.

Where I do agree is that calling a SOAP API from JavaScript is not really all that fun, and for that REST is much easier. So why not try to combine the flexibility of SOAP with the ease of use of REST? That's what I did; let's call it a JSON / SOAP interface, which lets me develop simple C# classes with functions which can be called directly from jQuery.

Of course it is not "standard", as it's a self-written solution, but don't use that argument to try to sell me REST: it's not standard either, and on top of that you need to describe your API manually, while my own solution produces the documentation automatically, with even test pages.

In the end, for me, it's more important to think about your goal and what would be the best solution in the medium / long term, instead of running after the latest PR word, which may very well end up as a useless tech.

Friday, March 4, 2016

Bug in chrome? Quite certainly

Again I fought against CSS; oddly enough, this time it was Chrome which reacted strangely, while the other 2 big browsers did what I was expecting. I'm talking about the menu bar of my soft: when I was painting on a canvas within the page, the menu items disappeared. Odd.

So I tried to create a small test case:
https://jsfiddle.net/9g11usuu/

And I must admit I failed: my test works as expected, but not my page.

In the end I changed the positioning of the elements, and with a little fiddling around I managed to get it working. Those things can be incredibly time consuming.

Again, whoever says that CSS is easy is either under-estimating it or simply hasn't reached the point where things start to be difficult.

Menu bar has been added

When you need many different actions packed into a small space, I still haven't found a better option than a menu bar. Sure, a menu bar within a web page is not what you usually see, yet when you design a single page application it tends to be a good compromise.


This shall allow adding nearly as many tools as we may need later on, while still not wasting too much space on the page.

The framework can already handle multiple "pages" and will load their content once needed. That should open the door to the next tool, which will be the map maker.

Of course I still have a long road ahead, and I don't plan to have a fully working product before a year's time, but it's slowly progressing.

Thursday, March 3, 2016

Bad comments in a bug report? I got them too!

Today I got an "interesting" bug report from a guy here at work. He said that one part of my soft doesn't display values in the first row of the table:


The soft updates the cells in real time when the variables change on the servers. As you may notice, some cells don't have any values, and for those who, like me, don't know the meaning of the variables, it is not easy to debug the content of those tables. There are well over 700'000 variables overall in this system, and clearly nobody knows them all.

I don't say there are no bugs in my soft; as always (sadly), any software will contain bugs. It's just a question of math: the bigger the soft, the higher the number of bugs. To fix them, you need to check the soft. In my case I can't test everything, because first I don't have the time for it, and second, as said, I don't have the knowledge; therefore I expect to get bug reports from the different experts.

The bug report I got, however, contained a little sentence which I simply could not accept: "Works without engagement". Sorry, but such a sentence is not something you want to hear when you are a guy like me who works like mad to bring all the wishes live.

My answer to his bug report was: with such an attitude, you can find somebody else to fix the bug for you.

However, I was curious, and I checked what produced the bug. In the end it was a stupid mistake: checking whether a value exists with "if ( value )" is common in JavaScript, however the test is false not only when the value is null or undefined, but also when it is 0. So, a bad if on my side; the correct check here would have been something like "if (value !== undefined && value !== null)".

Wednesday, March 2, 2016

Math formulas which are more than useful

There are a few formulas which can be quite useful in many different situations, from a brush shape to a "gain" function.

Let's start with something called the "Sigmoid". It's an "S"-shaped function which has the advantage of flattening in / out. The formula for it is 1/(1+POWER(EXP(1),-X)) and it looks like:



The next one, which I use pretty often too, is the "Gaussian" curve, which can also be called the "bell" curve. The formula for it is 1/EXP(X*X).
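
Translated from the spreadsheet notation, here are the two curves as a minimal C# sketch:

using System;

static class Curves
{
    // Sigmoid: 1 / (1 + e^-x), flattens out towards 0 and 1
    public static double Sigmoid(double x) => 1.0 / (1.0 + Math.Exp(-x));

    // Gaussian "bell": 1 / e^(x*x), peaks at 1 for x = 0
    public static double Gaussian(double x) => 1.0 / Math.Exp(x * x);
}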



You have of course the well-known ones like X*X, X*X*X, SIN(X) and SQRT(ABS(X)).


You may wonder when I use formulas like that; well, for example to reproduce movements, to make a movement more life-like, or to create more interesting transitions. For a programmer, not knowing even a bit of math is simply not acceptable, as at some point he/she will need it.

Color chooser and brush settings

The tile painter is progressing, and I finally finished a first implementation of a color chooser and the brush settings. It's certainly not the most user friendly interface, but it does its job for a first release.


The brush is previewed with the current color (a bit zoomed, for clarity). On the side of the brush preview, 2 sliders allow changing the size and the "smoothness" of the brush. The smoothness of the brush is still far from perfect, but it somehow works.

A lot still needs to be done before I can consider it the tool for the first release, but it's slowly getting there.

I also need to think about how I want to save the images: either in the database, with the advantage of having everything in one place (for backups, for example), or keeping the images out of the DB, or having the images in the DB with a static copy outside for speed.

I also wonder whether I want to implement layers inside my tile maker or not. Layers are not so hard to implement, and would have the advantage of offering per-layer effects; however, saving the image would then require keeping the layer info somehow.

Tuesday, March 1, 2016

When bugs are not bugs

Being a dev-op is by no means an easy task, and while you do user support you can't do development at the same time. Of course somebody needs to support users, and of course a developer should know his/her product. However, that doesn't mean it's the best choice for the company to have "one man shows".

Yesterday somebody called to report that he wasn't able to do some action. Sadly, the action is not a simple "click and see what happens", so reproducing it and checking what was going on took me more than an hour. In the end, it was the data itself that (correctly) prevented the software from doing the action. I had to call the user back and say: sorry, but it's the correct behavior, and explain why. We then had a 20 minute discussion about why it's like that and why it's not a good idea to change it.

That just stacks up the stress, takes you away from your main job (which in my case is development), and overall frustrates me.

I wish companies would understand that dev-ops should not be the solution.