I will speak at JavaZone 2015

I’m excited to announce that I will be speaking at JavaZone 2015! If you did not know already, JavaZone is the largest IT conference in Norway. Up to 2500 software engineers gather each year in Oslo to attend 2 days packed with presentations, lightning talks and workshops!

I will be doing a one-hour presentation on the upcoming Angular 2.0: “Angular 2 + TypeScript = true. Let’s Play!”. Make sure to book your ticket today and I will see you in September! 🙂

Updating $scope Data in Asynchronous Functions

One of the many lovely things about Angular is the fluid two-way data binding that we get with the framework. Although not entirely perfect when it comes down to performance, it saves us quite a lot of time and effort when writing web applications.

When it comes to asynchronicity, however, it should be noted that changes to your $scope data do not propagate to the view implicitly (!). In other words, if you change a $scope variable from an asynchronous context, the change is not reflected in the view. Let’s look at an example. Consider the following controller that uses JavaScript’s asynchronous setInterval function to update a counter every second:

function Ctrl ($scope) {
    $scope.counter = 0;
    setInterval(function () {
        $scope.counter++;
    }, 1000);
}

This code looks pretty good and dandy, right? Unfortunately, no. Although the counter does get updated every second, the change never propagates to the view. There are a couple of ways to solve this issue.
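To build intuition for why this happens, here is a minimal, self-contained sketch of Angular-style dirty checking (illustrative only, not Angular’s actual implementation): a watcher only notices a change when a digest runs, and $apply is essentially “run this function, then trigger a digest”.

```javascript
// Minimal sketch of dirty checking: watchers only fire when a digest
// is explicitly triggered.
function Scope() {
  this.$$watchers = [];
}
Scope.prototype.$watch = function (getter, listener) {
  this.$$watchers.push({ getter: getter, listener: listener, last: undefined });
};
Scope.prototype.$digest = function () {
  var scope = this;
  this.$$watchers.forEach(function (w) {
    var current = w.getter(scope);
    if (current !== w.last) {
      w.listener(current, w.last);
      w.last = current;
    }
  });
};
Scope.prototype.$apply = function (fn) {
  fn();           // run the change...
  this.$digest(); // ...then trigger a digest so watchers see it
};

var scope = new Scope();
var viewValue = null;     // stands in for what the "view" displays
scope.counter = 0;
scope.$watch(function (s) { return s.counter; },
             function (value) { viewValue = value; });
scope.$digest();          // initial digest: viewValue becomes 0

scope.counter++;          // plain change, no digest: the "view" stays stale at 0

scope.$apply(function () { scope.counter++; }); // digest runs, viewValue becomes 2
```

Angular triggers digests automatically for events it knows about (ng-click, $http and so on); a bare setInterval callback lives outside that world, which is why the counter change goes unnoticed.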

Invoking $apply Manually

The simplest and most straightforward way is to invoke $apply manually in the asynchronous context. Consider the following change to our initial example:

function Ctrl ($scope) {
    $scope.counter = 0;
    setInterval(function () {
        $scope.$apply(function () {
            $scope.counter++;
        });
    }, 1000);
}

This change will force the counter updates through to the view.

Using Angular’s Asynchronous Services

This approach depends on your use case. For instance, you can use the services $interval or $timeout, which behind the scenes invoke $apply for you – relieving you from doing so manually. Considering our initial example, we can inject and use $interval in our controller instead:

function Ctrl ($scope, $interval) {
    $scope.counter = 0;

    $interval(function () {
        $scope.counter++;
    }, 1000);
}

Like the previous approach, this propagates the counter updates to the view. I recommend using Angular’s services wherever possible and appropriate to the case at hand. Just keep in mind that such services invoke $apply for you behind the scenes.
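For the curious, a service like $interval can be thought of as a thin wrapper that funnels every tick through $apply. The sketch below is a simplification of the idea, not Angular’s real source; the fake scope and immediate timer exist only to make the example self-contained:

```javascript
// Sketch of the idea behind $interval: wrap a timer so that every tick
// runs inside $apply. The timer function is injectable here so the
// example can run without waiting on real timeouts.
function makeInterval(scope, timer) {
  timer = timer || setInterval;
  return function (fn, delay) {
    return timer(function () {
      scope.$apply(fn); // run the callback, then propagate changes to the view
    }, delay);
  };
}

// Illustration with a fake scope and an immediate "timer":
var applied = 0;
var ticks = 0;
var fakeScope = { $apply: function (fn) { fn(); applied++; } };
var interval = makeInterval(fakeScope, function (fn) { fn(); });
interval(function () { ticks++; }, 1000);
// applied === 1 and ticks === 1: the callback ran inside $apply
```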

Hope you enjoyed this Sunday’s little reading! 🙂

Proper Dependency Injection in Your AngularJS TypeScript Apps

I’ve witnessed quite a lot of legacy AngularJS TypeScript code (yes, including my very own) where dependency injection is done in an impractical way. The most common way of doing dependency injection is by manually injecting the dependencies while loading your AngularJS components and their accompanying TypeScript classes. What this basically means is that, for every dependency you inject, you have to write it three times (!). Not only that, all three occurrences have to match exactly for the injection (which relies on string matching) to work. Consider the following example:

angular.module("myApp", []).controller("MyController", ["$scope", "MyService", ($scope, MyService)
    => new Application.Controllers.MyController($scope, MyService)]);

module Application.Controllers {
    import Services = Application.Services;

    export class MyController {
        scope: any;
        myService: Services.IMyService;

        constructor($scope: ng.IScope, myService: Services.IMyService) {
            this.scope = $scope;
            this.myService = myService;
        }
    }
}

MyController is a class that takes in two dependencies – $scope and MyService. However, in the first code block, each dependency name is written three times: in the string array, in the arrow function parameters and in the constructor call. Not only does this result in a maintenance hell, it greatly affects the readability of the code.

So, how do we solve this issue? Simple: we use AngularJS’ $inject property to do the injection in our TypeScript class. In MyController, we statically use $inject and define the dependencies like so:

module Application.Controllers {
    import Services = Application.Services;

    export class MyController {
        scope: any;
        myService: Services.IMyService;

        static $inject = ["$scope", "MyService"];

        constructor($scope: ng.IScope, myService: Services.IMyService) {
            this.scope = $scope;
            this.myService = myService;
        }
    }
}

And then simply change the wiring like so:

angular.module("myApp", []).controller("MyController", Application.Controllers.MyController);

Notice how elegant the wiring looks? There is no explicit injection to worry about; all we need to do is inspect the class itself to find its dependencies.
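The same $inject pattern works in plain JavaScript as well, which is handy for the parts of a code base that have not been migrated to TypeScript yet. A small sketch (the names mirror the example above):

```javascript
// $inject on a plain JavaScript constructor function. Angular reads this
// array and string-matches each entry to a registered component.
function MyController($scope, MyService) {
  this.scope = $scope;
  this.myService = MyService;
}
MyController.$inject = ["$scope", "MyService"];

// The wiring stays just as clean:
// angular.module("myApp", []).controller("MyController", MyController);
```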

I will speak at JavaZone 2014

I’m very happy and excited to announce that I will be speaking at JavaZone 2014! If you did not know already, JavaZone is the largest IT conference in Norway. Up to 2500 software engineers and developers gather each year in Oslo during September to attend 3 days packed with presentations, lightning talks and workshops!

I will be doing a 60-minute presentation on the topic “TDD with AngularJS and TypeScript”. Make sure to book your ticket today and I will see you in September! 🙂

I will speak at the Trondheim Developer Conference

I’m thrilled to announce that I will be speaking at the Trondheim Developer Conference in October this year! If you didn’t know already, this is an awesome conference arranged once a year by technology user groups in the Trondheim area, Norway. Among the user groups behind the conference are the Norwegian .NET User Group, JavaBin Trondheim, Trondheim XP & Agile Meetup and many more!

The conference kicks off on October 27th, and the topic of my talk is “TDD with AngularJS and TypeScript”. Make sure to book your tickets today, and I will see you in October!


Setting up a Node Server and Securing it with SSL on RaspberryPi

RaspberryPis are awesome little things that you can do a lot of cool stuff with. Basically, a RaspberryPi (RasPi) is a 35 dollar mini computer. It has LAN, USB, audio, RCA, HDMI and power ports as well as an SD card slot. The A model has a 700 MHz processor and 256 MB of RAM. An SD card is flashed with an operating system and used on the device; there are lots of operating systems to choose from, and I personally prefer Debian Wheezy. I purchased my RasPi some time last year and have had quite some fun with it since. When I first got it, I set up XBMC on it and used it as a humble HTPC – or at least as a video media outlet in my apartment.


Recently, I decided to use my RasPi for something far more awesome: setting up a secure, publicly accessible node server on it and having it control an AR Drone in my apartment through REST calls (I don’t have any drone just yet though). In the following tutorial, I will show you how to set up a secure node server on your own RasPi and make it accessible from anywhere in the world.

First things first: Install Node

The first step is to install node on your RasPi. Power up your RasPi, log in as root user and then simply type the following commands:

wget http://nodejs.org/dist/v0.10.2/node-v0.10.2-linux-arm-pi.tar.gz
tar -xvzf node-v0.10.2-linux-arm-pi.tar.gz

This will download node and unzip it for you. If you then type ls you’ll see that node-v0.10.2-linux-arm-pi was unzipped successfully in your current folder. The next step is to add node to your global path so you can start it from anywhere. Assuming you extracted it in your home folder, type the following:

echo 'export PATH=$PATH:$HOME/node-v0.10.2-linux-arm-pi/bin' >> ~/.bashrc
source ~/.bashrc

The node packet manager (npm) comes bundled with node; we will use it to install external modules. Node and npm should now be accessible from any location on your RasPi.

Install External Node Modules

You may need to install node’s native build tool for future use. To do that, simply type:

npm install -g node-gyp

The node-gyp build tool is now installed and located in node’s global modules folder (in our case, $HOME/node-v0.10.2-linux-arm-pi/lib/node_modules). You will find all future globally installed modules in this folder. Next up is to install express (we will use this module to create our secure node server later on):

npm install -g express

It may be a good idea to install the socket.io module as well for future use:

npm install -g socket.io

We now have all the external modules that are needed.

Change from Dynamic to Static Local IP for the RaspberryPi

Since the aim is to make the RasPi accessible from the outside world, you need to have a static IP for it. Chances are you are using a router, and thus a dynamic IP is provided to your RasPi by the router. In this case, you need to make the dynamic IP static. You can start by typing the following command:

ifconfig

Note the fields inet addr (your current IP), Bcast and Mask in the output for eth0.

Now type the following command:

route -nee

Note the field Gateway, which is the IP of your router.


Now type the following command:

nano /etc/network/interfaces

Change the line iface eth0 inet dhcp to iface eth0 inet static. Now edit the block to include your static address information, using the values you noted earlier:

iface eth0 inet static
address <your chosen static IP>
netmask <your Mask value>
gateway <your Gateway value>

Save changes made to this file and then exit (CTRL+X, Y and Enter). Now reboot your RasPi in order for the changes to take effect:

reboot

Log in as root user again. That’s it, your RasPi now has a static local IP. The next step is to port forward your global IP to this IP.

Port Forward Global IP to the RaspberryPi Static Local IP

In order to access the RasPi from the outside, we need to port forward our global IP to the RasPi’s static local IP. I prefer to use the router UI for this. I use a Linksys WRT54G router, and here is a screenshot of how I do the port forwarding on it:


Once you’ve done the port forwarding, the RasPi should be accessible from the outside.

Generate a Certificate Signing Request (CSR) for SSL

We need to generate a Certificate Signing Request (CSR) to sign the SSL certificate that we will use for the RasPi node server. We will use *cough* OpenSSL *cough* to generate the CSR. If OpenSSL is not already installed on your RasPi, simply install it by typing the following command:

apt-get install openssl

Now generate the CSR like so:

openssl req -nodes -newkey rsa:2048 -keyout private.key -out server.csr

Note: private.key is the private key, which is for your eyes only; make sure to move it to a secure location. We will use this key later along with the SSL certificate to create the node server.

You will now get some questions that you need to answer. Here is an example:

Country Name (2 letter code) [AU]: NO
State or Province Name (full name) [Some-State]: Oslo
Locality Name (eg, city) []: Oslo
Organization Name (eg, company) [Internet Widgits Pty Ltd]: Salih AS
Organizational Unit Name (eg, section) []: IT
Common Name (eg, YOUR name) []: mysubdomain.mydomain.com
Email Address []:
Please enter the following 'extra' attributes to be sent with your certificate request

A challenge password []: 
An optional company name []:

Note: the email address, challenge password and optional company name can be left blank. The most important field here is Common Name: you need to type in the domain that you want to use for your RasPi. Setting up SSL requires a domain (unless you’re going for a mighty expensive and special SSL certificate), so you need to have one (in my case, I use https://raspi.sirars.com). You can purchase a domain at a good price from providers such as GoDaddy.

Generate the SSL Certificate

We will now generate an SSL certificate using the CSR we got in the previous step. An SSL certificate costs money. I personally prefer SSLs.com for their cheap price range, fast delivery and good documentation, and I recommend that you order your SSL certificate from them. Once you’ve done that, open server.csr by typing the following command:

nano server.csr

Copy the entire content. Now follow the certificate activation steps at SSLs.com thoroughly. When you’re done, you’ll get an e-mail from SSLs.com containing your SSL certificate (which has the name format mysubdomain_mydomain_com.crt).

Point Your Domain to the Global IP of the RaspberryPi

The next step is to point your domain to the global IP. Say you own the domain http://raspi.domain.com; in that case, you need to point it to the global IP of your RasPi. I purchased my domain from GoDaddy, and here is a screenshot showing how I pointed my subdomain to the global IP address of my RasPi using GoDaddy’s domain management panel:


@ points the whole domain to an IP, while the subdomain raspi points to a specific IP. For security reasons, my IP addresses are censored in the screenshot. Once you’ve pointed either your domain or subdomain to the global IP of the RasPi, it will take around 10 hours or so for the change to take effect, depending on the domain provider.

Create Node Server Using the SSL Certificate

The last step is to create the node server itself with the generated SSL certificate. Start editing the JavaScript file by typing:

nano node_server.js

Now create the secure server by inserting the following JavaScript code (make sure to use absolute file paths):

var fs = require('fs');
var https = require('https');

var privateKey = fs.readFileSync('private.key', 'utf8');
var certificate = fs.readFileSync('mysubdomain_mydomain_com.crt', 'utf8');
var credentials = {key: privateKey, cert: certificate};

var express = require('%HOME%/node-v0.10.2-linux-arm-pi/lib/node_modules/express');
var app = express();

var ip = "";             // your static local IP
var port = 443;          // HTTPS

app.get('/', function (request, response) {
    response.send("Welcome to my RaspberryPi Node Server!");
});

var httpsServer = https.createServer(credentials, app);
httpsServer.listen(port, ip);
console.log('Node express server started on port %s', port);

This code simply creates the node server for you using the SSL certificate that was generated earlier. Note the usage of the static local IP. We also created a simple GET handler on “/” that returns a welcome message, so when a user types your domain name in the browser (which basically performs a GET request over HTTPS) he/she will be greeted by this message.

To start the node server, simply type the command:

node node_server.js

Test out your secure node server by visiting your domain using HTTPS; it should return the welcome message. You can find my own server here.

Create a Script to Automatically Start Node Server on Boot (optional)

This step is optional but highly recommended. Once you’ve finished setting up your secure node server, the next thing you want to do is create a simple script that automatically starts the server every time your RasPi is rebooted. Start editing the script file by typing the command:

nano startup_script.sh

Insert the following lines in the script:

#!/bin/bash
node /home/pi/node_server.js

Save the file and then exit. Make the file executable by typing the following command:

chmod +x startup_script.sh

Now, you need to move this script to the /etc/init.d folder and register it to run at boot:

mv startup_script.sh /etc/init.d
update-rc.d startup_script.sh defaults

Reboot your RasPi, and the node server should now start automatically.

Hope you enjoyed this tutorial! I will do a second part soon explaining how this secure RasPi node server will control an AR Drone through REST calls. 🙂

Algorithm Computation in the Cloud: Microsoft Azure Web Roles

Worker and Web Roles are two of the great features that Microsoft Azure has to offer. These two features are designed to do computation for you in the cloud. A worker role is basically a virtual machine that can be used as a back-end application server in the cloud. Similarly, a web role is a virtual machine hosted in the cloud, but the difference is that this one is used as a front-end server that requires Internet Information Services (IIS). So you can use a web role if you want to have an interface exposed to the client – for example an ASP.NET web site – that makes interaction from the outside possible.

In the following tutorial, I will focus on web roles. I will show you how to create a web role and host it on an Azure web site. The web role’s task will be to launch a console application – a path finder algorithm that I wrote in C++ – that reads input from the client, computes and then returns results. The reason for choosing a web role for this is to make communication possible through REST, so the client can use a web site or simply do a GET request to fire up the console application in the cloud and get back the results as JSON.

Creating the Azure Web Role

The first step is to download the Microsoft Azure SDK. Since I’m a .NET developer, I downloaded the Visual Studio 2013 version. The SDK includes an Azure emulator that we will be using locally. The web role has to be hosted somewhere; we will choose a cloud service for that. So, start up Visual Studio and create a Windows Azure Cloud Service project:


Choose ASP.NET Web Role:


Choosing the front-end stack is up to you; I chose an MVC Web API without authentication:


Go to the WebRole class in the WebRole project and set a breakpoint inside the OnStart() method. Now build and run your cloud service; you will see that the Azure Compute Emulator starts up and the breakpoint is hit:


Press F5 to continue. Your web role web site is now up and running. Right click the Azure Compute Emulator icon in the task bar and choose “Show Compute Emulator UI”. A new window shows up; click on your web role in the left column:


This shows you the status of the web role. A web role inherits from the abstract class RoleEntryPoint, which has three virtual methods: OnStart(), Run() and OnStop(). These methods are called as their names suggest and can be overridden. We already overrode the OnStart() method, as we saw earlier. Now, the next step is to launch the console application as a process from the web role.

Starting a Console Application Process from the Azure Web Role

Delete the default override of OnStart() in the WebRole class. We want to call a custom method from our web role independently. Create an interface IWebRole that looks like this:

public interface IWebRole
{
    string RunInternalProcess(string stringParams);
}

Make WebRole implement this interface. RunInternalProcess(string stringParams) is a custom method that we will call from the client. The method launches a console application process and returns the results back to the client as JSON. We want the process to do the operation asynchronously. Here is part of the implementation:

public string RunInternalProcess(string stringParams)
{
    var path = HttpContext.Current.Server.MapPath("..\\TSP_Genetic_Algorithm.exe");
    var result = RunProcessAndGetOutputAsync(stringParams, path).Result;
    return result.Replace(" ", "\n");
}

private static async Task<string> RunProcessAndGetOutputAsync(string stringParams, string path)
{
    return await RunProcessAndGetOutput(stringParams, path);
}

private static Task<string> RunProcessAndGetOutput(string stringParams, string path)
{
    var process = CreateProcess(stringParams, path);
    var result = process.StandardOutput.ReadToEnd();
    var taskCompletionSource = CreateTaskCompletionSourceAndSet(result);
    return taskCompletionSource.Task;
}

As you can see, the method starts a process called TSP_Genetic_Algorithm.exe, which is included in the project. It’s important to set the “Copy to Output Directory” property of this executable to “Copy always”, so that it’s always copied to the output directory. You can do this by right clicking the executable and choosing “Properties”:


The next step is to make the client call up the web role through an HTTP GET request.

Calling the Azure Web Role from the Client

We need to make it possible for the client to call the web role. We will do this by creating an HttpGet ActionResult. Go to HomeController and inject the IWebRole interface there:

private readonly IWebRole _webRole;

public HomeController(IWebRole webRole)
{
    _webRole = webRole;
}

Create an HttpGet ActionResult; it can look like this:

public ActionResult Solve(string[] c)
{
    var coordinates = c.Aggregate(string.Empty, (current, t) => current + (t + " "));
    _results = _webRole.RunInternalProcess(coordinates);
    return Json(new { results = _results }, JsonRequestBehavior.AllowGet);
}

This GET request takes in an array of string coordinates and calls the web role with them; the web role in turn launches the process with the input and finally returns the results back to the client as JSON. Beautiful, isn’t it? Build and run your cloud service. Now call the Solve action in the browser address field, passing the coordinates as query parameters (the emulator address is censored here):

…,300&c=400,300&c=400,400&c=400,200&c=500,200&c=200,400&c=400,300
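If you want to build that query string programmatically on the client, a small helper does the trick. The sketch below is illustrative: the host and port are placeholders, and the /Home/Solve path assumes the default MVC route for the Solve action on HomeController:

```javascript
// Build the Solve URL from an array of [x, y] coordinate pairs.
// Each pair becomes one repeated "c" query parameter.
function buildSolveUrl(host, coordinates) {
  var query = coordinates
    .map(function (pair) { return 'c=' + pair.join(','); })
    .join('&');
  return host + '/Home/Solve?' + query;
}

var url = buildSolveUrl('http://localhost:81', [[200, 300], [400, 300], [400, 400]]);
// url === 'http://localhost:81/Home/Solve?c=200,300&c=400,300&c=400,400'
```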

And here are the results:


Publishing the Azure Cloud Service

The final step is to publish our Azure cloud service so it goes online. This process is pretty straightforward, but it assumes that you have a Microsoft Azure subscription and a target profile. Once you have registered a subscription, right click the cloud service project and choose “Publish…”:


Go through the wizard to create a target profile:


Enable Remote Desktop if you want remote access to your virtual machine in Azure; this is pretty handy. Once done, click Publish in the last dialog:


The publishing process will start in Visual Studio:


And then complete:


That’s it! Your Azure web role web site is now online, and you can now do the GET request through the web:


You can go to the Azure Portal to view statistics and maintain your web role there:


Hope you enjoyed this tutorial!

I will speak at the Norwegian Developers Conference in Oslo

It’s finally official, I will be speaking at the Norwegian Developers Conference in Oslo this year! If you didn’t know already, NDC is a famous worldwide conference that takes place in June and lasts for five whole days (two of them pre-conference workshops). This means a great deal to me, since it is the first time that I will do a talk there. NDC is my favorite conference and I always wished this day would come, so I am very excited! Best of all, I have been granted two full hours at the conference. I will be doing a two-part workshop about test driven development in AngularJS and TypeScript. So make sure to book your conference tickets today and I will see you in June! 🙂

ReSharper Downfalls and Anti-Patterns

As part of a dedicated refactoring team in a big customer project, I get to use JetBrains ReSharper quite heavily on a daily basis. If you didn’t know already, ReSharper is the best refactoring tool made for Visual Studio. Not only does it increase programming efficiency by multitudes, it also changes the way you think as a programmer, especially when you’ve just started out in your programming career. I’ve been using ReSharper professionally for at least three years, and today I can’t even imagine how I survived as a (.NET) programmer without it.

Ironically, it wasn’t until I joined a professional refactoring team that I discovered the downfalls and anti-patterns of using ReSharper. Without doubt, the tool is continuously being developed by a brilliant team, so the downfalls that I see today may very well be addressed in the next versions of the tool. There are also some features that ReSharper simply lacks today that I hope will be added in the future. I will explain what I mean in the next sections.

Helper Methods are Extracted as Static by Default

If you press Ctrl+R, M you’ll get the option to extract a method. A local helper method is by default extracted as static:


You get to choose whether to make it static, but it is static by default. You’ll find that this is extremely annoying, as chances are you’ll have to insert non-static content into the method at a later time. So, the fix? Remove the static keyword manually (!). Note, in the animation below, the lack of IntelliSense as I start typing the _webRole object:


This may not be a big deal in a small project, but it quickly becomes cumbersome in a big legacy application.

Lack of Listing Multiple Object Properties Feature

Very often you will create objects and want to set their properties. There is no way to list all the properties of the created object, leaving you to manually type and set each property:


Wouldn’t it be nice to have ReSharper list all the properties for us?

Lack of “Find Usages in Multiple Solutions” Feature

One of the most fundamental things when refactoring a huge application with many solutions is the ability to locate the usages of a component across those solutions. If you right click on a class or method and choose “Find Usages Advanced…”:


ReSharper will show you a dialog where you can choose where to search:


What would be nice is an option “Solutions…” where you could specify solutions, or simply choose all of them. This would make our lives easier and save us from having to open each solution manually and perform the search per solution.

Refactoring Overkill

This one falls under anti-patterns. Every now and then, ReSharper will suggest refactorings that actually hurt the readability of your code. Inverted ifs and complicated LINQ expressions fall under this category. How many times has ReSharper asked you to invert your if statement when it looked perfectly readable? Or how many refactorings has ReSharper suggested on one single LINQ expression? Chances are, many times. There is no need to invert your if, if you and your code reviewer agree that it looks fine. Although you can probably turn off this type of suggestion, ReSharper should be intelligent enough not to ask in the first place.

Complicated LINQ expressions, where do I start? Once you write a LINQ expression that does something, ReSharper will often suggest writing it differently. As the expression gets more complicated, so do ReSharper’s suggestions. It will ask you to keep refactoring, sometimes up to 3 times (!) on a single LINQ expression. The end result is a hideous piece of code that takes time to understand. So again, ReSharper should be intelligent enough to take readability into account here.

Different Key Binding Schemes?!

Something that has annoyed me recently is that the key binding schemes of ReSharper seem to vary from one environment to another. Ever since I installed ReSharper 8.1 (which I upgraded to 8.2 today, by the way), the scheme on my development machine has changed, and I have had to memorize different key bindings. This is a hassle when I pair program with another developer on his machine, as he has the older scheme. Ultimately I memorized both schemes in order to work efficiently. You would think that simply applying the Visual Studio scheme would make things consistent on all developer machines:


In closing, ReSharper is a wonderful refactoring tool that I will be using for years and years to come. That doesn’t by any means make it perfect, and there are still features that I find lacking. Seeing how the ReSharper team is doing an incredible job developing the product (ReSharper 8.2 was just released), I have no doubt they’ll look into this and make our favorite tool even better. 🙂