Introduction to Docker and Dockerizing a Simple Node.js Application

If you asked me, “What was the most important software technology to emerge in the past five years?”, my answer would be: Docker! In my opinion, it has had the biggest impact on how we perform our day-to-day tasks. It is reshaping the way applications are designed, built, tested and deployed these days. When Docker gained popularity, it started a huge wave of innovation that was not previously possible. For me personally, it completely changed the way I think about software engineering. So why is it such a big deal, you may ask?

Docker is a lightweight virtualisation and packaging platform originally based on LXC – Linux containers. As the name suggests, this technology is only available on the Linux OS family, so to run it on Windows or Mac you will have to run Linux in some sort of virtual machine.

Installing Docker on Linux is as easy as running sudo yum install docker or sudo apt-get install docker, depending on your package manager. For Mac and Windows, the Docker team prepared nice installation packages that include all the necessities. Just follow the installation steps and you will get Docker running in no time.

What are the features that make it so revolutionary? First of all, Docker is a lightweight virtualisation platform that allows running processes in isolation. Unlike a normal VM, it doesn’t run an entire virtualised OS but only isolates the processes running in it from the underlying host OS. The most important benefits are startup time and performance overhead.

It takes just a fraction of a second to spin up a new container in Docker, and there is almost no difference in performance whether you run a process in a container or on the host machine. In most cases a Docker container has only one process running – the one that you want to isolate. If your stack consists of several components, like an Apache web server and a MySQL server, for example, they would normally run in separate containers. Docker allows linking containers together to simplify inter-container communication.

Another killer feature of Docker is that it allows building and packaging software modules together with all their dependencies, starting from the OS level – building Docker images. Essentially, it creates an isolated and self-contained mini-OS including all the libraries that your project uses. What does that mean? It eliminates the “But it works on my machine” issue once and for all! Once a Docker image is built, it contains all the binaries that your project needs, and it doesn’t matter where you run that image (a running image is called a container) – on your local development machine, on some cloud cluster or somewhere else – it will behave the same way.

These great capabilities of Docker enabled the creation of a GitHub for Docker – Docker Hub. It’s a place where anyone can upload their Docker images and make them publicly available – for free! Today, Docker Hub hosts thousands of production-ready images for virtually any platform or service you can think of. Want to try some new DB or web server? No need for installation and configuration anymore. Just go to Docker Hub and grab the official mysql or apache Docker image – any version you want. Skip the configuration process and save precious time! Don’t want mysql any more? Remove the mysql Docker container and image from your machine and have no garbage left! Want to have multiple versions installed at the same time? Easy!

Pulling an image and running Docker containers
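As a sketch of what that workflow looks like on the command line (the mysql tag, container name and password here are illustrative choices, not taken from the original screenshots):

```shell
# Download the official MySQL image from Docker Hub
docker pull mysql:5.7

# Start a container from it; MYSQL_ROOT_PASSWORD is required by this image
docker run -d --name my-mysql -e MYSQL_ROOT_PASSWORD=secret mysql:5.7

# List the running containers
docker ps

# Done experimenting? Remove the container and the image – no garbage left
docker rm -f my-mysql
docker rmi mysql:5.7
```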

How about building a Docker image for your own project?

When an image is built, changes are applied to its content layer by layer. This way you can start with some fairly basic image and then add your own artefacts on top. For example, if I wanted to build a Node.js application, I would grab the official image for Node.js from Docker Hub,

Docker image structure for “node:5”

then add my code and build my Docker image. I will use a simple Node.js web service example – web-service-dockerized – to illustrate two basic approaches to building Docker images.

Docker image structure for “web-service-dockerized”

Building a Docker image interactively

This approach is more suitable when you want to experiment with a new image, when you’re not sure what commands you will have to run to achieve the desired result, or, generally speaking, when you want to build an image manually.

First, we will create and start a new Docker container (the pull command can be omitted – Docker will automatically pull the image if it doesn’t exist locally).
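A sketch of those commands (the container name my-node is an illustrative choice, not from the original):

```shell
# Optional: pre-download the base image from Docker Hub
docker pull node:5

# Start an interactive container with a bash shell
docker run -it --name my-node node:5 bash
```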

Once Docker is done pulling the new image (it may take some time, but only if the image doesn’t exist locally), we will see a bash command prompt. Now, let’s create our Node.js project inside the container.
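Something along these lines, run inside the container (the /app directory name is an assumption):

```shell
# Inside the container: create and enter a project directory
mkdir /app && cd /app

# Generate a default package.json without interactive prompts
npm init -y
```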

Then, install the express npm module. This is probably the most popular web framework for Node.js.
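Still inside the container:

```shell
# Install express and record it as a dependency in package.json
npm install express --save
```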

And finally, create our script:
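Trying to open the file in vim (the file name app.js is an illustrative choice):

```shell
vim app.js
# bash: vim: command not found
```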

Oops. Looks like vim doesn’t exist in our Docker image. That’s OK, we can fix it.
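The node:5 image is Debian-based, so apt-get is available inside the container:

```shell
# Install vim inside the container
apt-get update && apt-get install -y vim

# Try again (app.js is the illustrative file name from before)
vim app.js
```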

Then paste this simple code snippet, save the changes and exit the editor.
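The original snippet is not reproduced here; a minimal express service consistent with the article might look like this (the port number and response text are assumptions):

```javascript
// app.js – a minimal express web service
var express = require('express');
var app = express();

// Respond to GET / with a plain-text greeting
app.get('/', function (req, res) {
  res.send('Hello from web-service-dockerized!');
});

// Listen on port 3000 on all interfaces
app.listen(3000, function () {
  console.log('Listening on port 3000');
});
```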

Now we can start the service and test it using curl:
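For example (assuming the script from the snippet above listens on port 3000):

```shell
# Start the service in the background
node app.js &

# Call it from inside the same container
curl http://localhost:3000
```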

You should see the service’s response in the curl output.

At the moment we have on our host:

  • a basic node:5 image that we downloaded from Docker Hub
  • a running container that has everything node:5 has, plus the things we just installed: vim and our web-service-dockerized project with all its npm dependencies

The problem is that containers are volatile and not supposed to be reused. They are similar to class instances, while images are similar to classes. Therefore, if we want to reuse our container in the future, we had better turn it into an image.

First, we need to exit our running Docker container simply by executing the exit command (that will terminate the current bash session and, consequently, stop the container). When the container is stopped, all the changes that we made inside it still exist, but no processes are running.

To create a new image out of a container:
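A sketch of that step (my-node stands in for whatever name or ID your container got):

```shell
# Find the stopped container's name or ID
docker ps -a

# Commit it as a new image named web-service-dockerized
docker commit my-node web-service-dockerized
```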

And verify:
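```shell
# The new image should now appear in the local image list
docker images
```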

As you can see, we’ve just created an image with the same name – web-service-dockerized. At this point we don’t even need our container any more, so we will delete it.
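```shell
# Remove the stopped container (my-node is the illustrative name from before)
docker rm my-node
```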

And our final goal is to start the dockerized web application in daemon mode from the host machine. Note that we can configure which port the service should listen on and select the container networking mode – in this example we will use host networking mode to simplify testing. In this mode the Docker container’s network interface is not virtualised, so we can access the service running inside the container using the IP address of the host machine.
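A sketch of that run command (the container name, the /app/app.js path and port 3000 are the assumptions used throughout this walkthrough):

```shell
# Run the committed image in the background (-d) with host networking
docker run -d --net=host --name web-service web-service-dockerized node /app/app.js

# The service is now reachable on the host's own address
curl http://localhost:3000
```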

Building a Docker image automatically

Building a Docker image manually is easy – you can see intermediate results and experiment – but it’s not quite suitable for automated builds and Continuous Integration. For this purpose, we will use the docker build command and a Dockerfile descriptor.

We will start from scratch again. Note that this time we start on the host machine, with no Docker containers running at the moment.
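The same project setup as before, this time on the host (directory name and editor are illustrative):

```shell
# On the host: create the project and install express
mkdir web-service-dockerized && cd web-service-dockerized
npm init -y
npm install express --save

# Create the script in our editor of choice
vim app.js
```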

Then paste the same code snippet as before, save the changes and exit the editor.

Note that this time we didn’t have to install vim, as we are running it on our host machine. Now we will create our Dockerfile with the following content:
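The original Dockerfile is not reproduced here; a minimal version consistent with the steps above might look like this (the /app path and the CMD are assumptions):

```dockerfile
# Start from the official Node.js base image
FROM node:5

# Copy the project into the image and set the working directory
COPY . /app
WORKDIR /app

# Install dependencies inside the image
RUN npm install

# Start the service when a container is launched from this image
CMD ["node", "app.js"]
```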

After saving the changes, we can build our image:
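```shell
# Build an image tagged web-service-dockerized from the Dockerfile
# in the current directory
docker build -t web-service-dockerized .
```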

When the build finishes, we can start a new Docker container:
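No command needs to be passed this time, since the Dockerfile’s CMD starts the service (the container name web-service is an illustrative choice):

```shell
# Start the container in the background with all default parameters
docker run -d --name web-service web-service-dockerized
```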

Note that this time we used all default parameters, so the container is running in the default networking mode, which is a bridged network. This means that we will need to find the IP address of our container. We will use the docker inspect command. It returns detailed information about running containers in JSON format. This time we only need the NetworkSettings.IPAddress element.
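The --format flag lets docker inspect print just that one element (web-service is the illustrative container name from above):

```shell
# Extract only the container's IP address from the inspect output
docker inspect --format '{{ .NetworkSettings.IPAddress }}' web-service
```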


And finally, we can test our service:
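For example (the bridged IP address and port here are placeholders – substitute whatever docker inspect returned):

```shell
# Query the service on the container's bridged-network address
curl http://172.17.0.2:3000
```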

At this point we have three Docker images and one running container on our host machine.

To remove images (in case you don’t need them any more), use docker rmi.

To remove a container, first stop it with docker kill and then remove it with docker rm.
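The full cleanup might look like this (using the illustrative names from this walkthrough):

```shell
# Stop the running container, then remove it and the images
docker kill web-service
docker rm web-service
docker rmi web-service-dockerized node:5
```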

