Monthly Archives: September 2016

Docker Swarm Mode with visualizer

Introduction

Recently I came across this project. It looked quite cool and it's something I had wanted for quite a while. I like Docker and Swarm, but things can get quite overwhelming at times, certainly when you run containers at scale. So a visualisation tool is more than welcome.

There is one important caveat here; the README file mentions the following:

This only works with Docker Swarm Mode in Docker Engine 1.12.0 and later. It does not work with the separate Docker Swarm project

So to use the visualizer, your cluster needs to be set up in Swarm Mode, following this tutorial. If you created the cluster with the separate Swarm project instead (for instance via this post), the visualizer will not work: the UI will show, but you won't see any nodes displayed.
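
Note: if you are unsure which flavour you are running, docker info can tell you; an engine in Swarm Mode reports a Swarm section marked 'active'. A quick check (output trimmed):

WAUTERW-M-G007:~ wauterw$ docker info | grep Swarm
Swarm: active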

So in this post, I’m assuming you have followed this post first to create the Swarm cluster.

Run the visualiser

Just to make sure nothing is running, do the following:

WAUTERW-M-G007:~ wauterw$ docker ps -a
CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS              PORTS               NAMES
WAUTERW-M-G007:~ wauterw$

Then follow the instructions in the README file:

WAUTERW-M-G007:~ wauterw$ eval "$(docker-machine env swarm-manager-01)"
WAUTERW-M-G007:~ wauterw$ docker run -it -d -p 8080:8080 -e HOST=192.168.99.103 -v /var/run/docker.sock:/var/run/docker.sock manomarks/visualizer
3604953cfc600611d526fb88eecabe7ee4b8fd351cb8a95cd68c3fdc8506855c

Update June 2017: I noticed that the visualizer repository ‘manomarks/visualizer’ does not exist anymore. You can use the following command instead:

WAUTERW-M-G007:~ wauterw$ docker run -it -d -p 8080:8080 -e HOST=192.168.99.109 -v /var/run/docker.sock:/var/run/docker.sock dockersamples/visualizer

Quickly check again if the container is running:

WAUTERW-M-G007:~ wauterw$ docker ps
CONTAINER ID        IMAGE                  COMMAND             CREATED             STATUS              PORTS                    NAMES
3604953cfc60        manomarks/visualizer   "npm start"         5 minutes ago       Up 5 minutes        0.0.0.0:8080->8080/tcp   determined_newton

Note: if you only see a blue page with the Docker whale, but no nodes, it probably means you used the node name (e.g. swarm-manager-01) instead of the IP address in the HOST variable.
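
To avoid this mistake altogether, you can let docker-machine resolve the IP address for you. A small sketch, assuming the manager is named swarm-manager-01 as in the referenced post:

WAUTERW-M-G007:~ wauterw$ docker run -it -d -p 8080:8080 -e HOST=$(docker-machine ip swarm-manager-01) -v /var/run/docker.sock:/var/run/docker.sock dockersamples/visualizer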

If all goes well, you should see a screen similar to the one below:

swarm-visualiser-1

Running applications

Let’s run some containers on our Swarm cluster now. We start by launching two containers. For that, ensure you are SSH’ed into the Swarm manager.

WAUTERW-M-G007:~ wauterw$ docker-machine ssh swarm-manager-01
                        ##         .
                  ## ## ##        ==
               ## ## ## ## ##    ===
           /"""""""""""""""""\___/ ===
      ~~~ {~~ ~~~~ ~~~ ~~~~ ~~~ ~ /  ===- ~~~
           \______ o           __/
             \    \         __/
              \____\_______/
 _                 _   ____     _            _
| |__   ___   ___ | |_|___ \ __| | ___   ___| | _____ _ __
| '_ \ / _ \ / _ \| __| __) / _` |/ _ \ / __| |/ / _ \ '__|
| |_) | (_) | (_) | |_ / __/ (_| | (_) | (__|   <  __/ |
|_.__/ \___/ \___/ \__|_____\__,_|\___/ \___|_|\_\___|_|
Boot2Docker version 1.12.2, build HEAD : 9d8e41b - Tue Oct 11 23:40:08 UTC 2016
Docker version 1.12.2, build bb80604
docker@swarm-manager-01:~$ docker service create --replicas 1 --name helloworld alpine ping docker.com
83m4jk4h90i2iip77lp8p77v9
docker@swarm-manager-01:~$ docker service create --replicas 1 --name helloworld-1 alpine ping docker.com
3x1l6vsl0sson8874d4o1djlh
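
Besides the visualizer, you can verify the services from the CLI as well; a quick sketch, using the service names from above:

docker@swarm-manager-01:~$ docker service ls
docker@swarm-manager-01:~$ docker service ps helloworld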

You will see them running on the Swarm cluster.

swarm-visualiser-3

When launching an additional one, you can see that the first worker has been selected to run the container:

swarm-visualiser-4

Another way to launch a batch of containers and generate some load on the cluster is to pass a higher replica count to the docker service create command. In the below case, we will launch 10 nginx containers:

WAUTERW-M-G007:~ wauterw$ docker service create --name web --replicas=10 -p 30000:80 nginx
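
Once the service is up, you can also scale it further and check where the individual tasks landed; a quick sketch, reusing the web service created above:

WAUTERW-M-G007:~ wauterw$ docker service scale web=15
WAUTERW-M-G007:~ wauterw$ docker service ps web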

swarm-visualiser-5

After a while, when all containers are started, you will see their status turn green:

swarm-visualiser-6

That's it for now. In a future post, we will create an overlay network on top of our Swarm cluster.

Docker Swarm mode

Introduction

In this post we created a Swarm cluster. However, there Swarm was, so to speak, a separate application. With release 1.12, Docker introduced the concept of Docker Swarm mode. It makes it possible to deploy containers across multiple Docker hosts, using overlay networks for service discovery, and it brings a built-in load balancer for scaling the services. Swarm mode is managed as part of the Docker CLI itself, making it a seamless experience within the Docker ecosystem.
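
To give an idea of that built-in load balancer: once a service publishes a port, the routing mesh makes the service reachable on that port on every node in the cluster, not only on the nodes actually running a task. A hedged sketch, assuming a service published on port 30000 (as in the visualizer post above) and the node names used below:

WAUTERW-M-G007:~ wauterw$ curl http://$(docker-machine ip swarm-manager-01):30000
WAUTERW-M-G007:~ wauterw$ curl http://$(docker-machine ip swarm-node-01):30000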

Create the swarm cluster

Docker Swarm mode uses the concept of manager nodes and worker nodes. In general, to deploy your application to a swarm cluster, you submit a service definition to a manager node, and the manager dispatches the resulting tasks to worker nodes. So we'll start with creating a Swarm manager. In this post, we will use the virtualbox driver to demonstrate the concept.

WAUTERW-M-G007:~ wauterw$ docker-machine create -d virtualbox swarm-manager-01
Running pre-create checks...
Creating machine...
(swarm-manager-01) Copying /Users/wauterw/.docker/machine/cache/boot2docker.iso to /Users/wauterw/.docker/machine/machines/swarm-manager-01/boot2docker.iso...
(swarm-manager-01) Creating VirtualBox VM...
(swarm-manager-01) Creating SSH key...
(swarm-manager-01) Starting the VM...
(swarm-manager-01) Check network to re-create if needed...
(swarm-manager-01) Waiting for an IP...
Waiting for machine to be running, this may take a few minutes...
Detecting operating system of created instance...
Waiting for SSH to be available...
Detecting the provisioner...
Provisioning with boot2docker...
Copying certs to the local machine directory...
Copying certs to the remote machine...
Setting Docker configuration on the remote daemon...
Checking connection to Docker...
Docker is up and running!
To see how to connect your Docker Client to the Docker Engine running on this virtual machine, run: docker-machine env swarm-manager-01

Worker nodes receive and execute tasks dispatched by manager nodes. Note: by default, manager nodes are also worker nodes, but you can configure managers to be manager-only nodes (more on that below). So we continue with adding some worker nodes to our setup.

WAUTERW-M-G007:~ wauterw$ docker-machine create -d virtualbox swarm-node-01
Running pre-create checks...
Creating machine...
(swarm-node-01) Copying /Users/wauterw/.docker/machine/cache/boot2docker.iso to /Users/wauterw/.docker/machine/machines/swarm-node-01/boot2docker.iso...
(swarm-node-01) Creating VirtualBox VM...
(swarm-node-01) Creating SSH key...
(swarm-node-01) Starting the VM...
(swarm-node-01) Check network to re-create if needed...
(swarm-node-01) Waiting for an IP...
Waiting for machine to be running, this may take a few minutes...
Detecting operating system of created instance...
Waiting for SSH to be available...
Detecting the provisioner...
Provisioning with boot2docker...
Copying certs to the local machine directory...
Copying certs to the remote machine...
Setting Docker configuration on the remote daemon...
Checking connection to Docker...
Docker is up and running!
To see how to connect your Docker Client to the Docker Engine running on this virtual machine, run: docker-machine env swarm-node-01

For this example, we will also create a second Swarm worker node, called swarm-node-02:

WAUTERW-M-G007:~ wauterw$ docker-machine create -d virtualbox swarm-node-02
Running pre-create checks...
Creating machine...
(swarm-node-02) Copying /Users/wauterw/.docker/machine/cache/boot2docker.iso to /Users/wauterw/.docker/machine/machines/swarm-node-02/boot2docker.iso...
(swarm-node-02) Creating VirtualBox VM...
(swarm-node-02) Creating SSH key...
(swarm-node-02) Starting the VM...
(swarm-node-02) Check network to re-create if needed...
(swarm-node-02) Waiting for an IP...
Waiting for machine to be running, this may take a few minutes...
Detecting operating system of created instance...
Waiting for SSH to be available...
Detecting the provisioner...
Provisioning with boot2docker...
Copying certs to the local machine directory...
Copying certs to the remote machine...
Setting Docker configuration on the remote daemon...
Checking connection to Docker...
Docker is up and running!
To see how to connect your Docker Client to the Docker Engine running on this virtual machine, run: docker-machine env swarm-node-02

In the end, you will see the following resources in VirtualBox:

virtualbox

Install Swarm manager

In the previous section, we merely created the hosts that will run the Swarm manager and worker nodes. In this section, we will continue with initializing the manager.

To do so, we first need to find the IP address of the swarm manager host:

WAUTERW-M-G007:~ wauterw$ docker-machine ip swarm-manager-01
192.168.99.103

Then, SSH into the swarm manager host and execute the docker swarm init command to promote the node to swarm manager.

WAUTERW-M-G007:~ wauterw$ docker-machine ssh swarm-manager-01 
                        ##         .
                  ## ## ##        ==
               ## ## ## ## ##    ===
           /"""""""""""""""""\___/ ===
      ~~~ {~~ ~~~~ ~~~ ~~~~ ~~~ ~ /  ===- ~~~
           \______ o           __/
             \    \         __/
              \____\_______/
 _                 _   ____     _            _
| |__   ___   ___ | |_|___ \ __| | ___   ___| | _____ _ __
| '_ \ / _ \ / _ \| __| __) / _` |/ _ \ / __| |/ / _ \ '__|
| |_) | (_) | (_) | |_ / __/ (_| | (_) | (__|   <  __/ |
|_.__/ \___/ \___/ \__|_____\__,_|\___/ \___|_|\_\___|_|
Boot2Docker version 1.12.3, build HEAD : 7fc7575 - Thu Oct 27 17:23:17 UTC 2016
Docker version 1.12.3, build 6b644ec
docker@swarm-manager-01:~$ docker swarm init --advertise-addr 192.168.99.103

Swarm initialized: current node (ciocp3tyw2g3aolsurb7gewd1) is now a manager.

To add a worker to this swarm, run the following command:

    docker swarm join \
    --token SWMTKN-1-5w1dcw483wtxflh7m17kry73z9ly9oxmwr20fe472zkpydjvrd-3wrf3k0rn73tuuuy4eprijfbg \
    192.168.99.103:2377

To add a manager to this swarm, run 'docker swarm join-token manager' and follow the instructions.

Add the nodes to the Swarm cluster

As you can see in the output from the above command, Docker basically tells you how to add worker nodes to the Swarm cluster. No real difficulties here. So SSH into the nodes and join them to the cluster as follows:

WAUTERW-M-G007:~ wauterw$ docker-machine ssh swarm-node-01
                        ##         .
                  ## ## ##        ==
               ## ## ## ## ##    ===
           /"""""""""""""""""\___/ ===
      ~~~ {~~ ~~~~ ~~~ ~~~~ ~~~ ~ /  ===- ~~~
           \______ o           __/
             \    \         __/
              \____\_______/
 _                 _   ____     _            _
| |__   ___   ___ | |_|___ \ __| | ___   ___| | _____ _ __
| '_ \ / _ \ / _ \| __| __) / _` |/ _ \ / __| |/ / _ \ '__|
| |_) | (_) | (_) | |_ / __/ (_| | (_) | (__|   <  __/ |
|_.__/ \___/ \___/ \__|_____\__,_|\___/ \___|_|\_\___|_|
Boot2Docker version 1.12.2, build HEAD : 9d8e41b - Tue Oct 11 23:40:08 UTC 2016
Docker version 1.12.2, build bb80604
docker@swarm-node-01:~$ docker swarm join --token SWMTKN-1-5w1dcw483wtxflh7m17kry73z9ly9oxmwr20fe472zkpydjvrd-3wrf3k0rn73tuuuy4eprijfbg 192.168.99.103:2377
This node joined a swarm as a worker.

and do the same for the second node:

WAUTERW-M-G007:~ wauterw$ docker-machine ssh swarm-node-02
                        ##         .
                  ## ## ##        ==
               ## ## ## ## ##    ===
           /"""""""""""""""""\___/ ===
      ~~~ {~~ ~~~~ ~~~ ~~~~ ~~~ ~ /  ===- ~~~
           \______ o           __/
             \    \         __/
              \____\_______/
 _                 _   ____     _            _
| |__   ___   ___ | |_|___ \ __| | ___   ___| | _____ _ __
| '_ \ / _ \ / _ \| __| __) / _` |/ _ \ / __| |/ / _ \ '__|
| |_) | (_) | (_) | |_ / __/ (_| | (_) | (__|   <  __/ |
|_.__/ \___/ \___/ \__|_____\__,_|\___/ \___|_|\_\___|_|
Boot2Docker version 1.12.2, build HEAD : 9d8e41b - Tue Oct 11 23:40:08 UTC 2016
Docker version 1.12.2, build bb80604
docker@swarm-node-02:~$ docker swarm join --token SWMTKN-1-5w1dcw483wtxflh7m17kry73z9ly9oxmwr20fe472zkpydjvrd-3wrf3k0rn73tuuuy4eprijfbg 192.168.99.103:2377
This node joined a swarm as a worker.
docker@swarm-node-02:~$
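
Note: if you no longer have the original join command at hand, you can regenerate it on the manager at any time:

docker@swarm-manager-01:~$ docker swarm join-token worker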

Check the Swarm cluster

To see whether everything was successful, use the docker node ls command:

WAUTERW-M-G007:~ wauterw$ eval "$(docker-machine env swarm-manager-01)"
WAUTERW-M-G007:~ wauterw$ docker node ls
ID                           HOSTNAME          STATUS  AVAILABILITY  MANAGER STATUS
2vrym0843ay688wzbotjxfybr    swarm-node-02     Ready   Active
67ajqtfidu12mywedw9qux8b0    swarm-node-01     Ready   Active
ciocp3tyw2g3aolsurb7gewd1 *  swarm-manager-01  Ready   Active        Leader
WAUTERW-M-G007:~ wauterw$

You can see that we indeed have one swarm manager, which is the leader, and two active workers.
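
As mentioned earlier, if you prefer the manager to be manager-only, you can now drain it so it no longer receives tasks; docker node inspect gives more detail on an individual node. A quick sketch, using the node name from above:

WAUTERW-M-G007:~ wauterw$ docker node update --availability drain swarm-manager-01
WAUTERW-M-G007:~ wauterw$ docker node inspect --pretty swarm-manager-01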

That's how easy Docker version 1.12 makes it to create a Swarm cluster!
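
Should you want to tear the cluster down again, a node can simply leave the swarm; on the manager, the --force flag is required. A quick sketch, using the nodes from above:

docker@swarm-node-01:~$ docker swarm leave
docker@swarm-node-02:~$ docker swarm leave
docker@swarm-manager-01:~$ docker swarm leave --force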

Docker: multi-host networking with overlay (with Docker Swarm)

Introduction

In this post, we created a multi-host network through an overlay network. At the end of that post, I promised to repeat the same tests with a Swarm cluster instead of independent hosts, simply because I was curious to see whether Docker Swarm brings any kind of overlay network out of the box.

In the next sections, we will set up a Swarm cluster (using a simple Bash script), run a couple of containers and see whether they can ‘talk’ to each other. We will then continue by creating an overlay network and repeating the same tests.

Create Swarm cluster

Let’s start with creating the Swarm cluster. You can do it manually or just copy/paste the below Bash script. Either way, you can see that we first set up a node to run the Consul service discovery (see here for more information) and then create two hosts that are each added to the Swarm cluster. The first node also serves as the Swarm master.

WAUTERW-M-G007:config wauterw$ cat multi-host-swarm.sh
#!/bin/bash

set -e

# Docker Machine Setup
docker-machine create \
    -d virtualbox \
    consul

docker $(docker-machine config consul) run -d \
    -p "8500:8500" \
    -h "consul" \
    progrium/consul -server -bootstrap

docker-machine create \
    -d virtualbox \
    --virtualbox-disk-size 50000 \
    --swarm \
    --swarm-master \
    --swarm-discovery="consul://$(docker-machine ip consul):8500" \
    --engine-opt="cluster-store=consul://$(docker-machine ip consul):8500" \
    --engine-opt="cluster-advertise=eth1:0" \
    node-01

docker-machine create \
    -d virtualbox \
    --virtualbox-disk-size 50000 \
    --swarm \
    --swarm-discovery="consul://$(docker-machine ip consul):8500" \
    --engine-opt="cluster-store=consul://$(docker-machine ip consul):8500" \
    --engine-opt="cluster-advertise=eth1:0" \
    node-02
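
To use the script, make it executable and run it (assuming the filename shown above):

WAUTERW-M-G007:config wauterw$ chmod +x multi-host-swarm.sh
WAUTERW-M-G007:config wauterw$ ./multi-host-swarm.sh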

Check that you have the following machines running:

WAUTERW-M-G007:config wauterw$ docker-machine ls
NAME      ACTIVE   DRIVER       STATE     URL                         SWARM              DOCKER    ERRORS
consul    -        virtualbox   Running   tcp://192.168.99.103:2376                      v1.12.2
node-01   -        virtualbox   Running   tcp://192.168.99.104:2376   node-01 (master)   v1.12.2
node-02   -        virtualbox   Running   tcp://192.168.99.105:2376   node-01            v1.12.2

Check the network on the Swarm cluster

Next, let’s see the current network setup. You’ll see that the available networks are prefixed with the node name. Given that each default Docker host has three networks and we have two nodes, you should see six networks in total. Note: use the --swarm option to see the cluster information.

WAUTERW-M-G007:docker wauterw$ eval $(docker-machine env --swarm node-01)
WAUTERW-M-G007:config wauterw$ docker network ls
NETWORK ID          NAME                DRIVER              SCOPE
099752c2474c        node-01/bridge      bridge              local
b61ef3761157        node-01/host        host                local
b31c085aad95        node-01/none        null                local
6aad7382ccd7        node-02/bridge      bridge              local
3aa06995a53b        node-02/host        host                local
8b906500f86f        node-02/none        null                local

Next, we will run a container called web1 on node-01. Note: we are ‘forcing’ this by using the constraint:node parameter.

WAUTERW-M-G007:config wauterw$ docker run -itd --name=web1 --env="constraint:node==node-01" nginx
6e9d201e697f895eb9f988ed745980569fe21c105396457e3391dba22a63c958
WAUTERW-M-G007:config wauterw$ docker ps
CONTAINER ID        IMAGE               COMMAND                  CREATED             STATUS              PORTS               NAMES
6e9d201e697f        nginx               "nginx -g 'daemon off"   27 seconds ago      Up 27 seconds       80/tcp, 443/tcp     node-01/web1

Next, we will run a busybox container on node-02 and try to reach web1 by its name.

WAUTERW-M-G007:config wauterw$ docker run -it --rm --env="constraint:node==node-02" busybox wget -qO- http://web1
Unable to find image 'busybox:latest' locally
latest: Pulling from library/busybox
56bec22e3559: Pull complete
Digest: sha256:29f5d56d12684887bdfa50dcd29fc31eea4aaf4ad3bec43daf19026a7ce69912
Status: Downloaded newer image for busybox:latest
wget: bad address 'web1'

When trying to retrieve the website from the NGINX container, you’ll see that this does not work: the two containers run on different hosts and only sit on their hosts’ local bridge networks, which provide no cross-host connectivity or name resolution.

Creating the overlay network

The next step is to create an overlay network that allows the containers to communicate.

WAUTERW-M-G007:config wauterw$ docker network create -d overlay mynet
e11705e1a1ff7872bc213ee2b858ce66c1b06b5a3b6ed93217702c3fca3939fc
WAUTERW-M-G007:config wauterw$ docker network ls
NETWORK ID          NAME                DRIVER              SCOPE
e11705e1a1ff        mynet               overlay             global
099752c2474c        node-01/bridge      bridge              local
b61ef3761157        node-01/host        host                local
b31c085aad95        node-01/none        null                local
6aad7382ccd7        node-02/bridge      bridge              local
3aa06995a53b        node-02/host        host                local
8b906500f86f        node-02/none        null                local

Again, we run a new NGINX container on node-01:

WAUTERW-M-G007:config wauterw$ docker run -itd --name=web2 --env="constraint:node==node-01" --net=mynet nginx
5edea2882b30a269ded19b10d547e931c9c93b21063e6f6dc7be52a2e82fd577
WAUTERW-M-G007:config wauterw$ docker ps
CONTAINER ID        IMAGE               COMMAND                  CREATED             STATUS              PORTS               NAMES
5edea2882b30        nginx               "nginx -g 'daemon off"   4 seconds ago       Up 3 seconds        80/tcp, 443/tcp     node-01/web2
6e9d201e697f        nginx               "nginx -g 'daemon off"   8 minutes ago       Up 8 minutes        80/tcp, 443/tcp     node-01/web1

And we use a busybox container once more to check connectivity:

WAUTERW-M-G007:config wauterw$ docker run -it --rm --env="constraint:node==node-02" --net=mynet busybox wget -qO- http://web2
Welcome to nginx!

If you see this page, the nginx web server is successfully installed and working. Further configuration is required.

For online documentation and support please refer to nginx.org.
Commercial support is available at nginx.com.

Thank you for using nginx.

As you can see, we now successfully retrieve the default NGINX webpage, which proves that we indeed need to set up an overlay network ourselves, even when running a Swarm cluster.
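
As a final check, you can inspect the overlay network to confirm which containers are attached to it (using the mynet network created above):

WAUTERW-M-G007:config wauterw$ docker network inspect mynet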