Monthly Archives: April 2016

Docker: Multi-container applications on Docker Swarm using Docker Compose

Introduction

In this post, we saw how to create a Docker Swarm cluster using Consul service discovery. We will continue that effort by installing a multi-container application on top of this Swarm cluster. To do so, we will use the same application we have already used many times before.

First some statistics

So let’s first verify how many containers we have running on the various nodes.

WAUTERW-M-G007:~ wauterw$ eval "$(docker-machine env --swarm swarm-master)"
WAUTERW-M-G007:~ wauterw$ docker info
Containers: 5
 Running: 4
 Paused: 0
 Stopped: 1
Images: 3
Server Version: swarm/1.2.0
Role: primary
Strategy: spread
Filters: health, port, dependency, affinity, constraint
Nodes: 3
 swarm-master: 192.168.99.102:2376
  └ Status: Healthy
  └ Containers: 3
  └ Reserved CPUs: 0 / 1
  └ Reserved Memory: 0 B / 1.021 GiB
  └ Labels: executiondriver=, kernelversion=4.1.19-boot2docker, operatingsystem=Boot2Docker 1.11.0 (TCL 7.0); HEAD : 32ee7e9 - Wed Apr 13 20:06:49 UTC 2016, provider=virtualbox, storagedriver=aufs
  └ Error: (none)
  └ UpdatedAt: 2016-04-21T08:36:51Z
  └ ServerVersion: 1.11.0
 swarm-node-01: 192.168.99.103:2376
  └ Status: Healthy
  └ Containers: 1
  └ Reserved CPUs: 0 / 1
  └ Reserved Memory: 0 B / 1.021 GiB
  └ Labels: executiondriver=, kernelversion=4.1.19-boot2docker, operatingsystem=Boot2Docker 1.11.0 (TCL 7.0); HEAD : 32ee7e9 - Wed Apr 13 20:06:49 UTC 2016, provider=virtualbox, storagedriver=aufs
  └ Error: (none)
  └ UpdatedAt: 2016-04-21T08:37:06Z
  └ ServerVersion: 1.11.0
 swarm-node-02: 192.168.99.104:2376
  └ Status: Healthy
  └ Containers: 1
  └ Reserved CPUs: 0 / 1
  └ Reserved Memory: 0 B / 1.021 GiB
  └ Labels: executiondriver=, kernelversion=4.1.19-boot2docker, operatingsystem=Boot2Docker 1.11.0 (TCL 7.0); HEAD : 32ee7e9 - Wed Apr 13 20:06:49 UTC 2016, provider=virtualbox, storagedriver=aufs
  └ Error: (none)
  └ UpdatedAt: 2016-04-21T08:36:58Z
  └ ServerVersion: 1.11.0
Plugins:
 Volume:
 Network:
Kernel Version: 4.1.19-boot2docker
Operating System: linux
Architecture: amd64
CPUs: 3
Total Memory: 3.064 GiB
Name: 6b2c27a806de
Docker Root Dir:
Debug mode (client): false
Debug mode (server): false

So we now have 4 running containers and 1 stopped container. Let’s have a look at what they are:

WAUTERW-M-G007:~ wauterw$ docker ps -a
CONTAINER ID        IMAGE               COMMAND                  CREATED             STATUS                      PORTS                                     NAMES
85ad1823c909        swarm               "/swarm list consul:/"   26 minutes ago      Exited (0) 26 minutes ago                                             swarm-master/sharp_saha
65a93f731272        swarm:latest        "/swarm join --advert"   34 minutes ago      Up 34 minutes               2375/tcp                                  swarm-node-02/swarm-agent
ec757c42ab59        swarm:latest        "/swarm join --advert"   36 minutes ago      Up 36 minutes               2375/tcp                                  swarm-node-01/swarm-agent
b8c7a7baf2dc        swarm:latest        "/swarm join --advert"   42 minutes ago      Up 42 minutes               2375/tcp                                  swarm-master/swarm-agent
6b2c27a806de        swarm:latest        "/swarm manage --tlsv"   42 minutes ago      Up 42 minutes               2375/tcp, 192.168.99.102:3376->3376/tcp   swarm-master/swarm-agent-master

Running a multi-container application via Docker Compose

We refer to this post to get the application. In short, run a git clone of the application and make the changes to the config/database.js file.
Once you have made these changes, create the Dockerfile and docker-compose.yml file as described in that post. For your convenience, I have added them below.
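The change to config/database.js essentially points the application at the mongo_container service name instead of localhost (with Compose file format v2, services can reach each other by service name). The exact contents depend on the application, so treat this as a hedged sketch; the ‘todos’ database name is my assumption, not taken from the original post:

WAUTERW-M-G007:Express_Todo_Mongo_API_Jade wauterw$ cat config/database.js
// Hedged sketch: 'mongo_container' is the service name from docker-compose.yml;
// the 'todos' database name is an assumption.
module.exports = {
  url: 'mongodb://mongo_container:27017/todos'
}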

WAUTERW-M-G007:Express_Todo_Mongo_API_Jade wauterw$ cat docker-compose.yml
version: '2'
services:
  express_container:
    build: .
    ports:
     - "3000:3000"
    volumes:
     - .:/usr/src/app
    depends_on:
     - mongo_container
  mongo_container:
    image: mongo
WAUTERW-M-G007:Express_Todo_Mongo_API_Jade wauterw$ cat Dockerfile
FROM ubuntu:14.04

# Install Node.js and build tools via apt

RUN apt-get update
RUN apt-get -y install build-essential
RUN apt-get -y install nodejs
RUN apt-get -y install npm
RUN apt-get -y install git
RUN apt-get -y install git-core


# Create app directory
RUN mkdir -p /usr/src/app
WORKDIR /usr/src/app

# Install app dependencies
COPY package.json /usr/src/app/
RUN npm install

# Bundle app source
COPY . /usr/src/app

EXPOSE 3000

CMD ["nodejs", "/usr/src/app/bin/www"]

OK, let’s continue to run the application now.

WAUTERW-M-G007:Express_Todo_Mongo_API_Jade wauterw$ docker-compose up -d
....
Step 13 : EXPOSE 3000
 ---> Running in 3547ec3857d2
 ---> 75d8dc8099b4
Removing intermediate container 3547ec3857d2
Step 14 : CMD nodejs /usr/src/app/bin/www
 ---> Running in ec353450e588
 ---> 6207051bce75
Removing intermediate container ec353450e588
Successfully built 6207051bce75
WARNING: Image for service express_container was built because it did not already exist. To rebuild this image you must use `docker-compose build` or `docker-compose up --build`.
Creating expresstodomongoapijade_mongo_container_1
Creating expresstodomongoapijade_express_container_1

So, you can see from the above that two additional containers have been launched. This means we should be able to go to the IP address of the host where the application is running and see the UI. If you are following along with this tutorial, go to http://192.168.99.103:3000/todos to see the UI of our application (apparently, the Swarm cluster decided to run my application on node-01).

Note: I was expecting that we could just use the IP address of the Swarm manager, without having to find out on which node our app was running. This seems not to be the case, and I’m not (yet) sure why.
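In the meantime, rather than checking IP addresses one by one, you can ask the Swarm endpoint directly: the classic Swarm API adds a Node object to the docker inspect output (at least in this version), so something like the following should tell you where a container landed. A hedged sketch:

WAUTERW-M-G007:Express_Todo_Mongo_API_Jade wauterw$ docker inspect --format '{{ .Node.Name }}' expresstodomongoapijade_express_container_1
swarm-node-01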

Note: When I look at the Consul UI, I can’t see these services or applications running, and I also can’t see an overview of the nodes. I thought Consul would show these once it discovered them; probably I’m not fully understanding what it should look like. Here are some screenshots:
[Screenshot: Consul2]

[Screenshot: Consul3]

[Screenshot: Consul4]

In the last screenshot, you can see that the nodes are known to Consul, but I’m not sure why the second screenshot above does not display these nodes.
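Part of the explanation, as far as I understand it: Swarm registers its nodes in Consul’s key/value store rather than in the service catalog, which is why the Services view stays empty. Assuming the default docker/swarm/nodes prefix (an assumption on my part), you can peek at the registered keys through Consul’s KV API:

WAUTERW-M-G007:~ wauterw$ curl "http://$(docker-machine ip consul-host):8500/v1/kv/docker/swarm/nodes?keys"
["docker/swarm/nodes/192.168.99.102:2376","docker/swarm/nodes/192.168.99.103:2376","docker/swarm/nodes/192.168.99.104:2376"]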

In any case, let’s find out some more info on the containers themselves.

WAUTERW-M-G007:Express_Todo_Mongo_API_Jade wauterw$ eval "$(docker-machine env --swarm swarm-master)"
WAUTERW-M-G007:Express_Todo_Mongo_API_Jade wauterw$ docker info
Containers: 7
 Running: 6
 Paused: 0
 Stopped: 1
Images: 8
Server Version: swarm/1.2.0
Role: primary
Strategy: spread
Filters: health, port, dependency, affinity, constraint
Nodes: 3
 swarm-master: 192.168.99.102:2376
  └ Status: Healthy
  └ Containers: 3
  └ Reserved CPUs: 0 / 1
  └ Reserved Memory: 0 B / 1.021 GiB
  └ Labels: executiondriver=, kernelversion=4.1.19-boot2docker, operatingsystem=Boot2Docker 1.11.0 (TCL 7.0); HEAD : 32ee7e9 - Wed Apr 13 20:06:49 UTC 2016, provider=virtualbox, storagedriver=aufs
  └ Error: (none)
  └ UpdatedAt: 2016-04-21T08:58:43Z
  └ ServerVersion: 1.11.0
 swarm-node-01: 192.168.99.103:2376
  └ Status: Healthy
  └ Containers: 3
  └ Reserved CPUs: 0 / 1
  └ Reserved Memory: 0 B / 1.021 GiB
  └ Labels: executiondriver=, kernelversion=4.1.19-boot2docker, operatingsystem=Boot2Docker 1.11.0 (TCL 7.0); HEAD : 32ee7e9 - Wed Apr 13 20:06:49 UTC 2016, provider=virtualbox, storagedriver=aufs
  └ Error: (none)
  └ UpdatedAt: 2016-04-21T08:58:31Z
  └ ServerVersion: 1.11.0
 swarm-node-02: 192.168.99.104:2376
  └ Status: Healthy
  └ Containers: 1
  └ Reserved CPUs: 0 / 1
  └ Reserved Memory: 0 B / 1.021 GiB
  └ Labels: executiondriver=, kernelversion=4.1.19-boot2docker, operatingsystem=Boot2Docker 1.11.0 (TCL 7.0); HEAD : 32ee7e9 - Wed Apr 13 20:06:49 UTC 2016, provider=virtualbox, storagedriver=aufs
  └ Error: (none)
  └ UpdatedAt: 2016-04-21T08:59:05Z
  └ ServerVersion: 1.11.0
Plugins:
 Volume:
 Network:
Kernel Version: 4.1.19-boot2docker
Operating System: linux
Architecture: amd64
CPUs: 3
Total Memory: 3.064 GiB
Name: 6b2c27a806de
Docker Root Dir:
Debug mode (client): false
Debug mode (server): false
WARNING: No kernel memory limit support

Next, let’s look at the overview of containers:

WAUTERW-M-G007:Express_Todo_Mongo_API_Jade wauterw$ docker ps -a
CONTAINER ID        IMAGE                                       COMMAND                  CREATED             STATUS                      PORTS                                     NAMES
5a7ab0c156d2        expresstodomongoapijade_express_container   "nodejs /usr/src/app/"   6 minutes ago       Up 6 minutes                192.168.99.103:3000->3000/tcp             swarm-node-01/expresstodomongoapijade_express_container_1
ec0ecbd659d8        mongo                                       "/entrypoint.sh mongo"   6 minutes ago       Up 6 minutes                27017/tcp                                 swarm-node-01/expresstodomongoapijade_mongo_container_1
85ad1823c909        swarm                                       "/swarm list consul:/"   48 minutes ago      Exited (0) 48 minutes ago                                             swarm-master/sharp_saha
65a93f731272        swarm:latest                                "/swarm join --advert"   55 minutes ago      Up 55 minutes               2375/tcp                                  swarm-node-02/swarm-agent
ec757c42ab59        swarm:latest                                "/swarm join --advert"   57 minutes ago      Up 57 minutes               2375/tcp                                  swarm-node-01/swarm-agent
b8c7a7baf2dc        swarm:latest                                "/swarm join --advert"   About an hour ago   Up About an hour            2375/tcp                                  swarm-master/swarm-agent
6b2c27a806de        swarm:latest                                "/swarm manage --tlsv"   About an hour ago   Up About an hour            2375/tcp, 192.168.99.102:3376->3376/tcp   swarm-master/swarm-agent-master

From the above, we can indeed see that our containers are running on node-01 (192.168.99.103). How do we know that? Use the following command:

WAUTERW-M-G007:Express_Todo_Mongo_API_Jade wauterw$ docker-machine ip swarm-node-01
192.168.99.103

Looks like ‘swarm-node-01’ corresponds to ‘192.168.99.103’.

Running another application

Let’s run another application. It’s also something we have done before, but I will repeat the steps here. Create the following three files:

index.js

var express = require('express')
var app = express()

app.get('/', function (req, res) {
  res.send('Extra application for example purposes')
})

var server = app.listen(3001, function () {

  var host = server.address().address
  var port = server.address().port

  console.log('Application listening at http://%s:%s', host, port)

})

package.json

WAUTERW-M-G007:extra_app wauterw$ cat package.json
{
  "name": "docker-express-container1",
  "private": true,
  "version": "0.0.1",
  "description": "Express application displaying some string",
  "author": "Wim Wauters ",
  "dependencies": {
    "express": "3.2.4"
  }
}

Dockerfile

WAUTERW-M-G007:extra_app wauterw$ cat Dockerfile
FROM ubuntu:14.04

# Install Node.js and build tools via apt
RUN apt-get update
RUN apt-get -y install build-essential
RUN apt-get -y install nodejs
RUN apt-get -y install npm
RUN apt-get -y install git
RUN apt-get -y install git-core

# Create app directory
RUN mkdir -p /usr/src/app
WORKDIR /usr/src/app

# Install app dependencies
COPY package.json /usr/src/app/
RUN npm install

# Bundle app source
COPY . /usr/src/app

EXPOSE  3001
CMD ["nodejs", "/usr/src/app/index.js"]

Then run the command to build the application:

WAUTERW-M-G007:extra_app wauterw$ docker build -t express-container-example-1 .

And once the build has completed successfully, run:

WAUTERW-M-G007:extra_app wauterw$ docker run -it -d -p 3002:3001 express-container-example-1
40ec3483c854625459da0f4c30d92602734a51892f92ac5c3225189b2c1ae763

Let’s have a look at which node this app landed on.

CONTAINER ID        IMAGE                                       COMMAND                  CREATED             STATUS                         PORTS                                     NAMES
40ec3483c854        express-container-example-1                 "nodejs /usr/src/app/"   5 seconds ago       Up 4 seconds                   192.168.99.103:3002->3001/tcp             swarm-node-01/stupefied_mccarthy
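To verify, we can curl the mapped port on that node; the response string is the one from index.js above:

WAUTERW-M-G007:extra_app wauterw$ curl http://$(docker-machine ip swarm-node-01):3002/
Extra application for example purposes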

Ok folks, that’s it for now. See you later!

Docker: Create Docker Swarm cluster with Consul Discovery

Introduction

In this post, we created a Swarm cluster with the internal Docker discovery service. As I have read a lot about Consul, I also wanted to see how it interacts with Docker Swarm.

Creating the Consul host

WAUTERW-M-G007:~ wauterw$ docker-machine create -d virtualbox consul-host
Running pre-create checks...
Creating machine...
(consul-host) Copying /Users/wauterw/.docker/machine/cache/boot2docker.iso to /Users/wauterw/.docker/machine/machines/consul-host/boot2docker.iso...
(consul-host) Creating VirtualBox VM...
(consul-host) Creating SSH key...
(consul-host) Starting the VM...
(consul-host) Check network to re-create if needed...
(consul-host) Waiting for an IP...
Waiting for machine to be running, this may take a few minutes...
Detecting operating system of created instance...
Waiting for SSH to be available...
Detecting the provisioner...
Provisioning with boot2docker...
Copying certs to the local machine directory...
Copying certs to the remote machine...
Setting Docker configuration on the remote daemon...
Checking connection to Docker...
Docker is up and running!
To see how to connect your Docker Client to the Docker Engine running on this virtual machine, run: docker-machine env consul-host

We can then point our terminal to the newly created consul-host and install Consul itself.

WAUTERW-M-G007:bin wauterw$ eval $(docker-machine env consul-host)

Create the following ‘docker-compose.yml’ file:

myconsul:
  image: progrium/consul
  restart: always
  hostname: consul
  ports:
    - 8500:8500
  command: "-server -bootstrap"

And then run the ‘docker-compose up -d’ command:

Pulling myconsul (progrium/consul:latest)...
latest: Pulling from progrium/consul
c862d82a67a2: Pull complete
0e7f3c08384e: Pull complete
0e221e32327a: Pull complete
09a952464e47: Pull complete
60a1b927414d: Pull complete
4c9f46b5ccce: Pull complete
417d86672aa4: Pull complete
b0d47ad24447: Pull complete
fd5300bd53f0: Pull complete
a3ed95caeb02: Pull complete
d023b445076e: Pull complete
ba8851f89e33: Pull complete
5d1cefca2a28: Pull complete
Digest: sha256:8cc8023462905929df9a79ff67ee435a36848ce7a10f18d6d0faba9306b97274
Status: Downloaded newer image for progrium/consul:latest
Creating wauterw_myconsul_1
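It is worth checking that Consul is actually answering before building the cluster on top of it. Consul’s standard HTTP API exposes the member list; the node name ‘consul’ matches the hostname we set in docker-compose.yml, and the address shown will be the container’s internal IP, so your output may differ:

WAUTERW-M-G007:bin wauterw$ curl "http://$(docker-machine ip consul-host):8500/v1/catalog/nodes"
[{"Node":"consul","Address":"172.17.0.2"}]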

Creating Swarm master

Now that we have a Consul host up and running, we can continue to create the Docker Swarm.

WAUTERW-M-G007:~ wauterw$ docker-machine create -d virtualbox --swarm --swarm-master --swarm-discovery="consul://$(docker-machine ip consul-host):8500" --engine-opt="cluster-store=consul://$(docker-machine ip consul-host):8500" --engine-opt="cluster-advertise=eth1:2376" swarm-master
Running pre-create checks...
Creating machine...
(swarm-master) Copying /Users/wauterw/.docker/machine/cache/boot2docker.iso to /Users/wauterw/.docker/machine/machines/swarm-master/boot2docker.iso...
(swarm-master) Creating VirtualBox VM...
(swarm-master) Creating SSH key...
(swarm-master) Starting the VM...
(swarm-master) Check network to re-create if needed...
(swarm-master) Waiting for an IP...
Waiting for machine to be running, this may take a few minutes...
Detecting operating system of created instance...
Waiting for SSH to be available...
Detecting the provisioner...
Provisioning with boot2docker...
Copying certs to the local machine directory...
Copying certs to the remote machine...
Setting Docker configuration on the remote daemon...
Configuring swarm...
Checking connection to Docker...
Docker is up and running!
To see how to connect your Docker Client to the Docker Engine running on this virtual machine, run: docker-machine env swarm-master
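A quick note on the two --engine-opt flags: cluster-store points each Docker Engine at Consul’s key/value store, and cluster-advertise tells it which interface and port to advertise there. A useful side effect is that this combination also enables multi-host overlay networking. A hedged sketch (the network name ‘my-overlay’ is just an example):

WAUTERW-M-G007:~ wauterw$ eval "$(docker-machine env --swarm swarm-master)"
WAUTERW-M-G007:~ wauterw$ docker network create -d overlay my-overlay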

We can then verify the machines that are up and running already:

WAUTERW-M-G007:~ wauterw$ docker-machine ls
NAME           ACTIVE   DRIVER       STATE     URL                         SWARM                   DOCKER    ERRORS
consul-host    *        virtualbox   Running   tcp://192.168.99.101:2376                           v1.11.0
swarm-master   -        virtualbox   Running   tcp://192.168.99.102:2376   swarm-master (master)   v1.11.0

Adding nodes to Swarm cluster

Now that we have a Consul host as well as a Swarm master, we can continue to add nodes:

WAUTERW-M-G007:~ wauterw$ docker-machine create -d virtualbox --swarm --swarm-discovery="consul://$(docker-machine ip consul-host):8500" --engine-opt="cluster-store=consul://$(docker-machine ip consul-host):8500" --engine-opt="cluster-advertise=eth1:2376" swarm-node-01
Running pre-create checks...
Creating machine...
(swarm-node-01) Copying /Users/wauterw/.docker/machine/cache/boot2docker.iso to /Users/wauterw/.docker/machine/machines/swarm-node-01/boot2docker.iso...
(swarm-node-01) Creating VirtualBox VM...
(swarm-node-01) Creating SSH key...
(swarm-node-01) Starting the VM...
(swarm-node-01) Check network to re-create if needed...
(swarm-node-01) Waiting for an IP...
Waiting for machine to be running, this may take a few minutes...
Detecting operating system of created instance...
Waiting for SSH to be available...
Detecting the provisioner...
Provisioning with boot2docker...
Copying certs to the local machine directory...
Copying certs to the remote machine...
Setting Docker configuration on the remote daemon...
Configuring swarm...
Checking connection to Docker...
Docker is up and running!
To see how to connect your Docker Client to the Docker Engine running on this virtual machine, run: docker-machine env swarm-node-01

and a second node:

WAUTERW-M-G007:~ wauterw$ docker-machine create -d virtualbox --swarm --swarm-discovery="consul://$(docker-machine ip consul-host):8500" --engine-opt="cluster-store=consul://$(docker-machine ip consul-host):8500" --engine-opt="cluster-advertise=eth1:2376" swarm-node-02
Running pre-create checks...
Creating machine...
(swarm-node-02) Copying /Users/wauterw/.docker/machine/cache/boot2docker.iso to /Users/wauterw/.docker/machine/machines/swarm-node-02/boot2docker.iso...
(swarm-node-02) Creating VirtualBox VM...
(swarm-node-02) Creating SSH key...
(swarm-node-02) Starting the VM...
(swarm-node-02) Check network to re-create if needed...
(swarm-node-02) Waiting for an IP...
Waiting for machine to be running, this may take a few minutes...
Detecting operating system of created instance...
Waiting for SSH to be available...
Detecting the provisioner...
Provisioning with boot2docker...
Copying certs to the local machine directory...
Copying certs to the remote machine...
Setting Docker configuration on the remote daemon...
Configuring swarm...
Checking connection to Docker...
Docker is up and running!
To see how to connect your Docker Client to the Docker Engine running on this virtual machine, run: docker-machine env swarm-node-02

If we then check all hosts, we see the complete list:

WAUTERW-M-G007:~ wauterw$ docker-machine ls
NAME            ACTIVE   DRIVER       STATE     URL                         SWARM                   DOCKER    ERRORS
consul-host     *        virtualbox   Running   tcp://192.168.99.101:2376                           v1.11.0
swarm-master    -        virtualbox   Running   tcp://192.168.99.102:2376   swarm-master (master)   v1.11.0
swarm-node-01   -        virtualbox   Running   tcp://192.168.99.103:2376   swarm-master            v1.11.0
swarm-node-02   -        virtualbox   Running   tcp://192.168.99.104:2376   swarm-master            v1.11.0

We can also look at some more info on the Docker Swarm cluster:

WAUTERW-M-G007:~ wauterw$ eval "$(docker-machine env --swarm swarm-master)"
WAUTERW-M-G007:~ wauterw$ docker info
Containers: 4
 Running: 4
 Paused: 0
 Stopped: 0
Images: 3
Server Version: swarm/1.2.0
Role: primary
Strategy: spread
Filters: health, port, dependency, affinity, constraint
Nodes: 3
 swarm-master: 192.168.99.102:2376
  └ Status: Healthy
  └ Containers: 2
  └ Reserved CPUs: 0 / 1
  └ Reserved Memory: 0 B / 1.021 GiB
  └ Labels: executiondriver=, kernelversion=4.1.19-boot2docker, operatingsystem=Boot2Docker 1.11.0 (TCL 7.0); HEAD : 32ee7e9 - Wed Apr 13 20:06:49 UTC 2016, provider=virtualbox, storagedriver=aufs
  └ Error: (none)
  └ UpdatedAt: 2016-04-21T08:06:19Z
  └ ServerVersion: 1.11.0
 swarm-node-01: 192.168.99.103:2376
  └ Status: Healthy
  └ Containers: 1
  └ Reserved CPUs: 0 / 1
  └ Reserved Memory: 0 B / 1.021 GiB
  └ Labels: executiondriver=, kernelversion=4.1.19-boot2docker, operatingsystem=Boot2Docker 1.11.0 (TCL 7.0); HEAD : 32ee7e9 - Wed Apr 13 20:06:49 UTC 2016, provider=virtualbox, storagedriver=aufs
  └ Error: (none)
  └ UpdatedAt: 2016-04-21T08:06:08Z
  └ ServerVersion: 1.11.0
 swarm-node-02: 192.168.99.104:2376
  └ Status: Healthy
  └ Containers: 1
  └ Reserved CPUs: 0 / 1
  └ Reserved Memory: 0 B / 1.021 GiB
  └ Labels: executiondriver=, kernelversion=4.1.19-boot2docker, operatingsystem=Boot2Docker 1.11.0 (TCL 7.0); HEAD : 32ee7e9 - Wed Apr 13 20:06:49 UTC 2016, provider=virtualbox, storagedriver=aufs
  └ Error: (none)
  └ UpdatedAt: 2016-04-21T08:05:56Z
  └ ServerVersion: 1.11.0
Plugins:
 Volume:
 Network:
Kernel Version: 4.1.19-boot2docker
Operating System: linux
Architecture: amd64
CPUs: 3
Total Memory: 3.064 GiB
Name: 6b2c27a806de
Docker Root Dir:
Debug mode (client): false
Debug mode (server): false
WARNING: No kernel memory limit support

Note: Take note of the fact that we used the --swarm flag in the previous command: eval "$(docker-machine env --swarm swarm-master)". This command addresses the entire cluster. You can also run eval "$(docker-machine env swarm-master)", but that addresses only the swarm-master host, not the cluster.
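You can see the difference in the DOCKER_HOST variable: with --swarm the client talks to the Swarm manager port (3376), without it to the plain Docker Engine port (2376):

WAUTERW-M-G007:~ wauterw$ docker-machine env swarm-master | grep DOCKER_HOST
export DOCKER_HOST="tcp://192.168.99.102:2376"
WAUTERW-M-G007:~ wauterw$ docker-machine env --swarm swarm-master | grep DOCKER_HOST
export DOCKER_HOST="tcp://192.168.99.102:3376"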

WAUTERW-M-G007:~ wauterw$ eval "$(docker-machine env swarm-master)"
WAUTERW-M-G007:~ wauterw$ docker info
Containers: 2
 Running: 2
 Paused: 0
 Stopped: 0
Images: 1
Server Version: 1.11.0
Storage Driver: aufs
 Root Dir: /mnt/sda1/var/lib/docker/aufs
 Backing Filesystem: extfs
 Dirs: 12
 Dirperm1 Supported: true
Logging Driver: json-file
Cgroup Driver: cgroupfs
Plugins:
 Volume: local
 Network: null host bridge
Kernel Version: 4.1.19-boot2docker
Operating System: Boot2Docker 1.11.0 (TCL 7.0); HEAD : 32ee7e9 - Wed Apr 13 20:06:49 UTC 2016
OSType: linux
Architecture: x86_64
CPUs: 1
Total Memory: 996.1 MiB
Name: swarm-master
ID: X54E:OPXM:PMNZ:72KN:JBPU:6T6G:KA5L:YDFH:MYSB:KL5B:T4IE:7J5W
Docker Root Dir: /mnt/sda1/var/lib/docker
Debug mode (client): false
Debug mode (server): true
 File Descriptors: 26
 Goroutines: 71
 System Time: 2016-04-21T08:07:25.222630177Z
 EventsListeners: 1
Registry: https://index.docker.io/v1/
Labels:
 provider=virtualbox
Cluster store: consul://192.168.99.101:8500
Cluster advertise: 192.168.99.102:2376

Let’s also take a look at the individual nodes:

WAUTERW-M-G007:~ wauterw$ eval $(docker-machine env swarm-node-01)
WAUTERW-M-G007:~ wauterw$ docker info
Containers: 1
 Running: 1
 Paused: 0
 Stopped: 0
Images: 1
Server Version: 1.11.0
Storage Driver: aufs
 Root Dir: /mnt/sda1/var/lib/docker/aufs
 Backing Filesystem: extfs
 Dirs: 10
 Dirperm1 Supported: true
Logging Driver: json-file
Cgroup Driver: cgroupfs
Plugins:
 Volume: local
 Network: null host bridge
Kernel Version: 4.1.19-boot2docker
Operating System: Boot2Docker 1.11.0 (TCL 7.0); HEAD : 32ee7e9 - Wed Apr 13 20:06:49 UTC 2016
OSType: linux
Architecture: x86_64
CPUs: 1
Total Memory: 996.1 MiB
Name: swarm-node-01
ID: KKID:COTH:PXOE:RVKX:YC6R:QX2T:6OI7:3L55:MEWC:GBCO:4XEU:ARDI
Docker Root Dir: /mnt/sda1/var/lib/docker
Debug mode (client): false
Debug mode (server): true
 File Descriptors: 23
 Goroutines: 65
 System Time: 2016-04-21T08:13:40.624893775Z
 EventsListeners: 1
Registry: https://index.docker.io/v1/
Labels:
 provider=virtualbox
Cluster store: consul://192.168.99.101:8500
Cluster advertise: 192.168.99.103:2376

And then the second node:

WAUTERW-M-G007:~ wauterw$ eval $(docker-machine env swarm-node-02)
WAUTERW-M-G007:~ wauterw$ docker info
Containers: 1
 Running: 1
 Paused: 0
 Stopped: 0
Images: 1
Server Version: 1.11.0
Storage Driver: aufs
 Root Dir: /mnt/sda1/var/lib/docker/aufs
 Backing Filesystem: extfs
 Dirs: 10
 Dirperm1 Supported: true
Logging Driver: json-file
Cgroup Driver: cgroupfs
Plugins:
 Volume: local
 Network: bridge null host
Kernel Version: 4.1.19-boot2docker
Operating System: Boot2Docker 1.11.0 (TCL 7.0); HEAD : 32ee7e9 - Wed Apr 13 20:06:49 UTC 2016
OSType: linux
Architecture: x86_64
CPUs: 1
Total Memory: 996.1 MiB
Name: swarm-node-02
ID: VEQO:6VTZ:ZEPV:L3XB:62QF:V7OJ:DYAX:OO6S:ZWOA:2ULI:4I4H:2TIJ
Docker Root Dir: /mnt/sda1/var/lib/docker
Debug mode (client): false
Debug mode (server): true
 File Descriptors: 23
 Goroutines: 65
 System Time: 2016-04-21T08:14:04.588596253Z
 EventsListeners: 1
Registry: https://index.docker.io/v1/
Labels:
 provider=virtualbox
Cluster store: consul://192.168.99.101:8500
Cluster advertise: 192.168.99.104:2376

So, across all three hosts (swarm-master, swarm-node-01 and swarm-node-02) we have 4 containers in total. To see what they are, do the following:

WAUTERW-M-G007:~ wauterw$ eval "$(docker-machine env --swarm swarm-master)"
WAUTERW-M-G007:~ wauterw$ docker ps
CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS              PORTS               NAMES
WAUTERW-M-G007:~ wauterw$ docker ps -a
CONTAINER ID        IMAGE               COMMAND                  CREATED             STATUS                     PORTS                                     NAMES
85ad1823c909        swarm               "/swarm list consul:/"   4 minutes ago       Exited (0) 4 minutes ago                                             swarm-master/sharp_saha
65a93f731272        swarm:latest        "/swarm join --advert"   12 minutes ago      Up 12 minutes              2375/tcp                                  swarm-node-02/swarm-agent
ec757c42ab59        swarm:latest        "/swarm join --advert"   13 minutes ago      Up 13 minutes              2375/tcp                                  swarm-node-01/swarm-agent
b8c7a7baf2dc        swarm:latest        "/swarm join --advert"   19 minutes ago      Up 19 minutes              2375/tcp                                  swarm-master/swarm-agent
6b2c27a806de        swarm:latest        "/swarm manage --tlsv"   19 minutes ago      Up 19 minutes              2375/tcp, 192.168.99.102:3376->3376/tcp   swarm-master/swarm-agent-master

As a last item for this topic, we can also query the Consul host to list all nodes that are part of the cluster:

WAUTERW-M-G007:~ wauterw$ docker run swarm list consul://$(docker-machine ip consul-host):8500
time="2016-04-21T08:11:48Z" level=info msg="Initializing discovery without TLS"
192.168.99.102:2376
192.168.99.103:2376
192.168.99.104:2376

And last but not least, you can also take a look at the Consul UI. In our example, it runs on http://192.168.99.101:8500/ui. Below is a screenshot of what you can expect.
[Screenshot: Consul1]

This post described how to get a Swarm cluster up and running with Consul service discovery. In a follow-up post, we will install some applications on this cluster.

Docker: Run applications on Docker Swarm using CLI and Dockerfile

Introduction

In the previous post, we created a Swarm cluster on VirtualBox. The Swarm cluster consisted of:

  • Swarm Manager (swarm-manager)
  • Swarm Node 1 (swarm-node1)
  • Swarm Node 2 (swarm-node2)

In this post, we will experiment with running some containers: some are pretty easy, using just the Docker Engine, while others we will try to run with Docker Compose. Let’s get started!

Start with a simple container

Let’s start out very simple: we will run a basic container on the Swarm cluster. You might remember we also did this in this post, but the difference was that there it ran on a single, independent node.

WAUTERW-M-G007:Downloads wauterw$ docker run -t -i ubuntu /bin/bash
root@dccf2656ae94:/#

Let’s now verify on which node the Swarm Manager decided to run our simple container.

WAUTERW-M-G007:Downloads wauterw$ eval $(docker-machine env --swarm swarm-manager)
WAUTERW-M-G007:Downloads wauterw$ docker ps -a
CONTAINER ID        IMAGE               COMMAND                  CREATED              STATUS                      PORTS                                     NAMES
dccf2656ae94        ubuntu              "/bin/bash"              About a minute ago   Exited (0) 13 seconds ago                                             swarm-node01/pensive_minsky
c65805511449        swarm:latest        "/swarm join --advert"   32 minutes ago       Up 32 minutes               2375/tcp                                  swarm-node02/swarm-agent
a293eeeda6d7        swarm:latest        "/swarm join --advert"   34 minutes ago       Up 34 minutes               2375/tcp                                  swarm-node01/swarm-agent
5b50f145fe2d        swarm:latest        "/swarm join --advert"   36 minutes ago       Up 36 minutes               2375/tcp                                  swarm-manager/swarm-agent
b82aca867319        swarm:latest        "/swarm manage --tlsv"   36 minutes ago       Up 36 minutes               2375/tcp, 192.168.99.107:3376->3376/tcp   swarm-manager/swarm-agent-master
WAUTERW-M-G007:Downloads wauterw$

It looks like the manager decided to run it on swarm-node01. Note that this container has already stopped. We expected that: as soon as the main process finishes, the container stops as well.

Continuing with another simple hello world container

Next, we will deploy the Hello World container. Again very straightforward.

WAUTERW-M-G007:Downloads wauterw$ docker run ubuntu /bin/echo 'Hello world'
Hello world

Again, let’s see where the Swarm Manager decided to launch the container.

WAUTERW-M-G007:Downloads wauterw$ docker ps -a
CONTAINER ID        IMAGE               COMMAND                  CREATED             STATUS                         PORTS                                     NAMES
e3f30dbe5a2d        ubuntu              "/bin/echo 'Hello wor"   2 minutes ago       Exited (0) 2 minutes ago                                                 swarm-node02/dreamy_babbage
dccf2656ae94        ubuntu              "/bin/bash"              About an hour ago   Exited (0) About an hour ago                                             swarm-node01/pensive_minsky
c65805511449        swarm:latest        "/swarm join --advert"   About an hour ago   Up About an hour               2375/tcp                                  swarm-node02/swarm-agent
a293eeeda6d7        swarm:latest        "/swarm join --advert"   About an hour ago   Up About an hour               2375/tcp                                  swarm-node01/swarm-agent
5b50f145fe2d        swarm:latest        "/swarm join --advert"   About an hour ago   Up About an hour               2375/tcp                                  swarm-manager/swarm-agent
b82aca867319        swarm:latest        "/swarm manage --tlsv"   About an hour ago   Up About an hour               2375/tcp, 192.168.99.107:3376->3376/tcp   swarm-manager/swarm-agent-master

This time, apparently, it decided to launch the container on swarm-node02. More or less what I expected, given the ‘spread’ strategy.

Run a web application via Dockerfile on the Swarm cluster

In this post, we already ran a simple web application that returned ‘Hello World’. Again easy, but it is interesting to see how a Dockerfile works together with the Swarm cluster. To follow along, use the index.js, package.json and Dockerfile from that post; they should work just fine.

First thing we have to do is to build the image again.

WAUTERW-M-G007:app wauterw$ docker build -t ubuntu-express-app-swarm .
....
Removing intermediate container 1cb1e19c884f
Step 12 : COPY . /usr/src/app
 ---> 0a36f070eaa0
Removing intermediate container a037fd63840e
Step 13 : EXPOSE 3001
 ---> Running in a34da1195a7c
 ---> f1c4ab91bb06
Removing intermediate container a34da1195a7c
Step 14 : CMD nodejs /usr/src/app/index.js
 ---> Running in 5b0da2b34986
 ---> b7fc7dfa402e
Removing intermediate container 5b0da2b34986
Successfully built b7fc7dfa402e

Then we check the containers again, and where they ran:

WAUTERW-M-G007:app wauterw$ docker ps -a
CONTAINER ID        IMAGE                      COMMAND                  CREATED             STATUS                            PORTS                                     NAMES
a1a5e003691a        ubuntu-express-app-swarm   "nodejs /usr/src/app/"   2 minutes ago       Exited (130) About a minute ago                                             swarm-manager/kickass_goldberg
e3f30dbe5a2d        ubuntu                     "/bin/echo 'Hello wor"   23 minutes ago      Exited (0) 23 minutes ago                                                   swarm-node02/dreamy_babbage
dccf2656ae94        ubuntu                     "/bin/bash"              About an hour ago   Exited (0) About an hour ago                                                swarm-node01/pensive_minsky
c65805511449        swarm:latest               "/swarm join --advert"   About an hour ago   Up About an hour                  2375/tcp                                  swarm-node02/swarm-agent
a293eeeda6d7        swarm:latest               "/swarm join --advert"   About an hour ago   Up About an hour                  2375/tcp                                  swarm-node01/swarm-agent
5b50f145fe2d        swarm:latest               "/swarm join --advert"   2 hours ago         Up 2 hours                        2375/tcp                                  swarm-manager/swarm-agent
b82aca867319        swarm:latest               "/swarm manage --tlsv"   2 hours ago         Up 2 hours                        2375/tcp, 192.168.99.107:3376->3376/tcp   swarm-manager/swarm-agent-master

This is something I did not expect: it seems the Express app was launched on the swarm-manager itself. I honestly thought this machine was just a manager and that all containers would run on the underlying nodes, but apparently that assumption was wrong.
Just to be sure, I tried again, this time running the container in detached mode:

WAUTERW-M-G007:app wauterw$ docker run -it -d -p 3002:3001 ubuntu-express-app-swarm
9f5030c981c942129c2df555dffba86fd42ff721d5b33a757af93cf47f6d3f2c
WAUTERW-M-G007:app wauterw$ docker ps -a
CONTAINER ID        IMAGE                      COMMAND                  CREATED             STATUS                         PORTS                                     NAMES
9f5030c981c9        ubuntu-express-app-swarm   "nodejs /usr/src/app/"   5 seconds ago       Up 4 seconds                   192.168.99.107:3002->3001/tcp             swarm-manager/condescending_bartik
a1a5e003691a        ubuntu-express-app-swarm   "nodejs /usr/src/app/"   6 minutes ago       Exited (130) 6 minutes ago                                               swarm-manager/kickass_goldberg
e3f30dbe5a2d        ubuntu                     "/bin/echo 'Hello wor"   27 minutes ago      Exited (0) 27 minutes ago                                                swarm-node02/dreamy_babbage
dccf2656ae94        ubuntu                     "/bin/bash"              About an hour ago   Exited (0) About an hour ago                                             swarm-node01/pensive_minsky
c65805511449        swarm:latest               "/swarm join --advert"   2 hours ago         Up 2 hours                     2375/tcp                                  swarm-node02/swarm-agent
a293eeeda6d7        swarm:latest               "/swarm join --advert"   2 hours ago         Up 2 hours                     2375/tcp                                  swarm-node01/swarm-agent
5b50f145fe2d        swarm:latest               "/swarm join --advert"   2 hours ago         Up 2 hours                     2375/tcp                                  swarm-manager/swarm-agent
b82aca867319        swarm:latest               "/swarm manage --tlsv"   2 hours ago         Up 2 hours                     2375/tcp, 192.168.99.107:3376->3376/tcp   swarm-manager/swarm-agent-master

And again, it was scheduled on the swarm-manager! I’m not sure whether this is supposed to happen or whether I’m doing something wrong; I will continue to investigate and update this post if I find out more.
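If you do want to control the placement in the meantime: the manager lists ‘constraint’ among its filters (see the docker info output earlier), and classic Swarm reads such constraints from container environment variables. A hedged sketch that should pin the container to swarm-node01 (port 3003 is an arbitrary choice to avoid the earlier mapping):

WAUTERW-M-G007:app wauterw$ docker run -it -d -e constraint:node==swarm-node01 -p 3003:3001 ubuntu-express-app-swarm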

Docker: Create Docker Swarm cluster with Docker Discovery

Introduction

Generate a Swarm token

WAUTERW-M-G007:Downloads wauterw$ docker-machine create -d virtualbox manager
Running pre-create checks...
Creating machine...
(manager) Copying /Users/wauterw/.docker/machine/cache/boot2docker.iso to /Users/wauterw/.docker/machine/machines/manager/boot2docker.iso...
(manager) Creating VirtualBox VM...
(manager) Creating SSH key...
(manager) Starting the VM...
(manager) Check network to re-create if needed...
(manager) Waiting for an IP...
Waiting for machine to be running, this may take a few minutes...
Detecting operating system of created instance...
Waiting for SSH to be available...
Detecting the provisioner...
Provisioning with boot2docker...
Copying certs to the local machine directory...
Copying certs to the remote machine...
Setting Docker configuration on the remote daemon...
Checking connection to Docker...
Docker is up and running!
To see how to connect your Docker Client to the Docker Engine running on this virtual machine, run: docker-machine env manager

Point the terminal to this newly created ‘manager’ machine:

WAUTERW-M-G007:Downloads wauterw$ eval "$(docker-machine env manager)"

and run the command to create a Swarm cluster:

WAUTERW-M-G007:Downloads wauterw$ docker run swarm create
Unable to find image 'swarm:latest' locally
latest: Pulling from library/swarm

8c01723048ed: Pull complete
28ef38ffcca5: Pull complete
f1f933319091: Pull complete
a3ed95caeb02: Pull complete
Digest: sha256:8b007c8fc861cfaa2f0b9160e6ed3a39109af6e28dfe03982a05158e218bcc52
Status: Downloaded newer image for swarm:latest
b270205b144c3f1d96c39a6a6089791b

The string ‘b270205b144c3f1d96c39a6a6089791b’ is called the ‘discovery token’ or the ‘swarm id’, and we will need it when creating the additional nodes that will be part of the Swarm cluster.
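Once the nodes below have joined, you can ask the hosted discovery service which engines registered under this token; swarm list is the same subcommand we will later use with Consul. A sketch (the IP addresses correspond to the machines created in the rest of this post):

WAUTERW-M-G007:Downloads wauterw$ docker run swarm list token://b270205b144c3f1d96c39a6a6089791b
192.168.99.107:2376
192.168.99.108:2376
192.168.99.109:2376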

Generate the Swarm nodes

We will first create the Swarm master node.

WAUTERW-M-G007:Downloads wauterw$ docker-machine create -d virtualbox --swarm --swarm-master --swarm-discovery token://b270205b144c3f1d96c39a6a6089791b swarm-manager
Running pre-create checks...
Creating machine...
(swarm-manager) Copying /Users/wauterw/.docker/machine/cache/boot2docker.iso to /Users/wauterw/.docker/machine/machines/swarm-manager/boot2docker.iso...
(swarm-manager) Creating VirtualBox VM...
(swarm-manager) Creating SSH key...
(swarm-manager) Starting the VM...
(swarm-manager) Check network to re-create if needed...
(swarm-manager) Waiting for an IP...
Waiting for machine to be running, this may take a few minutes...
Detecting operating system of created instance...
Waiting for SSH to be available...
Detecting the provisioner...
Provisioning with boot2docker...
Copying certs to the local machine directory...
Copying certs to the remote machine...
Setting Docker configuration on the remote daemon...
Configuring swarm...
Checking connection to Docker...
Docker is up and running!
To see how to connect your Docker Client to the Docker Engine running on this virtual machine, run: docker-machine env swarm-manager

And then 2 additional Swarm nodes, which will run our containers:

WAUTERW-M-G007:Downloads wauterw$ docker-machine create -d virtualbox --swarm --swarm-discovery token://b270205b144c3f1d96c39a6a6089791b swarm-node01
Running pre-create checks...
Creating machine...
(swarm-node01) Copying /Users/wauterw/.docker/machine/cache/boot2docker.iso to /Users/wauterw/.docker/machine/machines/swarm-node01/boot2docker.iso...
(swarm-node01) Creating VirtualBox VM...
(swarm-node01) Creating SSH key...
(swarm-node01) Starting the VM...
(swarm-node01) Check network to re-create if needed...
(swarm-node01) Waiting for an IP...
Waiting for machine to be running, this may take a few minutes...
Detecting operating system of created instance...
Waiting for SSH to be available...
Detecting the provisioner...
Provisioning with boot2docker...
Copying certs to the local machine directory...
Copying certs to the remote machine...
Setting Docker configuration on the remote daemon...
Configuring swarm...
Checking connection to Docker...
Docker is up and running!
To see how to connect your Docker Client to the Docker Engine running on this virtual machine, run: docker-machine env swarm-node01
WAUTERW-M-G007:Downloads wauterw$ docker-machine create -d virtualbox --swarm --swarm-discovery token://b270205b144c3f1d96c39a6a6089791b swarm-node02
Running pre-create checks...
Creating machine...
(swarm-node02) Copying /Users/wauterw/.docker/machine/cache/boot2docker.iso to /Users/wauterw/.docker/machine/machines/swarm-node02/boot2docker.iso...
(swarm-node02) Creating VirtualBox VM...
(swarm-node02) Creating SSH key...
(swarm-node02) Starting the VM...
(swarm-node02) Check network to re-create if needed...
(swarm-node02) Waiting for an IP...
Waiting for machine to be running, this may take a few minutes...
Detecting operating system of created instance...
Waiting for SSH to be available...
Detecting the provisioner...
Provisioning with boot2docker...
Copying certs to the local machine directory...
Copying certs to the remote machine...
Setting Docker configuration on the remote daemon...
Configuring swarm...
Checking connection to Docker...
Docker is up and running!
To see how to connect your Docker Client to the Docker Engine running on this virtual machine, run: docker-machine env swarm-node02

Opening VirtualBox will show something like the screenshot below:

[Screenshot: Vbox1]

Connect to Swarm cluster

WAUTERW-M-G007:Downloads wauterw$ docker-machine env --swarm swarm-manager
export DOCKER_TLS_VERIFY="1"
export DOCKER_HOST="tcp://192.168.99.107:3376"
export DOCKER_CERT_PATH="/Users/wauterw/.docker/machine/machines/swarm-manager"
export DOCKER_MACHINE_NAME="swarm-manager"
# Run this command to configure your shell:
# eval $(docker-machine env --swarm swarm-manager)

and point our terminal again to the Swarm manager node:

WAUTERW-M-G007:Downloads wauterw$ eval "$(docker-machine env --swarm swarm-manager)"

We can then retrieve information from the Swarm Manager (swarm-manager) as follows:

WAUTERW-M-G007:Downloads wauterw$ docker info
Containers: 4
 Running: 4
 Paused: 0
 Stopped: 0
Images: 3
Server Version: swarm/1.2.0
Role: primary
Strategy: spread
Filters: health, port, dependency, affinity, constraint
Nodes: 3
 swarm-manager: 192.168.99.107:2376
  └ Status: Healthy
  └ Containers: 2
  └ Reserved CPUs: 0 / 1
  └ Reserved Memory: 0 B / 1.021 GiB
  └ Labels: executiondriver=, kernelversion=4.1.19-boot2docker, operatingsystem=Boot2Docker 1.11.0 (TCL 7.0); HEAD : 32ee7e9 - Wed Apr 13 20:06:49 UTC 2016, provider=virtualbox, storagedriver=aufs
  └ Error: (none)
  └ UpdatedAt: 2016-04-15T09:20:14Z
  └ ServerVersion: 1.11.0
 swarm-node01: 192.168.99.108:2376
  └ Status: Healthy
  └ Containers: 1
  └ Reserved CPUs: 0 / 1
  └ Reserved Memory: 0 B / 1.021 GiB
  └ Labels: executiondriver=, kernelversion=4.1.19-boot2docker, operatingsystem=Boot2Docker 1.11.0 (TCL 7.0); HEAD : 32ee7e9 - Wed Apr 13 20:06:49 UTC 2016, provider=virtualbox, storagedriver=aufs
  └ Error: (none)
  └ UpdatedAt: 2016-04-15T09:20:25Z
  └ ServerVersion: 1.11.0
 swarm-node02: 192.168.99.109:2376
  └ Status: Healthy
  └ Containers: 1
  └ Reserved CPUs: 0 / 1
  └ Reserved Memory: 0 B / 1.021 GiB
  └ Labels: executiondriver=, kernelversion=4.1.19-boot2docker, operatingsystem=Boot2Docker 1.11.0 (TCL 7.0); HEAD : 32ee7e9 - Wed Apr 13 20:06:49 UTC 2016, provider=virtualbox, storagedriver=aufs
  └ Error: (none)
  └ UpdatedAt: 2016-04-15T09:20:41Z
  └ ServerVersion: 1.11.0
Plugins:
 Volume:
 Network:
Kernel Version: 4.1.19-boot2docker
Operating System: linux
Architecture: amd64
CPUs: 3
Total Memory: 3.064 GiB
Name: b82aca867319
Docker Root Dir:
Debug mode (client): false
Debug mode (server): false
WARNING: No kernel memory limit support

Here we can see that 3 nodes are part of the Swarm cluster: swarm-manager, swarm-node01 and swarm-node02. The swarm-manager itself runs 2 containers, while the Swarm nodes each run 1 container. I was wondering what these containers actually were, so I ran the following command.

WAUTERW-M-G007:Downloads wauterw$ docker ps -a
CONTAINER ID        IMAGE               COMMAND                  CREATED             STATUS              PORTS                                     NAMES
c65805511449        swarm:latest        "/swarm join --advert"   16 minutes ago      Up 16 minutes       2375/tcp                                  swarm-node02/swarm-agent
a293eeeda6d7        swarm:latest        "/swarm join --advert"   17 minutes ago      Up 17 minutes       2375/tcp                                  swarm-node01/swarm-agent
5b50f145fe2d        swarm:latest        "/swarm join --advert"   19 minutes ago      Up 19 minutes       2375/tcp                                  swarm-manager/swarm-agent
b82aca867319        swarm:latest        "/swarm manage --tlsv"   19 minutes ago      Up 19 minutes       2375/tcp, 192.168.99.107:3376->3376/tcp   swarm-manager/swarm-agent-master
WAUTERW-M-G007:Downloads wauterw$

Here we can see a ‘swarm-agent’ and a ‘swarm-agent-master’ container running on the swarm-manager. Each node indeed runs one agent, called ‘swarm-agent’.

Let’s dig a bit deeper into swarm-node01, just out of curiosity.

WAUTERW-M-G007:Downloads wauterw$ eval $(docker-machine env swarm-node01)
WAUTERW-M-G007:Downloads wauterw$ docker ps
CONTAINER ID        IMAGE               COMMAND                  CREATED             STATUS              PORTS               NAMES
a293eeeda6d7        swarm:latest        "/swarm join --advert"   21 minutes ago      Up 21 minutes       2375/tcp            swarm-agent

Again, we see an active container running on swarm-node01. I just wanted to show that you can also check the individual nodes, not only the swarm-manager.

In a follow-up post, we will run some additional containers on the Swarm cluster.

Docker: Docker Machine and DigitalOcean

Introduction

In this post we started off with Docker Machine and VirtualBox. Then we moved on to something more complex and launched a Docker host on AWS in this post. Just for fun, I wanted to try this on DigitalOcean as well and documented it here. Rather straightforward, as you will notice soon.

Getting all info

I’m assuming you already have a DigitalOcean account. You will need to create a token on DigitalOcean; the procedure is very well explained here.

Using Docker Machine

Creating a Docker host on DigitalOcean is very straightforward. See below the command to achieve this:

WAUTERW-M-G007:Downloads wauterw$ docker-machine create --driver digitalocean --digitalocean-access-token xxxxxx778aa584xxxxx docker-1-digitalocean

[Screenshot: do1]
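If you would rather keep the token out of your shell history, the DigitalOcean driver can (to the best of my knowledge) also read it from an environment variable:

WAUTERW-M-G007:Downloads wauterw$ export DIGITALOCEAN_ACCESS_TOKEN=xxxxxx778aa584xxxxx
WAUTERW-M-G007:Downloads wauterw$ docker-machine create --driver digitalocean docker-1-digitalocean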

We can then launch containers and so on on this newly provisioned DigitalOcean host; we refer to the previous tutorials on Docker Machine for how to do this exactly.

Docker: More on Docker Machine

Introduction

In the previous post, I experimented a bit with Docker Machine and used it to create a Docker-enabled host on AWS. For this post, I will create a number of other hosts on AWS and experiment a little with them (running containers on them, …)

WAUTERW-M-G007:~ wauterw$ docker-machine create --driver amazonec2 --amazonec2-vpc-id vpc-93c6ddf6 --amazonec2-region eu-west-1 docker-host-A
Running pre-create checks...
Creating machine...
(docker-host-A) Launching instance...
Waiting for machine to be running, this may take a few minutes...
Detecting operating system of created instance...
Waiting for SSH to be available...
Detecting the provisioner...
Provisioning with ubuntu(systemd)...
Installing Docker...
Copying certs to the local machine directory...
Copying certs to the remote machine...
Setting Docker configuration on the remote daemon...
Checking connection to Docker...
Docker is up and running!
To see how to connect your Docker Client to the Docker Engine running on this virtual machine, run: docker-machine env docker-host-A

And another one:

WAUTERW-M-G007:~ wauterw$ docker-machine create --driver amazonec2 --amazonec2-vpc-id vpc-93c6ddf6 --amazonec2-region eu-west-1 docker-host-B
Running pre-create checks...
Creating machine...
(docker-host-B) Launching instance...
Waiting for machine to be running, this may take a few minutes...
Detecting operating system of created instance...
Waiting for SSH to be available...
Detecting the provisioner...
Provisioning with ubuntu(systemd)...
Installing Docker...
Copying certs to the local machine directory...
Copying certs to the remote machine...
Setting Docker configuration on the remote daemon...
Checking connection to Docker...
Docker is up and running!
To see how to connect your Docker Client to the Docker Engine running on this virtual machine, run: docker-machine env docker-host-B

This results in two EC2 machines running on AWS, one called docker-host-A and the other docker-host-B.
[Screenshot: docker-machine3]
In order to connect our terminal to the proper instance, we need to do the following:

WAUTERW-M-G007:~ wauterw$ docker-machine env docker-host-A
export DOCKER_TLS_VERIFY="1"
export DOCKER_HOST="tcp://54.229.167.255:2376"
export DOCKER_CERT_PATH="/Users/wauterw/.docker/machine/machines/docker-host-A"
export DOCKER_MACHINE_NAME="docker-host-A"
# Run this command to configure your shell:
# eval $(docker-machine env docker-host-A)

To get an overview of what is currently known to docker-machine, do the following:

WAUTERW-M-G007:~ wauterw$ docker-machine ls
NAME            ACTIVE   DRIVER       STATE     URL                         SWARM   DOCKER    ERRORS
default         -        virtualbox   Running   tcp://192.168.99.100:2376           v1.11.0
docker-host-A   *        amazonec2    Running   tcp://54.229.167.255:2376           v1.11.0
docker-host-B   -        amazonec2    Running   tcp://54.194.110.58:2376            v1.11.0

Then we build a simple Express application again. See below for the files in case you want to follow along:

index.js

WAUTERW-M-G007:container1 wauterw$ cat index.js
var express = require('express')
var app = express()

app.get('/', function (req, res) {
  res.send('Container 1 on host docker-host-A')
})

var server = app.listen(3001, function () {

  var host = server.address().address
  var port = server.address().port

  console.log('Application listening at http://%s:%s', host, port)

})

package.json

WAUTERW-M-G007:container1 wauterw$ cat package.json
{
  "name": "docker-express-container1",
  "private": true,
  "version": "0.0.1",
  "description": "Express application displaying some string",
  "author": "Wim Wauters ",
  "dependencies": {
    "express": "3.2.4"
  }
}

Dockerfile

FROM ubuntu:14.04

# Install Node.js and build tools via apt
RUN apt-get update
RUN apt-get -y install build-essential
RUN apt-get -y install nodejs
RUN apt-get -y install npm
RUN apt-get -y install git
RUN apt-get -y install git-core

# Create app directory
RUN mkdir -p /usr/src/app
WORKDIR /usr/src/app

# Install app dependencies
COPY package.json /usr/src/app/
RUN npm install

# Bundle app source
COPY . /usr/src/app

EXPOSE  3001
CMD ["nodejs", "/usr/src/app/index.js"]

Then as usual, build and run the container:

WAUTERW-M-G007:container1 wauterw$ docker build -t express-container1 .
WAUTERW-M-G007:container1 wauterw$ docker run -it -p 3001:3001 express-container1

Then switch to the second EC2 host:

WAUTERW-M-G007:container1 wauterw$ eval $(docker-machine env docker-host-B)

and do the same (change the string in index.js to reflect the second host).

WAUTERW-M-G007:container2 wauterw$ docker build -t express-container2 .
WAUTERW-M-G007:container2 wauterw$ docker run -it -p 3002:3001 express-container2

When you now go to the EC2 IP address of the respective host, you will find that the first container (express-container1) is running on the first host, while the second one (express-container2) is running on the second host, because we switched the terminal in between.

On the first host, you can see

WAUTERW-M-G007:container2 wauterw$ docker ps
CONTAINER ID        IMAGE                COMMAND                  CREATED              STATUS              PORTS                    NAMES
27e35c2b11e5        express-container1   "nodejs /usr/src/app/"   About a minute ago   Up About a minute   0.0.0.0:3001->3001/tcp   dreamy_nobel
WAUTERW-M-G007:container2 wauterw$ docker images
REPOSITORY           TAG                 IMAGE ID            CREATED             SIZE
express-container1   latest              e123cae1c8de        17 minutes ago      408.6 MB

and on the second host you can see:

WAUTERW-M-G007:container2 wauterw$ eval $(docker-machine env docker-host-B)
WAUTERW-M-G007:container2 wauterw$ docker ps
CONTAINER ID        IMAGE                COMMAND                  CREATED             STATUS              PORTS                    NAMES
e8e8b85ad75c        express-container2   "nodejs /usr/src/app/"   3 minutes ago       Up 3 minutes        0.0.0.0:3002->3001/tcp   reverent_heyrovsky
WAUTERW-M-G007:container2 wauterw$ docker images
REPOSITORY           TAG                 IMAGE ID            CREATED             SIZE
express-container2   latest              615dd62190a0        7 minutes ago       408.6 MB
ubuntu               14.04               b72889fa879c        18 hours ago        188 MB

So you can see that everything worked out nicely. Some screenshots below:
For container 1 on host A:
[Screenshot: con1-docA]
For container 2 on host B:
[Screenshot: con1-docB]

You can also SSH into the EC2 hosts. See below an example using the first host:

WAUTERW-M-G007:container2 wauterw$ docker-machine ssh docker-host-A
Welcome to Ubuntu 15.10 (GNU/Linux 4.2.0-18-generic x86_64)

 * Documentation:  https://help.ubuntu.com/

  Get cloud support with Ubuntu Advantage Cloud Guest:
    http://www.ubuntu.com/business/services/cloud


*** System restart required ***
Last login: Thu Apr 14 14:28:05 2016 from 173.38.220.51
ubuntu@docker-host-A:~$

You probably did not learn that many new things here; I have to admit this post was mainly for me to get acquainted with everything we learned so far and to practise a little.

Docker: remove all containers and images

This post is just a reminder of how to delete all containers and images. While experimenting with Docker I continuously need these commands, and instead of always Googling them I might be better off just writing a small post.

Remove all containers

root@ubuntu-demo:/home/cloud-user# docker rm `docker ps -qa`
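Note that docker rm refuses to delete running containers. A variant that force-removes everything, running or not (use with the same care):

root@ubuntu-demo:/home/cloud-user# docker rm -f $(docker ps -qa)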

Remove all images

root@ubuntu-demo:/home/cloud-user# docker rmi $(docker images -q)

Remove all ghost machines from Docker Machine

Sometimes the creation of a host fails and leaves a ghost entry in the “docker-machine ls” output. The command below will completely remove all hosts, so be careful when using it.

root@ubuntu-demo:/home/cloud-user# docker-machine rm -f $(docker-machine ls -q)

Docker: Getting started with Docker Machine (AWS)

Introduction

In this post, we experimented a bit with Docker Machine and VirtualBox and were able to successfully launch a Docker host there. It would be interesting to try this now on AWS as well. Note that for this post, I’m assuming you already have an AWS account.

Getting all info from AWS

You will need to gather some information from AWS:

  • your AWS Access Key ID
  • your AWS Secret Access Key
  • your region in which you want to launch your instance
  • your VPC id for that region

Getting AWS Access Key and Secret Access Key

In your AWS console, go to ‘Identity & Access Management’, then either create a user or click on an existing one (I’m assuming the latter). When you have selected the user, go to ‘User Actions’ and then ‘Manage Access Keys’. Create your security credentials and download them (they will only be displayed once).

Then, go to your local machine (in my case a Mac) and create a file ~/.aws/credentials with the following content:

   
[default]
aws_access_key_id = **access_key**
aws_secret_access_key = **secret_key**

Of course, change the placeholders with the value of your own credentials.
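Alternatively, the amazonec2 driver also honors the standard AWS environment variables, so the following should work as well:

WAUTERW-M-G007:~ wauterw$ export AWS_ACCESS_KEY_ID=**access_key**
WAUTERW-M-G007:~ wauterw$ export AWS_SECRET_ACCESS_KEY=**secret_key**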

Getting your AWS region

By default, the AWS driver creates new instances in region us-east-1 (North Virginia). As I live in Europe, I prefer something closer. You can specify a different region using the --amazonec2-region flag. For that, you will need to know the official name of your region; the easiest way is to go here and check under ‘Available Regions’.

Getting your AWS VPC ID

AWS creates your EC2 instances (by default) in a default VPC, so you will also need its ID. To find it, go to your region (in my case Ireland, eu-west-1), open the VPC dashboard, click on the VPC and take note of the VPC-ID. Again, you will need this one later on.
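
Here as well, the AWS CLI can save you a trip to the console. A sketch that prints the ID of the default VPC for the Ireland region (again assuming a configured AWS CLI):

aws ec2 describe-vpcs --region eu-west-1 \
    --filters Name=isDefault,Values=true \
    --query 'Vpcs[].VpcId' --output text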

Using Docker Machine

We did quite some preparation work; the time has now come to get started with the docker-machine command. We will create a Docker-ready EC2 instance in the Ireland region. Do this as follows:

WAUTERW-M-G007:~ wauterw$ docker-machine create --driver amazonec2 --amazonec2-vpc-id vpc-93c6ddf6 --amazonec2-region eu-west-1 aws-docker1
Running pre-create checks...
Creating machine...
(aws-docker1) Launching instance...
Waiting for machine to be running, this may take a few minutes...
Detecting operating system of created instance...
Waiting for SSH to be available...
Detecting the provisioner...
Provisioning with ubuntu(systemd)...
Installing Docker...
Copying certs to the local machine directory...
Copying certs to the remote machine...
Setting Docker configuration on the remote daemon...
Checking connection to Docker...
Docker is up and running!
To see how to connect your Docker Client to the Docker Engine running on this virtual machine, run: docker-machine env aws-docker1

Wait a minute or two and you will see an EC2 instance named aws-docker1 spawning on AWS. Let me show a screenshot in case you don’t believe me.
[screenshot: docker-machine1]
The whole process takes about five minutes before the docker-machine command has completely finished installing Docker on the host.
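
By the way, the amazonec2 driver takes quite a few more flags in case the defaults don’t suit you. A sketch with a couple of common ones (instance type and availability zone; the values here are just examples):

docker-machine create --driver amazonec2 \
    --amazonec2-vpc-id vpc-93c6ddf6 \
    --amazonec2-region eu-west-1 \
    --amazonec2-instance-type t2.micro \
    --amazonec2-zone a \
    aws-docker1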

Experimenting with Docker Machine

WAUTERW-M-G007:~ wauterw$ docker-machine ls
NAME          ACTIVE   DRIVER       STATE     URL                         SWARM   DOCKER    ERRORS
aws-docker1   -        amazonec2    Running   tcp://54.229.47.72:2376             v1.11.0
default       -        virtualbox   Running   tcp://192.168.99.100:2376           v1.11.0

Note that creating a new machine does not automatically point your command shell at it; you’ll have to run eval $(docker-machine env aws-docker1). How did I get that one? See below…

WAUTERW-M-G007:~ wauterw$ docker-machine env aws-docker1
export DOCKER_TLS_VERIFY="1"
export DOCKER_HOST="tcp://54.229.47.72:2376"
export DOCKER_CERT_PATH="/Users/wauterw/.docker/machine/machines/aws-docker1"
export DOCKER_MACHINE_NAME="aws-docker1"
# Run this command to configure your shell:
# eval $(docker-machine env aws-docker1)
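
If you ever lose track of which machine your shell is pointing at, there is a handy subcommand for that as well:

# Prints the name of the currently active machine
docker-machine active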

From now on, every docker command you issue will run on the AWS host called ‘aws-docker1’. Let’s try things a bit…

WAUTERW-M-G007:~ wauterw$ docker run hello-world
Unable to find image 'hello-world:latest' locally
latest: Pulling from library/hello-world
03f4658f8b78: Pull complete
a3ed95caeb02: Pull complete
Digest: sha256:8be990ef2aeb16dbcb9271ddfe2610fa6658d13f6dfb8bc72074cc1ca36966a7
Status: Downloaded newer image for hello-world:latest

Hello from Docker.

So we ran a container on our AWS host. Are you sure? Let’s look inside the AWS host. From your local Mac, run the following:

WAUTERW-M-G007:~ wauterw$ docker ps -a
CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS                      PORTS               NAMES
93c035403e68        hello-world         "/hello"            50 seconds ago      Exited (0) 49 seconds ago                       distracted_austin

This clearly refers to our hello-world example.
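
If you want to double-check from the host itself, docker-machine ssh also accepts a command to run remotely (a quick sketch; the sudo may be unnecessary depending on how the ubuntu user was provisioned):

docker-machine ssh aws-docker1 "sudo docker ps -a"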

SSH into the AWS instance

If you look at the AWS console, you will see that the instance aws-docker1 has a keypair called ‘aws-docker1’. The issue is that you cannot download it: if you browse through the keypairs, it’s clear that there is no option to download keypairs that were generated previously. So how do we get into the instance then? Luckily, docker-machine has an ‘ssh’ subcommand that gives us access to the instance.

WAUTERW-M-G007:app wauterw$ docker-machine ssh aws-docker1
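
For what it’s worth, docker-machine keeps the private key it generated on your local machine, so plain ssh should work too. A sketch, assuming the default storage path and the ubuntu login user:

ssh -i ~/.docker/machine/machines/aws-docker1/id_rsa ubuntu@54.229.47.72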

We can also stop the ‘aws-docker1’ host on AWS. To do that, issue the following command:

WAUTERW-M-G007:~ wauterw$ docker-machine stop aws-docker1
Stopping "aws-docker1"...
Machine "aws-docker1" was stopped.

If you then go to your AWS console, you’ll see the instance was stopped.
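
Bringing it back is just as easy. Note that the public IP usually changes after a stop/start (assuming no Elastic IP is attached), so you may need to refresh the certificates and your shell environment afterwards:

docker-machine start aws-docker1
docker-machine regenerate-certs aws-docker1
eval $(docker-machine env aws-docker1)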

Obviously, we’re also able to remove a remote docker host. Do the following:

WAUTERW-M-G007:~ wauterw$ docker-machine rm aws-docker1
About to remove aws-docker1
Are you sure? (y/n): y
Successfully removed aws-docker1

You will then see that the ‘aws-docker1’ host on AWS is in terminated state.

Docker: Docker Machine and Virtualbox

Introduction

I wanted to learn a bit more about Docker Machine. The idea of Docker Machine is to provision Docker hosts on remote systems. The definition on the Docker site is as follows:

Docker Machine is a tool that lets you install Docker Engine on virtual hosts, and manage the hosts with docker-machine commands. You can use Machine to create Docker hosts on your local Mac or Windows box, on your company network, in your data center, or on cloud providers like AWS or Digital Ocean. Using docker-machine commands, you can start, inspect, stop, and restart a managed host, upgrade the Docker client and daemon, and configure a Docker client to talk to your host.

So the idea of this post is to have Docker Machine running on my Mac and use it to launch Docker hosts on Virtualbox. This is actually quite nice. If you remember from all my previous Docker-related posts, I was working on an Ubuntu machine I manually created on Openstack: I had to connect via SSH to this machine, then install and update the Docker engine, and run all commands from that host. If I wanted to provision an additional host, I had to repeat the same process all over again. This is manageable for one or two hosts, but imagine you have thousands of them. It seems like Docker Machine could be a solution to that. In later posts, we will also try cloud provider drivers such as AWS, DigitalOcean and Openstack.

Let’s give it a try with Virtualbox first…

WAUTERW-M-G007:Downloads wauterw$ docker-machine create --driver virtualbox docker-host-virtualbox
Running pre-create checks...
Creating machine...
(docker-host-virtualbox) Copying /Users/wauterw/.docker/machine/cache/boot2docker.iso to /Users/wauterw/.docker/machine/machines/docker-host-virtualbox/boot2docker.iso...
(docker-host-virtualbox) Creating VirtualBox VM...
(docker-host-virtualbox) Creating SSH key...
(docker-host-virtualbox) Starting the VM...
(docker-host-virtualbox) Check network to re-create if needed...
(docker-host-virtualbox) Waiting for an IP...
Waiting for machine to be running, this may take a few minutes...
Detecting operating system of created instance...
Waiting for SSH to be available...
Detecting the provisioner...
Provisioning with boot2docker...
Copying certs to the local machine directory...
Copying certs to the remote machine...
Setting Docker configuration on the remote daemon...
Checking connection to Docker...
Docker is up and running!
To see how to connect your Docker Client to the Docker Engine running on this virtual machine, run: docker-machine env docker-host-virtualbox

Here is the result on Virtualbox:
[screenshot: virtualbox]
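
To quickly verify that the new host is usable, a short sketch along the lines of what we did before:

# Show the IP Virtualbox assigned to the machine
docker-machine ip docker-host-virtualbox
# Point the shell at it and run a test container
eval $(docker-machine env docker-host-virtualbox)
docker run hello-world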

Docker: container linking using Docker Compose

Introduction

In a previous post, we deployed a full web application by linking two containers (application and database) with each other through the docker command line. All went well, but all in all it was not really an optimal solution. Luckily, Docker again comes to the rescue with the ‘docker-compose’ tool. Docker explains it pretty well on their website:

Compose is a tool for defining and running multi-container Docker applications. With Compose, you use a Compose file to configure your application’s services. Then, using a single command, you create and start all the services from your configuration.

For simplicity, we will use the same web application as in the previous post (use git clone to download it to your environment). As a first step, we need to make a small change in the ‘config/database.js’ file. The reason why will be explained a bit further on.

module.exports = {
	url : 'mongodb://mongo-container:27017/todo'
}

to

module.exports = {
	url : 'mongodb://mongo_container:27017/todo'
}

We will re-use the same Dockerfile as in the previous example. We are inserting it here for your convenience:

FROM ubuntu:14.04

# Install Node.js, npm and supporting build tools via apt

RUN apt-get update
RUN apt-get -y install build-essential
RUN apt-get -y install nodejs
RUN apt-get -y install npm
RUN apt-get -y install git
RUN apt-get -y install git-core


# Create app directory
RUN mkdir -p /usr/src/app
WORKDIR /usr/src/app

# Install app dependencies
COPY package.json /usr/src/app/
RUN npm install

# Bundle app source
COPY . /usr/src/app

EXPOSE 3000

CMD ["nodejs", "/usr/src/app/bin/www"]

We can then go ahead and create the docker-compose.yml file as per the documentation on the Docker website. The content of the docker-compose.yml file is below for your reference.

version: '2'
services:
  express_container:
    build: .
    ports:
     - "3000:3000"
    volumes:
     - .:/usr/src/app
    depends_on:
     - mongo_container
  mongo_container:
    image: mongo

This docker-compose file has the following characteristics:

  • two services are created: one called express_container and one called mongo_container.
  • for the express_container service, we bind port 3000 of our host to port 3000 of our container.
  • we also make the express_container service dependent on the mongo_container (we create a link, so to say).
  • the express_container service builds from the Dockerfile in the current directory.
  • it mounts the project directory on the host to the /usr/src/app directory inside the container, allowing us to modify the code without having to rebuild the image.

Note: a lot of people would recommend using environment variables to link the mongo container (database URL) to the application. However, the Docker website mentions the following:

Note: Environment variables are no longer the recommended method for connecting to linked services. Instead, you should use the link name (by default, the name of the linked service) as the hostname to connect to. See the docker-compose.yml documentation for details.

So instead of using environment variables, we will work as per their suggestion. Note that the mongo service is referenced as ‘mongo_container’, and this is the reason why we had to use the same name in the config/database.js file earlier. I was initially using ‘mongo-container’ but noticed some issues with special characters like ‘-’, so I decided to change it. See here for more information.
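
A quick way to convince yourself that the service name really resolves inside the Compose network is to ping it from a one-off container (a sketch, assuming ping is available in the ubuntu:14.04-based image):

docker-compose run --rm express_container ping -c 2 mongo_container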

Once we have the Dockerfile and the docker-compose.yml file, we should be good to go. Let’s start the containers by running the following command:

root@ubuntu-docker-1:/home/cloud-user/Express_Todo_Mongo_API_Jade# docker-compose up
......
......
root@ubuntu-docker-1:/home/cloud-user/Express_Todo_Mongo_API_Jade# docker ps
CONTAINER ID        IMAGE                                       COMMAND                  CREATED             STATUS              PORTS                    NAMES
c2dc51858b3d        expresstodomongoapijade_express_container   "nodejs /usr/src/app/"   2 minutes ago       Up 2 minutes        0.0.0.0:3000->3000/tcp   expresstodomongoapijade_express_container_1
114f367bd23f        mongo                                       "/entrypoint.sh mongo"   2 minutes ago       Up 2 minutes        27017/tcp                expresstodomongoapijade_mongo_container_1
root@ubuntu-docker-1:/home/cloud-user/Express_Todo_Mongo_API_Jade# docker images
REPOSITORY                                  TAG                 IMAGE ID            CREATED             SIZE
expresstodomongoapijade_express_container   latest              93e6e932681d        3 minutes ago       452.8 MB
ubuntu                                      14.04               b72889fa879c        13 hours ago        188 MB
mongo                                       latest              04f2f0daa7a5        9 days ago          309.8 MB

So we now have a full web application running using docker-compose. Thanks for reading and see you later!
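
As a final tip: when you are done experimenting, Compose can also clean up after itself. The stop subcommand merely stops the containers, while down stops and removes the containers and the network that Compose created:

docker-compose stop
docker-compose down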