
Docker networking: Flannel

Introduction

Flannel is a networking technology used to connect containers. Developed and maintained by CoreOS, it was originally designed for Kubernetes. It is a generic, configurable virtual overlay network that can serve as a simple alternative to existing Docker networking solutions.

There are many different ways to network containers, each with its own architectural approach. Docker’s native networking scheme creates a virtual Ethernet bridge (docker0) on each host and automatically forwards packets between the containers attached to it through a subnet routing scheme. Flannel, on the other hand, is a basic overlay network: it assigns a separate subnet to each host, containers receive addresses from their host’s subnet, and packet encapsulation allows the entire span of hosts to be addressed. Flannel uses the open source etcd key/value store to record the mappings between the addresses assigned to containers by their native hosts and their addresses in the overlay network.

In all honesty, I struggled to get Flannel installed on my hosts. For some reason, as yet unknown to me, every attempt to install Flannel on an Ubuntu host simply failed. I logged an issue here, so let’s see what comes out of it.

As I just wanted to get my hands dirty with Flannel, I re-used this post to get a CoreOS cluster up and running easily. In this post, though, I will look more closely at the Flannel specifics.

Looking at the IP configuration

It’s always interesting to inspect the IP configuration. SSH into the CoreOS hosts and check the interfaces:

On the first host:

core@node-01 ~ $ ifconfig
eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 10.0.2.15  netmask 255.255.255.0  broadcast 10.0.2.255
        inet6 fe80::a00:27ff:fe7b:5175  prefixlen 64  scopeid 0x20<link>
        ether 08:00:27:7b:51:75  txqueuelen 1000  (Ethernet)
        RX packets 6897  bytes 8445671 (8.0 MiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 3634  bytes 245544 (239.7 KiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

eth1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 172.17.8.101  netmask 255.255.255.0  broadcast 172.17.8.255
        inet6 fe80::a00:27ff:fe19:656b  prefixlen 64  scopeid 0x20<link>
        ether 08:00:27:19:65:6b  txqueuelen 1000  (Ethernet)
        RX packets 2885  bytes 288770 (282.0 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 2908  bytes 289173 (282.3 KiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

flannel0: flags=4305<UP,POINTOPOINT,RUNNING,NOARP,MULTICAST>  mtu 1472
        inet 10.1.16.0  netmask 255.255.0.0  destination 10.1.16.0
        inet6 fe80::a98a:9a9f:4164:8419  prefixlen 64  scopeid 0x20<link>
        unspec 00-00-00-00-00-00-00-00-00-00-00-00-00-00-00-00  txqueuelen 500  (UNSPEC)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 3  bytes 144 (144.0 B)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
        inet 127.0.0.1  netmask 255.0.0.0
        inet6 ::1  prefixlen 128  scopeid 0x10<host>
        loop  txqueuelen 1  (Local Loopback)
        RX packets 901  bytes 227935 (222.5 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 901  bytes 227935 (222.5 KiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

On the second host:

core@node-02 ~ $ ifconfig
eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 10.0.2.15  netmask 255.255.255.0  broadcast 10.0.2.255
        inet6 fe80::a00:27ff:fe7b:5175  prefixlen 64  scopeid 0x20<link>
        ether 08:00:27:7b:51:75  txqueuelen 1000  (Ethernet)
        RX packets 6887  bytes 8444854 (8.0 MiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 3459  bytes 235545 (230.0 KiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

eth1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 172.17.8.102  netmask 255.255.255.0  broadcast 172.17.8.255
        inet6 fe80::a00:27ff:fe26:5cce  prefixlen 64  scopeid 0x20<link>
        ether 08:00:27:26:5c:ce  txqueuelen 1000  (Ethernet)
        RX packets 3024  bytes 300022 (292.9 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 3028  bytes 302154 (295.0 KiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

flannel0: flags=4305<UP,POINTOPOINT,RUNNING,NOARP,MULTICAST>  mtu 1472
        inet 10.1.12.0  netmask 255.255.0.0  destination 10.1.12.0
        inet6 fe80::1a69:f9ce:1e7:dd97  prefixlen 64  scopeid 0x20<link>
        unspec 00-00-00-00-00-00-00-00-00-00-00-00-00-00-00-00  txqueuelen 500  (UNSPEC)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 3  bytes 144 (144.0 B)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
        inet 127.0.0.1  netmask 255.0.0.0
        inet6 ::1  prefixlen 128  scopeid 0x10<host>
        loop  txqueuelen 1  (Local Loopback)
        RX packets 623  bytes 109571 (107.0 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 623  bytes 109571 (107.0 KiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

You notice immediately that a flannel0 interface is present. This is the Flannel overlay network that was created automatically (via our Vagrantfile). You can also see that containers on node-01 will get IP addresses in the 10.1.16.0/24 network, while containers on node-02 will get IP addresses in the 10.1.12.0/24 network.
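
Another way to see the split is the host routing table: traffic for the whole 10.1.0.0/16 range leaves via flannel0, while the host’s own /24 stays on the local docker0 bridge. A hedged, illustrative output for this setup (your routes may differ slightly):

core@node-01 ~ $ ip route
default via 10.0.2.2 dev eth0
10.1.0.0/16 dev flannel0  proto kernel  scope link  src 10.1.16.0
10.1.16.0/24 dev docker0  proto kernel  scope link  src 10.1.16.1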

So wait, where was that defined? The short answer is that it is defined in the user-data file. You can see that in the flannel section we define the network 10.1.0.0/16:

- name: flanneld.service
  drop-ins:
    - name: 50-network-config.conf
      content: |
        [Service]
        ExecStartPre=/usr/bin/etcdctl set /coreos.com/network/config '{ "Network": "10.1.0.0/16" }'
  command: start

Another way to verify this is to retrieve the information from the etcd key-value store.

core@node-01 ~ $ /usr/bin/etcdctl get /coreos.com/network/config
{ "Network": "10.1.0.0/16" }

By default, Flannel assigns a /24 to each host. It is still unclear to me why it initially selects 10.1.12.0 and 10.1.16.0; presumably Flannel simply leases an arbitrary free subnet from the configured range, but I need to reserve some time to confirm.
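
For completeness, the network config accepts more keys than just Network. A hedged example that would hand out larger per-host subnets and switch from the default udp backend (which is what creates the flannel0 TUN device with its 1472-byte MTU) to vxlan:

{
  "Network": "10.1.0.0/16",
  "SubnetLen": 22,
  "Backend": {
    "Type": "vxlan"
  }
}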

Docker networking: Docker overlay network with Consul

Introduction

I have said it before: container technology is great. As long as you launch containers on a single host, life is easy. Running containers across multiple hosts, however, is a bit more difficult, mainly because of the multi-host networking. Luckily, Docker has improved on this after the acquisition of SocketPlane in 2015.

In this post, we will investigate how multi-host networking with Docker works.

Prepare the Docker hosts

Execute the bash script below. It will create three hosts: one will run the external KV store (Consul), while the other two will be used to run containers.

#!/bin/bash

set -e

# Host 1: runs the Consul key/value store
docker-machine create \
    -d virtualbox \
    consul

# Start a single-node Consul server on it, exposing the HTTP API on 8500
docker $(docker-machine config consul) run -d \
    -p "8500:8500" \
    -h "consul" \
    progrium/consul -server -bootstrap

# Hosts 2 and 3: container hosts whose Docker daemons use Consul as the
# cluster store and advertise themselves on eth1
docker-machine create \
    -d virtualbox \
    --virtualbox-disk-size 50000 \
    --engine-opt="cluster-store=consul://$(docker-machine ip consul):8500" \
    --engine-opt="cluster-advertise=eth1:0" \
    node-01

docker-machine create \
    -d virtualbox \
    --virtualbox-disk-size 50000 \
    --engine-opt="cluster-store=consul://$(docker-machine ip consul):8500" \
    --engine-opt="cluster-advertise=eth1:0" \
    node-02

Note: if you are running Docker 1.12 you could use swarm mode, which no longer requires an external KV store. We will try this later; in this post I wanted to test the network between two standalone hosts.
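
For reference, a minimal sketch of that swarm-mode alternative (assuming Docker 1.12+; the manager IP and join token placeholders come from the init output):

# On the future manager node:
docker swarm init --advertise-addr <MANAGER-IP>

# On each additional host, using the token printed by "swarm init":
docker swarm join --token <TOKEN> <MANAGER-IP>:2377

# Overlay networks can then be created without any external KV store:
docker network create -d overlay mynet

Note that in Docker 1.12 such a swarm-scoped overlay is consumed by services (docker service create --network mynet ...) rather than standalone containers.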

When the script is finished, you will see three hosts running:

WAUTERW-M-G007:docker wauterw$ docker-machine ls
NAME      ACTIVE   DRIVER       STATE     URL                         SWARM   DOCKER    ERRORS
consul    -        virtualbox   Running   tcp://192.168.99.105:2376           v1.12.3
node-01   -        virtualbox   Running   tcp://192.168.99.106:2376           v1.12.3
node-02   -        virtualbox   Running   tcp://192.168.99.107:2376           v1.12.3

The IP addresses for the nodes are 192.168.99.106 and 192.168.99.107 respectively.

Let’s have a look at the various networks on these hosts.

WAUTERW-M-G007:docker wauterw$ docker network ls
NETWORK ID          NAME                DRIVER              SCOPE
4a4108c4ee92        bridge              bridge              local
1dfa158a5f0d        host                host                local
a3c1bf6b2b1b        none                null                local
WAUTERW-M-G007:docker wauterw$ eval $(docker-machine env node-02)
WAUTERW-M-G007:~ wauterw$ docker network ls
NETWORK ID          NAME                DRIVER              SCOPE
86ed80f7033c        bridge              bridge              local
c8e590484c9b        host                host                local
a2ee4efd1120        none                null                local

Historically, these three networks are part of Docker’s implementation. When you run a container, you can use the --net flag (--network in newer releases) to specify which network to attach it to; see the example after the list below. So, as expected, we see three networks:

  • bridge: The bridge network represents the docker0 network present in all Docker installations. The Docker daemon connects containers to this network by default.
  • host: The host network adds a container to the host’s network stack. You’ll find that the network configuration inside the container is identical to the host’s.
  • none: The none network adds a container to a container-specific network stack. That container lacks a network interface.
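
As a quick, hedged illustration (busybox is used here just as a throwaway image), a container on the host network sees the host’s own interfaces:

$ docker run --rm --net=host busybox ifconfig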

Let’s have a look at the bridge network. From the command output below, you can see that Docker created this network with subnet 172.17.0.0/16. The same network is also present on the second host, node-02.

WAUTERW-M-G007:docker wauterw$ eval $(docker-machine env node-01)
WAUTERW-M-G007:docker wauterw$ docker network inspect bridge
[
    {
        "Name": "bridge",
        "Id": "4a4108c4ee925389c4e64024ab4ebeb641418f433f9f30406a73246db9c1e12d",
        "Scope": "local",
        "Driver": "bridge",
        "EnableIPv6": false,
        "IPAM": {
            "Driver": "default",
            "Options": null,
            "Config": [
                {
                    "Subnet": "172.17.0.0/16"
                }
            ]
        },
        "Internal": false,
        "Containers": {},
        "Options": {
            "com.docker.network.bridge.default_bridge": "true",
            "com.docker.network.bridge.enable_icc": "true",
            "com.docker.network.bridge.enable_ip_masquerade": "true",
            "com.docker.network.bridge.host_binding_ipv4": "0.0.0.0",
            "com.docker.network.bridge.name": "docker0",
            "com.docker.network.driver.mtu": "1500"
        },
        "Labels": {}
    }
]
WAUTERW-M-G007:docker wauterw$ eval $(docker-machine env node-02)
WAUTERW-M-G007:~ wauterw$ docker network inspect bridge
[
    {
        "Name": "bridge",
        "Id": "86ed80f7033c7d64fd7d49c0072ab7c04e733d02204781bbc94c30387347e233",
        "Scope": "local",
        "Driver": "bridge",
        "EnableIPv6": false,
        "IPAM": {
            "Driver": "default",
            "Options": null,
            "Config": [
                {
                    "Subnet": "172.17.0.0/16",
                    "Gateway": "172.17.0.1"
                }
            ]
        },
        "Internal": false,
        "Containers": {},
        "Options": {
            "com.docker.network.bridge.default_bridge": "true",
            "com.docker.network.bridge.enable_icc": "true",
            "com.docker.network.bridge.enable_ip_masquerade": "true",
            "com.docker.network.bridge.host_binding_ipv4": "0.0.0.0",
            "com.docker.network.bridge.name": "docker0",
            "com.docker.network.driver.mtu": "1500"
        },
        "Labels": {}
    }
]

Notice that our hosts indeed have a docker0 interface, as well as eth0 and eth1 interfaces:

node01

node02

Create an overlay network

WAUTERW-M-G007:docker wauterw$ eval $(docker-machine env node-01)
WAUTERW-M-G007:docker wauterw$ docker network create -d overlay mynet
fb669e6d67075afcc89c6cd5cab6503d2b5496abf010e129dc5a0fa13d9c95dd
WAUTERW-M-G007:docker wauterw$ docker network ls
NETWORK ID          NAME                DRIVER              SCOPE
4a4108c4ee92        bridge              bridge              local
1dfa158a5f0d        host                host                local
fb669e6d6707        mynet               overlay             global
a3c1bf6b2b1b        none                null                local
WAUTERW-M-G007:~ wauterw$ eval $(docker-machine env node-02)
WAUTERW-M-G007:~ wauterw$ docker network ls
NETWORK ID          NAME                DRIVER              SCOPE
86ed80f7033c        bridge              bridge              local
c8e590484c9b        host                host                local
fb669e6d6707        mynet               overlay             global
a2ee4efd1120        none                null                local

Note that the overlay is added on both hosts immediately with the same ID. Next, let’s inspect the mynet network in more detail:

WAUTERW-M-G007:docker wauterw$ docker network inspect mynet
[
    {
        "Name": "mynet",
        "Id": "fb669e6d67075afcc89c6cd5cab6503d2b5496abf010e129dc5a0fa13d9c95dd",
        "Scope": "global",
        "Driver": "overlay",
        "EnableIPv6": false,
        "IPAM": {
            "Driver": "default",
            "Options": {},
            "Config": [
                {
                    "Subnet": "10.0.0.0/24",
                    "Gateway": "10.0.0.1/24"
                }
            ]
        },
        "Internal": false,
        "Containers": {},
        "Options": {},
        "Labels": {}
    }
]

The mynet overlay network has been given the subnet 10.0.0.0/24.
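
That range was chosen by Docker’s default IPAM driver. If you need control over the addressing, the subnet can be passed explicitly at creation time; a hedged sketch with a hypothetical network name:

$ docker network create -d overlay --subnet=10.2.0.0/24 mynet2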

Launch containers without overlay

In this section, we will first create some containers without specifying which network they should join. By default, Docker attaches them to the bridge network.

WAUTERW-M-G007:docker wauterw$ eval $(docker-machine env node-01)
WAUTERW-M-G007:docker wauterw$ docker run -itd --name container-01 ubuntu:14.04
Unable to find image 'ubuntu:14.04' locally
14.04: Pulling from library/ubuntu

ba76e97bb96c: Pull complete
4d6181e6b423: Pull complete
4854897be9ac: Pull complete
4458f3097eef: Pull complete
9989a8de1a9e: Pull complete
Digest: sha256:062bba17f92e749bd3092e7569aa06c6773ade7df603958026f2f5397431754c
Status: Downloaded newer image for ubuntu:14.04
ef490a761a728e029ea71d191b81d521ca36e18318341260c7e8609f8ef70062
WAUTERW-M-G007:~ wauterw$ eval $(docker-machine env node-02)
WAUTERW-M-G007:~ wauterw$ docker run -itd --name container-02 ubuntu:14.04
Unable to find image 'ubuntu:14.04' locally
14.04: Pulling from library/ubuntu

ba76e97bb96c: Pull complete
4d6181e6b423: Pull complete
4854897be9ac: Pull complete
4458f3097eef: Pull complete
9989a8de1a9e: Pull complete
Digest: sha256:062bba17f92e749bd3092e7569aa06c6773ade7df603958026f2f5397431754c
Status: Downloaded newer image for ubuntu:14.04
1a3502101e49b7569dff1d519774705adf0a9992f5645d56617eb0c26aec0a71

As mentioned, these containers use the bridge network. Let’s verify this:

WAUTERW-M-G007:docker wauterw$ eval $(docker-machine env node-01)
WAUTERW-M-G007:docker wauterw$ docker ps
CONTAINER ID        IMAGE               COMMAND             CREATED              STATUS              PORTS               NAMES
ef490a761a72        ubuntu:14.04        "/bin/bash"         About a minute ago   Up About a minute                       container-01
WAUTERW-M-G007:docker wauterw$ docker network inspect bridge
[
    {
        "Name": "bridge",
        "Id": "4a4108c4ee925389c4e64024ab4ebeb641418f433f9f30406a73246db9c1e12d",
        "Scope": "local",
        "Driver": "bridge",
        "EnableIPv6": false,
        "IPAM": {
            "Driver": "default",
            "Options": null,
            "Config": [
                {
                    "Subnet": "172.17.0.0/16"
                }
            ]
        },
        "Internal": false,
        "Containers": {
            "ef490a761a728e029ea71d191b81d521ca36e18318341260c7e8609f8ef70062": {
                "Name": "container-01",
                "EndpointID": "20cdec7139c6088dc401fdd4f04700656ace85fa8ff6ee13b4e9f561788f2612",
                "MacAddress": "02:42:ac:11:00:02",
                "IPv4Address": "172.17.0.2/16",
                "IPv6Address": ""
            }
        },
        "Options": {
            "com.docker.network.bridge.default_bridge": "true",
            "com.docker.network.bridge.enable_icc": "true",
            "com.docker.network.bridge.enable_ip_masquerade": "true",
            "com.docker.network.bridge.host_binding_ipv4": "0.0.0.0",
            "com.docker.network.bridge.name": "docker0",
            "com.docker.network.driver.mtu": "1500"
        },
        "Labels": {}
    }
]
WAUTERW-M-G007:~ wauterw$ eval $(docker-machine env node-02)
WAUTERW-M-G007:~ wauterw$ docker ps
CONTAINER ID        IMAGE               COMMAND             CREATED              STATUS              PORTS               NAMES
1a3502101e49        ubuntu:14.04        "/bin/bash"         About a minute ago   Up About a minute                       container-02
WAUTERW-M-G007:~ wauterw$ docker network inspect bridge
[
    {
        "Name": "bridge",
        "Id": "86ed80f7033c7d64fd7d49c0072ab7c04e733d02204781bbc94c30387347e233",
        "Scope": "local",
        "Driver": "bridge",
        "EnableIPv6": false,
        "IPAM": {
            "Driver": "default",
            "Options": null,
            "Config": [
                {
                    "Subnet": "172.17.0.0/16",
                    "Gateway": "172.17.0.1"
                }
            ]
        },
        "Internal": false,
        "Containers": {
            "1a3502101e49b7569dff1d519774705adf0a9992f5645d56617eb0c26aec0a71": {
                "Name": "container-02",
                "EndpointID": "d7f2858a1db6c9826ff817e3de9dddeb99ccbb2feebcdcdc49614c8a5abc061d",
                "MacAddress": "02:42:ac:11:00:02",
                "IPv4Address": "172.17.0.2/16",
                "IPv6Address": ""
            }
        },
        "Options": {
            "com.docker.network.bridge.default_bridge": "true",
            "com.docker.network.bridge.enable_icc": "true",
            "com.docker.network.bridge.enable_ip_masquerade": "true",
            "com.docker.network.bridge.host_binding_ipv4": "0.0.0.0",
            "com.docker.network.bridge.name": "docker0",
            "com.docker.network.driver.mtu": "1500"
        },
        "Labels": {}
    }
]

From the above command output, you can see that container-01 (ID ef490a761a72) and container-02 (ID 1a3502101e49) each belong to their host’s bridge network.

Ping between containers without overlay

Let’s find out the IP addresses of container-01 (running on node-01) and container-02 (running on node-02). As each is attached to its host’s local bridge network, we can expect them to get an IP address in subnet 172.17.0.0/16.

WAUTERW-M-G007:docker wauterw$ docker exec container-01 ifconfig
eth0      Link encap:Ethernet  HWaddr 02:42:ac:11:00:02
          inet addr:172.17.0.2  Bcast:0.0.0.0  Mask:255.255.0.0
          inet6 addr: fe80::42:acff:fe11:2/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:16 errors:0 dropped:0 overruns:0 frame:0
          TX packets:8 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:1296 (1.2 KB)  TX bytes:648 (648.0 B)
WAUTERW-M-G007:~ wauterw$ docker exec container-02 ifconfig
eth0      Link encap:Ethernet  HWaddr 02:42:ac:11:00:02
          inet addr:172.17.0.2  Bcast:0.0.0.0  Mask:255.255.0.0
          inet6 addr: fe80::42:acff:fe11:2/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:16 errors:0 dropped:0 overruns:0 frame:0
          TX packets:8 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:1296 (1.2 KB)  TX bytes:648 (648.0 B)

As a matter of fact, they both received IP address 172.17.0.2. If we executed a ping from container-01 to 172.17.0.2, we would simply get a reply from container-01 itself, which would not prove much. So let’s try something else instead.

We will launch an additional container, called container-03, using the nginx image; this simply exposes a web server. We will then launch another container, container-04, on node-02, which will run the wget command to retrieve the default nginx page.

WAUTERW-M-G007:docker wauterw$ docker run -itd --name=container-03 nginx
Unable to find image 'nginx:latest' locally
latest: Pulling from library/nginx
386a066cd84a: Pull complete
7bdb4b002d7f: Pull complete
49b006ddea70: Pull complete
Digest: sha256:9038d5645fa5fcca445d12e1b8979c87f46ca42cfb17beb1e5e093785991a639
Status: Downloaded newer image for nginx:latest
b08ee52b4baa874f5b6f8ed4667f48222c8a6b2d69fe42124b96f88ad93a6656
WAUTERW-M-G007:~ wauterw$ docker run -it --name=container-04 --rm busybox wget -qO- http://container-03
Unable to find image 'busybox:latest' locally
latest: Pulling from library/busybox
56bec22e3559: Pull complete
Digest: sha256:29f5d56d12684887bdfa50dcd29fc31eea4aaf4ad3bec43daf19026a7ce69912
Status: Downloaded newer image for busybox:latest
wget: bad address 'container-03'

You can see that we cannot reach container-03 (running on node-01) from container-04 (running on node-02): the default bridge network is local to each host, so there is no cross-host connectivity or name resolution. The overlay network we created earlier will fix this.

Launch containers with overlay

Next, we will launch two additional containers, one on each host. We will pass the --net option to connect them to the overlay network we defined earlier.

WAUTERW-M-G007:docker wauterw$ eval $(docker-machine env node-01)
WAUTERW-M-G007:docker wauterw$ docker run -itd --name container-05 --net=mynet ubuntu:14.04
e894cf873c10fe553b59e6c8c9fced0d0090f0e7090b143d4ec181c3ba71e451
WAUTERW-M-G007:~ wauterw$ eval $(docker-machine env node-02)
WAUTERW-M-G007:~ wauterw$ docker run -itd --name container-06 --net=mynet ubuntu:14.04
8d03466e77ee5045ad04fbc3947b5e2eda7711984c79aa168d9d5b4f59cdb20a

You will also see that Docker has now created an additional network called docker_gwbridge. While the mynet network is of type overlay, docker_gwbridge is of type bridge: the overlay carries the container-to-container traffic, while docker_gwbridge gives overlay-attached containers a default gateway for external connectivity.

WAUTERW-M-G007:docker wauterw$ eval $(docker-machine env node-01)
WAUTERW-M-G007:docker wauterw$ docker network ls
NETWORK ID          NAME                DRIVER              SCOPE
4a4108c4ee92        bridge              bridge              local
6c1b131717d6        docker_gwbridge     bridge              local
1dfa158a5f0d        host                host                local
fb669e6d6707        mynet               overlay             global
a3c1bf6b2b1b        none                null                local
WAUTERW-M-G007:docker wauterw$ eval $(docker-machine env node-02)
WAUTERW-M-G007:~ wauterw$ docker network ls
NETWORK ID          NAME                DRIVER              SCOPE
86ed80f7033c        bridge              bridge              local
7b94d4e06714        docker_gwbridge     bridge              local
c8e590484c9b        host                host                local
fb669e6d6707        mynet               overlay             global
a2ee4efd1120        none                null                local
WAUTERW-M-G007:~ wauterw$ eval $(docker-machine env node-01)
WAUTERW-M-G007:docker wauterw$ docker network inspect docker_gwbridge
[
    {
        "Name": "docker_gwbridge",
        "Id": "6c1b131717d69cfc3b34c2b390cef71f728d3e56365374221f89548e13139b85",
        "Scope": "local",
        "Driver": "bridge",
        "EnableIPv6": false,
        "IPAM": {
            "Driver": "default",
            "Options": null,
            "Config": [
                {
                    "Subnet": "172.18.0.0/16",
                    "Gateway": "172.18.0.1/16"
                }
            ]
        },
        "Internal": false,
        "Containers": {
            "e894cf873c10fe553b59e6c8c9fced0d0090f0e7090b143d4ec181c3ba71e451": {
                "Name": "gateway_e894cf873c10",
                "EndpointID": "e536fa3a0f52ad7646e12a4d7612eb347b982b561f549452959bd74ef3b5fe0b",
                "MacAddress": "02:42:ac:12:00:02",
                "IPv4Address": "172.18.0.2/16",
                "IPv6Address": ""
            }
        },
        "Options": {
            "com.docker.network.bridge.enable_icc": "false",
            "com.docker.network.bridge.enable_ip_masquerade": "true",
            "com.docker.network.bridge.name": "docker_gwbridge"
        },
        "Labels": {}
    }
]
WAUTERW-M-G007:~ wauterw$ eval $(docker-machine env node-02)
WAUTERW-M-G007:~ wauterw$ docker network inspect docker_gwbridge
[
    {
        "Name": "docker_gwbridge",
        "Id": "7b94d4e06714bf55fd5b682aaf909d2d256dd7c6d1848ae1f03217e9d9e32f21",
        "Scope": "local",
        "Driver": "bridge",
        "EnableIPv6": false,
        "IPAM": {
            "Driver": "default",
            "Options": null,
            "Config": [
                {
                    "Subnet": "172.18.0.0/16",
                    "Gateway": "172.18.0.1/16"
                }
            ]
        },
        "Internal": false,
        "Containers": {
            "8d03466e77ee5045ad04fbc3947b5e2eda7711984c79aa168d9d5b4f59cdb20a": {
                "Name": "gateway_8d03466e77ee",
                "EndpointID": "ee0b6ef48e8090626e050035dd370a9179ab514c24f70a6b28b4e2ec7e39d132",
                "MacAddress": "02:42:ac:12:00:02",
                "IPv4Address": "172.18.0.2/16",
                "IPv6Address": ""
            }
        },
        "Options": {
            "com.docker.network.bridge.enable_icc": "false",
            "com.docker.network.bridge.enable_ip_masquerade": "true",
            "com.docker.network.bridge.name": "docker_gwbridge"
        },
        "Labels": {}
    }
]
WAUTERW-M-G007:docker wauterw$ eval $(docker-machine env node-01)
WAUTERW-M-G007:docker wauterw$ docker ps
CONTAINER ID        IMAGE               COMMAND                  CREATED             STATUS              PORTS               NAMES
e894cf873c10        ubuntu:14.04        "/bin/bash"              9 minutes ago       Up 9 minutes                            container-05
b08ee52b4baa        nginx               "nginx -g 'daemon off"   2 hours ago         Up 2 hours          80/tcp, 443/tcp     container-03
ef490a761a72        ubuntu:14.04        "/bin/bash"              2 hours ago         Up 2 hours                              container-01
WAUTERW-M-G007:docker wauterw$ eval $(docker-machine env node-02)
WAUTERW-M-G007:~ wauterw$ docker ps
CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS              PORTS               NAMES
8d03466e77ee        ubuntu:14.04        "/bin/bash"         8 minutes ago       Up 8 minutes                            container-06
1a3502101e49        ubuntu:14.04        "/bin/bash"         2 hours ago         Up 2 hours                              container-02

From the above output, we can see that docker_gwbridge has subnet 172.18.0.0/16 and that container-05 and container-06 belong to this network. But, as you can see below, they also belong to the overlay network mynet with subnet 10.0.0.0/24.

WAUTERW-M-G007:docker wauterw$ eval $(docker-machine env node-01)
WAUTERW-M-G007:docker wauterw$ docker network inspect mynet
[
    {
        "Name": "mynet",
        "Id": "fb669e6d67075afcc89c6cd5cab6503d2b5496abf010e129dc5a0fa13d9c95dd",
        "Scope": "global",
        "Driver": "overlay",
        "EnableIPv6": false,
        "IPAM": {
            "Driver": "default",
            "Options": {},
            "Config": [
                {
                    "Subnet": "10.0.0.0/24",
                    "Gateway": "10.0.0.1/24"
                }
            ]
        },
        "Internal": false,
        "Containers": {
            "e894cf873c10fe553b59e6c8c9fced0d0090f0e7090b143d4ec181c3ba71e451": {
                "Name": "container-05",
                "EndpointID": "b89c30bd06716b9cb4e21ba6b3e018211ee2fd1ec16e25b58a862e85520a4a7f",
                "MacAddress": "02:42:0a:00:00:02",
                "IPv4Address": "10.0.0.2/24",
                "IPv6Address": ""
            },
            "ep-8817f3a22628fdc551d67fa5d0226e3c00c870b7aa47605586488603eaf2b8fd": {
                "Name": "container-06",
                "EndpointID": "8817f3a22628fdc551d67fa5d0226e3c00c870b7aa47605586488603eaf2b8fd",
                "MacAddress": "02:42:0a:00:00:03",
                "IPv4Address": "10.0.0.3/24",
                "IPv6Address": ""
            }
        },
        "Options": {},
        "Labels": {}
    }
]
WAUTERW-M-G007:docker wauterw$ eval $(docker-machine env node-02)
WAUTERW-M-G007:~ wauterw$ docker network inspect mynet
[
    {
        "Name": "mynet",
        "Id": "fb669e6d67075afcc89c6cd5cab6503d2b5496abf010e129dc5a0fa13d9c95dd",
        "Scope": "global",
        "Driver": "overlay",
        "EnableIPv6": false,
        "IPAM": {
            "Driver": "default",
            "Options": {},
            "Config": [
                {
                    "Subnet": "10.0.0.0/24",
                    "Gateway": "10.0.0.1/24"
                }
            ]
        },
        "Internal": false,
        "Containers": {
            "8d03466e77ee5045ad04fbc3947b5e2eda7711984c79aa168d9d5b4f59cdb20a": {
                "Name": "container-06",
                "EndpointID": "8817f3a22628fdc551d67fa5d0226e3c00c870b7aa47605586488603eaf2b8fd",
                "MacAddress": "02:42:0a:00:00:03",
                "IPv4Address": "10.0.0.3/24",
                "IPv6Address": ""
            },
            "ep-b89c30bd06716b9cb4e21ba6b3e018211ee2fd1ec16e25b58a862e85520a4a7f": {
                "Name": "container-05",
                "EndpointID": "b89c30bd06716b9cb4e21ba6b3e018211ee2fd1ec16e25b58a862e85520a4a7f",
                "MacAddress": "02:42:0a:00:00:02",
                "IPv4Address": "10.0.0.2/24",
                "IPv6Address": ""
            }
        },
        "Options": {},
        "Labels": {}
    }
]

Last but not least, let’s have a look at the IP addresses allocated to the containers. Here you can clearly see that our containers have an eth0 interface on mynet and an eth1 interface on docker_gwbridge.

WAUTERW-M-G007:docker wauterw$ eval $(docker-machine env node-01)
WAUTERW-M-G007:docker wauterw$ docker exec container-05 ifconfig
eth0      Link encap:Ethernet  HWaddr 02:42:0a:00:00:02
          inet addr:10.0.0.2  Bcast:0.0.0.0  Mask:255.255.255.0
          inet6 addr: fe80::42:aff:fe00:2/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1450  Metric:1
          RX packets:15 errors:0 dropped:0 overruns:0 frame:0
          TX packets:8 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:1206 (1.2 KB)  TX bytes:648 (648.0 B)

eth1      Link encap:Ethernet  HWaddr 02:42:ac:12:00:02
          inet addr:172.18.0.2  Bcast:0.0.0.0  Mask:255.255.0.0
          inet6 addr: fe80::42:acff:fe12:2/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:15 errors:0 dropped:0 overruns:0 frame:0
          TX packets:8 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:1206 (1.2 KB)  TX bytes:648 (648.0 B)

WAUTERW-M-G007:docker wauterw$ eval $(docker-machine env node-02)
WAUTERW-M-G007:~ wauterw$ docker exec container-06 ifconfig
eth0      Link encap:Ethernet  HWaddr 02:42:0a:00:00:03
          inet addr:10.0.0.3  Bcast:0.0.0.0  Mask:255.255.255.0
          inet6 addr: fe80::42:aff:fe00:3/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1450  Metric:1
          RX packets:15 errors:0 dropped:0 overruns:0 frame:0
          TX packets:8 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:1206 (1.2 KB)  TX bytes:648 (648.0 B)

eth1      Link encap:Ethernet  HWaddr 02:42:ac:12:00:02
          inet addr:172.18.0.2  Bcast:0.0.0.0  Mask:255.255.0.0
          inet6 addr: fe80::42:acff:fe12:2/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:15 errors:0 dropped:0 overruns:0 frame:0
          TX packets:8 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:1206 (1.2 KB)  TX bytes:648 (648.0 B)

The hosts themselves have a docker0 interface (172.17.0.1 on both hosts), a docker_gwbridge interface (172.18.0.1 on both hosts), an eth0 interface (10.0.2.15 on both hosts), an eth1 interface (192.168.99.106 for node-01 and 192.168.99.107 for node-02) and two additional veth interfaces.
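
You can check this host-side view from your workstation without opening an interactive session; a hedged example for the gateway bridge on node-01:

$ docker-machine ssh node-01 ifconfig docker_gwbridge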

Pinging containers with overlay network

Finally, let’s test whether we can ping between container-05 and container-06.

From container-05 to container-06:

WAUTERW-M-G007:docker wauterw$ docker exec container-05 ping 10.0.0.3
PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data.
64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.479 ms
64 bytes from 10.0.0.3: icmp_seq=2 ttl=64 time=0.643 ms
64 bytes from 10.0.0.3: icmp_seq=3 ttl=64 time=0.481 ms
64 bytes from 10.0.0.3: icmp_seq=4 ttl=64 time=0.610 ms
64 bytes from 10.0.0.3: icmp_seq=5 ttl=64 time=0.496 ms
64 bytes from 10.0.0.3: icmp_seq=6 ttl=64 time=0.603 ms
^C

From container-06 to container-05:

WAUTERW-M-G007:~ wauterw$ docker exec container-06 ping 10.0.0.2
PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.404 ms
64 bytes from 10.0.0.2: icmp_seq=2 ttl=64 time=0.489 ms
64 bytes from 10.0.0.2: icmp_seq=3 ttl=64 time=0.486 ms
64 bytes from 10.0.0.2: icmp_seq=4 ttl=64 time=0.505 ms
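
Since mynet is a user-defined network, Docker’s embedded DNS should also resolve container names, so pinging by name rather than IP ought to work as well. A hedged sketch:

WAUTERW-M-G007:docker wauterw$ docker exec container-05 ping -c 3 container-06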

That’s it for now. Quite a lengthy post, but I wanted to provide enough details for you to follow along. Hope you enjoyed!

Rancher: Set up a multi-cloud environment

Introduction

In this post, we will set up a multi-cloud environment: we will install some hosts on EC2 and some on DigitalOcean. The post will explain how to create these hosts, but will not (yet) focus on full high availability.

Install Rancher

Just as we did in previous posts, we will start by installing the Rancher host and the Rancher server.

WAUTERW-M-G007:~ wauterw$ docker-machine create -d amazonec2 --amazonec2-vpc-id vpc-84fd6de0 --amazonec2-region eu-west-1 --amazonec2-ami ami-c5f1beb6 --amazonec2-ssh-user rancher Rancher-AWS
Running pre-create checks...
Creating machine...
(Rancher-AWS) Launching instance...
Waiting for machine to be running, this may take a few minutes...
Detecting operating system of created instance...
Waiting for SSH to be available...
Detecting the provisioner...
Provisioning with rancheros...
Copying certs to the local machine directory...
Copying certs to the remote machine...
Setting Docker configuration on the remote daemon...
Checking connection to Docker...
Docker is up and running!
To see how to connect your Docker Client to the Docker Engine running on this virtual machine, run: docker-machine env Rancher-AWS
WAUTERW-M-G007:~ wauterw$ eval $(docker-machine env Rancher-AWS)
WAUTERW-M-G007:~ wauterw$ docker run -d --restart=unless-stopped -p 8080:8080 rancher/server
Unable to find image 'rancher/server:latest' locally
latest: Pulling from rancher/server
96c6a1f3c3b0: Pull complete
ed40d4bcb313: Pull complete
...
Digest: sha256:d5a798d1274bcf6813fc9866660dc8559b7e17cdce47608bce28d134bd4f2dc1
Status: Downloaded newer image for rancher/server:latest
7f8de76097c1f91a508de09a1ac1e049a370794068728734d2b8bf038d575551

This will finally result in the EC2 host being added in AWS. Look up the public IP in AWS and open a web browser at http://IP_ADDRESS:8080 to see the Rancher UI.

rancher-aws-01

Method 1: Add EC2 hosts via the UI

Next, we will add hosts to the Rancher setup. First, we will demonstrate how to do this via the Rancher UI which is very straightforward.

Click on ‘Add host’ and select ‘EC2’:
rancher-aws-03
Fill in the availability zone you want the host to run in, and select the proper VPC:

rancher-aws-04
Next, choose your security group. Rancher can create its own security group, but I re-used one I created earlier. Note that you need to open ports 22 (TCP), 8080 (TCP), 2376 (TCP), 500 (UDP) and 4500 (UDP).

rancher-aws-05
Next, provide some details for the EC2 hosts, like instance type, AMI ID, etc. Note that I used the table in the README file at this link:

rancher-aws-06
Go to the Infrastructure tab to see an overview of all the hosts:
rancher-aws-07
Obviously you should also see the server added as an EC2 instance on AWS:
rancher-aws-08

Method 2: Add EC2 hosts via the AWS CLI

You could use the following command to create an EC2 host:

WAUTERW-M-G007:~ wauterw$ aws ec2 run-instances --image-id ami-c5f1beb6 --count 1 --instance-type t2.micro --security-groups docker-machine --key-name keypair_ireland

While the above command works, the annoying thing is that the instance will have no name in the EC2 console: the ec2 run-instances command does not support a tagging flag.

If you really want the instance named, you’ll need to install jq (a command-line JSON parser):

WAUTERW-M-G007:~ wauterw$ brew install jq
==> Installing dependencies for jq: oniguruma
==> Installing jq dependency: oniguruma
==> Downloading https://homebrew.bintray.com/bottles/oniguruma-6.1.1_1.sierra.bottle.tar.gz
######################################################################## 100.0%
==> Pouring oniguruma-6.1.1_1.sierra.bottle.tar.gz
🍺  /usr/local/Cellar/oniguruma/6.1.1_1: 17 files, 1.3M
==> Installing jq
==> Downloading https://homebrew.bintray.com/bottles/jq-1.5_2.sierra.bottle.tar.gz
######################################################################## 100.0%
==> Pouring jq-1.5_2.sierra.bottle.tar.gz
🍺  /usr/local/Cellar/jq/1.5_2: 18 files, 957.9K

Then run the following command, which launches the instance and tags it in one go:

WAUTERW-M-G007:~ wauterw$ aws ec2 create-tags --resources `aws ec2 run-instances --image-id ami-c5f1beb6 --count 1 --instance-type t2.micro --security-group-ids docker-machine --key-name "keypair_wauters1978_ireland" | jq -r ".Instances[0].InstanceId"` --tags "Key=Name,Value=Rancher-AWS-Node-02"
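
The backticks nest the run-instances call inside create-tags, with jq extracting the new instance ID from the JSON response. The same thing reads more clearly as two steps; a hedged equivalent using the same flags as above:

# Launch the instance and capture its ID from the JSON response...
INSTANCE_ID=$(aws ec2 run-instances \
    --image-id ami-c5f1beb6 \
    --count 1 \
    --instance-type t2.micro \
    --security-group-ids docker-machine \
    --key-name "keypair_wauters1978_ireland" \
    | jq -r '.Instances[0].InstanceId')

# ...then attach a Name tag in a second call.
aws ec2 create-tags \
    --resources "$INSTANCE_ID" \
    --tags "Key=Name,Value=Rancher-AWS-Node-02"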

You will see that a host is added to the EC2 console (see screenshot below):

rancher-aws-09

Next, go to the Rancher UI, click ‘Add host’ and choose the ‘Custom’ method. Fill in all details and copy/paste the resulting command into the CLI (make sure you are SSH’ed into your newly created EC2 host):

WAUTERW-M-G007:Belangrijk wauterw$ ssh -i keypair_wauters1978_ireland.pem rancher@52.209.194.184
The authenticity of host '52.209.194.184 (52.209.194.184)' can't be established.
ECDSA key fingerprint is SHA256:x9TkjtjSnT256EoomzOytE7SGP5SGnzOLdUQi+UWnYA.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '52.209.194.184' (ECDSA) to the list of known hosts.
[rancher@ip-172-31-42-152 ~]$
[rancher@ip-172-31-42-152 ~]$
[rancher@ip-172-31-42-152 ~]$
[rancher@ip-172-31-42-152 ~]$ sudo docker run -e CATTLE_HOST_LABELS='Name=Rancher-AWS-Node-02'  -d --privileged -v /var/run/docker.sock:/var/run/docker.sock -v /var/lib/rancher:/var/lib/rancher rancher/agent:v1.0.2 http://52.213.72.36:8080/v1/scripts/B5920D373229E6F93AA1:1478606400000:LpxEbHAKHqfIy0SYV5lynZZAuM
Unable to find image 'rancher/agent:v1.0.2' locally
v1.0.2: Pulling from rancher/agent
5a132a7e7af1: Pull complete
fd2731e4c50c: Pull complete
28a2f68d1120: Pull complete
a3ed95caeb02: Pull complete
7fa4fac65171: Pull complete
33de63de5fdb: Pull complete
d00b3b942272: Pull complete
Digest: sha256:b0b532d1e891534779d0eb1a01a5717ebfff9ac024db4412ead87d834ba92544
Status: Downloaded newer image for rancher/agent:v1.0.2
35f59a67b353760a360fb3a47e5acb73b78fcce63f9c8d54cebdfb4824ebbe30
[rancher@ip-172-31-42-152 ~]$

Eventually this host will appear in the Rancher UI:
rancher-aws-11

Note: if you see the hostname (in my case ip-172-31-42-152…) appear in the Rancher UI, then you can easily change this by setting it correctly in the /etc/hostname file once you have SSH’ed into the host.
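
Something along these lines (a hedged sketch; RancherOS may also manage the hostname via cloud-config, in which case that route is preferable):

[rancher@ip-172-31-42-152 ~]$ sudo sh -c 'echo Rancher-AWS-Node-02 > /etc/hostname'
[rancher@ip-172-31-42-152 ~]$ sudo hostname Rancher-AWS-Node-02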

Method 3: Add EC2 hosts via Docker Machine

Another method to add Rancher hosts is via good old Docker Machine. Follow along with the following steps:

WAUTERW-M-G007:Belangrijk wauterw$ docker-machine create -d amazonec2 --amazonec2-vpc-id vpc-84fd6de0 --amazonec2-region eu-west-1 --amazonec2-ami ami-c5f1beb6 --amazonec2-ssh-user rancher Rancher-AWS-Node-3
Running pre-create checks...
Creating machine...
(Rancher-AWS-Node-3) Launching instance...
Waiting for machine to be running, this may take a few minutes...
Detecting operating system of created instance...
Waiting for SSH to be available...
Detecting the provisioner...
Provisioning with rancheros...
Copying certs to the local machine directory...
Copying certs to the remote machine...
Setting Docker configuration on the remote daemon...
Checking connection to Docker...
Docker is up and running!
To see how to connect your Docker Client to the Docker Engine running on this virtual machine, run: docker-machine env Rancher-AWS-Node-3

Continue by SSH’ing into the host and then running the command you copied from the Rancher UI (Custom method). This procedure was already explained in the section “Adding hosts to Rancher server” of this post:

WAUTERW-M-G007:Belangrijk wauterw$ eval $(docker-machine env Rancher-AWS-Node-3)
WAUTERW-M-G007:Belangrijk wauterw$ docker-machine ssh Rancher-AWS-Node-3
[rancher@Rancher-AWS-Node-3 ~]$ sudo docker run -e CATTLE_HOST_LABELS='Name=Rancher-AWS-Node-03'  -d --privileged -v /var/run/docker.sock:/var/run/docker.sock -v /var/lib/rancher:/var/lib/rancher rancher/agent:v1.0.2 http://52.213.72.36:8080/v1/scripts/B5920D373229E6F93AA1:1478610000000:XCvREVTgaYyf7kcX6b6PhJ6Gpg
Unable to find image 'rancher/agent:v1.0.2' locally
v1.0.2: Pulling from rancher/agent
5a132a7e7af1: Pull complete
fd2731e4c50c: Pull complete
28a2f68d1120: Pull complete
a3ed95caeb02: Pull complete
7fa4fac65171: Pull complete
33de63de5fdb: Pull complete
d00b3b942272: Pull complete
Digest: sha256:b0b532d1e891534779d0eb1a01a5717ebfff9ac024db4412ead87d834ba92544
Status: Downloaded newer image for rancher/agent:v1.0.2
53253b9c2d10956147eca1a22d19ed747c8ad3746145c05b40fb33ca20e8b674
[rancher@Rancher-AWS-Node-3 ~]$

And the final result:
rancher-aws-12

Adding a host on DigitalOcean

In the previous sections, I mainly wanted to show the three methods of adding hosts (running on EC2) to Rancher. To create a true multi-cloud environment, I obviously also need to create some hosts on an alternative cloud provider. Luckily, I also have an account on DigitalOcean. If you want to follow along, you’ll need to sign up with DigitalOcean.

First off (after signing up), go to the DigitalOcean dashboard and open the “API” tab. You will need to create a token for Rancher to be able to create hosts on DigitalOcean. Click the “Generate New Token” button and fill in the details, then copy the token that was created. See also the screenshot below:

do1
Next, go to the Rancher UI and add an additional host by selecting the DigitalOcean option. You will need to provide some details to Rancher, such as your token, the image you would like to use and the region in which you want to run the server.

do2
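
As an aside, the same host could presumably also be created from the CLI with Docker Machine’s digitalocean driver, using the token generated above (a hedged sketch; the region and image values are illustrative):

docker-machine create -d digitalocean \
    --digitalocean-access-token $DO_TOKEN \
    --digitalocean-region ams2 \
    --digitalocean-image ubuntu-14-04-x64 \
    Rancher-DO-Node-01
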
Eventually, you will see a fourth host in Rancher, but this time running on DigitalOcean.

do3

That’s it for this post. I mainly wanted to show how you can create hosts on multiple cloud providers. In the next post, I will launch some applications across the different hosts.