
Mesos & Marathon: Installation using Vagrant

Introduction

This post is inspired by this DigitalOcean tutorial, but instead works from a Vagrantfile that I will provide below.

Configuration of the system

We will install a Mesos cluster and will be assuming the following configuration.

Hostname   Function       IP address
master1    Mesos Master   192.0.2.11
master2    Mesos Master   192.0.2.12
master3    Mesos Master   192.0.2.13
slave1     Mesos Slave    192.0.2.51
slave2     Mesos Slave    192.0.2.52
slave3     Mesos Slave    192.0.2.53

Installing Mesos Masters

As mentioned above, we will be using Vagrant to install our Mesos cluster. In order to do so, create a folder ‘Mesos-Master’ on your local PC. That folder will contain the Vagrantfile as well as a shell provisioning file. We used this principle before in this post.

We will begin with the Vagrantfile, in which we configure the three Mesos master servers: master1, master2 and master3. Each of them is configured according to the table above, and we also forward three ports: 5050 (the Mesos UI), 8080 (the Marathon UI) and 4040.

WAUTERW-M-G007:Mesos-Master wauterw$ cat Vagrantfile
# -*- mode: ruby -*-
# vi: set ft=ruby :


Vagrant.configure(2) do |config|
  config.vm.define "master1" do |master1|
    master1.vm.box = "ubuntu/trusty64"
    master1.vm.hostname = "master1"

    master1.vm.network :private_network, ip: "192.0.2.11"
    master1.vm.network "forwarded_port", guest: 8080, host: 8001
    master1.vm.network "forwarded_port", guest: 5050, host: 5001
    master1.vm.network "forwarded_port", guest: 4040, host: 4001

    master1.vm.provider "virtualbox" do |vb|
      vb.customize ["modifyvm", :id, "--natdnshostresolver1", "on"]
      vb.customize ["modifyvm", :id, "--memory", 512]
      vb.customize ["modifyvm", :id, "--name", "master1"]
    end
  end

  config.vm.define "master2" do |master2|
    master2.vm.box = "ubuntu/trusty64"
    master2.vm.hostname = "master2"

    master2.vm.network :private_network, ip: "192.0.2.12"
    master2.vm.network "forwarded_port", guest: 8080, host: 8002
    master2.vm.network "forwarded_port", guest: 5050, host: 5002
    master2.vm.network "forwarded_port", guest: 4040, host: 4002

    master2.vm.provider "virtualbox" do |vb|
      vb.customize ["modifyvm", :id, "--natdnshostresolver1", "on"]
      vb.customize ["modifyvm", :id, "--memory", 512]
      vb.customize ["modifyvm", :id, "--name", "master2"]
    end
  end

  config.vm.define "master3" do |master3|
    master3.vm.box = "ubuntu/trusty64"
    master3.vm.hostname = "master3"

    master3.vm.network :private_network, ip: "192.0.2.13"
    master3.vm.network "forwarded_port", guest: 8080, host: 8003
    master3.vm.network "forwarded_port", guest: 5050, host: 5003
    master3.vm.network "forwarded_port", guest: 4040, host: 4003

    master3.vm.provider "virtualbox" do |vb|
      vb.customize ["modifyvm", :id, "--natdnshostresolver1", "on"]
      vb.customize ["modifyvm", :id, "--memory", 512]
      vb.customize ["modifyvm", :id, "--name", "master3"]
    end
  end
  config.vm.provision "shell", path: "provision_master.sh"
end
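With both files in place, bringing up the three masters is a single command from the Mesos-Master folder. A minimal sketch, assuming Vagrant and VirtualBox are already installed on your PC:

vagrant up
# if you change provision_master.sh later, re-run only the provisioning step:
vagrant provision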

Configuring Mesos Masters

Towards the end of the file, you also see that we are using a provision_master.sh file to configure the 3 servers. See below for the entire file.

WAUTERW-M-G007:Mesos-Master wauterw$ cat provision_master.sh
#!/usr/bin/env bash

echo "Installing Mesos/Marathon dependencies ..."
echo "Updating apt-get"
# note: the Vagrant shell provisioner already runs this script as root
sudo apt-get update -y >/dev/null 2>&1
echo "Adding apt-key"
sudo apt-key adv --keyserver keyserver.ubuntu.com --recv E56151BF
DISTRO=$(lsb_release -is | tr '[:upper:]' '[:lower:]')
CODENAME=$(lsb_release -cs)
echo "Writing sources.list"
echo "deb http://repos.mesosphere.io/${DISTRO} ${CODENAME} main" | sudo tee /etc/apt/sources.list.d/mesosphere.list
echo "Installing Java"
sudo apt-get install -y python-software-properties debconf-utils
sudo add-apt-repository -y ppa:webupd8team/java
sudo apt-get update -y >/dev/null 2>&1
echo "oracle-java8-installer shared/accepted-oracle-license-v1-1 select true" | sudo debconf-set-selections
sudo apt-get install -y oracle-java8-installer >/dev/null 2>&1
sudo apt-get update -y >/dev/null 2>&1
echo "Installing Mesosphere"
sudo apt-get install -y mesosphere
sudo apt-get update -y >/dev/null 2>&1
echo "Finished installing Mesos masters!"
echo "Configuring Mesos conf files"
ip=`ip addr show |grep "inet " |grep -v 127.0.0. |grep -v 10.0.2. |head -1|cut -d" " -f6|cut -d/ -f1`
echo "IP address of this machine is "$ip
echo zk://192.0.2.11:2181,192.0.2.12:2181,192.0.2.13:2181/mesos | sudo tee /etc/mesos/zk
echo 2 | sudo tee /etc/mesos-master/quorum
if  [ "$ip" = "192.0.2.11" ]; then
    echo "IP is 192.0.2.11"
    echo 1 | sudo tee /etc/zookeeper/conf/myid
    echo 192.0.2.11 | sudo tee /etc/mesos-master/ip
fi
if [ "$ip" == "192.0.2.12" ]; then
    echo "IP is 192.0.2.12"
    echo 2 | sudo tee /etc/zookeeper/conf/myid
    echo 192.0.2.12 | sudo tee /etc/mesos-master/ip
fi
if [ $ip == "192.0.2.13" ]; then
    echo "IP is 192.0.2.13"
    echo 3 | sudo tee /etc/zookeeper/conf/myid
    echo 192.0.2.13 | sudo tee /etc/mesos-master/ip
fi
cat > /etc/zookeeper/conf/zoo.cfg << EOL
dataDir=/var/lib/zookeeper
clientPort=2181
tickTime=2000
initLimit=5
syncLimit=2
server.1=192.0.2.11:2888:3888
server.2=192.0.2.12:2888:3888
server.3=192.0.2.13:2888:3888
EOL
sudo cp /etc/mesos-master/ip /etc/mesos-master/hostname
sudo mkdir -p /etc/marathon/conf
sudo cp /etc/mesos-master/hostname /etc/marathon/conf
sudo cp /etc/mesos/zk /etc/marathon/conf/master
sudo cp /etc/marathon/conf/master /etc/marathon/conf/zk
echo zk://192.0.2.11:2181,192.0.2.12:2181,192.0.2.13:2181/marathon | sudo tee /etc/marathon/conf/zk
sudo stop mesos-slave
echo manual | sudo tee /etc/init/mesos-slave.override
sudo restart zookeeper
sudo start mesos-master
sudo start marathon
sudo start chronos 

It might come across as quite complicated, so let me dive into it step by step.

First of all, we begin with the installation of the Mesosphere software. Note that Mesosphere depends on Java 8, so we install that as well.

echo "Installing Mesos/Marathon dependencies ..."
echo "Updating apt-get"
# note: the Vagrant shell provisioner already runs this script as root
sudo apt-get update -y >/dev/null 2>&1
echo "Adding apt-key"
sudo apt-key adv --keyserver keyserver.ubuntu.com --recv E56151BF
DISTRO=$(lsb_release -is | tr '[:upper:]' '[:lower:]')
CODENAME=$(lsb_release -cs)
echo "Writing sources.list"
echo "deb http://repos.mesosphere.io/${DISTRO} ${CODENAME} main" | sudo tee /etc/apt/sources.list.d/mesosphere.list
echo "Installing Java"
sudo apt-get install -y python-software-properties debconf-utils
sudo add-apt-repository -y ppa:webupd8team/java
sudo apt-get update -y >/dev/null 2>&1
echo "oracle-java8-installer shared/accepted-oracle-license-v1-1 select true" | sudo debconf-set-selections
sudo apt-get install -y oracle-java8-installer >/dev/null 2>&1
sudo apt-get update -y >/dev/null 2>&1
echo "Installing Mesosphere"
sudo apt-get install -y mesosphere
sudo apt-get update -y >/dev/null 2>&1
echo "Finished installing Mesos masters!"

Then we continue with adapting the various Mesos configuration files. First of all, we set the quorum to 2: with 3 masters, the quorum must be a strict majority, so 2 out of 3 lets the cluster survive the loss of one master. Then, because we are using a single script for the 3 servers, we need some way to differentiate between them. We therefore first determine the IP address of the server; as we used Vagrant to set up the servers, we know the IP address of each one. We can then ensure that each server has a distinct ZooKeeper ID (in the file /etc/zookeeper/conf/myid) and that /etc/mesos-master/ip contains the correct IP address.

echo "Configuring Mesos conf files"
ip=`ip addr show |grep "inet " |grep -v 127.0.0. |grep -v 10.0.2. |head -1|cut -d" " -f6|cut -d/ -f1`
echo "IP address of this machine is "$ip
echo zk://192.0.2.11:2181,192.0.2.12:2181,192.0.2.13:2181/mesos | sudo tee /etc/mesos/zk
echo 2 | sudo tee /etc/mesos-master/quorum
if  [ "$ip" = "192.0.2.11" ]; then
    echo "IP is 192.0.2.11"
    echo 1 | sudo tee /etc/zookeeper/conf/myid
    echo 192.0.2.11 | sudo tee /etc/mesos-master/ip
fi
if [ "$ip" == "192.0.2.12" ]; then
    echo "IP is 192.0.2.12"
    echo 2 | sudo tee /etc/zookeeper/conf/myid
    echo 192.0.2.12 | sudo tee /etc/mesos-master/ip
fi
if [ $ip == "192.0.2.13" ]; then
    echo "IP is 192.0.2.13"
    echo 3 | sudo tee /etc/zookeeper/conf/myid
    echo 192.0.2.13 | sudo tee /etc/mesos-master/ip
fi

Then we need to modify the Zookeeper configuration file as follows:

cat > /etc/zookeeper/conf/zoo.cfg << EOL
dataDir=/var/lib/zookeeper
clientPort=2181
tickTime=2000
initLimit=5
syncLimit=2
server.1=192.0.2.11:2888:3888
server.2=192.0.2.12:2888:3888
server.3=192.0.2.13:2888:3888
EOL

This writes all the lines between the EOL markers to the file /etc/zookeeper/conf/zoo.cfg. We also need to update the Marathon configuration files:

sudo cp /etc/mesos-master/ip /etc/mesos-master/hostname
sudo mkdir -p /etc/marathon/conf
sudo cp /etc/mesos-master/hostname /etc/marathon/conf
sudo cp /etc/mesos/zk /etc/marathon/conf/master
sudo cp /etc/marathon/conf/master /etc/marathon/conf/zk
echo zk://192.0.2.11:2181,192.0.2.12:2181,192.0.2.13:2181/marathon | sudo tee /etc/marathon/conf/zk

We finish the script by disabling the mesos-slave service (the masters should not run it), restarting ZooKeeper and starting the mesos-master, marathon and chronos services. We should now have the Mesos masters up and running.
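Before moving on, it is worth verifying that the services actually came up. Below is a quick sketch, run from inside one of the masters (vagrant ssh master1); note that state.json as the endpoint name is an assumption about this Mesos generation:

sudo status mesos-master       # Upstart should report start/running
sudo status marathon
# hedged: ask the master for its state and show who the current leader is
curl -s http://192.0.2.11:5050/master/state.json | grep -o '"leader":"[^"]*"'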

Installing Mesos Slaves

Similar to the Mesos masters, we create a folder called 'Mesos-Slave' on our local PC. That folder will contain a Vagrantfile and a provisioning script. The Vagrantfile is responsible for creating the servers according to the table above and is provided below.

WAUTERW-M-G007:Mesos-Slave wauterw$ cat Vagrantfile
# -*- mode: ruby -*-
# vi: set ft=ruby :

Vagrant.configure(2) do |config|
  config.vm.define "slave1" do |slave1|
    slave1.vm.box = "ubuntu/trusty64"
    slave1.vm.hostname = "slave1"

    slave1.vm.network :private_network, ip: "192.0.2.51"
    slave1.vm.network "forwarded_port", guest: 8080, host: 8051
    slave1.vm.network "forwarded_port", guest: 5050, host: 5051

    slave1.vm.provider "virtualbox" do |vb|
      vb.customize ["modifyvm", :id, "--natdnshostresolver1", "on"]
      vb.customize ["modifyvm", :id, "--memory", 512]
      vb.customize ["modifyvm", :id, "--name", "slave1"]
    end
  end

  config.vm.define "slave2" do |slave2|
    slave2.vm.box = "ubuntu/trusty64"
    slave2.vm.hostname = "slave2"

    slave2.vm.network :private_network, ip: "192.0.2.52"
    slave2.vm.network "forwarded_port", guest: 8080, host: 8052
    slave2.vm.network "forwarded_port", guest: 5050, host: 5052

    slave2.vm.provider "virtualbox" do |vb|
      vb.customize ["modifyvm", :id, "--natdnshostresolver1", "on"]
      vb.customize ["modifyvm", :id, "--memory", 512]
      vb.customize ["modifyvm", :id, "--name", "slave2"]
    end
  end

  config.vm.define "slave3" do |slave3|
    slave3.vm.box = "ubuntu/trusty64"
    slave3.vm.hostname = "slave3"

    slave3.vm.network :private_network, ip: "192.0.2.53"
    slave3.vm.network "forwarded_port", guest: 8080, host: 8053
    slave3.vm.network "forwarded_port", guest: 5050, host: 5053

    slave3.vm.provider "virtualbox" do |vb|
      vb.customize ["modifyvm", :id, "--natdnshostresolver1", "on"]
      vb.customize ["modifyvm", :id, "--memory", 512]
      vb.customize ["modifyvm", :id, "--name", "slave3"]
    end
  end
  config.vm.provision "shell", path: "provision_slave.sh"
end

Browsing through the UIs after configuring the Mesos Masters

If all went well, you should see the following UIs. Right after installing the Mesos masters, both the Mesos and the Marathon environment are still empty.

As we don't have any Mesos slaves up and running yet, pay attention to the number of 'activated' slaves. Also, if you browse to the IP address of one of the other Mesos masters, you will see the message "This master is not the leader, redirecting in 3 seconds ... go now". This proves that Mesos is indeed electing a leading master.
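Because the Vagrantfile forwards port 5050 of master1, master2 and master3 to local ports 5001, 5002 and 5003, you can open each master's UI via localhost and watch this redirect happen. You can also check it from the command line; a small sketch, where the /master/redirect endpoint is an assumption about this Mesos version:

# a non-leading master answers with a redirect pointing at the current leader
curl -sI http://localhost:5002/master/redirect | grep -i '^Location'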

[Screenshot: the Mesos UI of the leading master]

In the screenshot below, you'll see that we have one registered framework, which is obviously Marathon.
[Screenshot: the Mesos UI showing the registered Marathon framework]

And again, as we only installed the Mesos master nodes, we don't have any slaves yet.
[Screenshot: the Mesos UI with an empty slaves list]

You should also be able to see the Marathon UI on port 8080 of one of the Mesos masters:
[Screenshot: the Marathon UI]

Provisioning Mesos Slaves

WAUTERW-M-G007:Mesos-Slave wauterw$ cat provision_slave.sh
#!/usr/bin/env bash

echo "Installing Mesos dependencies ..."
echo "Updating apt-get"
# note: the Vagrant shell provisioner already runs this script as root
sudo apt-get update -y >/dev/null 2>&1
echo "Adding apt-key"
sudo apt-key adv --keyserver keyserver.ubuntu.com --recv E56151BF
DISTRO=$(lsb_release -is | tr '[:upper:]' '[:lower:]')
CODENAME=$(lsb_release -cs)
echo "Writing sources.list"
echo "deb http://repos.mesosphere.io/${DISTRO} ${CODENAME} main" | sudo tee /etc/apt/sources.list.d/mesosphere.list
echo "Installing Java"
sudo apt-get install -y python-software-properties debconf-utils
sudo add-apt-repository -y ppa:webupd8team/java
sudo apt-get update -y >/dev/null 2>&1
echo "oracle-java8-installer shared/accepted-oracle-license-v1-1 select true" | sudo debconf-set-selections
sudo apt-get install -y oracle-java8-installer >/dev/null 2>&1
sudo apt-get update -y >/dev/null 2>&1
echo "Installing Mesosphere"
sudo apt-get install -y mesos
sudo apt-get update -y >/dev/null 2>&1
echo "Finished installing Mesos slaves!"
echo "Starting configuration Mesos slaves!"
sudo stop zookeeper
echo manual | sudo tee /etc/init/zookeeper.override
echo manual | sudo tee /etc/init/mesos-master.override
sudo stop mesos-master
echo zk://192.0.2.11:2181,192.0.2.12:2181,192.0.2.13:2181/mesos | sudo tee /etc/mesos/zk
ip=`ip addr show |grep "inet " |grep -v 127.0.0. |grep -v 10.0.2. |head -1|cut -d" " -f6|cut -d/ -f1`
echo "IP address of this machine is "$ip
if [ "$ip" = "192.0.2.51" ]; then
    echo "IP is 192.0.2.51"
    echo 192.0.2.51 | sudo tee /etc/mesos-slave/ip
    sudo cp /etc/mesos-slave/ip /etc/mesos-slave/hostname
fi
if [ "$ip" = "192.0.2.52" ]; then
    echo "IP is 192.0.2.52"
    echo 192.0.2.52 | sudo tee /etc/mesos-slave/ip
    sudo cp /etc/mesos-slave/ip /etc/mesos-slave/hostname
fi
if [ "$ip" = "192.0.2.53" ]; then
    echo "IP is 192.0.2.53"
    echo 192.0.2.53 | sudo tee /etc/mesos-slave/ip
    sudo cp /etc/mesos-slave/ip /etc/mesos-slave/hostname
fi
sudo start mesos-slave

The above file installs all necessary packages and configures the instances as Mesos slaves.
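Bringing the slaves up works exactly like the masters. A minimal sketch, run from the Mesos-Slave folder and assuming the masters are already up:

vagrant up
# hedged: after provisioning, the slaves register with the leading master;
# /master/slaves as the endpoint name is an assumption about this Mesos version
curl -s http://192.0.2.11:5050/master/slaves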

Browsing through the UIs after configuring the Mesos Slaves

In the screenshot below, you can see on the Mesos dashboard that we have 3 activated slaves.
[Screenshot: the Mesos dashboard showing 3 activated slaves]

And the screenshot below shows the actual slaves that have been registered:
[Screenshot: the Mesos UI listing the registered slaves]

That's it for this post. Using the Vagrantfiles and the provisioning scripts, you are able to install a full-blown, highly available Mesos cluster with the Marathon framework.

Docker: multi-host networking (using overlay)

Introduction

In this post, we will experiment a little bit with multi-host networking. We will create two Docker hosts and run a container on each of them, then try to connect the two containers without a common overlay network. After that, we will create an overlay network and repeat the same exercise.

Setting up multi-host environment

To set up the multi-host environment, run the following script. It will basically just create 3 hosts:

  • consul: runs Consul for service discovery purposes
  • node-01: a Docker host that will take part in the multi-host network
  • node-02: a Docker host that will take part in the multi-host network

#!/bin/bash
set -e

# Docker Machine Setup
docker-machine create \
    -d virtualbox \
    consul

docker $(docker-machine config consul) run -d \
    -p "8500:8500" \
    -h "consul" \
    progrium/consul -server -bootstrap

docker-machine create \
    -d virtualbox \
    --engine-opt="cluster-store=consul://$(docker-machine ip consul):8500" \
    --engine-opt="cluster-advertise=eth1:0" \
    node-01

docker-machine create \
    -d virtualbox \
    --engine-opt="cluster-store=consul://$(docker-machine ip consul):8500" \
    --engine-opt="cluster-advertise=eth1:0" \
    node-02

You will see a total of 3 virtual machines in VirtualBox. The beauty of Docker Machine is that you can easily check this with the CLI.

WAUTERW-M-G007:config wauterw$ docker-machine ls
NAME      ACTIVE   DRIVER       STATE     URL                         SWARM   DOCKER    ERRORS
consul    -        virtualbox   Running   tcp://192.168.99.106:2376           v1.12.2
node-01   *        virtualbox   Running   tcp://192.168.99.107:2376           v1.12.2
node-02   -        virtualbox   Running   tcp://192.168.99.108:2376           v1.12.2

Let’s also look at the network setup.

WAUTERW-M-G007:config wauterw$ docker network ls
NETWORK ID          NAME                DRIVER              SCOPE
5950d10a35e7        bridge              bridge              local
2d53912170e9        host                host                local
0d20455f4cd1        none                null                local

Historically, these three networks are part of Docker's implementation. When you run a container, you can use the --network flag to specify which network you want to run it on. So, as expected, we see 3 networks:

  • bridge: The bridge network represents the docker0 bridge present in all Docker installations. The Docker daemon connects containers to this network by default.
  • host: The host network adds a container to the host's network stack. The network configuration inside the container is identical to that of the host.
  • none: The none network adds a container to a container-specific network stack; such a container lacks a network interface.
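If you want to see the details behind any of these entries, docker network inspect prints the driver, the subnet and the containers attached to a network. For example:

docker network inspect bridge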

Multi-host: pinging container without overlay network

To test the connectivity between containers, we will run an NGINX server on host 1 (node-01) and a Busybox container on host 2 (node-02) that tries to retrieve the default NGINX page. Let's go ahead!

WAUTERW-M-G007:config wauterw$ eval $(docker-machine env node-01)
WAUTERW-M-G007:config wauterw$ docker run -itd --name=web nginx
Unable to find image 'nginx:latest' locally
latest: Pulling from library/nginx
43c265008fae: Pull complete
e4c030a565b1: Pull complete
685b7631c1ce: Pull complete
Digest: sha256:dedbce721065b2bcfae35d2b0690857bb6c3b4b7dd48bfe7fc7b53693731beff
Status: Downloaded newer image for nginx:latest
d4e184d8d4156e32ffa1bbda0b3ba01349b0ed55fc30f80f151032c0e02bac61
WAUTERW-M-G007:config wauterw$

and on host2:

WAUTERW-M-G007:config wauterw$ eval $(docker-machine env node-02)
WAUTERW-M-G007:~ wauterw$ docker run -it --rm busybox wget -qO- http://web
Unable to find image 'busybox:latest' locally
latest: Pulling from library/busybox

56bec22e3559: Pull complete
Digest: sha256:29f5d56d12684887bdfa50dcd29fc31eea4aaf4ad3bec43daf19026a7ce69912
Status: Downloaded newer image for busybox:latest
wget: can't connect to remote host (172.20.55.113): Connection timed out
WAUTERW-M-G007:~ wauterw$ docker run -it --rm busybox wget -qO- http://web
...no response...

Here you can clearly see that the container running on the second host cannot download the NGINX page from the container running on the first host.

Multi-host: pinging container with overlay network

Obviously, we knew that this was going to happen, because we did not create an overlay network over which the containers could communicate. So we will begin by creating such an overlay network, which we'll call mynet.

WAUTERW-M-G007:config wauterw$ docker network create -d overlay mynet
69eda36c66e772c5b04002c282ae59786bd4450f28853426c2e6cbb4c4019336

Let's see how this looks in Docker:

WAUTERW-M-G007:config wauterw$ docker network ls
NETWORK ID          NAME                DRIVER              SCOPE
5950d10a35e7        bridge              bridge              local
2d53912170e9        host                host                local
69eda36c66e7        mynet               overlay             global
0d20455f4cd1        none                null                local

A new network called mynet has been added to the network configuration.
Next, we create a new container, called web1, which will use the overlay network (via the --net option).

WAUTERW-M-G007:config wauterw$ docker run -itd --name=web1 --net=mynet nginx
0a60b279f3f6bf5a22278641b1aa4b127cdabf558f1fd10224a2b80d4306a3a2

On the second host, we then create a new container which is also part of that same overlay network. We will use the busybox wget command. In case you are not familiar with Busybox, check out the information here.

WAUTERW-M-G007:~ wauterw$ eval $(docker-machine env node-02)
WAUTERW-M-G007:~ wauterw$ docker run -it --rm --net=mynet busybox wget -qO- http://web1
Welcome to nginx!

Welcome to nginx!

If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.

For online documentation and support please refer to
nginx.org.
Commercial support is available at
nginx.com.

Thank you for using nginx.

As you can see, we are now able to retrieve the NGINX page without an issue, which shows that both containers can communicate with each other.
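Since the section title promises pinging, we can also do that literally. Both containers sit on mynet, so Docker's embedded DNS resolves the container name web1 across hosts; a quick sketch, run against node-02:

# pings the web1 container on node-01 across the overlay network
docker run -it --rm --net=mynet busybox ping -c 3 web1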

In this post, we created separate hosts; they were not part of a Swarm cluster. In future posts, I will perform the same tests, but I'll set up a Swarm cluster first. Hope this was interesting!