Install Kubernetes using Terraform and Ansible

A couple of weeks ago, we published several posts in which we created servers on DigitalOcean. One of these posts can be found here. If you want to follow along with this guide, use that post to create 3 droplets.

If all went well, you will see the three droplets in the DigitalOcean dashboard.
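
If you prefer the command line, you can also list the droplets with DigitalOcean’s doctl CLI (assuming doctl is installed and authenticated):

WAUTERW-M-T3ZT:ansible-k8s-digitalocean wim$ doctl compute droplet list --format Name,PublicIPv4,Status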

Using Ansible to install Kubernetes

In this post, we will focus on using Ansible to install Kubernetes. In fact, we will be implementing this guide, albeit on Ubuntu 18.04 with Kubernetes 1.13 and the latest flannel release. In other words, this post is just a little more up to date, but the general principles apply. Also, what we do here with Ansible is essentially the same set of steps I performed in this post, where I configured Kubernetes manually.

First of all, let’s create a hosts file for our Ansible scripts. We will define 1 master and 2 workers. The IP addresses are the same as the ones shown in the DigitalOcean dashboard (obviously).

#hosts
[masters]
master ansible_host=82.196.4.40 ansible_user=root

[workers]
worker1 ansible_host=82.196.4.203 ansible_user=root
worker2 ansible_host=82.196.0.134 ansible_user=root

[all:vars]
ansible_python_interpreter=/usr/bin/python3
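
Before running any playbooks, it’s a good idea to verify that Ansible can reach all three nodes. Ansible’s built-in ping module is perfect for that:

WAUTERW-M-T3ZT:ansible-k8s-digitalocean wim$ ansible -i hosts all -m ping

Each node should come back with a pong reply.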

Next, we will create a file to install all updates, create the ubuntu user and ensure that the ubuntu user has passwordless sudo rights.
# initial.yml
- hosts: all
  become: yes
  tasks:
    - name: Update and upgrade apt packages
      apt:
        upgrade: yes
        update_cache: yes
        cache_valid_time: 86400 # one day

    - name: create the 'ubuntu' user
      user:
        name: ubuntu
        append: yes
        state: present
        createhome: yes
        shell: /bin/bash

    - name: allow 'ubuntu' to have passwordless sudo
      lineinfile:
        dest: /etc/sudoers
        line: "ubuntu ALL=(ALL) NOPASSWD: ALL"
        validate: "visudo -cf %s"

    - name: set up authorized keys for the ubuntu user
      authorized_key:
        user: ubuntu
        key: "{{ item }}"
      with_file:
        - ~/.ssh/keypair_digitalocean_146185179184.pub
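
Before running the playbook against the droplets, you can let Ansible validate it first with the built-in --syntax-check flag:

WAUTERW-M-T3ZT:ansible-k8s-digitalocean wim$ ansible-playbook -i hosts initial.yml --syntax-check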

Next up, we will create a different file that installs the Kubernetes-specific dependencies on the 3 nodes. First of all, we start with the installation of Docker, then we add the Kubernetes APT repository and install the kubelet and kubeadm tools.

#kube-dependencies.yml
- hosts: all
  become: yes
  tasks:
   - name: install Docker
     apt:
       name: docker.io
       state: present
       update_cache: true

   - name: install APT Transport HTTPS
     apt:
       name: apt-transport-https
       state: present

   - name: add Kubernetes apt-key
     apt_key:
       url: https://packages.cloud.google.com/apt/doc/apt-key.gpg
       state: present

   - name: add Kubernetes' APT repository
     apt_repository:
       repo: deb http://apt.kubernetes.io/ kubernetes-xenial main
       state: present
       filename: 'kubernetes'

   - name: install kubelet
     apt:
       name: kubelet
       state: present
       update_cache: true

   - name: install kubeadm
     apt:
       name: kubeadm
       state: present

- hosts: master
  become: yes
  tasks:
   - name: install kubectl
     apt:
       name: kubectl
       state: present
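
One caveat: the tasks above install whatever kubelet and kubeadm versions are currently in the repository. Since this post targets Kubernetes 1.13, you may want to pin the packages explicitly. A minimal sketch, assuming 1.13.0-00 is the package version available in the repository:

   - name: install kubelet and kubeadm at a pinned version
     apt:
       name:
         - kubelet=1.13.0-00
         - kubeadm=1.13.0-00
       state: present
       update_cache: true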

When the above is finished, we will create a specific file for the master node. This file will take care of the initialization of the Kubernetes cluster, create the .kube directory, copy the admin.conf file to the ubuntu user’s kubeconfig and install the flannel network. This is similar to what we did in the manual process.

#master.yml
- hosts: master
  become: yes
  tasks:
    - name: initialize the cluster
      shell: kubeadm init --pod-network-cidr=10.244.0.0/16 >> cluster_initialized.txt
      args:
        chdir: $HOME
        creates: cluster_initialized.txt

    - name: create .kube directory
      become: yes
      become_user: ubuntu
      file:
        path: $HOME/.kube
        state: directory
        mode: 0755

    - name: copy admin.conf to user's kube config
      copy:
        src: /etc/kubernetes/admin.conf
        dest: /home/ubuntu/.kube/config
        remote_src: yes
        owner: ubuntu

    - name: install Pod network
      become: yes
      become_user: ubuntu
      shell: kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml >> pod_network_setup.txt
      args:
        chdir: $HOME
        creates: pod_network_setup.txt

The last file we create is workers.yml. In that file, we will first retrieve the join command from the master node and then join the worker nodes to the cluster.

#workers.yml
- hosts: master
  become: yes
  gather_facts: false
  tasks:
    - name: get join command
      shell: kubeadm token create --print-join-command
      register: join_command_raw

    - name: set join command
      set_fact:
        join_command: "{{ join_command_raw.stdout_lines[0] }}"


- hosts: workers
  become: yes
  tasks:
    - name: join cluster
      shell: "{{ hostvars['master'].join_command }} >> node_joined.txt"
      args:
        chdir: $HOME
        creates: node_joined.txt

If you followed along, we now have an inventory file and four playbooks. These are:

  • hosts
  • initial.yml
  • kube-dependencies.yml
  • master.yml
  • workers.yml

Once we have all the files, we can execute the Ansible playbooks. We will start with the initial.yml file.

WAUTERW-M-T3ZT:ansible-k8s-digitalocean wim$ ansible-playbook -i hosts initial.yml

PLAY [all] ************************************************************************************************************************************

TASK [Gathering Facts] ************************************************************************************************************************
ok: [worker1]
ok: [worker2]
ok: [master]

TASK [Update and upgrade apt packages] ********************************************************************************************************
 [WARNING]: Could not find aptitude. Using apt-get instead.

changed: [worker1]
changed: [master]
changed: [worker2]

TASK [create the 'ubuntu' user] ***************************************************************************************************************
changed: [worker1]
changed: [worker2]
changed: [master]

TASK [allow 'ubuntu' to have passwordless sudo] ***********************************************************************************************
changed: [master]
changed: [worker1]
changed: [worker2]

TASK [set up authorized keys for the ubuntu user] *********************************************************************************************
changed: [worker1] => (item=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQCmQacIjnTa1wuB1XERvPSzasMg/FrmCjtwzTNHo4sk1u8Tbyrm7oh6Fi+CJ5bRxYMRkQ9JWo8ud6jZ4L+Tczcb1RB+U8HqraXRBXHpgOJvgHUcRPsE7x+38vndCsLgONuLbkAcDVcW1RPl++6CipIdbD09YJ6avPfTYEfG+BKIr5AkmIqPM+e2JVD7pgKRSjiNLeMQU2TKVvYOJ74mwNLjQWVBE4KLYFHHKzNA6a40e//MFMoI/YMnPWwnQ/GstBvnWCzwMBJS6uDeFQAEACeeXhfE2GMbM0MM4hFPELm3ZB/5PcXCzSjQ2R6BCmcp/6G4Vyr8tTripCxZCWCCaSw/ wim@WAUTERW-M-T3ZT)
changed: [master] => (item=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQCmQacIjnTa1wuB1XERvPSzasMg/FrmCjtwzTNHo4sk1u8Tbyrm7oh6Fi+CJ5bRxYMRkQ9JWo8ud6jZ4L+Tczcb1RB+U8HqraXRBXHpgOJvgHUcRPsE7x+38vndCsLgONuLbkAcDVcW1RPl++6CipIdbD09YJ6avPfTYEfG+BKIr5AkmIqPM+e2JVD7pgKRSjiNLeMQU2TKVvYOJ74mwNLjQWVBE4KLYFHHKzNA6a40e//MFMoI/YMnPWwnQ/GstBvnWCzwMBJS6uDeFQAEACeeXhfE2GMbM0MM4hFPELm3ZB/5PcXCzSjQ2R6BCmcp/6G4Vyr8tTripCxZCWCCaSw/ wim@WAUTERW-M-T3ZT)
changed: [worker2] => (item=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQCmQacIjnTa1wuB1XERvPSzasMg/FrmCjtwzTNHo4sk1u8Tbyrm7oh6Fi+CJ5bRxYMRkQ9JWo8ud6jZ4L+Tczcb1RB+U8HqraXRBXHpgOJvgHUcRPsE7x+38vndCsLgONuLbkAcDVcW1RPl++6CipIdbD09YJ6avPfTYEfG+BKIr5AkmIqPM+e2JVD7pgKRSjiNLeMQU2TKVvYOJ74mwNLjQWVBE4KLYFHHKzNA6a40e//MFMoI/YMnPWwnQ/GstBvnWCzwMBJS6uDeFQAEACeeXhfE2GMbM0MM4hFPELm3ZB/5PcXCzSjQ2R6BCmcp/6G4Vyr8tTripCxZCWCCaSw/ wim@WAUTERW-M-T3ZT)

PLAY RECAP ************************************************************************************************************************************
master                     : ok=5    changed=4    unreachable=0    failed=0
worker1                    : ok=5    changed=4    unreachable=0    failed=0
worker2                    : ok=5    changed=4    unreachable=0    failed=0
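
At this point, the ubuntu user should exist on all three nodes with our public key installed. A quick way to confirm this (assuming the private key sits next to the public key we referenced in initial.yml):

WAUTERW-M-T3ZT:ansible-k8s-digitalocean wim$ ssh -i ~/.ssh/keypair_digitalocean_146185179184 ubuntu@82.196.4.40 "sudo whoami"

If passwordless sudo was set up correctly, this prints root without asking for a password.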

Next, we will install the Kubernetes dependencies:

WAUTERW-M-T3ZT:ansible-k8s-digitalocean wim$ ansible-playbook -i hosts kube-dependencies.yml

PLAY [all] ************************************************************************************************************************************

TASK [Gathering Facts] ************************************************************************************************************************
ok: [worker1]
ok: [master]
ok: [worker2]

TASK [install Docker] *************************************************************************************************************************
changed: [worker1]
changed: [worker2]
changed: [master]

TASK [install APT Transport HTTPS] ************************************************************************************************************
changed: [worker1]
changed: [worker2]
changed: [master]

TASK [add Kubernetes apt-key] *****************************************************************************************************************
changed: [worker1]
changed: [worker2]
changed: [master]

TASK [add Kubernetes' APT repository] *********************************************************************************************************
changed: [worker1]
changed: [master]
changed: [worker2]

TASK [install kubelet] ************************************************************************************************************************
changed: [worker1]
changed: [master]
changed: [worker2]

TASK [install kubeadm] ************************************************************************************************************************
changed: [worker1]
changed: [master]
changed: [worker2]

PLAY [master] *********************************************************************************************************************************

TASK [Gathering Facts] ************************************************************************************************************************
ok: [master]

TASK [install kubectl] ************************************************************************************************************************
ok: [master]

PLAY RECAP ************************************************************************************************************************************
master                     : ok=9    changed=6    unreachable=0    failed=0
worker1                    : ok=7    changed=6    unreachable=0    failed=0
worker2                    : ok=7    changed=6    unreachable=0    failed=0

When the dependencies are installed, it’s time to configure the master.


WAUTERW-M-T3ZT:ansible-k8s-digitalocean wim$ ansible-playbook -i hosts master.yml

PLAY [master] *********************************************************************************************************************************

TASK [Gathering Facts] ************************************************************************************************************************
ok: [master]

TASK [initialize the cluster] *****************************************************************************************************************
changed: [master]

TASK [create .kube directory] *****************************************************************************************************************
 [WARNING]: Module remote_tmp /home/ubuntu/.ansible/tmp did not exist and was created with a mode of 0700, this may cause issues when running
as another user. To avoid this, create the remote_tmp dir with the correct permissions manually

changed: [master]

TASK [copy admin.conf to user's kube config] **************************************************************************************************
changed: [master]

TASK [install Pod network] ********************************************************************************************************************
changed: [master]

PLAY RECAP ************************************************************************************************************************************
master                     : ok=5    changed=4    unreachable=0    failed=0
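
Before joining the workers, you can already check that the control plane came up by running kubectl on the master as the ubuntu user:

WAUTERW-M-T3ZT:ansible-k8s-digitalocean wim$ ssh -i ~/.ssh/keypair_digitalocean_146185179184 ubuntu@82.196.4.40 "kubectl get nodes"

The master should report a Ready status once the flannel pods are running.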

And finally, we will have the workers join our Kubernetes cluster by executing the workers playbook.

WAUTERW-M-T3ZT:ansible-k8s-digitalocean wim$ ansible-playbook -i hosts workers.yml

PLAY [master] *********************************************************************************************************************************

TASK [get join command] ***********************************************************************************************************************
changed: [master]

TASK [set join command] ***********************************************************************************************************************
ok: [master]

PLAY [workers] ********************************************************************************************************************************

TASK [Gathering Facts] ************************************************************************************************************************
ok: [worker2]
ok: [worker1]

TASK [join cluster] ***************************************************************************************************************************
changed: [worker1]
changed: [worker2]

PLAY RECAP ************************************************************************************************************************************
master                     : ok=2    changed=1    unreachable=0    failed=0
worker1                    : ok=2    changed=1    unreachable=0    failed=0
worker2                    : ok=2    changed=1    unreachable=0    failed=0

To verify that everything went according to plan, we can do a quick check by logging into the master node and checking whether the workers have joined successfully.

ubuntu@server-manual-1:/root$ kubectl get nodes
NAME              STATUS   ROLES    AGE     VERSION
server-manual-1   Ready    master   8m53s   v1.13.0
server-manual-2   Ready    <none>   46s     v1.13.0
server-manual-3   Ready    <none>   46s     v1.13.0
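
As a final smoke test, you could deploy something small and watch it get scheduled on the workers. A minimal sketch using a plain nginx image:

ubuntu@server-manual-1:~$ kubectl create deployment nginx --image=nginx
ubuntu@server-manual-1:~$ kubectl get pods -o wide

The -o wide output includes a NODE column, so you can see the pod landing on one of the workers.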

Install Kubernetes on DigitalOcean (manually)

Introduction

In a previous post, we created 3 Ubuntu servers on DigitalOcean with Terraform. We will continue by installing Kubernetes on these servers. For performance reasons, however, I have created the 3 servers with 2 virtual CPUs and 2GB of memory.

Creating 3 servers on DigitalOcean

The previous post still applies perfectly, but we need to change some parameters. For completeness, here are the Terraform files I have been using. I refer to that post for further information.

Here is the create_server.tf file:

resource "digitalocean_droplet" "server" {
    count = "${var.numberofservers}"
    name = "server-manual-${count.index+1}"
    #name = "${var.servername}-${format("%02d", count.index+1)}"
    image = "ubuntu-18-04-x64"
    size = "s-2vcpu-2gb"
    region = "${var.region}"
    ssh_keys = [
        "${var.ssh_fingerprint}"
    ]
    tags   = ["${digitalocean_tag.webserver.id}"]   
}

resource "digitalocean_tag" "webserver" {
    name = "web"
}


resource "digitalocean_record" "server_dns_record" {
  count     = "${var.numberofservers}"
  name      = "dns-server-${count.index+1}"
  domain    = "${var.domain_name}"
  type      = "A"
  name      = "${element(digitalocean_droplet.server.*.name, count.index+1)}"
  value     = "${element(digitalocean_droplet.server.*.ipv4_address, count.index+1)}"
}

Here is the terraform.tfvars file:

do_token = "bc1***1d7"
ssh_fingerprint = "5a:25:***:8d:03"
servername="server"
numberofservers = 3
numberofcpus = 2
domain_name = "wimwauters.com"
region     = "ams2"

And finally also the provider.tf file:

variable "do_token" {}
variable "ssh_fingerprint" {}
variable "numberofservers" {}
variable "domain_name" {}
variable "region" {}
variable "numberofcpus" {}

provider "digitalocean"{
  token = "${var.do_token}"
}


Next, initialize, plan and apply the configuration:

WAUTERW-M-T3ZT:DigitalOcean_Test wim$ terraform init
WAUTERW-M-T3ZT:DigitalOcean_Test wim$ terraform plan
WAUTERW-M-T3ZT:DigitalOcean_Test wim$ terraform apply
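
To avoid copy-pasting IP addresses from the DigitalOcean dashboard, you could also add an output to the Terraform configuration (a hypothetical outputs.tf; ipv4_address is the standard digitalocean_droplet attribute):

output "server_ips" {
  value = "${digitalocean_droplet.server.*.ipv4_address}"
}

After terraform apply, running terraform output server_ips prints the three addresses.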

Installing Kubernetes

When the servers are ready, you should be able to SSH into them using your key pair.

WAUTERW-M-T3ZT:Keys_and_Certificates wim$ ssh -i keypair_digitalocean_146185179184 root@82.196.11.165

The first server, server-manual-1 will become our Kubernetes master. The two remaining servers will be our Kubernetes workers.

Create a shell script with the following contents:

root@server-manual-1:~# cat kubernetes.sh
#!/bin/bash
apt-get update && apt-get install -y apt-transport-https
curl -s https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
add-apt-repository \
"deb [arch=amd64] https://download.docker.com/linux/ubuntu bionic stable"
apt update && apt install -qy docker-ce
curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add -
echo "deb http://apt.kubernetes.io/ kubernetes-xenial main" \
> /etc/apt/sources.list.d/kubernetes.list
apt-get update && apt-get install -y kubeadm kubelet kubectl

Don’t forget to give execute permissions to the script:

root@server-manual-1:~# sudo chmod +x kubernetes.sh
root@server-manual-1:~# ./kubernetes.sh

Let this process finish and repeat it for the two remaining servers: server-manual-2 and server-manual-3.
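
Optionally, you can hold the Kubernetes packages at their installed version so an unattended apt upgrade doesn’t move them to a newer release behind your back (apt-mark is part of standard apt):

root@server-manual-1:~# apt-mark hold kubelet kubeadm kubectl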

Once finished with the installation of the dependencies, we need to initialize Kubernetes. Note that the pod network CIDR passed to kubeadm should match what the flannel manifest expects, which is 10.244.0.0/16 by default.

root@server-manual-1:~# kubeadm init --apiserver-advertise-address=82.196.11.165 --pod-network-cidr=10.244.0.0/16

At the end of the init, a join command will be returned, which we will need later on to have our worker nodes join the Kubernetes cluster. So take note of that command. In my case this is:

kubeadm join 82.196.11.165:6443 --token eru2tr.4lx8gs1j1bdnuh95 --discovery-token-ca-cert-hash sha256:575155d259d1752c0324ce9ebffa9ee02b395d576c6470ef51410bfdc658a03f
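
If you lose this output, there is no need to re-initialize anything: kubeadm can print a fresh join command at any time.

root@server-manual-1:~# kubeadm token create --print-join-command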

We also need to create a new user on the master node and give it sudo rights. Luckily, this is rather straightforward on Ubuntu:

root@server-manual-1:~# adduser ubuntu
Adding user `ubuntu' ...
Adding new group `ubuntu' (1000) ...
Adding new user `ubuntu' (1000) with group `ubuntu' ...
Creating home directory `/home/ubuntu' ...
Copying files from `/etc/skel' ...
Enter new UNIX password:
Retype new UNIX password:
passwd: password updated successfully
Changing the user information for ubuntu
Enter the new value, or press ENTER for the default
root@server-manual-1:~# usermod -aG sudo ubuntu
root@server-manual-1:~# su - ubuntu

Next, we need to use this user (in my case ubuntu) to set up the Kubernetes configuration:

ubuntu@server-manual-1:~$ mkdir -p $HOME/.kube
ubuntu@server-manual-1:~$ sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
[sudo] password for ubuntu:
ubuntu@server-manual-1:~$ sudo chown $(id -u):$(id -g) $HOME/.kube/config

Last but not least, we need to install the Flannel network on the cluster:

ubuntu@server-manual-1:~$ kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
serviceaccount/flannel created
configmap/kube-flannel-cfg created
daemonset.extensions/kube-flannel-ds-amd64 created
daemonset.extensions/kube-flannel-ds-arm64 created
daemonset.extensions/kube-flannel-ds-arm created
daemonset.extensions/kube-flannel-ds-ppc64le created
daemonset.extensions/kube-flannel-ds-s390x created

The master Kubernetes node is now ready. You can validate this by executing the following command:

ubuntu@server-manual-1:~$ kubectl get nodes
NAME              STATUS   ROLES    AGE   VERSION
server-manual-1   Ready    master   11m   v1.13.0

Configuring the worker nodes

Now that the master node is fully configured, we need to have our worker nodes join the cluster. If you did not install the dependencies on the worker nodes yet, now is the time to do so. If you followed along with this guide, you will have done this step already.

The next step is to execute the join command that our master node returned. So perform the following command on both worker nodes.

root@server-manual-2:~# kubeadm join 82.196.11.165:6443 --token eru2tr.4lx8gs1j1bdnuh95 --discovery-token-ca-cert-hash sha256:575155d259d1752c0324ce9ebffa9ee02b395d576c6470ef51410bfdc658a03f

Once done, after some time (around 20 seconds), you can execute the ‘kubectl get nodes’ command on the master node and see that the node has joined the cluster successfully.

ubuntu@server-manual-1:~$ kubectl get nodes
NAME              STATUS   ROLES    AGE   VERSION
server-manual-1   Ready    master   15m   v1.13.0
server-manual-2   Ready    <none>   25s   v1.13.0

Do the same on the second worker node:

root@server-manual-3:~# kubeadm join 82.196.11.165:6443 --token eru2tr.4lx8gs1j1bdnuh95 --discovery-token-ca-cert-hash sha256:575155d259d1752c0324ce9ebffa9ee02b395d576c6470ef51410bfdc658a03f

If all went well you should also see the second worker appear in the cluster.

ubuntu@server-manual-1:~$ kubectl get nodes
NAME              STATUS   ROLES    AGE     VERSION
server-manual-1   Ready    master   19m     v1.13.0
server-manual-2   Ready    <none>   4m30s   v1.13.0
server-manual-3   Ready    <none>   45s     v1.13.0
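
As a last check, you can verify that the flannel and kube-proxy pods are running on every node:

ubuntu@server-manual-1:~$ kubectl get pods -n kube-system -o wide

Each node should show one kube-flannel-ds-amd64 pod in the Running state.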