DigitalOcean – Install Kubernetes using Ansible

17/10/2018

A couple of weeks ago, we published several posts in which we created servers on DigitalOcean. One of those posts can be found here. If you want to follow along with this guide, use that post to create three droplets.

If all went well, you will see the following screen in DigitalOcean.

Using Ansible to install Kubernetes

In this post, we will focus on using Ansible to install Kubernetes. In essence, we will be implementing this guide, but on Ubuntu 18.04 with Kubernetes 1.13 and the latest Flannel release. In other words, this post is just a little more up to date, but the same general principles apply. What we do here with Ansible are essentially the same steps I performed manually in this post.

First of all, let’s create a hosts inventory file for our Ansible scripts. We will define one master and two workers. The IP addresses are the same as those shown in the DigitalOcean screenshot.

#hosts
[masters]
master ansible_host=82.196.4.40 ansible_user=root

[workers]
worker1 ansible_host=82.196.4.203 ansible_user=root
worker2 ansible_host=82.196.0.134 ansible_user=root

[all:vars]
ansible_python_interpreter=/usr/bin/python3
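
Before running any playbooks against fresh droplets, it can help to disable SSH host key checking, since the new hosts are not yet in known_hosts. A minimal ansible.cfg in the project directory could look like this (the file name and location are standard Ansible conventions, but this file is my own addition, not part of the original setup):

```ini
# ansible.cfg (assumption: placed next to the hosts file)
[defaults]
inventory = hosts
host_key_checking = False
```

With this in place, a quick `ansible all -m ping` should confirm that all three droplets are reachable before we proceed.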

Next, we will create a playbook that installs all available updates, creates the ubuntu user and ensures that the ubuntu user has passwordless sudo rights.

# initial.yml
- hosts: all
  become: yes
  tasks:
    - name: Update and upgrade apt packages
      apt:
        upgrade: yes
        update_cache: yes
        cache_valid_time: 86400 # one day

    - name: create the 'ubuntu' user
      user:
        name: ubuntu
        state: present
        createhome: yes
        shell: /bin/bash

    - name: allow 'ubuntu' to have passwordless sudo
      lineinfile:
        dest: /etc/sudoers
        line: "ubuntu ALL=(ALL) NOPASSWD: ALL"
        validate: "visudo -cf %s"

    - name: set up authorized keys for the ubuntu user
      authorized_key:
        user: ubuntu
        key: "{{ item }}"
      with_file:
        - ~/.ssh/keypair_digitalocean_146185179184.pub
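
As a side note, instead of editing /etc/sudoers directly with lineinfile, you could drop a file into /etc/sudoers.d, which keeps the main sudoers file untouched. A sketch of that alternative (the drop-in file name ubuntu is my own choice):

```yaml
# Alternative: grant passwordless sudo via a drop-in file
# instead of editing /etc/sudoers in place
- name: allow 'ubuntu' passwordless sudo via sudoers.d
  copy:
    dest: /etc/sudoers.d/ubuntu
    content: "ubuntu ALL=(ALL) NOPASSWD: ALL\n"
    mode: "0440"
    validate: "visudo -cf %s"
```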

Next up, we will create a second playbook that installs the Kubernetes-specific dependencies on all three nodes. We start with the installation of Docker, then add the Kubernetes APT repository and install the kubelet and kubeadm packages on every node; kubectl is only needed on the master.

#kube-dependencies.yml
- hosts: all
  become: yes
  tasks:
   - name: install Docker
     apt:
       name: docker.io
       state: present
       update_cache: true

   - name: install APT Transport HTTPS
     apt:
       name: apt-transport-https
       state: present

   - name: add Kubernetes apt-key
     apt_key:
       url: https://packages.cloud.google.com/apt/doc/apt-key.gpg
       state: present

   - name: add Kubernetes' APT repository
     apt_repository:
      repo: deb http://apt.kubernetes.io/ kubernetes-xenial main
      state: present
      filename: 'kubernetes'

   - name: install kubelet
     apt:
       name: kubelet
       state: present
       update_cache: true

   - name: install kubeadm
     apt:
       name: kubeadm
       state: present

- hosts: master
  become: yes
  tasks:
   - name: install kubectl
     apt:
       name: kubectl
       state: present
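
One caveat: the playbook above installs whatever kubelet and kubeadm versions the repository currently serves. If you want the cluster to stay on Kubernetes 1.13, the package versions can be pinned and held. A sketch, assuming 1.13.0-00 is the exact package version string available (verify with `apt-cache madison kubelet`):

```yaml
# Pin kubelet/kubeadm to a specific version (1.13.0-00 is an assumption;
# check the exact version string with `apt-cache madison kubelet`)
- name: install pinned kubelet and kubeadm
  apt:
    name:
      - kubelet=1.13.0-00
      - kubeadm=1.13.0-00
    state: present
    update_cache: true

- name: hold the packages at that version
  dpkg_selections:
    name: "{{ item }}"
    selection: hold
  loop:
    - kubelet
    - kubeadm
```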

When the above is finished, we will create a specific playbook for the master node. This playbook takes care of initializing the Kubernetes cluster, creates the .kube directory, copies the admin.conf file to the ubuntu user's kubeconfig and installs the Flannel network, similar to what we did in the manual process.

#master.yml
- hosts: master
  become: yes
  tasks:
    - name: initialize the cluster
      shell: kubeadm init --pod-network-cidr=10.244.0.0/16 >> cluster_initialized.txt
      args:
        chdir: $HOME
        creates: cluster_initialized.txt

    - name: create .kube directory
      become: yes
      become_user: ubuntu
      file:
        path: $HOME/.kube
        state: directory
        mode: 0755

    - name: copy admin.conf to user's kube config
      copy:
        src: /etc/kubernetes/admin.conf
        dest: /home/ubuntu/.kube/config
        remote_src: yes
        owner: ubuntu

    - name: install Pod network
      become: yes
      become_user: ubuntu
      shell: kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml >> pod_network_setup.txt
      args:
        chdir: $HOME
        creates: pod_network_setup.txt
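
Note that the Flannel manifest above is pulled from the master branch, so its contents can change over time. Pinning the URL to a release tag makes the playbook reproducible; a sketch assuming the v0.10.0 tag (pick whichever tag you want from the coreos/flannel releases page):

```yaml
# Same task, but with the manifest pinned to a release tag (v0.10.0 is an
# assumption; choose a tag from the coreos/flannel releases page)
- name: install Pod network (pinned)
  become: yes
  become_user: ubuntu
  shell: kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/v0.10.0/Documentation/kube-flannel.yml >> pod_network_setup.txt
  args:
    chdir: $HOME
    creates: pod_network_setup.txt
```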

The last file we create is workers.yml. In that file, we first retrieve the join command from the master node and then have the worker nodes join the cluster.

#workers.yml
- hosts: master
  become: yes
  gather_facts: false
  tasks:
    - name: get join command
      shell: kubeadm token create --print-join-command
      register: join_command_raw

    - name: set join command
      set_fact:
        join_command: "{{ join_command_raw.stdout_lines[0] }}"


- hosts: workers
  become: yes
  tasks:
    - name: join cluster
      shell: "{{ hostvars['master'].join_command }} >> node_joined.txt"
      args:
        chdir: $HOME
        creates: node_joined.txt
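
Since the three playbooks always run in the same order, they can also be chained into a single entry point with import_playbook. The file name site.yml is my own convention, not part of the original setup:

```yaml
# site.yml (assumption: my own naming) - run everything in order with:
#   ansible-playbook -i hosts site.yml
- import_playbook: initial.yml
- import_playbook: kube-dependencies.yml
- import_playbook: master.yml
- import_playbook: workers.yml
```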

If you followed along, you should now have the following files:

  • hosts
  • initial.yml
  • kube-dependencies.yml
  • master.yml
  • workers.yml

Once we have all the files, we can execute the Ansible playbooks. We will start with the initial.yml file.

WAUTERW-M-T3ZT:ansible-k8s-digitalocean wim$ ansible-playbook -i hosts initial.yml

PLAY [all] ************************************************************************************************************************************

TASK [Gathering Facts] ************************************************************************************************************************
ok: [worker1]
ok: [worker2]
ok: [master]

TASK [Update and upgrade apt packages] ********************************************************************************************************
 [WARNING]: Could not find aptitude. Using apt-get instead.

changed: [worker1]
changed: [master]
changed: [worker2]

TASK [create the 'ubuntu' user] ***************************************************************************************************************
changed: [worker1]
changed: [worker2]
changed: [master]

TASK [allow 'ubuntu' to have passwordless sudo] ***********************************************************************************************
changed: [master]
changed: [worker1]
changed: [worker2]

TASK [set up authorized keys for the ubuntu user] *********************************************************************************************
changed: [worker1] => (item=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQCmQacIjnTa1wuB1XERvPSzasMg/FrmCjtwzTNHo4sk1u8Tbyrm7oh6Fi+CJ5bRxYMRkQ9JWo8ud6jZ4L+Tczcb1RB+U8HqraXRBXHpgOJvgHUcRPsE7x+38vndCsLgONuLbkAcDVcW1RPl++6CipIdbD09YJ6avPfTYEfG+BKIr5AkmIqPM+e2JVD7pgKRSjiNLeMQU2TKVvYOJ74mwNLjQWVBE4KLYFHHKzNA6a40e//MFMoI/YMnPWwnQ/GstBvnWCzwMBJS6uDeFQAEACeeXhfE2GMbM0MM4hFPELm3ZB/5PcXCzSjQ2R6BCmcp/6G4Vyr8tTripCxZCWCCaSw/ wim@WAUTERW-M-T3ZT)
changed: [master] => (item=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQCmQacIjnTa1wuB1XERvPSzasMg/FrmCjtwzTNHo4sk1u8Tbyrm7oh6Fi+CJ5bRxYMRkQ9JWo8ud6jZ4L+Tczcb1RB+U8HqraXRBXHpgOJvgHUcRPsE7x+38vndCsLgONuLbkAcDVcW1RPl++6CipIdbD09YJ6avPfTYEfG+BKIr5AkmIqPM+e2JVD7pgKRSjiNLeMQU2TKVvYOJ74mwNLjQWVBE4KLYFHHKzNA6a40e//MFMoI/YMnPWwnQ/GstBvnWCzwMBJS6uDeFQAEACeeXhfE2GMbM0MM4hFPELm3ZB/5PcXCzSjQ2R6BCmcp/6G4Vyr8tTripCxZCWCCaSw/ wim@WAUTERW-M-T3ZT)
changed: [worker2] => (item=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQCmQacIjnTa1wuB1XERvPSzasMg/FrmCjtwzTNHo4sk1u8Tbyrm7oh6Fi+CJ5bRxYMRkQ9JWo8ud6jZ4L+Tczcb1RB+U8HqraXRBXHpgOJvgHUcRPsE7x+38vndCsLgONuLbkAcDVcW1RPl++6CipIdbD09YJ6avPfTYEfG+BKIr5AkmIqPM+e2JVD7pgKRSjiNLeMQU2TKVvYOJ74mwNLjQWVBE4KLYFHHKzNA6a40e//MFMoI/YMnPWwnQ/GstBvnWCzwMBJS6uDeFQAEACeeXhfE2GMbM0MM4hFPELm3ZB/5PcXCzSjQ2R6BCmcp/6G4Vyr8tTripCxZCWCCaSw/ wim@WAUTERW-M-T3ZT)

PLAY RECAP ************************************************************************************************************************************
master                     : ok=5    changed=4    unreachable=0    failed=0
worker1                    : ok=5    changed=4    unreachable=0    failed=0
worker2                    : ok=5    changed=4    unreachable=0    failed=0

Next, we will install the Kubernetes dependencies:

WAUTERW-M-T3ZT:ansible-k8s-digitalocean wim$ ansible-playbook -i hosts kube-dependencies.yml

PLAY [all] ************************************************************************************************************************************

TASK [Gathering Facts] ************************************************************************************************************************
ok: [worker1]
ok: [master]
ok: [worker2]

TASK [install Docker] *************************************************************************************************************************
changed: [worker1]
changed: [worker2]
changed: [master]

TASK [install APT Transport HTTPS] ************************************************************************************************************
changed: [worker1]
changed: [worker2]
changed: [master]

TASK [add Kubernetes apt-key] *****************************************************************************************************************
changed: [worker1]
changed: [worker2]
changed: [master]

TASK [add Kubernetes' APT repository] *********************************************************************************************************
changed: [worker1]
changed: [master]
changed: [worker2]

TASK [install kubelet] ************************************************************************************************************************
changed: [worker1]
changed: [master]
changed: [worker2]

TASK [install kubeadm] ************************************************************************************************************************
changed: [worker1]
changed: [master]
changed: [worker2]

PLAY [master] *********************************************************************************************************************************

TASK [Gathering Facts] ************************************************************************************************************************
ok: [master]

TASK [install kubectl] ************************************************************************************************************************
ok: [master]

PLAY RECAP ************************************************************************************************************************************
master                     : ok=9    changed=6    unreachable=0    failed=0
worker1                    : ok=7    changed=6    unreachable=0    failed=0
worker2                    : ok=7    changed=6    unreachable=0    failed=0

When the dependencies are installed, it’s time to configure the master.


WAUTERW-M-T3ZT:ansible-k8s-digitalocean wim$ ansible-playbook -i hosts master.yml

PLAY [master] *********************************************************************************************************************************

TASK [Gathering Facts] ************************************************************************************************************************
ok: [master]

TASK [initialize the cluster] *****************************************************************************************************************
changed: [master]

TASK [create .kube directory] *****************************************************************************************************************
 [WARNING]: Module remote_tmp /home/ubuntu/.ansible/tmp did not exist and was created with a mode of 0700, this may cause issues when running
as another user. To avoid this, create the remote_tmp dir with the correct permissions manually

changed: [master]

TASK [copy admin.conf to user's kube config] **************************************************************************************************
changed: [master]

TASK [install Pod network] ********************************************************************************************************************
changed: [master]

PLAY RECAP ************************************************************************************************************************************
master                     : ok=5    changed=4    unreachable=0    failed=0

And finally, we will have the workers join our Kubernetes cluster by executing the workers playbook.

WAUTERW-M-T3ZT:ansible-k8s-digitalocean wim$ ansible-playbook -i hosts workers.yml

PLAY [master] *********************************************************************************************************************************

TASK [get join command] ***********************************************************************************************************************
changed: [master]

TASK [set join command] ***********************************************************************************************************************
ok: [master]

PLAY [workers] ********************************************************************************************************************************

TASK [Gathering Facts] ************************************************************************************************************************
ok: [worker2]
ok: [worker1]

TASK [join cluster] ***************************************************************************************************************************
changed: [worker1]
changed: [worker2]

PLAY RECAP ************************************************************************************************************************************
master                     : ok=2    changed=1    unreachable=0    failed=0
worker1                    : ok=2    changed=1    unreachable=0    failed=0
worker2                    : ok=2    changed=1    unreachable=0    failed=0

To verify that everything went according to plan, we can do a quick check by logging in to the master node and confirming that the workers have joined successfully.

ubuntu@server-manual-1:/root$ kubectl get nodes
NAME              STATUS   ROLES    AGE     VERSION
server-manual-1   Ready    master   8m53s   v1.13.0
server-manual-2   Ready    <none>   46s     v1.13.0
server-manual-3   Ready    <none>   46s     v1.13.0
