Install Kubernetes on DigitalOcean (manually)

08/10/2018

Introduction

In a previous post, we created three Ubuntu servers on DigitalOcean with Terraform. We will continue here by installing Kubernetes on those servers. Note, however, that for performance reasons I have created the servers with 2 virtual CPUs and 2 GB of memory this time.

Creating 3 servers on DigitalOcean

The previous post still applies, but we need to change some parameters. For completeness, here are the Terraform files I have been using; I refer to that post for further information.

Here is the create_server.tf file:

resource "digitalocean_droplet" "server" {
    count = "${var.numberofservers}"
    name = "server-manual-${count.index+1}"
    #name = "${var.servername}-${format("%02d", count.index+1)}"
    image = "ubuntu-18-04-x64"
    size = "s-2vcpu-2gb"
    region = "${var.region}"
    ssh_keys = [
        "${var.ssh_fingerprint}"
    ]
    tags   = ["${digitalocean_tag.webserver.id}"]   
}

resource "digitalocean_tag" "webserver" {
    name = "web"
}


resource "digitalocean_record" "server_dns_record" {
  count     = "${var.numberofservers}"
  name      = "dns-server-${count.index+1}"
  domain    = "${var.domain_name}"
  type      = "A"
  name      = "${element(digitalocean_droplet.server.*.name, count.index+1)}"
  value     = "${element(digitalocean_droplet.server.*.ipv4_address, count.index+1)}"
}

Here is the terraform.tfvars file:

do_token = "bc1***1d7"
ssh_fingerprint = "5a:25:***:8d:03"
servername="server"
numberofservers = 3
numberofcpus = 2
domain_name = "wimwauters.com"
region     = "ams2"

And finally, the provider.tf file:

variable "do_token" {}
variable "ssh_fingerprint" {}
variable "numberofservers" {}
variable "domain_name" {}
variable "region" {}
variable "numberofcpus" {}

provider "digitalocean"{
  token = "${var.do_token}"
}


Next, apply the configuration by executing the following commands:

WAUTERW-M-T3ZT:DigitalOcean_Test wim$ terraform init
WAUTERW-M-T3ZT:DigitalOcean_Test wim$ terraform plan
WAUTERW-M-T3ZT:DigitalOcean_Test wim$ terraform apply
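
If you want the droplet IP addresses handy for the SSH step below, you could also add a small outputs.tf. This is an optional addition of mine, not part of the original files:

output "server_ips" {
  value = "${digitalocean_droplet.server.*.ipv4_address}"
}

After terraform apply, running terraform output server_ips prints the list of addresses.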

Installing Kubernetes

When the servers are ready, you should be able to SSH into them using the private key that matches the fingerprint you supplied to Terraform.

WAUTERW-M-T3ZT:Keys_and_Certificates wim$ ssh -i keypair_digitalocean_146185179184 root@82.196.11.165

The first server, server-manual-1, will become our Kubernetes master. The two remaining servers will be our Kubernetes workers.

Create a shell script with the following contents:

root@server-manual-1:~# cat kubernetes.sh
# Install HTTPS transport for apt
apt-get update && apt-get install -y apt-transport-https
# Add Docker's signing key and repository, then install Docker
curl -s https://download.docker.com/linux/ubuntu/gpg | apt-key add -
add-apt-repository \
"deb [arch=amd64] https://download.docker.com/linux/ubuntu bionic stable"
apt update && apt install -qy docker-ce
# Add the Kubernetes signing key and repository, then install kubeadm, kubelet and kubectl
curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add -
echo "deb http://apt.kubernetes.io/ kubernetes-xenial main" \
> /etc/apt/sources.list.d/kubernetes.list
apt-get update && apt-get install -y kubeadm kubelet kubectl
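
Optionally, you may want to pin these packages so that a later apt upgrade does not move the cluster components to a new version behind your back:

root@server-manual-1:~# apt-mark hold docker-ce kubelet kubeadm kubectl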

Don’t forget to give execute permissions to the script, and then run it:

root@server-manual-1:~# sudo chmod +x kubernetes.sh
root@server-manual-1:~# ./kubernetes.sh

Let this process finish and repeat it for the two remaining servers: server-manual-2 and server-manual-3.
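
Rather than retyping the script, you could copy it around with scp, for example as follows (the worker IP is a placeholder you need to fill in yourself):

WAUTERW-M-T3ZT:Keys_and_Certificates wim$ scp -i keypair_digitalocean_146185179184 root@82.196.11.165:kubernetes.sh .
WAUTERW-M-T3ZT:Keys_and_Certificates wim$ scp -i keypair_digitalocean_146185179184 kubernetes.sh root@<worker-ip>: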

Once the dependencies are installed on all servers, we need to initialize Kubernetes on the master. One caveat here: the Flannel manifest we apply later defaults to the 10.244.0.0/16 pod network, so either pass --pod-network-cidr=10.244.0.0/16 below or edit the network in Flannel's ConfigMap to match the CIDR you choose.

root@server-manual-1:~# kubeadm init --apiserver-advertise-address=82.196.11.165 --pod-network-cidr=192.168.30.0/16

At the end of the output, a join command is returned, which we will need later on to have our worker nodes join the Kubernetes cluster, so take note of it. In my case this is:

kubeadm join 82.196.11.165:6443 --token eru2tr.4lx8gs1j1bdnuh95 --discovery-token-ca-cert-hash sha256:575155d259d1752c0324ce9ebffa9ee02b395d576c6470ef51410bfdc658a03f
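
If you lose this command, or if you come back later (the bootstrap token expires after 24 hours by default), you can generate a fresh join command on the master at any time:

root@server-manual-1:~# kubeadm token create --print-join-command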

We also need to create a new user on the master node and give it sudo rights. Luckily, this is rather straightforward on Ubuntu:

root@server-manual-1:~# adduser ubuntu
Adding user `ubuntu' ...
Adding new group `ubuntu' (1000) ...
Adding new user `ubuntu' (1000) with group `ubuntu' ...
Creating home directory `/home/ubuntu' ...
Copying files from `/etc/skel' ...
Enter new UNIX password:
Retype new UNIX password:
passwd: password updated successfully
Changing the user information for ubuntu
Enter the new value, or press ENTER for the default
root@server-manual-1:~# usermod -aG sudo ubuntu
root@server-manual-1:~# su - ubuntu

Next, we need to use this user (in my case ubuntu) to set up the kubectl configuration:

ubuntu@server-manual-1:~$ mkdir -p $HOME/.kube
ubuntu@server-manual-1:~$ sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
[sudo] password for ubuntu:
ubuntu@server-manual-1:~$ sudo chown $(id -u):$(id -g) $HOME/.kube/config
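
As a quick sanity check that kubectl can reach the API server with this configuration, you can run:

ubuntu@server-manual-1:~$ kubectl cluster-info

This should print the address of the Kubernetes master. Note that the coredns pods will remain in Pending state until we install the pod network in the next step.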

Last but not least, we need to install the Flannel pod network on the cluster:

ubuntu@server-manual-1:~$ kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
serviceaccount/flannel created
configmap/kube-flannel-cfg created
daemonset.extensions/kube-flannel-ds-amd64 created
daemonset.extensions/kube-flannel-ds-arm64 created
daemonset.extensions/kube-flannel-ds-arm created
daemonset.extensions/kube-flannel-ds-ppc64le created
daemonset.extensions/kube-flannel-ds-s390x created
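
You can watch the Flannel daemonset pod come up, and coredns move from Pending to Running, with:

ubuntu@server-manual-1:~$ kubectl get pods -n kube-system -o wide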

The master Kubernetes node is now ready. You can validate this by executing the following command:

ubuntu@server-manual-1:~$ kubectl get nodes
NAME              STATUS   ROLES    AGE   VERSION
server-manual-1   Ready    master   11m   v1.13.0

Configuring the worker nodes

Now that the master node is fully configured, we need to have our worker nodes join the cluster. If you have not yet installed the dependencies on the worker nodes, now is the time to do so; if you followed along with this guide, you have already completed that step.

The next step is to execute the join command that our master node returned. Run the following command on both worker nodes.

root@server-manual-2:~# kubeadm join 82.196.11.165:6443 --token eru2tr.4lx8gs1j1bdnuh95 --discovery-token-ca-cert-hash sha256:575155d259d1752c0324ce9ebffa9ee02b395d576c6470ef51410bfdc658a03f

Once done, wait a moment (around 20 seconds) and execute the ‘kubectl get nodes’ command on the master node to see that the worker joined the cluster successfully.

ubuntu@server-manual-1:~$ kubectl get nodes
NAME              STATUS   ROLES    AGE   VERSION
server-manual-1   Ready    master   15m   v1.13.0
server-manual-2   Ready    <none>   25s   v1.13.0

Do the same on the second worker node:

root@server-manual-3:~# kubeadm join 82.196.11.165:6443 --token eru2tr.4lx8gs1j1bdnuh95 --discovery-token-ca-cert-hash sha256:575155d259d1752c0324ce9ebffa9ee02b395d576c6470ef51410bfdc658a03f

If all went well you should also see the second worker appear in the cluster.

ubuntu@server-manual-1:~$ kubectl get nodes
NAME              STATUS   ROLES    AGE     VERSION
server-manual-1   Ready    master   19m     v1.13.0
server-manual-2   Ready    <none>   4m30s   v1.13.0
server-manual-3   Ready    <none>   45s     v1.13.0
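
As a final smoke test, and this is an example of my own rather than part of the original setup, you could schedule some pods and check that they land on the new workers:

ubuntu@server-manual-1:~$ kubectl create deployment nginx --image=nginx
ubuntu@server-manual-1:~$ kubectl scale deployment nginx --replicas=3
ubuntu@server-manual-1:~$ kubectl get pods -o wide

The last command shows a NODE column, which should list the worker nodes since the master is tainted for scheduling by default.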
