Installing a local Kubernetes environment

Introduction

There are various ways to get started with Kubernetes. By far the most commonly recommended option seems to be to launch a cluster on Google Cloud Platform. However, if you’re like me, you prefer a local environment that you can fully control.

In this post, we will install such a local Kubernetes cluster. In subsequent posts, we will expand on this and run various applications on it.

Installing single-node Kubernetes with Minikube

By far the easiest way to set up a single-node Kubernetes cluster is Minikube. Installation instructions can be found here. Before installing Minikube, ensure you have VirtualBox installed, and also install kubectl. Kubectl is Kubernetes’ command-line tool, which allows you to deploy and manage applications on Kubernetes.
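If you use Homebrew, the prerequisites can be installed along these lines (a sketch; commands assume Homebrew on macOS and may differ for your setup):

$ brew cask install virtualbox   # hypervisor used by minikube
$ brew install kubectl           # Kubernetes command-line tool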
Then, to install Minikube, execute the following command:

WAUTERW-M-T3ZT:Kubernetes wim$ brew cask install minikube

Then run the minikube cluster by executing the following command:

WAUTERW-M-T3ZT:~ wim$ minikube start
Starting local Kubernetes v1.6.0 cluster...
Starting VM...
.....

You will see that a virtual machine (called… euh… minikube) is created in VirtualBox. Let’s first explore our cluster a bit:

WAUTERW-M-T3ZT:~ wim$ kubectl cluster-info
Kubernetes master is running at https://192.168.99.100:8443
To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.

You can see that the master is running. It’s always a bit annoying to work without a dashboard, so let’s sort that out first. Minikube makes this quite easy:

WAUTERW-M-T3ZT:~ wim$ minikube dashboard

It will then automatically open the dashboard in the browser. In my situation, it brought me to http://192.168.99.100:30000/#!/deployment?namespace=default.
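Before moving on, it’s worth a quick sanity check that the node is up:

$ kubectl get nodes

You should see a single node called minikube in the Ready state (the exact output depends on your versions).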

Installing single-node Kubernetes with CoreOS

Another option is the single-node Vagrant setup from CoreOS. First, clone the coreos-kubernetes repository and bring up the virtual machine:

WAUTERW-M-T3ZT:Kubernetes wim$ git clone https://github.com/coreos/coreos-kubernetes.git
Cloning into 'coreos-kubernetes'...
remote: Counting objects: 10238, done.
remote: Compressing objects: 100% (4/4), done.
remote: Total 10238 (delta 0), reused 0 (delta 0), pack-reused 10234
Receiving objects: 100% (10238/10238), 26.37 MiB | 5.82 MiB/s, done.
Resolving deltas: 100% (3366/3366), done.
WAUTERW-M-T3ZT:Kubernetes wim$ cd coreos-kubernetes/single-node/
WAUTERW-M-T3ZT:single-node wim$ vagrant up
Generating RSA private key, 2048 bit long modulus
.....
.....
.....
Signature ok
subject=/CN=kube-admin
Getting CA Private Key
Bundled SSL artifacts into ssl/kube-admin.tar
ssl/ca.pem ssl/admin-key.pem ssl/admin.pem
Bringing machine 'default' up with 'virtualbox' provider...
==> default: Importing base box 'coreos-alpha'...
==> default: Matching MAC address for NAT networking...
==> default: Checking if box 'coreos-alpha' is up to date...
==> default: Setting the name of the VM: single-node_default_1497952480776_23955
==> default: Fixed port collision for 22 => 2222. Now on port 2200.
==> default: Clearing any previously set network interfaces...
==> default: Preparing network interfaces based on configuration...
    default: Adapter 1: nat
    default: Adapter 2: hostonly
==> default: Forwarding ports...
    default: 22 (guest) => 2200 (host) (adapter 1)
==> default: Running 'pre-boot' VM customizations...
==> default: Booting VM...
==> default: Waiting for machine to boot. This may take a few minutes...
    default: SSH address: 127.0.0.1:2200
    default: SSH username: core
    default: SSH auth method: private key
==> default: Machine booted and ready!
==> default: Configuring and enabling network interfaces...
==> default: Running provisioner: file...
==> default: Running provisioner: shell...
    default: Running: inline script
==> default: Running provisioner: file...
==> default: Running provisioner: shell...
    default: Running: inline script
WAUTERW-M-T3ZT:single-node wim$ kubectl get nodes
NAME          STATUS    AGE       VERSION
172.17.4.99   Ready     2m        v1.5.4+coreos.0

WAUTERW-M-T3ZT:single-node wim$ kubectl cluster-info
Kubernetes master is running at https://172.17.4.99:443
Heapster is running at https://172.17.4.99:443/api/v1/proxy/namespaces/kube-system/services/heapster
KubeDNS is running at https://172.17.4.99:443/api/v1/proxy/namespaces/kube-system/services/kube-dns
kubernetes-dashboard is running at https://172.17.4.99:443/api/v1/proxy/namespaces/kube-system/services/kubernetes-dashboard

Let’s try to have a look at the dashboard by opening a browser and going to the above-mentioned URL. You will likely get an ‘Unauthorized’ response back. The reason is that the API server needs a client certificate, token, or username and password (depending on its configuration) to authorize the client; otherwise it returns Unauthorized.
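You can see this for yourself with curl (a sketch; -k skips certificate verification, and the exact error body depends on your API server configuration):

$ curl -k https://172.17.4.99:443/api/v1/proxy/namespaces/kube-system/services/kubernetes-dashboard
Unauthorized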

Luckily, we can access the dashboard through SSH tunneling. Have a look here: someone wrote a shell script to create such a tunnel, and it saved me a ton of time. The code is shown below:

#!/bin/bash

# Usage: Assuming a vagrant based kubernetes (as in https://coreos.com/kubernetes/docs/latest/kubernetes-on-vagrant-single.html), run this script in the same folder of the Vagrantfile (where you would normally do "vagrant up")
# * Then insert the password (by default: kubernetes)
# * Browse localhost:9090

USERNAME='kubernetes'
PASSWORD='kubernetes'

function main() {
  Create_user_on_kubernetes_machine

  SSH_port_forwarding

  # Enjoy (at localhost:9090)
}

function Create_user_on_kubernetes_machine() {
  # Attribution: https://help.ubuntu.com/community/AddUsersHowto
  # Attribution: http://stackoverflow.com/questions/2150882/how-to-automatically-add-user-account-and-password-with-a-bash-script
  vagrant ssh -c "if [ ! -d /home/$USERNAME ]; then sudo useradd $USERNAME -m -s /bin/bash && echo '$USERNAME:$PASSWORD' | sudo chpasswd; fi"
}


function SSH_port_forwarding() {
  KUBERNETES_HOST=$(kubectl cluster-info | head -n 1 | grep -o -E '([0-9]+\.){3}[0-9]+')
  # Attribution: https://github.com/kubernetes/dashboard/issues/692
  # * Comment: https://github.com/kubernetes/dashboard/issues/692#issuecomment-251617588
  #     * By bbalzola: https://github.com/bbalzola
  TARGET=$(kubectl describe services kubernetes-dashboard --namespace=kube-system | grep Endpoints | awk '{ print $2 }')

  # Attribution: https://help.ubuntu.com/community/SSH/OpenSSH/PortForwarding
  ssh -L 9090:$TARGET $USERNAME@$KUBERNETES_HOST
}

main

Then give execution rights to the script and run it (see the example below). When it asks for a password, remember that the default is ‘kubernetes’. Next, go to http://127.0.0.1:9090 and you should see the Kubernetes dashboard.
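For example, assuming you saved the script as dashboard-tunnel.sh (the filename is up to you):

$ chmod +x dashboard-tunnel.sh
$ ./dashboard-tunnel.sh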

Installing multi-node Kubernetes with CoreOS

In case you are following along with this post, you have already cloned the repository and can skip the next step. If not, quickly execute it:

WAUTERW-M-T3ZT:Kubernetes wim$ git clone https://github.com/coreos/coreos-kubernetes.git
Cloning into 'coreos-kubernetes'...
remote: Counting objects: 10238, done.
remote: Compressing objects: 100% (4/4), done.
remote: Total 10238 (delta 0), reused 0 (delta 0), pack-reused 10234
Receiving objects: 100% (10238/10238), 26.37 MiB | 5.82 MiB/s, done.
Resolving deltas: 100% (3366/3366), done.

This time, go to the multi-node folder and create a config.rb file from the provided sample:

WAUTERW-M-T3ZT:Kubernetes wim$ cd coreos-kubernetes/multi-node/vagrant/
WAUTERW-M-T3ZT:vagrant wim$ cp config.rb.sample config.rb

Edit the config.rb file to adjust the number of workers (don’t forget to uncomment the line):

#$update_channel="alpha"
#$controller_count=1
#$controller_vm_memory=512
$worker_count=3
#$worker_vm_memory=1024
#$etcd_count=1
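If you prefer not to open an editor, a one-liner such as this does the trick on macOS (a sketch; it assumes the sample file contains the commented line #$worker_count=1):

$ sed -i '' 's/^#\$worker_count=1/$worker_count=3/' config.rb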
Then bring up the cluster:

WAUTERW-M-T3ZT:vagrant wim$ vagrant up
Generating RSA private key, 2048 bit long modulus
.................................................+++
........................+++
e is 65537 (0x10001)
Generating SSL artifacts in ssl
Generating RSA private key, 2048 bit long modulus
.....
..... #output deleted for keeping things clear
.....
subject=/CN=kube-worker-172.17.4.203
Getting CA Private Key
Bundled SSL artifacts into ssl/kube-worker-172.17.4.203.tar
ssl/ca.pem ssl/worker-key.pem ssl/worker.pem
Bringing machine 'e1' up with 'virtualbox' provider...
Bringing machine 'c1' up with 'virtualbox' provider...
Bringing machine 'w1' up with 'virtualbox' provider...
Bringing machine 'w2' up with 'virtualbox' provider...
Bringing machine 'w3' up with 'virtualbox' provider...
==> e1: Importing base box 'coreos-alpha'...
==> e1: Matching MAC address for NAT networking...
==> e1: Checking if box 'coreos-alpha' is up to date...
==> e1: Setting the name of the VM: vagrant_e1_1497941477968_71562
==> e1: Fixed port collision for 22 => 2222. Now on port 2200.
==> e1: Clearing any previously set network interfaces...
==> e1: Preparing network interfaces based on configuration...
    e1: Adapter 1: nat
    e1: Adapter 2: hostonly
==> e1: Forwarding ports...

When Vagrant has finished booting the environment, you will notice that we have a Kubernetes cluster with 1 controller, 1 etcd store and 3 workers, just as we specified in our config.rb file.
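You can confirm this from the same directory with Vagrant itself:

$ vagrant status

This should list e1 (etcd), c1 (controller) and w1, w2 and w3 (workers) as running.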

Let’s now see if we can actually do something with our environment by running a simple kubectl command:

WAUTERW-M-T3ZT:vagrant wim$ kubectl get nodes
Unable to connect to the server: dial tcp 192.168.99.101:8443: i/o timeout

Oops, it seems something is not right. The problem is that kubectl is not pointing to the right context. To fix this, follow the steps below:

WAUTERW-M-T3ZT:vagrant wim$  export KUBECONFIG="${KUBECONFIG}:$(pwd)/kubeconfig"
WAUTERW-M-T3ZT:vagrant wim$ kubectl config use-context vagrant-multi
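To verify that the switch worked, you can ask kubectl which context is active:

$ kubectl config current-context
vagrant-multi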

Then issue the command again:

WAUTERW-M-T3ZT:vagrant wim$ kubectl get nodes
NAME           STATUS                     AGE       VERSION
172.17.4.101   Ready,SchedulingDisabled   1m        v1.5.4+coreos.0
172.17.4.201   Ready                      1m        v1.5.4+coreos.0
172.17.4.202   Ready                      52s       v1.5.4+coreos.0
172.17.4.203   Ready                      1m        v1.5.4+coreos.0
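Note that the first node (172.17.4.101, the controller) is marked SchedulingDisabled: regular workloads will only be scheduled on the three workers.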

Sometimes you may receive the error message below.

WAUTERW-M-T3ZT:vagrant wim$ kubectl get nodes
The connection to the server 172.17.4.101:443 was refused - did you specify the right host or port?

The reason is that when the cluster is first launched, it must download all the container images for the cluster components (Kubernetes, DNS, Heapster, etc.), so it can take a few minutes before the Kubernetes API server is available. After a while, this resolves itself and you will get a correct response.
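If you’d rather not keep retrying by hand, a small loop along these lines (a sketch) waits until the API server responds:

until kubectl get nodes >/dev/null 2>&1; do
  echo "API server not up yet, retrying in 10 seconds..."
  sleep 10
done
kubectl get nodes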

Now, let’s have a look at some of the details of our cluster:

WAUTERW-M-T3ZT:vagrant wim$ kubectl cluster-info
Kubernetes master is running at https://172.17.4.101:443
Heapster is running at https://172.17.4.101:443/api/v1/proxy/namespaces/kube-system/services/heapster
KubeDNS is running at https://172.17.4.101:443/api/v1/proxy/namespaces/kube-system/services/kube-dns
kubernetes-dashboard is running at https://172.17.4.101:443/api/v1/proxy/namespaces/kube-system/services/kubernetes-dashboard

As with the single-node setup, let’s have a look at the dashboard by opening a browser and going to the above-mentioned URL. You will again likely get an ‘Unauthorized’ response, because the API server needs a client certificate, token, or username and password (depending on its configuration) to authorize the client.

Luckily, we can again access the dashboard through SSH tunneling, this time with the script adapted for multi-node deployment. Note: it’s a little different from the script on GitHub, since that one assumes a single-node setup. The multi-node version is shown below:

#!/bin/bash

# Usage: Assuming a vagrant based kubernetes (as in https://coreos.com/kubernetes/docs/latest/kubernetes-on-vagrant-single.html), run this script in the same folder of the Vagrantfile (where you would normally do "vagrant up")
# * Then insert the password (by default: kubernetes)
# * Browse localhost:9090

USERNAME='kubernetes'
PASSWORD='kubernetes'

function main() {
  Create_user_on_kubernetes_machine

  SSH_port_forwarding

  # Enjoy (at localhost:9090)
}

function Create_user_on_kubernetes_machine() {
  # Attribution: https://help.ubuntu.com/community/AddUsersHowto
  # Attribution: http://stackoverflow.com/questions/2150882/how-to-automatically-add-user-account-and-password-with-a-bash-script
  vagrant ssh c1 -c "if [ ! -d /home/$USERNAME ]; then sudo useradd $USERNAME -m -s /bin/bash && echo '$USERNAME:$PASSWORD' | sudo chpasswd; fi"
}


function SSH_port_forwarding() {
  KUBERNETES_HOST=$(kubectl cluster-info | head -n 1 | grep -o -E '([0-9]+\.){3}[0-9]+')
  # Attribution: https://github.com/kubernetes/dashboard/issues/692
  # * Comment: https://github.com/kubernetes/dashboard/issues/692#issuecomment-251617588
  #     * By bbalzola: https://github.com/bbalzola
  TARGET=$(kubectl describe services kubernetes-dashboard --namespace=kube-system | grep Endpoints | awk '{ print $2 }')

  # Attribution: https://help.ubuntu.com/community/SSH/OpenSSH/PortForwarding
  ssh -L 9090:$TARGET $USERNAME@$KUBERNETES_HOST
}

main

Then give execution rights to the script and run it, just as before. When it asks for a password, remember that the default is ‘kubernetes’. Next, go to http://127.0.0.1:9090 and you should see the Kubernetes dashboard.