
Installing Contiv plugin on Kubernetes cluster

Introduction

In earlier posts, we already introduced a few container networking solutions (see here and here). In this post, we will focus on Contiv, an open-source networking plugin that brings policy-based networking to containers.

What is Contiv

Contiv is a Docker-certified (yet open-source) networking plugin for Docker Swarm that also works with Kubernetes. It is primarily driven by Cisco. It delivers policy-based networking for containers running on a variety of infrastructure, whether on-premises or in the cloud, and on different Linux flavors as the host OS. That's a mouthful, but essentially it allows you to set up a network (overlay, L2, L3) across all your containers while applying policies to the traffic between them. Contiv supports multiple kinds of policies, such as:

  • Isolation policies: container A cannot reach container B; container B can communicate with container C but not with container D, …
  • Bandwidth policies: container A has a bandwidth restriction of 500 kbps towards container B (see the netctl sketch after this list)
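
To make the isolation case a bit more concrete, here is a rough sketch using Contiv's netctl CLI. The policy and group names are made up, the network name matches the default-net example from the installer output further down, and the flags are taken from memory of the Contiv documentation, so they may differ slightly between releases:

netctl policy create db-policy
netctl policy rule-add db-policy 1 -direction=in -protocol=tcp -action=deny
netctl policy rule-add db-policy 2 -direction=in -protocol=tcp -port=3306 -action=allow
netctl group create default-net db-group -policy=db-policy

Containers (or, on Kubernetes, pods mapped to the group via labels) attached to db-group would then only accept inbound TCP traffic on port 3306.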

Installing Contiv

The installation process is described in this post. We would like to use Contiv as the container networking solution for our Kubernetes cluster.

As we already have an up-and-running Kubernetes cluster on Ubuntu 16.04 (see this post), I wanted to test the Contiv/Kubernetes integration.
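
The installer expects the Contiv release bundle to be unpacked on the Kubernetes master. If you still need to fetch it, something along the following lines should do (the release URL and file name are assumptions based on the contiv-1.1.4 version used below):

wim@k8s-master:~$ curl -L -O https://github.com/contiv/install/releases/download/1.1.4/contiv-1.1.4.tgz
wim@k8s-master:~$ tar xvzf contiv-1.1.4.tgz
wim@k8s-master:~$ cd contiv-1.1.4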

Let’s go ahead:

wim@k8s-master:~/contiv-1.1.4$ sudo ./install/k8s/install.sh -n 192.168.10.64
[sudo] password for wim:
Installing Contiv for Kubernetes
secret "aci.key" created
Generating local certs for Contiv Proxy
Setting installation parameters
Applying contiv installation
To customize the installation press Ctrl+C and edit ./.contiv.yaml.
clusterrolebinding "contiv-netplugin" created
clusterrole "contiv-netplugin" created
serviceaccount "contiv-netplugin" created
clusterrolebinding "contiv-netmaster" created
clusterrole "contiv-netmaster" created
serviceaccount "contiv-netmaster" created
configmap "contiv-config" created
daemonset "contiv-netplugin" created
replicaset "contiv-netmaster" created
daemonset "contiv-etcd" created
daemonset "contiv-netplugin" deleted
clusterrolebinding "contiv-netplugin" configured
clusterrole "contiv-netplugin" configured
serviceaccount "contiv-netplugin" configured
clusterrolebinding "contiv-netmaster" configured
clusterrole "contiv-netmaster" configured
serviceaccount "contiv-netmaster" configured
configmap "contiv-config" configured
daemonset "contiv-netplugin" created
replicaset "contiv-netmaster" configured
daemonset "contiv-etcd" configured
Installation is complete
=========================================================

Contiv UI is available at https://192.168.10.64:10000
Please use the first run wizard or configure the setup as follows:
 Configure forwarding mode (optional, default is routing).
 netctl global set --fwd-mode routing
 Configure ACI mode (optional)
 netctl global set --fabric-mode aci --vlan-range -
 Create a default network
 netctl net create -t default --subnet= default-net
 For example, netctl net create -t default --subnet=20.1.1.0/24 -g 20.1.1.1 default-net

=========================================================

Nothing too difficult so far. The above command installs the Contiv networking plugin into our Kubernetes environment. If all went well, you should also be able to reach the Contiv user interface at the URL shown above.
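
You can also check from the Kubernetes side that the Contiv components came up. In my setup the installer puts them in the kube-system namespace (this may vary per installer version); a quick grep should show a contiv-netplugin pod per node plus the contiv-netmaster and contiv-etcd pods in the Running state:

wim@k8s-master:~$ kubectl get pods -n kube-system -o wide | grep contiv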

The default username and password are admin/admin.

In the next posts, we will get our hands dirty with Contiv. Stay tuned!

Docker-machine with vSphere ESX

Introduction

In various previous posts, we have used docker-machine to create servers on VirtualBox (see here), DigitalOcean (see here) and AWS (see here).

As I recently set up an entire vSphere environment on an Intel NUC, I wanted to use docker-machine to create some servers on that vSphere environment as well. This short post describes how to achieve that.

Docker-machine has a vSphere driver as well, and it turns out to be quite straightforward to launch Docker hosts on vSphere.

The command to use is the following:

WAUTERW-M-T3ZT:~ wim$ docker-machine create --driver vmwarevsphere --vmwarevsphere-username=*******@vsphere.local --vmwarevsphere-password=********* --vmwarevsphere-vcenter=<vcenter-ip> --vmwarevsphere-datastore=<datastore> --vmwarevsphere-pool=<resource-pool> <machine-name>

The above command can look a bit obscure, so let's try it out. On your Mac, run the following command (obviously tweaked for your setup):

WAUTERW-M-T3ZT:~ wim$ docker-machine create --driver vmwarevsphere --vmwarevsphere-username=*******@vsphere.local --vmwarevsphere-password=********* --vmwarevsphere-vcenter=192.168.10.11 --vmwarevsphere-datastore=Datastore_Samsung_500GB --vmwarevsphere-pool=192.168.10.10 VM-Docker1

In my setup, 192.168.10.10 is the IP address of my ESX host and 192.168.10.11 is the IP address of the vCenter Server Appliance.
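
Once the machine has been created, you can verify it and point your local Docker client at it with the standard docker-machine commands (VM-Docker1 being the name we passed above):

WAUTERW-M-T3ZT:~ wim$ docker-machine ls
WAUTERW-M-T3ZT:~ wim$ docker-machine env VM-Docker1
WAUTERW-M-T3ZT:~ wim$ eval $(docker-machine env VM-Docker1)
WAUTERW-M-T3ZT:~ wim$ docker info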

The following screenshot shows that everything went smoothly: