Off and running …

9th August 2018 – By Jonny

It’s alive

Part of my job involves working with OpenShift – which is pretty cool, however it’s not very practical to have set up as a physical lab environment. I have previously set this up in VMs, basically to have an environment to work and test on while studying for (and passing) the OpenShift exam. However, having an environment in VMs isn’t really the same as having an actual live environment that does something, and I wanted a low-cost, easy-to-use alternative. I’m also a bit of a fan of Raspberry Pi systems, as they’re cheap (hooray!), small, consume very little power, and have a large, established developer community. If a function isn’t available out of the box, chances are someone has added it – or at least tried and documented the failure.

Prepping the Boards

Each Pi is booted from a 32GB microSD card flashed with the latest version of Raspbian (at the time of writing this is: stretch – 9.4). I went through the usual steps of configuring a fixed IP address on the physical network port, setting a hostname, enabling SSH, dropping the shared video memory to 16MB (possibly this can be dropped further – I’ve never really investigated this), and changing the default password. I bought new Raspberry Pi 3 B+ boards for this project as these have the slightly faster CPU and an upgraded NIC which is listed as gigabit (although actual throughput is still limited by the bus it sits on); older models should work just as well, albeit at lower network speeds. I also bought a set of stackable mounts for the Raspberry Pi boards, so that I could try and keep it looking reasonably neat and tidy, a 6-port USB charging brick so that all the boards can be powered from the same socket, and a 16-port managed switch (ok, slight overkill here). Nothing like a great big single point of failure for this kind of project …
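For reference, a rough sketch of the prep applied to each board – the interface name, addresses, and hostname below are placeholders, so adjust to suit your own network:

# /etc/dhcpcd.conf – give the wired interface a fixed address
interface eth0
static ip_address=192.168.2.10/24
static routers=192.168.2.1
static domain_name_servers=192.168.2.1

# then, from a shell on the Pi:
sudo hostnamectl set-hostname blue                  # hostname (mine match the power cable colours)
sudo systemctl enable ssh && sudo systemctl start ssh
echo "gpu_mem=16" | sudo tee -a /boot/config.txt    # drop the shared video memory to 16MB
passwd                                              # change the default password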

The Pi boards also run docker, and the ARM ports of Kubernetes (k8s) run on them as well. I chose to set up a 5 node k8s cluster, with one node as the Master and the remaining four as worker nodes.

Raspberry Pi ‘datacentre’ – the Pi stack connected to a 16-port gigabit switch

Above is a picture of it all hooked up … The network cable that stretches off the screen is connected across the desk to a further Pi board that has been set up as a container image building system. More on that later I’m sure.

Installing Docker

Installing docker on Raspbian is actually incredibly easy:

curl -sSL https://get.docker.com | sh

There is a slight drawback here though: this command will install the latest version of docker-ce (docker-ce 18.06.0 at the time of writing – the ‘ce’ is Community Edition). Normally I’m all for having the latest version of whatever, but when it comes to actually getting k8s up and running, the initialisation fails with a warning that it hasn’t been tested on this version of docker-ce and that the latest version it has been tested against is 17.03 – which is rather old. I decided to try a later release of the version 17 codebase on the grounds that it would probably still be compatible while being somewhat more up to date. As such, I’ve used docker-ce version 17.12.1, installed with the following command:

sudo apt-get install docker-ce=17.12.1~ce-0~raspbian

Spoiler alert: this allows k8s to create the cluster without an error …
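If you want to check which 17.x versions the repository actually offers before picking one, and then stop apt from upgrading docker again behind your back, this pair of commands does the job:

apt-cache madison docker-ce                          # list the versions the repository offers
sudo apt-mark hold docker-ce                         # pin the installed version in place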

Docker is then started and enabled as a system service:

sudo systemctl start docker && sudo systemctl enable docker

This was performed on all 5 systems. For a larger number of nodes a tool like Ansible might be useful for this sort of thing – as it was, I used 5 SSH sessions in a tmux console, which isn’t really very DevOps-ish of me, I know. With this setup in place, I was ready to navigate my way through installing and running k8s.
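For anyone who would rather avoid the tmux juggling, a simple loop from a workstation achieves the same effect – this sketch assumes SSH access as the pi user and uses the node hostnames that appear later in this post:

for host in blue green orange white pink; do
  ssh pi@"$host" 'curl -sSL https://get.docker.com | sh && \
    sudo apt-get install -y docker-ce=17.12.1~ce-0~raspbian && \
    sudo systemctl start docker && sudo systemctl enable docker'
done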

Installing Kubernetes

Once again, on every node, the k8s repository should be enabled, and the k8s packages installed. I used a variety of sources and materials to research installing k8s the way I wanted it, however most of the information came from a blog post on the hypriot website. Enabling the repository and installing the required packages is straightforward enough, with the repository added and enabled by the following two commands:

curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
echo "deb http://apt.kubernetes.io/ kubernetes-xenial main" | sudo tee /etc/apt/sources.list.d/kubernetes.list

Don’t worry about the kubernetes-xenial entry in the apt source list – the available binary packages work perfectly fine on the Raspbian stretch version running on the Raspberry Pi. The kubernetes-xenial repository is kept up to date, and includes the armhf architecture that the Raspberry Pi requires. I’m also not going to set up the k8s cluster manually, and I only need one master node, so I’m going to use kubeadm to bootstrap my cluster. I will need to install the kubeadm package on every node in the cluster:

sudo apt-get update && sudo apt-get install kubeadm

As simple as that, k8s is now installed on the cluster nodes. Nothing is configured and running at this point though, so the next steps will involve setting up the k8s software and actually creating the k8s cluster.
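A quick sanity check that the binaries have landed (kubectl and kubelet are pulled in as dependencies of the kubeadm package):

kubeadm version
kubectl version --client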

Note: This will install the latest version of Kubernetes on the nodes. Currently (9th August 2018) this is Kubernetes 1.11.2 – however older versions can be specified to run previous releases, e.g. 1.10.5. For this particular cluster, the software was installed prior to the 1.11.2 release, and as such it is running 1.11.1. I have a similar system installed at work running version 1.10.5. My thoughts are that I’ll keep the cluster at work on even releases, and the cluster at home on odd releases.
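For completeness, pinning to a specific release is just a case of giving apt an explicit version. The exact revision suffix shown here is an assumption on my part, so check what the repository actually offers first:

apt-cache madison kubeadm                            # list the available package versions
sudo apt-get install kubeadm=1.10.5-00 kubelet=1.10.5-00 kubectl=1.10.5-00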

Despite my belief that the kubernetes-xenial repository would be kept up to date, so far 1.10.6 hasn’t been published in this repository, and therefore my work-based cluster might be upgraded to 1.11.x rather than stay on 1.10.x.

Update: as of 9th August, k8s 1.10.6 is available through this repository, so the work cluster remains on 1.10.x.

Setting up and Configuring the k8s Cluster

The k8s cluster will be set up on the node that I’ll assign as the k8s Master node – which will run the k8s scheduler, controller-manager, apiserver, and etcd services. By default it will also be a schedulable node within the cluster, which means that it can accept application workloads. In practice, though, it will usually not run many of the applications added to k8s as it already has a significant workload in its role as a Master node. Worth noting that in OpenShift the Master nodes are usually not schedulable – there’s an example below of using a taint to get the same behaviour here.

It’s also at this point that a decision needs to be made regarding which container network interface (CNI) to use. There are many possible choices of CNI available within k8s, and as with many options in open source there is no wrong choice to be made here. There are, however, some obvious contenders – especially given this is a fairly basic and simple k8s deployment (and that it is running on the ARM architecture). flannel and weave are probably the two most common CNI options available. I selected flannel, mainly because I’d seen it mentioned in other posts, and because its configuration and operation seemed straightforward and uncomplicated. After all, I wanted to learn more about k8s rather than the intricacies of container networking (although this is something I may look at later).
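The taint example mentioned above – a minimal sketch, with blue being the name I later give my Master node; the trailing dash form removes the taint again if you change your mind:

kubectl taint nodes blue node-role.kubernetes.io/master=:NoSchedule   # keep application workloads off the Master
kubectl taint nodes blue node-role.kubernetes.io/master-              # make it schedulable again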

Having made the decision to use flannel for my CNI provider, it’s time to initialise the k8s cluster and prepare for the use of flannel networking. On the Master node, the following command is executed:

sudo kubeadm init --pod-network-cidr 10.244.0.0/16

The --pod-network-cidr option instructs kubeadm to use the specified network range for pod communications. The range specified (10.244.0.0/16) is the one coded into the flannel yaml file that will be used later to set up the CNI. A different range can be used if preferred (e.g. if 10.244.0.0/16 is already in use on your network), however it is easier (less complex) to specify flannel’s default value on the k8s initialisation command line.
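If that default range does clash with something on your network, both sides just need to agree. A hypothetical example using 10.45.0.0/16 – the flannel manifest applied later would then need a matching substitution:

sudo kubeadm init --pod-network-cidr 10.45.0.0/16
curl -sSL https://rawgit.com/coreos/flannel/v0.10.0/Documentation/kube-flannel.yml | sed -e "s/amd64/arm/g" -e "s#10.244.0.0/16#10.45.0.0/16#" | kubectl create -f -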

The initialisation will take a bit of time to complete as the kubeadm command goes about downloading the necessary images, launching containers, and setting up the k8s services. If all goes well this process will complete without errors – and at this point k8s will report a warning that it has only been tested on docker versions up to 17.03 if you’re running a newer version of docker. I’ve found that docker builds with a major version of 18 will not complete the initialisation, and had to downgrade to major version 17. Once up and running, upgrading to an 18.x release has been successful. I’ve not tested many version 18 releases though, so your mileage may vary here.

Assuming the initialisation has completed successfully, you should be presented with output similar to the following:

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
http://kubernetes.io/docs/admin/addons
You can now join any number of machines by running the following on each node:
kubeadm join --token=de23ca.e9bcd5449ab98473 192.168.2.10

Note: The token is cluster specific. The above value is just an example. Each cluster will have a unique token created.
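If a join token has expired, or the original output has been lost, a fresh join command can be generated on the Master node:

sudo kubeadm token list                              # show existing tokens and their expiry
sudo kubeadm token create --print-join-command       # mint a new token and print the full join command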

Typically the tokens last for 24 hours – the lifetime can be specified on the kubeadm init command line. For testing environments it can be useful to have a non-expiring token, which can be set with --token-ttl 0, however this is bad practice for a production system.

At this point I should have a one node k8s cluster up and running! Hooray! A one node cluster isn’t really very useful though, and the next step is to add the remaining nodes via the ‘kubeadm join’ command listed above, run on each node. Once the remaining nodes have joined the cluster, the following kubectl command can be executed on the Master node:

sudo kubectl get nodes --kubeconfig=/etc/kubernetes/admin.conf

Which should produce output similar to:

NAME            STATUS    ROLES    AGE    VERSION
blue            Ready     master   11m    v1.11.1
green           Ready     <none>   8s     v1.11.1
orange          Ready     <none>   12s    v1.11.1
white           Ready     <none>   3s     v1.11.1
pink            Ready     <none>   15s    v1.11.1

I have named my nodes very unimaginatively, with the idea being that the name matches the colour of the USB power cable.

For ease of administration, I’ve also copied the /etc/kubernetes/admin.conf file to a client workstation at $HOME/.kube/config which is the default location used by the kubectl command for kubeconfig files. This allows me to run the kubectl tool without needing to specify the kubeconfig location each time on the command line. My cluster is still not fully functional though, as I need to initialise the CNI (discussed earlier). The most recent release of flannel is v0.10 and can be applied as follows (from my client workstation):

curl -sSL https://rawgit.com/coreos/flannel/v0.10.0/Documentation/kube-flannel.yml | sed "s/amd64/arm/g" | kubectl create -f -

What does this actually do?

The kube-flannel.yml file defines a daemonset (at its simplest, a daemonset is a pod that runs on every node in the cluster) that enables inter-pod and inter-node networking, using the CIDR network defined previously. The yaml file defaults to expecting the x86_64 CPU architecture, which obviously won’t work for the ARM based Raspberry Pi boards. The sed command replaces any instance of ‘amd64’ with ‘arm’, ensuring that the correct architecture is deployed. As a side note, v0.10.0 of flannel uses Alpine Linux containers, which are lightweight and align nicely with the aims of running Kubernetes on Raspberry Pi hardware. The deployment of the flannel CNI networking can be verified by checking the pods running in the kube-system namespace:

kubectl -n kube-system get pods
NAME                                    READY   STATUS    RESTARTS   AGE
coredns-78fcdf6894-rf2kk                1/1     Running   3          8d
coredns-78fcdf6894-tbsgl                1/1     Running   3          8d
etcd-blue                               1/1     Running   3          8d
kube-apiserver-blue                     1/1     Running   8          8d
kube-controller-manager-...             1/1     Running   5          8d
kube-flannel-ds-dh9qd                   1/1     Running   0          15h
kube-flannel-ds-h4ljx                   1/1     Running   6          8d
kube-flannel-ds-jlgg4                   1/1     Running   68         6d
kube-flannel-ds-lvgb7                   1/1     Running   3          8d
kube-flannel-ds-x6gmc                   1/1     Running   0          15h
kube-proxy-2zlkf                        1/1     Running   4          8d
kube-proxy-css7s                        1/1     Running   3          8d
kube-proxy-jxj2v                        1/1     Running   3          8d
kube-proxy-m6v7g                        1/1     Running   4          8d
kube-proxy-vkdww                        1/1     Running   5          8d
kube-scheduler-blue                     1/1     Running   4          8d
kubernetes-dashboard-7d59788d44-ktspx   1/1     Running   5          7d
metrics-server-7b9cfd4895-sf8g7         1/1     Running   4          7d

From the above, there are 5 instances of kube-flannel running (one per node). The actual node assignment can be viewed using kubectl get pods -n kube-system -o wide. The other pods of note are the kube-scheduler, kube-controller-manager, etcd, and kube-apiserver pods, which are all running on the blue node – the node I have used as my Master. There are two coredns pods, also running on the Master node, to provide name resolution within the cluster. Each node also runs a kube-proxy pod which enables connections to the services provided by the pods. I have since added the kubernetes-dashboard and metrics-server pods to provide a web frontend to k8s and to collect performance statistics from the cluster.

At this point, the k8s cluster should be up and running and have a CNI installed and working as well. It’s ready to start using and hosting containerised applications!
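As a quick smoke test I tend to spin up something small – the image here is Hypriot’s demo web server, purely as an example of an ARM-friendly image:

kubectl run hello-web --image=hypriot/rpi-busybox-httpd --replicas=2 --port=80
kubectl get pods -o wide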

Errors encountered on the way

Whilst it was easy and straightforward to get the cluster up and running, I noticed that some of the networking pods would occasionally crash or not start up, and that some pods I added later also had networking difficulties, e.g. being unable to mount the NFS-defined persistent volumes. I believe I have traced this to either a networking kernel module or the firmware of the NIC. Updating the Pi ‘firmware’ via rpi-update seems to have either resolved this issue or at least reduced the frequency of these errors.
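For reference, the firmware update itself is a single command followed by a reboot (rpi-update ships with Raspbian, or can be installed via apt if it’s missing):

sudo rpi-update
sudo reboot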

Summary

The commands used to get to this point are as follows.
All nodes:

curl -sSL https://get.docker.com | sh
sudo apt-get install docker-ce=17.12.1~ce-0~raspbian
sudo systemctl start docker && sudo systemctl enable docker
curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
echo "deb http://apt.kubernetes.io/ kubernetes-xenial main" | sudo tee /etc/apt/sources.list.d/kubernetes.list
sudo apt-get update && sudo apt-get install kubeadm

Master Node:

sudo kubeadm init --pod-network-cidr 10.244.0.0/16

Remaining Nodes:

sudo kubeadm join --token=<token> <ip_address>

Workstation client with kubeconfig copied to $HOME/.kube/config:

curl -sSL https://rawgit.com/coreos/flannel/v0.10.0/Documentation/kube-flannel.yml | sed "s/amd64/arm/g" | kubectl create -f -
kubectl get nodes [-o wide]
kubectl get pods -n kube-system [-o wide]