Building my own images

28th August 2018 By Jonny

I’d finished my previous post having set up a gogs git repository instance on my k8s cluster and made it accessible both to my local network (HTTP) via an external HAProxy instance and to the wider Internet (HTTPS) through my router, using a Let’s Encrypt TLS certificate. Now that my gogs service is up, running, and available, it’s time to put it to work and give it some content.

My first git repo

Technically not really my first git repo, as I’ve used git before with other gitlab instances; however, this will be my first git repo hosted on my gogs service. I started with something incredibly simple: I moved my previous container image builds of Alpine Linux 3.7 and 3.8 from my workstation to a git repo on gogs. I created a repository named docker-alpine-arm and set the repository type to public. This is one benefit of running my own git repository server – I can have as many public or private repositories as I want. With typical hosted services, you have to pay to have more than one private repository.
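Importing an existing build directory into a fresh repo is only a handful of commands. This is a minimal sketch – the remote URL (host, port, and user) is a placeholder for your own gogs instance, and the `printf` line stands in for copying the real build files:

```shell
# Sketch: importing an existing image-build directory into a new gogs repo.
# The remote URL below is illustrative -- substitute your own gogs host/user.
mkdir docker-alpine-arm
cd docker-alpine-arm
printf '# Dockerfile and rootfs tarball go here\n' > Dockerfile  # stand-in for the real build files
git init
git add .
git -c user.name=Jonny -c user.email=jonny@example.com commit -m "Initial import"
git remote add origin http://gogs.example.com:3000/jonny/docker-alpine-arm.git
# git push -u origin master   # once the empty repository has been created in gogs
```

The push itself is left commented out, since it only succeeds once the matching (empty) repository exists on the gogs side.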

Basically, I’m cheap … although I have invested time and effort in putting this all together, so it’s arguable as to whether I have saved anything here.

The actual data in this repository is not very special – it contains the Dockerfile I’d used previously to build my Alpine Linux 3.8 image, along with the Alpine Linux 3.8 root filesystem for ARM processors.
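A Dockerfile that builds an image from a local root filesystem tarball is tiny – it follows the same pattern as the official Alpine images. This is a sketch; the tarball filename is an assumption based on Alpine’s usual armhf minirootfs naming:

```dockerfile
# Build an Alpine 3.8 image from the ARM (armhf) mini root filesystem tarball.
# The tarball name is illustrative -- use the file downloaded from alpinelinux.org.
FROM scratch
ADD alpine-minirootfs-3.8.0-armhf.tar.gz /
CMD ["/bin/sh"]
```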

More and more repos

I then started creating more and more repos so that I could host (and build) all of the images that I would be deploying. I also use docker on my NAS systems to run applications like web servers (usually as reverse proxies), DNS servers, an Icinga2 monitoring system, and a separate MariaDB instance which is used to record room temperatures. Therefore, I ended up with quite a mix of git repositories:

As can be seen, that’s quite a wide selection of container image repositories, and I haven’t quite settled on a naming scheme yet … It should be apparent that at some point in the future there will be a blog post detailing the wordpress deployment this blog is hosted on, and how it has been put together as an example of a (buzzword alert) ‘n-tier‘ application.

I’m not going to detail all the benefits of using a source code control system – that should be obvious. Although, to be honest, my container image builds aren’t going to change all that much. It is useful to keep them all in a central location – which I can back up separately. By having the service hosted on my k8s cluster I can also very quickly and easily re-deploy the entire environment with little effort.

Migrating and Upgrading

One of the benefits of running applications in containers is that they can be redeployed quickly and easily with no real fuss. The end goal, of course, is to be able to treat contained applications as disposable (and re-usable) resources. This lends itself ideally to migrating or upgrading applications. I expect I’ll cover this in more detail in a later post, but prior to deploying my Raspberry Pi based cluster, I had set up a small test k8s cluster on some CentOS (x86_64) VMs. This cluster hosted some small nginx and MariaDB pods.

In order to migrate from the x86_64-based Alpine Linux pods on the VM-hosted cluster to the ARM-based Alpine Linux pods on the RPi k8s cluster, all I had to do was alter the deployment YAML files to use the ARM images rather than the x86_64 ones. The remaining configuration was left as it had been previously.
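In practice this is a one-line change per deployment. The fragment below is a sketch – the image names and registry are placeholders, not my actual values:

```yaml
# Fragment of a Deployment spec: only the image reference changes
# between architectures (names below are illustrative).
spec:
  template:
    spec:
      containers:
      - name: mariadb
        # was: image: registry.local/mariadb:10.2      (x86_64 build)
        image: registry.local/mariadb-arm:10.2       # ARM build for the RPi cluster
```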

Actually, there were some small alterations made to the new RPi-based deployment – the persistent volumes were backed up/cloned on the NAS for use with the new RPi-based cluster. The RPi-based cluster was then pointed at the cloned directories rather than the original persistent volume locations.

Once the NAS server had been configured with the appropriate NFS exports for the new RPi-based persistent volumes, the deployments could be made, and they were confirmed as successful. At this point there were two instances of gogs available (one on the VM-hosted k8s cluster, and one on the new RPi-hosted k8s cluster). Both shared the same original configuration, as the backend NFS exports had been cloned. The next step was to reconfigure the HAProxy to point to the RPi k8s cluster rather than the VM-based cluster.
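Pointing the new cluster at the cloned data is just a matter of defining a PersistentVolume backed by the cloned NFS export. A minimal sketch – the NAS hostname, path, and size below are all placeholders:

```yaml
# PersistentVolume backed by the cloned directory on the NAS.
# Server address, path, and capacity are illustrative values.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: gogs-data-rpi
spec:
  capacity:
    storage: 5Gi
  accessModes:
  - ReadWriteOnce
  nfs:
    server: nas.example.local
    path: /volume1/k8s-rpi/gogs-data
```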

With HAProxy pointing to the new RPi cluster, an uninterrupted and seamless (cross-architecture) migration had been completed.
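The cut-over on the HAProxy side amounts to swapping the backend server line. A sketch, assuming a NodePort-style service on each cluster – the names, addresses, and ports are placeholders:

```
# haproxy.cfg fragment: switch the gogs backend from the VM cluster
# to the RPi cluster (addresses/ports are illustrative).
backend gogs_http
    mode http
    # previous target, the VM-hosted cluster:
    # server k8s-vm  192.168.0.50:30080 check
    server k8s-rpi 192.168.0.60:30080 check
```

A reload of HAProxy then drains connections to the old backend and directs new traffic at the RPi cluster.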

Caveat: this is an incredibly low-volume installation, with no one but me noticing. However, it did prove (to me) the flexibility of deploying applications in containers and fronting them with a load balancer.

Caveat 2: Obviously it helped massively that MariaDB and Gogs were available on both x86_64 and ARM architectures at the same versions. I’d argue, though, that this is a further justification for using open source software – it is generally more feasible to run the same software across multiple architectures. Portability is great!

In the next exciting episode I’ll take this to the (cliché alert!) next level, and will add in a Jenkins server to automate the image building. My home lab will slowly, but surely, become a CI/CD environment!