Adding a container image registry

14th August 2018 · By Jonny

From the previous post, I now had my 5 node k8s cluster, and had just built my own very basic container based on Alpine Linux 3.8 for the Raspberry Pi. The next step is to have this container available to my k8s cluster, which means setting up a container image registry.

Installing the registry

As far as I can tell at this point, there aren’t as many choices available for a container registry as I expected. The most popular method seems to be running docker-registry as a docker container. This isn’t officially available for the Raspberry Pi platform, although there are some efforts at building a Pi-compatible container. In the end, I decided on installing the docker-distribution package on a CentOS 7 VM, which provides a v2 docker registry.

yum install docker-distribution
systemctl start docker-distribution && systemctl enable docker-distribution

I’ll also need to open port 5000 on the local firewall to make the registry accessible to remote hosts. Most registry clients connect to the registry over a secure (TLS) connection, but by default the installation does not generate a certificate for TLS. I’ve used openssl to generate a self-signed certificate and key pair for TLS connections (still not ideal, but good enough for a home lab). I have updated my /etc/docker-distribution/registry/config.yml to look as follows:
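Opening the firewall port and generating the self-signed pair might look something like the following. The certificate subject is an assumption based on the registry hostname used later in this post — substitute your own FQDN:

```shell
# Open port 5000 on the CentOS 7 firewall (run on the registry host):
#   firewall-cmd --permanent --add-port=5000/tcp && firewall-cmd --reload

# Generate a self-signed certificate/key pair for the registry.
# The CN/SAN hostname below is an assumption -- use your registry's FQDN.
# (-addext requires openssl 1.1.1+; older releases need an openssl.cnf instead.)
mkdir -p certs
openssl req -newkey rsa:4096 -nodes -sha256 -x509 -days 365 \
  -subj "/CN=centos-docker.ipa.champion" \
  -addext "subjectAltName = DNS:centos-docker.ipa.champion" \
  -keyout certs/registry.key \
  -out certs/registry.crt
```

The resulting files can then be copied into the certificate and key paths referenced by config.yml.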

version: 0.1
log:
  fields:
    service: registry
storage:
  cache:
    layerinfo: inmemory
  filesystem:
    rootdirectory: /var/lib/registry
  delete:
    enabled: true
http:
  addr: :5000
  tls:
    certificate: /etc/docker-latest/certs.d/localhost:5000/registry.crt
    key: /etc/docker-latest/certs.d/localhost:5000/registry.key

After restarting the docker-distribution daemon, I have my private registry up and running. The next step will be to populate it with the container images I’ve built.

I have also enabled the ability to delete images from the registry. I added this flag later: I hadn’t quite settled on a naming scheme for my images and ended up with some duplicates and names that didn’t make sense. Being able to delete old or unused images also freed up some space.

Pushing the images

Back on my Raspberry Pi builder system, I have to push the built image up to the registry. The steps to follow are:

  1. Build the image.
  2. Tag the image.
  3. Push the image to the registry.

Building the image:

docker build .

It really is as simple as that. I’ve added my ‘pi’ user to the docker group, which grants members access to the docker daemon. The docker command then builds a container image from the Dockerfile in the current directory. Once built, the list of locally available container images can be displayed as follows:

docker images
REPOSITORY  TAG     IMAGE ID      CREATED     SIZE
<none>      <none>  32106d439320  4 days ago  4.01MB

As can be seen, this image doesn’t belong to a repository and hasn’t been tagged yet. I can pass the -t flag to docker build to name and tag the image as I’d prefer, for example:

docker build . -t centos-docker.ipa.champion:5000/arm/alpine/alpine:latest

To tag the existing image:

docker tag 32106d439320 centos-docker.ipa.champion:5000/arm/alpine/alpine:latest
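That long name breaks down as the registry host and port, then the repository path within the registry, then the tag. A quick shell illustration of how this particular reference decomposes (purely for explanation — docker does this parsing itself):

```shell
ref="centos-docker.ipa.champion:5000/arm/alpine/alpine:latest"

registry="${ref%%/*}"    # up to the first '/': the registry host and port
rest="${ref#*/}"         # everything after the first '/'
repository="${rest%:*}"  # the repository path within the registry
tag="${rest##*:}"        # the tag after the final ':' ('latest' if omitted)

echo "registry=$registry repository=$repository tag=$tag"
```

For this reference the result is registry=centos-docker.ipa.champion:5000, repository=arm/alpine/alpine, tag=latest.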

Great! I have my local image ready for pushing to the remote container registry I have created. I need to configure the local docker daemon on the Pi builder system to use my ‘insecure’ registry (it has to be listed as insecure because I am using a self-signed certificate rather than one from a trusted CA). To do this I alter (or add) the /etc/docker/daemon.json file to contain the following:

# cat /etc/docker/daemon.json
{
  "insecure-registries" : ["centos-docker.ipa.champion:5000"]
}
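A typo in daemon.json will stop the docker daemon from starting at all, so it is worth checking that the file parses before restarting. One way, assuming python3 is available on the builder, is the stdlib json.tool module (staging the file in /tmp here just for the check):

```shell
# Stage the file, confirm it parses as JSON, then move it into place.
cat > /tmp/daemon.json <<'EOF'
{
  "insecure-registries" : ["centos-docker.ipa.champion:5000"]
}
EOF
python3 -m json.tool /tmp/daemon.json && echo "daemon.json is valid JSON"
```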

Once this file is in place, the docker daemon can be restarted and the image pushed as follows:

docker push centos-docker.ipa.champion:5000/arm/alpine/alpine:latest

The image should then be pushed to my remote container registry.

Pulling the images

This is only half the job complete. I need to be able to pull the images from the registry onto my k8s cluster. This will require configuring the docker daemons on each k8s node to use my private (insecure) registry. As the nodes are simply using docker, it’s the same change as above. The /etc/docker/daemon.json file should contain the same information as on my builder system, and the docker daemon restarted on each node.

As before, I did this via a 5-way tmux console, but a better approach would be to use a tool like Ansible – something for the future I think.
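As a sketch of what that Ansible approach might look like — a hypothetical playbook, assuming an inventory group named kube containing the five nodes:

```yaml
# Hypothetical playbook: push daemon.json to every k8s node and bounce docker.
- hosts: kube
  become: true
  tasks:
    - name: Configure the insecure registry
      copy:
        content: |
          { "insecure-registries" : ["centos-docker.ipa.champion:5000"] }
        dest: /etc/docker/daemon.json
      notify: restart docker
  handlers:
    - name: restart docker
      systemd:
        name: docker
        state: restarted
```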

Whether this has all worked can be tested by deploying a pod to the k8s cluster using the container image that was uploaded to the registry. A sample yaml file (alpine-example.yaml) could be:

apiVersion: v1
kind: Pod
metadata:
  name: alpine-test
spec:
  containers:
  - image: centos-docker.ipa.champion:5000/arm/alpine/alpine
    command:
      - sleep
      - "3600"
    imagePullPolicy: Always
    name: alpine-test
  restartPolicy: Always

This is a simple pod that runs the command ‘sleep 3600’, which keeps it alive for one hour; the restart policy of Always means the container will be restarted once the hour has elapsed. The pod can be deployed as follows:

$ kubectl create -f alpine-example.yaml
pod/alpine-test created

$ kubectl get pods -o wide
NAME         READY  STATUS             RESTARTS  AGE  IP      NODE
alpine-test  0/1    ContainerCreating  0         5s   <none>  green.kube.ipa.champion

$ kubectl get pods -o wide
NAME         READY  STATUS   RESTARTS  AGE  IP  NODE
alpine-test  1/1    Running  0         23s      green.kube.ipa.champion

The image has been pulled from the remote registry and is now up and running on the k8s cluster. We can exec into this pod as follows:

$ kubectl exec -it alpine-test -- /bin/sh
/ # cat /etc/alpine-release
3.8.0
/ #

Since deploying docker-distribution as the registry service, I’ve found both Portus (developed by SUSE) and Harbor (developed by VMware, now part of the CNCF) as freely available container registries. Both seem to be provided as container images, though, and require more detailed configuration than docker-distribution.

Where to next?

The next steps in this journey will be to save my container Dockerfiles (and other requirements) for redeployment and rebuilding when necessary.