Hosting my own container registry
As per previous posts, I have a kubernetes cluster running at home, and a local jenkins pipeline configured to create the container images I want to use. Recently I decided I’d like to have a look at using a different container registry instead of the very standard docker-registry. To be honest, there wasn’t a lot of choice available: it basically boiled down to portus and harbor. Disappointingly, Red Hat’s quay (pronounced kw-ay …) has not yet been open sourced and is therefore not easily available. The portus project is essentially a front end to the docker-registry, so in the interests of trying something different I went with harbor, an open source project sponsored by VMware. I did like that it includes items like clair and notary in addition to a web front end for the registry itself.
I’ll write up later how I installed it etc. (spoiler alert: it is available via a helm chart), for the moment I want to note down a few of the steps I followed in order to get the image signing via notary to work, so that when I inevitably break it in the near future, I can come back here and remind myself how I did it in the first place.
The notary component is responsible for the container image signature bits and pieces. In fact, what I discovered was that it’s a combination of where the image is built and how it is submitted to the image registry that needs to be considered.
First off, I have also reimplemented my jenkins environment (so yet another blog post is required to remind future me how I did this). I now have two jenkins slaves to perform the image building, so this setup will need to be replicated across the two slaves. The notary client is available in the Ubuntu repositories; however, I decided to grab the binary from the github release page instead.
```shell
$ wget https://github.com/theupdateframework/notary/releases/download/v0.6.1/notary-Linux-amd64
$ sudo mv notary-Linux-amd64 /usr/local/bin/notary
# the release binary is not executable by default
$ sudo chmod +x /usr/local/bin/notary
```
For ease of use I set up an alias in the ~/.profile of the jenkins user as follows:
```shell
alias notary='/usr/local/bin/notary -s https://harbor.ervine.org:4443 -d ~/.docker/trust'
```
This means that I can simply call the notary command without having to constantly specify which server to use and which directory to find the key material in. This will obviously cause problems if later on I want to use a different server.
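One way around that limitation would be a shell function instead of an alias: a function forwards its arguments properly and lets the server be overridden per call. This is just a sketch; the `NOTARY_SERVER` and `NOTARY_BIN` variable names are my own invention, not anything notary itself reads.

```shell
# Hypothetical alternative to the alias: a function rather than an alias.
# NOTARY_SERVER overrides the server per invocation; NOTARY_BIN exists only
# so the binary path isn't hard-coded.
notary() {
  "${NOTARY_BIN:-/usr/local/bin/notary}" \
    -s "${NOTARY_SERVER:-https://harbor.ervine.org:4443}" \
    -d "$HOME/.docker/trust" "$@"
}
```

Calling `NOTARY_SERVER=https://other.example.com:4443 notary key list` would then target a different server without touching the function.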
Configure the Notary client
I’m a bit rubbish with cryptography and signatures. I mean, I get roughly what’s going on with public/private keys and why they’re useful, but it’s not a subject I find intrinsically interesting, and it’s more that I realise it’s useful to use and deploy. As I understood it from reading in various places, I needed to set up an identity comprised of a key and certificate. This was generated with OpenSSL and I didn’t much care for any trusted certificates.
Honestly, I don’t know how I would get anything done without google and from a technical point of view, stackoverflow.com
```shell
openssl req -newkey rsa:2048 -new -nodes -x509 -days 3650 -keyout key.pem -out cert.pem
```
The key material objects will last 10 years, which should be plenty. Configuring the notary client is my first step here, and I’ll basically be following the steps documented on the docker github documentation page.
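For the record, the generated pair can be sanity-checked before going any further. The generation is re-shown here with `-subj` so it runs non-interactively (the CN value is arbitrary, not something notary cares about), followed by a check that the certificate and key actually belong together and that the expiry really is ten years out:

```shell
# Non-interactive variant of the generation command (CN is a placeholder).
openssl req -newkey rsa:2048 -new -nodes -x509 -days 3650 \
    -subj "/CN=jenkins-signer" -keyout key.pem -out cert.pem

# The public key embedded in the certificate must match the private key.
openssl x509 -in cert.pem -noout -pubkey > cert-pub.pem
openssl pkey -in key.pem -pubout > key-pub.pem
cmp -s cert-pub.pem key-pub.pem && echo "key pair matches"

# Show the expiry date (should be ~10 years away).
openssl x509 -in cert.pem -noout -enddate
```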
Starting with importing the (identity) key to the notary client:
```shell
$ /usr/local/bin/notary -d .docker/trust key import key.pem
Enter passphrase for new delegation key with ID 57dff6a:
```
This is unhelpful – although importing the key into notary to identify the user won’t be part of the automated pipeline process, I don’t want to be manually entering passphrases; this all needs to be taken care of automatically. Fortunately, notary and docker can pick these values up from environment variables (which, if this was all being done on a kubernetes pod – future project! – could be delivered via kubernetes secrets). The .profile file has therefore been expanded to also include my passphrases:
```shell
$ cat .profile
export DOCKER_CONTENT_TRUST=1
export DOCKER_CONTENT_TRUST_SERVER=https://harbor.ervine.org:4443
export DOCKER_CONTENT_TRUST_ROOT_PASSPHRASE="secret_passphrase1"
export DOCKER_CONTENT_TRUST_REPOSITORY_PASSPHRASE="secret_passphrase2"
export NOTARY_ROOT_PASSPHRASE="secret_passphrase3"
export NOTARY_TARGETS_PASSPHRASE="secret_passphrase4"
export NOTARY_DELEGATION_PASSPHRASE="secret_passphrase5"
export NOTARY_SNAPSHOT_PASSPHRASE="secret_passphrase6"
alias notary='/usr/local/bin/notary -s https://harbor.ervine.org:4443 -d ~/.docker/trust'
```
With these environment variables in place (possibly the DOCKER_CONTENT_TRUST_*_PASSPHRASE entries are not required), it is now possible to run through the commands without having to manually input passphrases.
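Since a missing passphrase variable would drop me back into interactive prompts mid-build, a small guard function could check them up front. This is a hypothetical helper of my own, not part of notary:

```shell
# Hypothetical pre-flight check: fail early if any NOTARY_* passphrase
# variable that the automated steps rely on is unset or empty.
check_passphrases() {
  missing=0
  for v in NOTARY_ROOT_PASSPHRASE NOTARY_TARGETS_PASSPHRASE \
           NOTARY_DELEGATION_PASSPHRASE NOTARY_SNAPSHOT_PASSPHRASE; do
    eval "val=\${$v:-}"
    if [ -z "$val" ]; then
      echo "missing: $v"
      missing=1
    fi
  done
  return $missing
}
```

Running `check_passphrases || exit 1` at the top of a build script would surface a broken environment before any notary command hangs waiting for input.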
```shell
$ /usr/local/bin/notary -d .docker/trust key import key.pem
$
```
No output returned which means it’s been successful! Also note, I have used the full path to the notary command rather than use the alias, as this command does not require the notary server address to be specified. I’ve probably been overly cautious here though.
Carrying on with the notary client configuration as follows (these will use the notary alias):
```shell
$ notary init harbor.ervine.org/library/x86_64/alpine/dnsmasq
Enter username: jonny
Enter password:
$
```
The above command has initialised notary for the specified repository (this needs to be done for each repository). I will also be sharing my key material across my jenkins builder nodes so that the initialization only needs to be performed once per repository. Again, no message returned means the command completed successfully.
```shell
$ notary key rotate harbor.ervine.org/library/x86_64/alpine/dnsmasq snapshot -r
Enter username: jonny
Enter password:
Successfully rotated snapshot key for repository harbor.ervine.org/library/x86_64/alpine/dnsmasq
$
```
This time the notary server has returned a message informing me that the snapshot key has been rotated (without requiring my intervention). The final step in this part of the process is to publish the changes. I suspect that being prompted for my username and password (for the harbor registry) means the changes have already been published, but it does no harm to follow the documentation:
```shell
$ notary publish harbor.ervine.org/library/x86_64/alpine/dnsmasq
Pushing changes to harbor.ervine.org/library/x86_64/alpine/dnsmasq
Enter username: jonny
Enter password:
Successfully published changes for repository harbor.ervine.org/library/x86_64/alpine/dnsmasq
$
```
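Since this init/rotate/publish dance has to happen for every repository, it could be wrapped in a small helper so a new repository only needs one command. A sketch (the `init_repo` name is mine; it assumes the notary alias and the NOTARY_* passphrase variables are already in place):

```shell
# Hypothetical wrapper around the per-repository setup steps shown above.
init_repo() {
  repo="$1"
  notary init "$repo" &&
  notary key rotate "$repo" snapshot -r &&
  notary publish "$repo"
}

# e.g. init_repo harbor.ervine.org/library/x86_64/alpine/dnsmasq
```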
Not quite finished though … it is important to set up the notary delegation configuration at this point:
```shell
$ notary delegation add harbor.ervine.org/library/x86_64/alpine/dnsmasq targets/releases --all-paths cert.pem
Addition of delegation role targets/releases with keys [57dff6a5f567c17f6097348a3c6056049246697dbfdd006a07c7d3814e9ef511], with paths ["" <all paths>], to repository "harbor.ervine.org/library/x86_64/alpine/dnsmasq" staged for next publish.
$ notary delegation add harbor.ervine.org/library/x86_64/alpine/dnsmasq targets/jenkins --all-paths cert.pem
Addition of delegation role targets/jenkins with keys [57dff6a5f567c17f6097348a3c6056049246697dbfdd006a07c7d3814e9ef511], with paths ["" <all paths>], to repository "harbor.ervine.org/library/x86_64/alpine/dnsmasq" staged for next publish.
$ notary publish harbor.ervine.org/library/x86_64/alpine/dnsmasq
Pushing changes to harbor.ervine.org/library/x86_64/alpine/dnsmasq
Enter username: jonny
Enter password:
Successfully published changes for repository harbor.ervine.org/library/x86_64/alpine/dnsmasq
$
```
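The delegation step is equally repetitive, so it could be wrapped the same way. Another hypothetical helper of mine, adding the two delegation roles used above from one certificate and then publishing:

```shell
# Hypothetical helper: add the targets/releases and targets/jenkins
# delegations from a certificate, then publish the staged changes.
add_delegations() {
  repo="$1"
  cert="$2"
  notary delegation add "$repo" targets/releases --all-paths "$cert" &&
  notary delegation add "$repo" targets/jenkins --all-paths "$cert" &&
  notary publish "$repo"
}

# e.g. add_delegations harbor.ervine.org/library/x86_64/alpine/dnsmasq cert.pem
```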
At this point I should be in a position to configure the docker client and get ready to start pushing images.
Configure the docker client
It’s also important to ensure that the docker client uses the same identity as the notary client. To that end, the docker client must trust the identity key. Without configuring the docker identity, the image push will fail:
```shell
$ docker login harbor.ervine.org
Username: jonny
Password:
WARNING! Your password will be stored unencrypted in /home/jenkins/.docker/config.json.
Configure a credential helper to remove this warning. See
https://docs.docker.com/engine/reference/commandline/login/#credentials-store

Login Succeeded
$ docker push harbor.ervine.org/library/x86_64/alpine/dnsmasq:latest
The push refers to repository [harbor.ervine.org/library/x86_64/alpine/dnsmasq]
b697a9315f9a: Layer already exists
858e002d9f09: Layer already exists
latest: digest: sha256:7af02226018ecadaf6071f9a9c4b1404727930f402a3e5c5dad6d4962c5b2151 size: 739
Signing and pushing trust metadata
ERRO couldn't add target to targets: could not find necessary signing keys, at least one of these keys must be available: ed2fb8b2346207b80f1b11a72bf790e6f5799ab2dccd47229c6490453372f834
failed to sign harbor.ervine.org/library/x86_64/alpine/dnsmasq:latest: could not find necessary signing keys, at least one of these keys must be available: ed2fb8b2346207b80f1b11a72bf790e6f5799ab2dccd47229c6490453372f834
$
```
Note to self: I should probably set up a credential helper … The reason the push failed is that the docker client is not able to sign the image with the appropriate signing key. That can be corrected by importing the key file. The following command imports the signing key into the docker client:
```shell
$ docker trust key load key.pem
Loading key from "key.pem"...
Successfully imported key from key.pem
```
Let’s see if that has all worked (fingers crossed) and if I can now push images to the harbor registry and have them signed.
```shell
$ docker push harbor.ervine.org/library/x86_64/alpine/dnsmasq:latest
The push refers to repository [harbor.ervine.org/library/x86_64/alpine/dnsmasq]
b697a9315f9a: Layer already exists
858e002d9f09: Layer already exists
latest: digest: sha256:7af02226018ecadaf6071f9a9c4b1404727930f402a3e5c5dad6d4962c5b2151 size: 739
Signing and pushing trust metadata
Successfully signed harbor.ervine.org/library/x86_64/alpine/dnsmasq:latest
```
Some additional notes
This all looks straightforward … however, I’ll have to check it all through again. It worked first time, but when I moved on to my second jenkins build node I copied across my whole .docker directory. This probably wasn’t wise, and things only started working once I deleted it and started from scratch.
I need to synchronise the build nodes so that every node can build every image. This can surely be semi-automated rather than have me run through the same set of commands again and again …
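One possible shape for that automation: push just the trust directory (not the whole .docker directory, given the trouble that caused) from the node where the keys were created to each of the others. The function name and the node hostnames here are placeholders; it assumes the nodes are reachable over ssh as the jenkins user.

```shell
# Hypothetical sync step: replicate ~/.docker/trust to each build node.
# Run from the node where the key material was originally created.
sync_trust() {
  for node in "$@"; do
    rsync -a --delete "$HOME/.docker/trust/" "jenkins@${node}:.docker/trust/"
  done
}

# e.g. sync_trust jenkins-slave-1 jenkins-slave-2
```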