Freshly served containers

29th August 2018 By Jonny

Up to this point I’ve been able to create my own container images and store their build configuration in a git repository. My next challenge was to automate building these images and make them available to my Kubernetes cluster. It is my understanding that it’s considered better practice to simply redeploy a newly built image rather than try to keep an existing container up to date. This aligns with the idea that containers are disposable and can simply be ‘spun up’ as required.

Automating the container image build

The container images are built from a Dockerfile, together with any dependent files and scripts that need to be included. The image is then built by running a command such as:

docker build . -t registry.ervine.org:5000/x86_64/alpine/nginx

This command uses the current working directory as the build context and builds an image tagged as registry.ervine.org:5000/x86_64/alpine/nginx. A subsequent step then pushes this image to my local container image registry.

docker push registry.ervine.org:5000/x86_64/alpine/nginx
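As an aside, when no tag is specified docker defaults to ‘latest’; a specific version tag can be appended after the image name if wanted (the ‘:1.0’ below is just an example):

docker build . -t registry.ervine.org:5000/x86_64/alpine/nginx:1.0
docker push registry.ervine.org:5000/x86_64/alpine/nginx:1.0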

My immediate thought was to simply run a cron job on whichever hosts I decided to designate as my ‘build hosts’, but then I decided this wasn’t very DevOps/CI/CD of me. All the cool kids use an automated build utility … such as Jenkins. I’d not used Jenkins before, so I had no idea what I was letting myself in for, but was keen to have a look at it, and see what it could do for my requirements.
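For contrast, the cron approach I’d ruled out would have looked something like this – a hypothetical build script and crontab entry, where the script path and schedule are just examples:

#!/bin/sh
# build-and-push.sh - rebuild the image and push it to the local registry
cd /srv/builds/alpine-nginx || exit 1
git pull
docker build . -t registry.ervine.org:5000/x86_64/alpine/nginx
docker push registry.ervine.org:5000/x86_64/alpine/nginx

# crontab entry: rebuild nightly at 02:30
30 2 * * * /srv/builds/alpine-nginx/build-and-push.sh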

As usual, I fired up a basic CentOS 7 VM and installed the Jenkins package on it. Fortunately, the Jenkins software is available in EPEL, so installing Jenkins was very straightforward. The installation includes a systemd service, and I simply needed to open port 8080 on the local firewall.

yum install -y jenkins
firewall-cmd --add-port=8080/tcp --permanent && systemctl reload firewalld
systemctl start jenkins && systemctl enable jenkins
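For reference, the first visit to port 8080 runs the setup wizard, which asks for an initial admin password – on an RPM install this is normally found at:

cat /var/lib/jenkins/secrets/initialAdminPassword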

And there we have Jenkins up and running. Simple, eh? Now all I needed to do was configure it to do my bidding and build my container images for me. 

Hmmm, lots of options here … where to start?

Quite a lot of the time, I have no idea what I’m doing – most of the time I have an idea of what I want to have done, though. It was time for some good old googling to get an idea. First off, it seemed I needed to install and enable some plugins. Docker-related plugins seemed like a good starting point, so the following plugins were installed and enabled:

  • CloudBees Docker Build and Publish plugin
  • CloudBees Docker Custom Build Environment Plugin
  • Docker
  • Gogs plugin (ok, not docker-related, but this looks like it might be useful)
  • Kubernetes pipeline (again, not docker-related, but might be helpful later on)

There were other plugins listed as installed, e.g. an LDAP authentication plugin. I can’t remember if I installed these or not – probably not. With these plugins installed, I was ready to go!

Some googling later …

Automated (Pipeline) build settings

Following some other posts on using Jenkins to build containers, it seems that one way to do this is to create a Pipeline job. Each job requires a description; I can also add a Gogs webhook (I’ll come back to this later), and the Pipeline Definition needs to be set to Pipeline Script from SCM. At the time I had no idea what this meant – I now know it means Jenkins will fetch the Pipeline script it uses for this job from the Source Code Management (SCM) system that you specify. From the dropdown I select Git and provide the location of my git repository.

My first Jenkins pipeline job

From the screenshot, I have selected my ARM-based Alpine Linux 3.7 image build. This is actually a public repository, so although I have selected credentials to use when retrieving it, this isn’t strictly necessary here. I have also selected the master branch, as I haven’t got round to splitting my builds into branches yet (a future plan, though!).

The final piece of the jigsaw is to specify a pipeline ‘build’ file that Jenkins will execute to run the job. By default this file is named ‘Jenkinsfile’ and there is no reason to deviate from this default. My Jenkinsfile looks as follows:

node("docker-arm") {
    docker.withRegistry('https://centos-docker.ipa.champion:5000') {

git url: "ssh://git@gogs.ipa.champion:2022/jonny/docker-alpine-3.7-arm", credentialsId: 'beaa0c1b-8a6c-4dda-a052-dea4ecb2f89e'
sh "git rev-parse HEAD > .git/commit-id" def commit_id = readFile('.git/commit-id').trim() println commit_id stage "build" def app = docker.build "arm/alpine/alpine-3.7" stage "publish" app.push("${env.BUILD_NUMBER}") app.push("latest") }

}

The opening ‘node’ setting tells Jenkins where to run this job – it will run on a node belonging to the docker-arm group of build servers that I have defined (I’ll come to this later). The Jenkinsfile describes the registry to use and the git repository to clone (along with any credentials needed, should it be a private repository). The job then builds the image and pushes it to my container image registry, tagged with both the Jenkins build number and ‘latest’.
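For what it’s worth, the same job could also be written using the newer block-style stage syntax – a minimal sketch, assuming the same repository, credentials and image name as above:

node("docker-arm") {
    docker.withRegistry('https://centos-docker.ipa.champion:5000') {
        stage('checkout') {
            // pull the Dockerfile and supporting files from gogs
            git url: "ssh://git@gogs.ipa.champion:2022/jonny/docker-alpine-3.7-arm", credentialsId: 'beaa0c1b-8a6c-4dda-a052-dea4ecb2f89e'
        }
        stage('build') {
            // no 'def' here, so the variable is visible in the publish stage
            app = docker.build("arm/alpine/alpine-3.7")
        }
        stage('publish') {
            app.push("${env.BUILD_NUMBER}")
            app.push("latest")
        }
    }
}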

Other build settings

As I mentioned above, there are sections of the Jenkins job configuration above the Pipeline definition that I am also using.

Triggering a build

I have configured a Gogs webhook secret (and configured the same secret on my git repository on the Gogs server). Under the build triggers, I have set the container image to build every day at a random time between midnight and 8:00am. The image will also be built automatically whenever a push to the Gogs git repository is detected.
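The ‘random time’ part uses Jenkins’ own cron syntax, where an ‘H’ token hashes the job name to a consistent but effectively random value within the given range – something like this in the Build periodically field covers midnight to 8:00am:

H H(0-7) * * *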

Assuming my Dockerfile and Jenkinsfile are created correctly, I should now have an automated build process in place.

The sun is shining (Jenkins’ weather icon for healthy builds), which means my builds are completing and being pushed to my registry server.

Summary

I’ve flown through setting up a Jenkins server and configuring some Pipeline jobs here. I’ve also skipped over creating the webhooks, user credentials, and node groups for building. These are possibly topics for a future post.

However, by installing Jenkins, creating some jobs, and having them run on updates to my Gogs git repositories, I have effectively mirrored the functionality of Docker Hub, albeit on a much smaller scale. When I make changes to my builds, I’ll have these nice fresh images ready to go.

For the future

It would probably make sense to create different releases/versions of my repositories, so that I can work on the master branch whilst the automated builds run from a ‘good’ branch. When I’d finished a set of changes I could promote master to the ‘good’ branch, Jenkins would build it, and I could carry on working on master in the meantime.
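A rough sketch of how that promotion might look with git (the ‘good’ branch name is just a placeholder):

# day-to-day work happens on master
git checkout master
# ... commit changes ...

# when happy, bring the 'good' branch (which Jenkins builds from) up to date
git checkout good
git merge master
git push origin good
git checkout master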

Such an approach would also allow me to have a single ‘Alpine Linux’ repository rather than separate ‘Alpine Linux 3.7’ and ‘Alpine Linux 3.8’ repositories.