Hosting a git repo

17th August 2018 By Jonny

Having created a very simple container image (a basic ARM-based Alpine Linux 3.8 image) and pushed it to my own private container image registry, the next step is to provide a repository for the build files.

Source Control

There is really only one source control utility to use, and that is ‘git’. I still find it somewhat amazing to think that git was written by the Linux kernel creator in a short space of time to replace the proprietary tool BitKeeper, which had previously been provided free of charge to kernel developers. When the owners of BitKeeper decided to start charging, rather than pay, the kernel community wrote its own tool and git was born. The ubiquity of git as a source code control system has seen a whole host of tools and services built up around it, github and gitlab being two very prominent examples.

I don’t need (or want) a complete solution like gitlab (which is described as open core, so you can run your own local version). Instead, I’m going to deploy an instance of gogs – described as having low resource requirements and being able to run pretty much anywhere (it’s written in Go, so it can run anywhere that Go can be compiled).

A further major plus point in its favour is that it’s readily available as a ready-to-go container image. There’s not an awful lot of detail on the docker hub page – just simple instructions to pull the image. There’s a bit more detail on the main gogs image page though. Given I’ve already deployed a k8s cluster, I’ll be deploying gogs to the cluster rather than to a standalone docker instance.
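
For what it’s worth, trying the image standalone really is as simple as the docker hub page implies; on an ARM host with docker installed, something like the following should work (the container name and host port mapping here are just arbitrary choices of mine for a quick sanity check):

# Pull the ARM build of gogs and run it standalone, publishing the gogs web
# port (3000) on the docker host - just a quick check that the image runs
$ docker pull gogs/gogs-rpi
$ docker run -d --name gogs-test -p 3000:3000 gogs/gogs-rpi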

Deploying gogs

Having previously deployed a simple nginx pod to k8s, deploying gogs will follow a very similar process. I’ll create a gogs-deploy.yaml file and then deploy this to the cluster.

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: my-gogs
spec:
  replicas: 1
  strategy:
    type: Recreate
  selector:
    matchLabels:
      run: my-gogs
  template:
    metadata:
      labels:
        run: my-gogs
    spec:
      containers:
      - image: gogs/gogs-rpi
        imagePullPolicy: Always
        name: my-gogs

As can be seen, this yaml file will create a gogs pod on the cluster using a deployment. Deployments provide more flexibility than a single pod on its own, and can launch multiple pods, specify the number of replicas, and so on. This is a very simple deployment that just launches the gogs container in a single pod using the official gogs-rpi docker image.

$ kubectl create -f gogs-deploy.yaml 
deployment.extensions/my-gogs created

$ kubectl get deployments
NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
my-gogs 1 1 1 0 9s

$ kubectl get pods
NAME READY STATUS RESTARTS AGE
my-gogs-775458c799-lnznq 0/1 ContainerCreating 0 11s

$ kubectl get pods
NAME READY STATUS RESTARTS AGE
my-gogs-775458c799-lnznq 1/1 Running 0 1m

Very quickly and easily, an instance of gogs has been spawned. A bit more information on the running pod can be found as follows:

kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE
my-gogs-775458c799-lnznq 1/1 Running 0 28m 10.244.2.121 pink.kube.ipa.champion

The pod has been assigned an IP address from the pod IP range that was specified when the k8s cluster was created (which is the default range used by flannel), and as can be seen the pod has been launched on the node named ‘pink’. This IP address isn’t reachable from outside the cluster though, so at the moment my gogs instance isn’t available to remote systems. It’s also worth noting at this point that the gogs instance is stateless, which means that any data given to it will be lost when the pod is destroyed. Two items therefore need to be addressed: make the data survive beyond the lifetime of the pod, and make the pod reachable from outside the k8s cluster.

Persistent Volumes

As the title suggests, I am going to create a persistent volume within k8s for the gogs pod to use to store its data. There are many supported providers of persistent volumes available; however, most of these aren’t feasible to use in a home lab environment. Somewhat predictably, I opted for the NFS persistent volume type. NFS is easily configured on my NAS system, and provides a reliable(ish) remote data store for the pods running on the cluster. All the cluster nodes are connected to a single power supply, so one more single point of failure in the NAS system isn’t such a huge concern either.
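
For reference, the export on the NFS server might look something like this in /etc/exports (the subnet and export options are assumptions to illustrate the idea; adjust them to match your own network and NAS):

# /etc/exports on the NFS server - export the directory backing the gogs PV
# to the subnet the k8s nodes live on (192.168.1.0/24 is an assumed example)
/exports/kubedata/gogs  192.168.1.0/24(rw,sync,no_subtree_check,no_root_squash)

# Re-read /etc/exports and apply the new export (run as root on the NFS server)
$ exportfs -ra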

A Persistent Volume (PV) also requires a Persistent Volume Claim (PVC) before it can be used by a pod once it has been introduced to the cluster. The PV and PVC can be defined by yaml files such as those below:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-gogs
spec:
  capacity:
    storage: 20Gi
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  nfs:
    path: /exports/kubedata/gogs
    server: <nfs_server_hostname>

The yaml file describing the PVC resource is as follows:

kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: pvc-gogs
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 20Gi

As can be seen there are some parameters that match between the PV and PVC resources, such as the accessMode and storage request size. Whilst these don’t need to be identical (as they are above), they do need to have certain settings aligned. The k8s documentation provides further details.
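
Once the PV and PVC have been created (which happens a little further down), it’s worth confirming that the claim has actually bound to the volume; a quick check would be:

# The STATUS column for both pv-gogs and pvc-gogs should show 'Bound';
# a claim stuck in 'Pending' usually means no PV matched its requested
# size or access mode
$ kubectl get pv,pvc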

The gogs pod will also need its configuration to be modified to take advantage of the PV – at the moment the pod isn’t configured to use the PV and PVC at all. An updated deployment yaml file incorporating them could look like this:

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: my-gogs
spec:
  replicas: 1
  strategy:
    type: Recreate
  selector:
    matchLabels:
      run: my-gogs
  template:
    metadata:
      labels:
        run: my-gogs
    spec:
      containers:
      - image: gogs/gogs-rpi
        imagePullPolicy: Always
        name: my-gogs
        volumeMounts:
        - mountPath: /data
          name: pv-gogs
      volumes:
      - name: pv-gogs
        persistentVolumeClaim:
          claimName: pvc-gogs

From the yaml above, the PV will be mounted at /data within the gogs pod. The gogs container image is built from a github repository containing a Dockerfile.rpihub, which is responsible for creating the gogs-rpi image being used. That Dockerfile lists the /data directory as a volume within the container image, so it makes sense to mount my PV at that path.

$ kubectl create -f gogs-pv.yaml
$ kubectl create -f gogs-pvc.yaml
$ kubectl apply -f gogs-deploy.yaml

$ kubectl get deployments
NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
my-gogs 1 0 0 0 2h
$ kubectl get pods -w
NAME READY STATUS RESTARTS AGE
my-gogs-775458c799-lnznq 0/1 Terminating 0 2h
my-gogs-775458c799-lnznq 0/1 Terminating 0 2h
my-gogs-775458c799-lnznq 0/1 Terminating 0 2h
my-gogs-6565756595-l5jbq 0/1 Pending 0 0s
my-gogs-6565756595-l5jbq 0/1 Pending 0 2s
my-gogs-6565756595-l5jbq 0/1 ContainerCreating 0 3s
my-gogs-6565756595-l5jbq 1/1 Running 0 12s

The final command (watching the output of ‘kubectl get pods’) shows the container being terminated and a new container being spun up, which takes on the updated configuration of the deployment, and should be connected to the PV that was also created.
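
As an aside, rather than watching the pod list, the rollout can also be followed via the deployment itself; the command below blocks until the new pod is up (the success message shown is the typical kubectl wording rather than a capture from my cluster):

# Wait for the updated deployment to finish rolling out the replacement pod
$ kubectl rollout status deployment/my-gogs
deployment "my-gogs" successfully rolled out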

$ kubectl describe pod my-gogs-6565756595-l5jbq
...
Volumes:
  pv-gogs:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  pvc-gogs
    ReadOnly:   false
...

Cutting out a lot of the detail that ‘kubectl describe pod’ provides, the output above shows the volume the pod is using and the PVC behind it. Later on, the actual NFS server can be checked to ensure that the gogs data is created as expected.
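
A quick way to perform that check, either from inside the pod or directly on the NFS server, is something along these lines (a sketch; what actually appears under the data directory depends on how far gogs has got with its setup):

# List the mounted volume from inside the running pod
$ kubectl exec my-gogs-6565756595-l5jbq -- ls /data

# Or, on the NFS server itself, check the exported directory is being populated
$ ls -l /exports/kubedata/gogs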

Making the pod accessible

I now have a pod running and utilising an NFS PV to store data. But I can’t access the gogs instance running in the pod, so it’s not much use to me at the moment. I need to expose the running service to the network. In fact, gogs runs two services: a web front end on port 3000 and an SSH service on port 22. Starting with the web front end, I want to make this accessible via my browser.

$ kubectl get deployments -o wide
NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE CONTAINERS IMAGES SELECTOR
my-gogs 1 1 1 1 5h my-gogs gogs/gogs-rpi run=my-gogs

$ kubectl expose deployment my-gogs --type=NodePort
error: couldn't find port via --port flag or introspection
See 'kubectl expose -h' for help and examples.

That’s not so good – trying to expose the deployment has failed. This is because kubectl doesn’t know which port I want exposed. I can either define the ports that are used by the container in my deployment yaml like so:

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: my-gogs
spec:
  replicas: 1
  strategy:
    type: Recreate
  selector:
    matchLabels:
      run: my-gogs
  template:
    metadata:
      labels:
        run: my-gogs
    spec:
      containers:
      - image: gogs/gogs-rpi
        imagePullPolicy: Always
        name: my-gogs
        ports:
        - containerPort: 3000
          protocol: TCP
        volumeMounts:
        - mountPath: /data
          name: pv-gogs
      volumes:
      - name: pv-gogs
        persistentVolumeClaim:
          claimName: pvc-gogs

As can be seen, in the my-gogs deployment yaml file above I have now defined TCP port 3000 as being used by the container. If this yaml file was applied to the existing deployment the pod would be terminated and recreated, with TCP port 3000 defined, and the previous kubectl expose command would work. Alternatively, I can specify which port I want to expose on the command line:

$ kubectl expose deployment my-gogs --type=NodePort --port=3000
service/my-gogs exposed
$ kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 26d
my-gogs NodePort 10.109.70.12 <none> 3000:30592/TCP 29s

I have managed to expose the service’s TCP port 3000 on NodePort 30592. This port is now accessible via every node of the cluster, as it is exposed by the kube-proxy pods that run as a daemonset on each node.

$ kubectl get pods -n kube-system
NAME READY STATUS RESTARTS AGE
coredns-78fcdf6894-wf82p 1/1 Running 0 1d
coredns-78fcdf6894-xv67m 1/1 Running 0 3d
etcd-blue.kube.ipa.champion 1/1 Running 8 1d
kube-apiserver-blue.kube.ipa.champion 1/1 Running 21 1d
kube-controller-manager-blue.kube.ipa.champion 1/1 Running 6 1d
kube-flannel-ds-76wpm 1/1 Running 2 3d
kube-flannel-ds-bmwdr 1/1 Running 0 3d
kube-flannel-ds-dgcxx 1/1 Running 0 3d
kube-flannel-ds-k5bgd 1/1 Running 0 3d
kube-flannel-ds-mxpk8 1/1 Running 0 3d
kube-proxy-7n2m8 1/1 Running 0 3d
kube-proxy-8shb8 1/1 Running 0 3d
kube-proxy-hf4tz 1/1 Running 0 3d
kube-proxy-mdh9q 1/1 Running 0 3d
kube-proxy-nrlhs 1/1 Running 0 3d
kube-scheduler-blue.kube.ipa.champion 1/1 Running 5 1d
kubernetes-dashboard-7d59788d44-mh4xp 1/1 Running 0 1d
metrics-server-7b9cfd4895-z6rk8 1/1 Running 0 3d

Visiting any node IP address in the cluster on port 30592 should bring up the gogs initial setup page.
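
This can also be checked quickly from the command line; any node’s address will do (I’m using one of my own node names here, and what gogs actually returns will depend on how far through the setup it is):

# Any node forwards the NodePort through to the gogs pod; an unconfigured
# instance should respond and redirect the browser to its install/setup page
$ curl -I http://pink.kube.ipa.champion:30592/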

The my-gogs deployment has been exposed as a service of type NodePort, which means that every node makes the service reachable on the port assigned to it. There are other service types available, e.g. ClusterIP, which makes a service reachable (and name resolvable) only from within the cluster. This is useful for private services that only other pods need to reach, e.g. databases.
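
For completeness, the same exposure could have been done declaratively instead of with kubectl expose; a service yaml along these lines should be equivalent (the nodePort itself is auto-assigned unless explicitly set):

apiVersion: v1
kind: Service
metadata:
  name: my-gogs
spec:
  type: NodePort
  selector:
    run: my-gogs
  ports:
  # Forward the assigned node port to TCP 3000 on the gogs pod
  - name: web
    port: 3000
    targetPort: 3000
    protocol: TCP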

Summary

In this post I’ve managed to launch a pod via a deployment and associate a persistent volume with it so that it can consume storage hosted on a remote NFS server. I have then made this pod reachable from external hosts by exposing it as a service. In the next post I’ll create another pod running mariadb, connect the my-gogs instance to it, and finish setting up my source code repository.