Providing a backend

20th August 2018, by Jonny

Previously, I’d deployed a pod running an instance of gogs to provide a source control repository in which I would aim to store my k8s YAML files and my container image build files.

Building a backend

The next step is to provision a database backend for gogs using a MariaDB container. Strictly speaking there is no real need for a separate database backend; the gogs installation can quite happily make use of a local sqlite database. However, this is as much about learning and experimenting as anything else.

Unfortunately there didn’t appear to be an official MariaDB container image publicly available for the ARM v7 (32 bit) architecture. There is an official ARM v8 (64 bit) image available, based upon an Ubuntu image. With a bit of further googling, a promising image could be found which provides MariaDB on an Alpine Linux (ARM v7) base.

Since I put together my k8s cluster, this image has been updated to use Alpine Linux 3.8. When I tried MariaDB on Alpine Linux 3.8 outside of this project, I encountered errors on startup which did not occur on Alpine Linux 3.7. As of writing, MariaDB on my cluster is running on Alpine Linux 3.7.

The MariaDB container can take a set of environment variables as part of its start-up, allowing the user to define several parameters, amongst them MYSQL_DATABASE, MYSQL_USER, MYSQL_PASSWORD, and MYSQL_ROOT_PASSWORD. These parameters allow the MariaDB database to be pre-configured and be up and running when the container finishes bootstrapping. With k8s, these parameters can be saved to the cluster as secrets (passwords in particular) or as configmaps (other, less sensitive data) for the deployment to consume. I have opted to save all four parameters as secrets rather than split their locations.
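Had I chosen to split them, the less sensitive values (the database name, say) could have gone into a configmap and been referenced with configMapKeyRef rather than secretKeyRef. A minimal sketch of that alternative (the configmap name and database name here are made up for illustration):

apiVersion: v1
kind: ConfigMap
metadata:
  name: gogs-db-config   # hypothetical configmap holding the non-sensitive settings
data:
  gogs-maria-db: gogs    # illustrative database name for gogs to use

with the corresponding environment variable entry in the deployment becoming:

        - name: MYSQL_DATABASE
          valueFrom:
            configMapKeyRef:
              name: gogs-db-config
              key: gogs-maria-db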

My resulting deployment YAML looks as follows:

apiVersion: apps/v1 # for versions before 1.9.0 use apps/v1beta2
kind: Deployment
metadata:
  name: mariadb
spec:
  selector:
    matchLabels:
      app: mariadb
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: mariadb
    spec:
      containers:
      - image: yobasystems/alpine-mariadb:armhf
        name: mariadb
        env: #Use secret in real usage
        - name: MYSQL_ROOT_PASSWORD
          valueFrom:
            secretKeyRef:
              name: mariadb-root-pass
              key: mariadb-root-pass
        - name: MYSQL_USER
          valueFrom:
            secretKeyRef:
              name: gogs-username
              key: gogs-user
        - name: MYSQL_PASSWORD
          valueFrom:
            secretKeyRef:
              name: gogs-password
              key: gogs-pass
        - name: MYSQL_DATABASE
          valueFrom:
            secretKeyRef:
              name: gogs-db
              key: gogs-maria-db
        ports:
        - containerPort: 3306
          name: mariadb
        volumeMounts:
        - name: mariadb-pv
          mountPath: /var/lib/mysql
      volumes:
      - name: mariadb-pv
        persistentVolumeClaim:
          claimName: mariadb-pvc
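With the secrets described below in place, the manifest is applied in the usual way (the file name is simply whatever the YAML above was saved as):

$ kubectl apply -f mariadb-deployment.yaml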

The k8s secrets were created on the command line – however it’s just as easy to create secrets via a YAML file as well.

kubectl create secret generic <secret_name> --from-literal=<key>=<value> 
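For example, the root password secret might have been created like this (the value shown is only a placeholder, not the real password):

$ kubectl create secret generic mariadb-root-pass --from-literal=mariadb-root-pass=changeme

The YAML equivalent of the same secret would look something like the following, with the value base64 encoded:

apiVersion: v1
kind: Secret
metadata:
  name: mariadb-root-pass
type: Opaque
data:
  mariadb-root-pass: Y2hhbmdlbWU=   # base64 of the placeholder value 'changeme'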

The kubectl create secret command was repeated for each of the four secrets required. With the necessary secrets in place, the MariaDB container could be deployed.

$ kubectl get pods -o wide
NAME                       READY   STATUS    RESTARTS   AGE   IP            NODE
gogs-5b75cc9c86-jhk55      1/1     Running   0          6d    10.244.3.65   white.kube.ipa.champion
mariadb-596959f7cf-mllz2   1/1     Running   13         4d    10.244.4.87   orange.kube.ipa.champion

As can be seen from the above, the gogs and mariadb pods are now running.

Making MariaDB reachable

Hopefully, it’s also noticeable that the MariaDB pod is making use of a persistent volume to store the /var/lib/mysql data. This should mean that the pod can be restarted or replaced whilst retaining the data previously committed to the database.
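The mariadb-pvc claim referenced in the deployment above is an ordinary PersistentVolumeClaim; a minimal sketch of such a claim, with the size and access mode shown purely as illustrative values:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mariadb-pvc
spec:
  accessModes:
    - ReadWriteOnce        # a single MariaDB pod mounts the volume
  resources:
    requests:
      storage: 5Gi         # illustrative size, not necessarily what was provisioned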

As noted previously when running the gogs pod, it is necessary to make the service(s) offered by the pod reachable. In this case, I only need to make the MariaDB service reachable from other pods within the cluster (and in this particular example the pods are in the same namespace). The easiest way to do this is to expose the deployment as a service, but this time using the ClusterIP type as opposed to the NodePort type. Exposing as a ClusterIP creates a virtual IP address internal to the cluster which accepts connections on the exposed port (3306 for MariaDB) from anywhere within the cluster, and which is resolvable by a specified name via the cluster DNS (by default, the name of the resource being exposed). As I have opted to expose the deployment, it is done as follows:

$ kubectl expose deployment mariadb --type=ClusterIP
$ kubectl get svc -n test
NAME      TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)                       AGE
gogs      NodePort    10.100.131.192   <none>        3000:30037/TCP,22:31384/TCP   17d
mariadb   ClusterIP   10.105.129.154   <none>        3306/TCP                      17d

In the above example, the gogs deployment has been exposed on port 30037 for HTTP (and 31384 for SSH) using NodePorts. The mariadb deployment has been exposed as a ClusterIP service on IP address 10.105.129.154 on port 3306.
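For completeness, the same ClusterIP service could have been declared as a manifest rather than created with kubectl expose; a sketch of the equivalent YAML:

apiVersion: v1
kind: Service
metadata:
  name: mariadb
spec:
  type: ClusterIP
  selector:
    app: mariadb           # matches the label on the MariaDB pod template
  ports:
  - port: 3306             # port the service listens on inside the cluster
    targetPort: 3306       # containerPort inside the MariaDB pod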

Using the exposed backend

When setting up gogs to use a MySQL/MariaDB backend, I can specify the database host as ‘mariadb’ and use the regular MySQL/MariaDB port of 3306. The cluster’s internal DNS resolver (CoreDNS) provides name resolution for the service.
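A quick way to confirm that the name resolves from inside the cluster is to run a throwaway pod and look the service up; this assumes the pod is started in the same namespace as the mariadb service (busybox images are multi-arch, so this should run on the ARM nodes too):

$ kubectl run -it --rm dns-test --image=busybox --restart=Never -- nslookup mariadb

From a different namespace, the fully qualified name mariadb.<namespace>.svc.cluster.local would be needed instead.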

Ready and waiting for the user

Gogs is now set up and ready to go. I’ve now covered off quite a few elements of deploying applications to k8s:

  • Deploying applications in pods as deployments
  • Providing external storage to pods through persistent volumes
  • Providing configuration data to pods through secrets
  • Exposing access to services within pods through NodePort and ClusterIP services
  • Tying together separate pods to demonstrate an n-tier application deployment on k8s (albeit n=2 here)

In the next exciting post I’ll document how I’ve made these services externally accessible and how I’ve used TLS to secure the HTTP connections.