Deploying Ruby on Rails apps to Kubernetes from Gitlab CI
By Martijn Storck
GitLab allows you to create and manage Kubernetes clusters directly from its interface and use them for powerful features like review apps, canary deployments and the like. If you use Auto DevOps it can automatically deploy for you as well. But what if all you want to do is deploy an application with an existing .gitlab-ci.yml
to a cluster that's managed outside of GitLab? Here's what I do. This article assumes that a GitLab CI pipeline is already in place and that you have basic Kubernetes knowledge.
In order for this to work you'll have to add the Kubernetes cluster to GitLab. Refer to the GitLab documentation for instructions on how to do this.
Running migrations
Migrations can be run in a Kubernetes Job. It’s bad practice
to rely on the latest
tag in Kubernetes deployments, so instead of a Docker image we put a placeholder in the spec:
apiVersion: batch/v1
kind: Job
metadata:
  name: migrate
  namespace: my-namespace
spec:
  ttlSecondsAfterFinished: 180
  template:
    spec:
      containers:
      - name: app
        imagePullPolicy: Always
        image: IMAGE_PLACEHOLDER
        command: ["rails", "db:migrate"]
        env:
        - name: RAILS_MASTER_KEY
          valueFrom:
            secretKeyRef:
              name: "rails-master-key"
              key: "rails-master-key"
      restartPolicy: Never
Check this file into version control so it’s available in the CI pipeline. To apply the job we use sed to replace the IMAGE_PLACEHOLDER
with the location of the exact image built in the build step of the CI pipeline:
sed "s%IMAGE_PLACEHOLDER%$CI_REGISTRY_IMAGE:$CI_COMMIT_SHA%" kube/migrate-job.yml | kubectl apply -f -
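You can dry-run the substitution locally to see what it produces; the registry path and SHA below are example values, not output from a real pipeline:

```shell
# Simulate the CI variables with illustrative values
CI_REGISTRY_IMAGE="registry.example.com/my-group/my-app"
CI_COMMIT_SHA="ca25e0b"

# The % delimiter lets sed handle the slashes in the registry path
result=$(echo "image: IMAGE_PLACEHOLDER" | sed "s%IMAGE_PLACEHOLDER%$CI_REGISTRY_IMAGE:$CI_COMMIT_SHA%")
echo "$result"   # image: registry.example.com/my-group/my-app:ca25e0b
```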
When you start a Job with kubectl it is created and runs in the background. Luckily we can use the kubectl wait
command to wait for our job to finish, as follows:
kubectl wait --for=condition=complete --timeout=360s jobs/migrate
If the job fails or doesn't complete within six minutes (360 seconds), the CI job will fail.
Updating the web and worker containers
There is a multitude of ways to restart deployments with a new image. I prefer kubectl set image
over other options such as the sed
substitution we need for the migration job, or a kubectl rollout restart
with a latest
image tag in the spec.
To make sure we only set the image where needed we could hardcode the deployment names. For example, to update the app container in both the web and worker deployments:
kubectl set image deployments/web app="$CI_REGISTRY_IMAGE:$CI_COMMIT_SHA"
kubectl set image deployments/worker app="$CI_REGISTRY_IMAGE:$CI_COMMIT_SHA"
However, imagine if someone creates a new deployment using the same image. They'd need to remember to add it to the CI pipeline. Instead, let's use Kubernetes labels to tag the deployments that use the app image. Add the following labels to your deployments:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
  namespace: my-namespace
  labels:
    app: my-app
    ciDeployApp: "true"
…
Now we can use a single command to update every deployment that carries the ciDeployApp label:
kubectl set image deployments -l "app=my-app,ciDeployApp=true" app="$CI_REGISTRY_IMAGE:$CI_COMMIT_SHA"
Putting it all together
When you have a Kubernetes cluster attached to your GitLab project, GitLab CI will automatically share the cluster's authentication details through a set of environment variables. When using the Docker runner, a kubectl config file is automatically mounted in the container and its path is stored in $KUBECONFIG
. This means kubectl automatically connects to your cluster! The only requirement is adding an environment
key to your CI job referencing the Kubernetes environment to be used.
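A minimal sketch of that environment key in .gitlab-ci.yml (the job and environment names are illustrative; the complete deploy stage follows further down):

```yaml
deploy:
  stage: deploy
  environment:
    name: production
    kubernetes:
      namespace: my-namespace
```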
Because we have multiple commands to run we use the bitnami/kubectl image in our deploy stage. To be able to use it with GitLab CI's script
section we need to empty out the image's default entrypoint (which is the kubectl binary). Apart from that, the image contains all the basic tools needed to do a deploy as described above.
This is the final specification for the deploy stage:
deploy_test:
  only:
    - master
  stage: deploy
  image:
    name: bitnami/kubectl:latest
    entrypoint: [""]
  services: []
  environment:
    name: production
    kubernetes:
      namespace: my-namespace
  before_script: []
  script:
    - echo Deploying $CI_REGISTRY_IMAGE:$CI_COMMIT_SHA to $KUBE_NAMESPACE
    - kubectl delete --ignore-not-found=true jobs/migrate
    - sed "s%IMAGE_PLACEHOLDER%$CI_REGISTRY_IMAGE:$CI_COMMIT_SHA%" kube/test/migration-job.yml | kubectl apply -f -
    - kubectl wait --for=condition=complete --timeout=360s jobs/migrate
    - kubectl logs jobs/migrate
    - kubectl delete jobs/migrate
    - kubectl set image deployments -l "app=my-app,ciDeployApp=true" app="$CI_REGISTRY_IMAGE:$CI_COMMIT_SHA"
The following steps get executed in the script
section:
- Log the exact image that will be deployed in the CI output
- Delete any stray migrate job that got left over
- Apply the new migration job with the new image name
- Wait up to six minutes for the migrate job to finish
- Print the logs to the CI output for reference
- Delete the job; we don't need it anymore since it succeeded and we logged the output
- Set the image for the app containers in all deployments labeled app: my-app and ciDeployApp: "true"
If any of the steps fail, the job will halt and the pipeline will fail.
This script depends on the application image being tagged with $CI_REGISTRY_IMAGE:$CI_COMMIT_SHA
in an earlier build step. The benefit of this workflow is that you can always determine exactly which image was deployed to an environment. This can be verified using kubectl describe deployment
:
% kubectl describe deployment web
Name:               web
Namespace:          my-namespace
CreationTimestamp:  Fri, 29 May 2020 13:53:26 +0200
Labels:             app=my-app
                    ciDeployApp=true
…
Pod Template:
  Labels:       app=my-app
                tier=web
  Annotations:
  Containers:
   app:
    Image:      registry.gitlab.com/my-project/my-app:ca25e0b0c174124d105150d1e3df865170462fbb
    Port:       3000/TCP
    Host Port:  0/TCP
As I said, there are many ways to deploy apps to Kubernetes, but this approach has been working well for us. Let me know if you have ideas to improve it further!