Over-engineering static site hosting using Kubernetes
Tuesday, 28 July 2020
Updated 3 August 2020
In this post I will go through the outlines of how I deploy this blog using a simple GitOps workflow with Tekton and Flux. I'll walk through most of the deployment process, focusing on the parts I find the most interesting and the least covered by existing articles and posts.
In the beginning of this year I built a small Kubernetes cluster as a learning experience. Blogging had never been part of my plan for the cluster, but in the process of learning and setting up different services I discovered quite a lot of things I wanted to share with others.
One of those things is Tekton Pipelines coupled with Tekton Triggers for automated, cross-compiled builds of Docker images. One of the first things I decided to try deploying with CI/CD was a static site, which is also where I originally came up with the idea to create this blog.
I first moved away from Drone, since the community edition lacked support for native Kubernetes pipelines. Kubernetes support in Drone requires the enterprise edition, and while I could have used that for free as long as I ran fewer than 5000 builds per year, I decided to try something else.
After Drone I tried out GitLab CI/CD. GitLab had native Kubernetes support in the community edition, and I even found pre-built images for arm64. What ultimately made me switch away from GitLab was its lack of control over pod-level settings in Kubernetes. As I mentioned earlier, I wanted to build my images in-cluster with as few permissions as possible granted to the build container, and since I could not set the permissions and annotations required to get around AppArmor with GitLab Runner, I started looking for something else.
Enter Tekton Pipelines
What I ultimately ended up with was Tekton Pipelines. It was probably not the most efficient option, but the flexibility it allowed made up for shortcomings in other areas. One alternative to Tekton I looked at briefly was Jenkins X, which packages Tekton Pipelines along with other tools into a complete CI/CD distribution. I chose not to use it since I didn't think I would make use of most of the bundled tools in a personal project with no developers other than myself.
Tekton Pipelines differs quite a lot from Drone and GitLab CI/CD in that pipelines and pipeline tasks are defined ahead of time as templates, and are then run using TaskRuns or PipelineRuns configured for your specific project. At first I didn't like this way of doing things at all, since I preferred how the other tools kept the pipeline and task definitions in the project's main repository.
As I built different tasks and pipelines, I quickly grew comfortable with the Tekton way of defining pipelines and with how quickly I could apply my templates to new projects. The first tasks and pipelines I created were quite rough and needed a couple of iterations before they were general enough to be reused easily by other projects. But once I got over that hurdle, using Tekton really clicked for me.
With my newfound love for Tekton, I started looking for an easy starter project to apply my tools to. It didn't take long to decide that I wanted to create a blog. Since I wanted to further improve my experience with the GitOps workflow, database-backed CMSes like WordPress or Ghost were out of the question. Instead my eyes fell on Hugo, a static site generator written in Go with quite good support for org-mode as well.
Building the site is done through a Tekton pipeline with multiple steps, along with Tekton Triggers listening for webhooks from my site repository on GitHub.
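As a rough sketch, the trigger side can look something like the following EventListener. All names here (the listener, secret, binding, and template) are illustrative placeholders, not my exact resources:

```yaml
# Receives GitHub push webhooks and kicks off a PipelineRun
# via the referenced binding and template (both placeholders).
apiVersion: triggers.tekton.dev/v1alpha1
kind: EventListener
metadata:
  name: blog-listener
spec:
  serviceAccountName: tekton-triggers-sa
  triggers:
    - name: github-push
      interceptors:
        - github:
            secretRef:
              secretName: github-webhook-secret
              secretKey: token
            eventTypes:
              - push
      bindings:
        - ref: blog-trigger-binding
      template:
        ref: blog-trigger-template
```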
The pipeline is split into multiple steps:
- Pull the master branch from git.
- Build the site using this wonderful Hugo Docker image.
- Build a Docker image containing the compiled output from Hugo, to be served by nginx.[1]
- Push the image(s) to a docker registry.
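Sketched as a Tekton Pipeline, those steps map to tasks roughly like this. Apart from `git-clone` (the Tekton catalog task), the task names are hypothetical stand-ins for my actual tasks:

```yaml
apiVersion: tekton.dev/v1beta1
kind: Pipeline
metadata:
  name: blog-pipeline
spec:
  params:
    - name: git-url
      type: string
  workspaces:
    - name: shared           # carries the checkout and build output between tasks
  tasks:
    - name: fetch-source
      taskRef:
        name: git-clone      # Tekton catalog task
      params:
        - name: url
          value: $(params.git-url)
      workspaces:
        - name: output
          workspace: shared
    - name: build-site
      runAfter: [fetch-source]
      taskRef:
        name: hugo-build     # placeholder for the Hugo build task
      workspaces:
        - name: source
          workspace: shared
    - name: build-image
      runAfter: [build-site]
      taskRef:
        name: buildah-build  # placeholder
      workspaces:
        - name: source
          workspace: shared
    - name: push-image
      runAfter: [build-image]
      taskRef:
        name: buildah-push   # placeholder
      workspaces:
        - name: source
          workspace: shared
```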
To build the Docker images with as few permissions as possible inside Kubernetes, I've decided to use buildah instead of plain Docker. This is partly because I run Fedora Silverblue on my personal laptop, and partly because buildah has quite good support for rootless builds.
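A rootless buildah build step inside a Tekton Task can be sketched roughly like this. The image, user id, tag, and paths are assumptions for illustration, not my exact configuration:

```yaml
apiVersion: tekton.dev/v1beta1
kind: Task
metadata:
  name: buildah-build
spec:
  workspaces:
    - name: source
  steps:
    - name: build
      image: quay.io/buildah/stable
      securityContext:
        runAsUser: 1000        # build as an unprivileged user
      script: |
        cd $(workspaces.source.path)
        # vfs avoids needing overlay mounts inside the container
        buildah bud --storage-driver=vfs -t blog .
        # export the image as a file instead of pushing it directly
        buildah push --storage-driver=vfs blog \
          oci-archive:$(workspaces.source.path)/blog-amd64.tar
```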
I've set up buildah to initially export my images as files to a shared drive instead of directly to a local registry. These files are then pulled into a shared manifest in the final step and pushed to the registry. This allows me to split the build and push stages into distinct tasks.
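The push stage can then import the per-arch archives and combine them under one manifest list, roughly like so. The registry URL and tag are placeholders:

```yaml
apiVersion: tekton.dev/v1beta1
kind: Task
metadata:
  name: buildah-push
spec:
  workspaces:
    - name: source
  steps:
    - name: push
      image: quay.io/buildah/stable
      script: |
        # buildah pull prints the ID of the imported image
        amd64=$(buildah pull oci-archive:$(workspaces.source.path)/blog-amd64.tar)
        arm64=$(buildah pull oci-archive:$(workspaces.source.path)/blog-arm64.tar)
        # group both images under a single manifest list and push everything
        buildah manifest create blog-manifest
        buildah manifest add blog-manifest "$amd64"
        buildah manifest add blog-manifest "$arm64"
        buildah manifest push --all blog-manifest \
          docker://registry.example.com/blog:master-latest
```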
Now, to deploy the newly built site to Kubernetes. After reviewing the different alternatives I decided to use Flux, as it seemed pretty stable and quite simple, mostly using the same types of resources I already used to deploy applications to my cluster.
I have Flux set up to watch the registry for any new images tagged master-*, and if it finds any, deploy them to the cluster, replacing the old ones. This completes the process, and new loads of the site should now display the version corresponding to the master branch of my blog repository.
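In Flux v1, this kind of automation is driven by annotations on the workload itself. A minimal sketch, with placeholder names and registry:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: blog
  annotations:
    fluxcd.io/automated: "true"        # let Flux update this workload
    fluxcd.io/tag.blog: glob:master-*  # only consider tags matching master-*
spec:
  selector:
    matchLabels:
      app: blog
  template:
    metadata:
      labels:
        app: blog
    spec:
      containers:
        - name: blog                   # must match the tag.<name> annotation
          image: registry.example.com/blog:master-latest
```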
Hosting a static site this way isn't exactly the most efficient approach, and certainly not the easiest. I could probably have been up and running much faster using a service like GitHub Pages or Netlify. Even something as simple as a script to scp the files over to any web host would have been much easier.
However, I didn't create this setup with the primary goal of getting a blog up and running as quickly as possible; what I wanted was to learn some DevOps tools along with Kubernetes. In that regard I think this project has been a success, and I have learned quite a lot about self-hosting on Kubernetes.
Although I regard this as a success, there are some parts I want to change as soon as I have the time. The most pressing one is replacing Flux with something like ArgoCD, since Flux doesn't really handle one-off jobs like database migrations.[2]
I'm looking forward to expanding this site and trying out more tools and workflows on my Kubernetes journey.
Update: after finishing this post I replaced Flux with ArgoCD.
With ArgoCD I now have a lot more flexibility than I had with Flux. The biggest drawback, though, is that I can no longer use Flux's built-in automation for updating images as they get pushed to the registry.
To get around this I've had to extend my build pipeline to also update the Kubernetes manifests and push the changes to git.
The new steps can be summarized as follows:
- Clone the Kubernetes manifest repo.
- Update kustomization.yaml with the new image tag:
  kustomize edit set image "$CONTAINER_IMAGE:$NEW_TAG"
- Commit and push the changes to git.
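The steps above can be sketched as a Tekton Task along these lines. The container images, parameter names, and commit message are assumptions, and git credentials are left out:

```yaml
apiVersion: tekton.dev/v1beta1
kind: Task
metadata:
  name: update-manifests
spec:
  params:
    - name: image
    - name: tag
  workspaces:
    - name: repo    # the manifest repo, cloned by an earlier task
  steps:
    - name: bump-image
      image: k8s.gcr.io/kustomize/kustomize:v3.8.7
      script: |
        cd $(workspaces.repo.path)
        # rewrite the image reference in kustomization.yaml
        kustomize edit set image "$(params.image):$(params.tag)"
    - name: commit-push
      image: alpine/git
      script: |
        cd $(workspaces.repo.path)
        git add kustomization.yaml
        git commit -m "Deploy $(params.image):$(params.tag)"
        git push origin master
```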
The last step also triggers a webhook which makes ArgoCD fetch the changes and sync my cluster with the new manifests.
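On the ArgoCD side, a single Application pointed at the manifest repo with automated sync enabled is enough for this. A minimal sketch with placeholder URLs and paths:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: blog
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/example/blog-manifests  # placeholder repo
    targetRevision: master
    path: .
  destination:
    server: https://kubernetes.default.svc
    namespace: blog
  syncPolicy:
    automated: {}   # sync as soon as the repo changes
```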
I have to say that I’m satisfied with this new workflow, even if it has required more manual work from my side. There are still some things that I want to add, like Renovate for automated updates of images other than those updated by the CD pipeline.
[1] This step is split into two tasks. Since I run a cluster with both arm64 and amd64 machines, I build one image for each arch. These images are then pushed to the registry along with a Docker manifest.
[2] While one-off jobs aren't really needed for a static site, I ran into problems when deploying Plausible Analytics as part of my site. More about jobs in Flux can be read at https://github.com/fluxcd/flux/issues/2440