My Server Setup Part 3: Nomad Jobs
In this series of posts I am describing the infrastructure that hosts all of my websites and development projects.
- Part 1: Hosting and Configuration
- Part 2: Nomad Configuration
- Part 3: Nomad Jobs
- Part 4: Continuous Delivery
- Part 5: Eliminating the Downtime
Summary
- All services I currently run are deployed as Docker containers.
- Various static HTTP sites based on plain HTML, Jekyll, Hugo, and custom Python generators.
- A number of development projects running everything from Go to PHP to Python.
- Supporting services: Fabiolb, Aleff, Prometheus, Alertmanager, Noman (custom component).
Docker containers
Nomad supports deploying a wide range of workloads, from raw binaries through Docker containers to Java jars. For my purposes I've only needed Docker containers so far, but it's a comfort knowing there are other options available in the ecosystem I'm using.
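To make the rest of this post concrete, here's a sketch of what one of my static site jobs might look like. This isn't my exact spec: the datacenter name, image name, port, and resource limits are all illustrative assumptions, though the urlprefix- tag is Fabio's real routing convention, which I covered in the last post.

```hcl
# Sketch of a minimal static site job; names, datacenter, and port
# are illustrative assumptions, not the exact production spec.
job "stut-dev" {
  datacenters = ["dc1"]
  type        = "service"

  group "web" {
    count = 2

    network {
      # Assumes statigo listens on port 8080 inside the container.
      port "http" {
        to = 8080
      }
    }

    service {
      name = "stut-dev"
      port = "http"

      # Fabio routes traffic based on urlprefix- tags in Consul.
      tags = ["urlprefix-stut.dev/"]

      check {
        type     = "http"
        path     = "/"
        interval = "10s"
        timeout  = "2s"
      }
    }

    task "site" {
      driver = "docker"

      config {
        # A hypothetical image name; built from a Dockerfile like
        # the one shown later in this post.
        image = "stut/stut.dev:latest"
        ports = ["http"]
      }

      # statigo needs very little; these limits are guesses.
      resources {
        cpu    = 50
        memory = 16
      }
    }
  }
}
```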
Building containers
I’ll cover this in detail in the next part (Continuous Delivery), but as an overview: each project’s GitHub repository is connected to a DockerHub repository, which watches for new tags and builds a new container image for each one it discovers. A webhook from DockerHub to a custom component called Noman then handles deploying the updated container.
Static HTTP sites
The vast majority of what I’m hosting, and the things it’s most important to keep up, are static websites. These are very simple containers, and I’m going to use this site as an example.
Base container - statigo
In almost all cases the base container used for the build is something called statigo. This is a very simple Go project designed to produce the smallest possible Docker image for a basic static HTTP server. Before building it I tried a number of existing servers, including nginx, lighttpd, and several others. They all produced Docker images far larger than the job required, and they all wanted more CPU and memory than seemed necessary. They obviously provide functionality that statigo does not, but that’s the point: statigo gives me what I need and nothing more.
The current statigo:latest image weighs in at a mere 1.86MB, so it’s quite likely that the website content will be the bulk of your final image.
Example (stut.dev - this site)
This site is built with Jekyll. The Dockerfile looks like this:
```dockerfile
# Stage 1: build the site using the official Jekyll image.
FROM jekyll/jekyll:stable AS build

# Set the timezone so build timestamps come out correct.
RUN apk add --no-cache --update tzdata
ENV TZ=Europe/London
RUN cp /usr/share/zoneinfo/Europe/London /etc/localtime

COPY . ./
RUN jekyll build

# Stage 2: copy the generated site into a statigo-based image.
FROM stut/statigo:latest
COPY --from=build /srv/jekyll/_site/ ./
```
Note that the timezone is set because some of my Jekyll sites are stamped with the date/time the site was last built.
This is a two-stage build process. We first use the official stable Jekyll image to build the site. The second stage, based on statigo, copies the built site into the web root. The resulting image is barely more than the size of the generated site and contains nothing beyond what’s necessary.
The Dockerfiles for most of my sites look very similar, sometimes swapping Jekyll out for a different static site generator. The resulting image is currently almost always based on statigo.
Other projects
In addition to the static websites I have a number of development projects, written in a range of technologies, running on the cluster in various states of completeness. None of these are currently publicly available, but suffice it to say they are all HTTP(S) services that run in exactly the same way as the static sites.
Supporting services
I wrote in the last post about Fabiolb and Aleff. The cluster also runs Prometheus and Alertmanager for monitoring. Prometheus is configured to discover metrics endpoints using Consul, so setting it up to monitor a new service is trivial.
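As an illustration, a Nomad job for Prometheus might look something like the sketch below. It's not my exact spec: the image tag, scrape interval, and the convention of opting services in with a "metrics" Consul tag are all assumptions, and service registration is omitted for brevity.

```hcl
job "prometheus" {
  datacenters = ["dc1"]
  type        = "service"

  group "monitoring" {
    task "prometheus" {
      driver = "docker"

      config {
        image = "prom/prometheus:latest"
        # Host networking lets Prometheus reach the local Consul
        # agent on 127.0.0.1:8500 (an assumption about the cluster).
        network_mode = "host"
        args         = ["--config.file=/local/prometheus.yml"]
      }

      # Nomad renders this template into the task's local/ directory,
      # which the Docker driver mounts at /local inside the container.
      template {
        destination = "local/prometheus.yml"
        data        = <<EOH
global:
  scrape_interval: 30s

scrape_configs:
  - job_name: consul-services
    consul_sd_configs:
      - server: 127.0.0.1:8500
    relabel_configs:
      # Only scrape services that opt in with a "metrics" Consul
      # tag (a hypothetical convention).
      - source_labels: [__meta_consul_tags]
        regex: .*,metrics,.*
        action: keep
EOH
      }

      resources {
        cpu    = 200
        memory = 256
      }
    }
  }
}
```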
The Alertmanager instance sends notifications to my personal Slack workspace, which notifies me on my phone. The primary source of notifications is currently the Aleff service, but several of my development projects also expose metrics and alerts.
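The Slack wiring lives in Alertmanager's own config, which can be rendered the same way. Here's a sketch of a task stanza that could sit alongside the Prometheus task above, with a hypothetical webhook URL and channel name:

```hcl
    task "alertmanager" {
      driver = "docker"

      config {
        image = "prom/alertmanager:latest"
        args  = ["--config.file=/local/alertmanager.yml"]
      }

      template {
        destination = "local/alertmanager.yml"
        data        = <<EOH
route:
  receiver: slack

receivers:
  - name: slack
    slack_configs:
      # Hypothetical webhook URL and channel; the real values
      # obviously shouldn't be committed to the job file.
      - api_url: "https://hooks.slack.com/services/..."
        channel: "#alerts"
EOH
      }

      resources {
        cpu    = 100
        memory = 64
      }
    }
```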
Noman
The final service the cluster runs is called Noman, which is short for Nomad Manager - a rather grand and misleading term for the task it actually performs. Since it’s part of the continuous delivery system I’ll cover it in detail in the next and final part.
Next: Continuous Delivery
In the last part I’ll cover how new versions of these services are automatically built and deployed.