
My Server Setup Part 1: Hosting and Configuration

In this series of posts I will describe the infrastructure that hosts all of my websites and development projects.

Summary

Hosting

I’ve been running publicly available servers since 1998, including running my own hosting company for nearly 10 years, and the main thing I’ve learned is that nobody does it all well. In recent years I’ve used AWS, DigitalOcean, Vultr, and a number of other smaller hosts, and each one had strengths and weaknesses. I currently use Scaleway and I’m very happy with their services so far.

My infrastructure was previously based on Kubernetes, but in the past year I’ve switched to Nomad, mainly because that’s what we’re using at work. Since I’m using orchestration, all I require from a hosting provider is reliable, cheap servers and a private network. Scaleway’s “instances” meet these requirements very well, especially the ones aimed specifically at developers - cheap and cheerful, but not guaranteed to be reliable enough for production. I feel comfortable using those instead of the production instances because the orchestration layer can reschedule work if a node fails.

Server instances

I’m currently using a Nomad cluster of 3 servers, all of the Scaleway type DEV1-S. These are relatively cheap (€9.99 per month each in January 2023) and are plenty powerful for my purposes. They only have 2GB of memory and 20GB of disk space but so far none of the services I’m running require persistent or extensive storage, and memory requirements are tiny. If and when that changes I’m confident Scaleway would be able to provide whatever I need.
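For illustration, spinning one of these up with Scaleway’s scw CLI looks roughly like this (the name and zone below are placeholders, and the exact argument syntax may vary between CLI versions):

    # Create a DEV1-S instance running Ubuntu 22.04 (name and zone are examples)
    scw instance server create \
        type=DEV1-S \
        image=ubuntu_jammy \
        name=nomad-1 \
        zone=fr-par-1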

Outside connectivity

All servers have public IP addresses and default deny firewall rules to ensure nothing is exposed that shouldn’t be.
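Expressed with ufw as a sketch (I won’t go into the exact tooling here), the policy looks something like this:

    # Default deny inbound, allow outbound
    sudo ufw default deny incoming
    sudo ufw default allow outgoing

    # Web traffic is the only thing allowed in from the public internet
    sudo ufw allow 80/tcp
    sudo ufw allow 443/tcp

    sudo ufw enable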

Internal connectivity

All servers are on a Scaleway private network and all server-to-server communication occurs on those interfaces.
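Continuing the ufw sketch, cluster traffic is accepted only on the private interface (the interface name and subnet below are illustrative, not my real values):

    # Accept anything arriving on the private network interface
    sudo ufw allow in on ens5 from 192.168.42.0/24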

Configuration

All 3 servers run almost exactly the same configuration, the only differences being IP addresses, hostnames, and certificates. They’re running Ubuntu 22.04.1 LTS, chosen because I know Nomad runs well on it.

Ansible

All nodes are deployed using Ansible. It’s a completely custom setup with no community playbooks (it was meant to be a learning experience for me, in both Nomad and Ansible). The deployment is run manually but regularly, to ensure all packages stay up to date and the playbooks don’t rot.
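A run is nothing more exotic than the standard invocation (the inventory and playbook names here are illustrative):

    # Dry run first to see what would change
    ansible-playbook -i inventory.yml site.yml --check --diff

    # Then apply the full configuration to every node
    ansible-playbook -i inventory.yml site.yml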

Since the first successful deployment I’ve added a node, and later removed it again, without any issues, so I’m ready for whatever expansion I need. I’m a little embarrassed to say that the playbooks are hard-coded in places they shouldn’t be, but I’m not currently planning to share or publish them and it doesn’t affect what I’m doing.

The Ansible setup is stored and version-controlled in a git repository.

Terraform

The job configuration on Nomad is managed using Terraform. This makes it very easy to modify the jobs and apply changes.

The Terraform setup and state are stored and version-controlled in a git repository.
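A typical change is just the standard Terraform loop, pointed at the Nomad API forwarded from the bastion (see Access and Security below; the address here assumes the default local forward):

    # The Nomad provider reads its API address from the environment
    export NOMAD_ADDR=http://127.0.0.1:4646

    terraform plan     # review the job changes
    terraform apply    # push them to the cluster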

Performance

Picking one server at random, it’s actively using 454MB of memory, with the rest of the 2GB used for caching. CPU usage across the two cores sits at 2% with occasional bursts. Given that the hosted services don’t get a lot of use that’s not surprising, but it does show that the Nomad system itself has a very small impact on system resources.
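Those figures come from nothing fancier than the standard tools:

    # Memory: compare the "used" and "buff/cache" columns
    free -m

    # CPU: load averages and the busiest processes
    top -bn1 | head -15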

Access and Security

The only external access to the servers is on ports 80 (HTTP) and 443 (HTTPS). For management purposes I SSH into a bastion server that’s also on the private network and forward the ports of the management APIs. The bastion server only accepts key-based authentication, has no domain name, and has fail2ban installed to limit bad actors.

Ansible runs over SSH connections, and Terraform only requires the Nomad, Consul, and Vault API ports to be forwarded.
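The forwarding itself is plain SSH local forwarding. With the default API ports (Nomad 4646, Consul 8500, Vault 8200) it looks something like this; the private IP and bastion address are placeholders:

    # Forward the three management APIs through the bastion
    ssh -N \
        -L 4646:192.168.42.10:4646 \
        -L 8500:192.168.42.10:8500 \
        -L 8200:192.168.42.10:8200 \
        user@198.51.100.10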

DNS

I’ve used Gandi as my domain registrar for a very long time, and I currently use their DNS service for most of my domains. For anything hosted on this cluster, the domain has A records pointing @ at every public IP in the cluster, plus a CNAME record for * that points to @. That covers any potential usage of the domain, so long as it’s hosted within this cluster.
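This is easy to verify from the outside with dig (example.com standing in for a real domain):

    # The apex should return an A record for every public IP in the cluster
    dig +short example.com A

    # Any subdomain should follow the wildcard CNAME back to the apex
    dig +short anything.example.com A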

Gandi’s DNS system lets me set this up as a “linked zone”, meaning I configure it once and just assign each domain on the cluster to that zone. Very simple.

Next: Nomad

In the next part I’ll cover the Nomad setup and configuration.