Cluster, Part 9: The Border Between | Load Balancing with Fabio
Hello again! It's been a while since the last one (mainly since I've been unsure about a few architectural things), but I'm now ready to continue writing about my setup. Before we continue, here's a refresher of everything we've done so far:
- Cluster, Part 1: Answers only lead to more questions
- Cluster, Part 2: Grand Designs
- Cluster, Part 3: Laying groundwork with Unbound as a DNS server
- Cluster, Part 4: Weaving Wormholes | Peer-to-Peer VPN with WireGuard
- Cluster, Part 5: Staying current | Automating apt updates and using apt-cacher-ng
- Cluster, Part 6: Superglue Service Discovery | Setting up Consul
- Cluster, Part 7: Wrangling... boxes? | Expanding the Hashicorp stack with Docker and Nomad
- Cluster, Part 8: The Shoulders of Giants | NFS, Nomad, Docker Registry
In this post, we're going to look at tying off our primary pipeline. So far, we've got job scheduling with Nomad, (superglue!) service discovery with Consul, and shared storage backed with NFS (although I'm going to revisit this eventually), with everything underpinned by a WireGuard mesh VPN with wesher.
In order to allow people to interact with services that are running on the cluster, we need something that will translate from the weird and strange world of anything running somewhere, anywhere, and everywhere in-between, into something that makes sense from an outside perspective. We want to have a single gateway by which we can control and manage access.
It is for these purposes that we're going to add Fabio to our stack. Its configuration is backed by Consul, and it is relatively simple and easy to understand. Having the config backed by Consul nets us multiple benefits:
- It can run anywhere on the cluster we like in a pinch
- We can configure new routes directly from a Nomad job spec file (although we still need to update the Unbound config)
- The configuration of Fabio gains additional data redundancy by being stored on multiple nodes in the cluster
Like in previous parts of this series, Fabio isn't available to install with apt directly, so I've packaged it into my apt repository. If you haven't yet set up my apt repository, up-to-date instructions on how to do so can be found at the top of its main page - just click the aforementioned link (I'm not going to include instructions here, as they may go out of date at a later time).
Once you've set up my apt repository (or downloaded the Fabio binary manually, though I don't recommend that as it's more difficult to keep up-to-date), we can install Fabio like so:
sudo apt install fabio
This should be done on your primary (controller) node in your cluster. You can do it on a secondary node too if you'd like, to increase redundancy. To do this, just follow these instructions on both nodes one at a time. I'll be doing this soon myself: I've just been distracted with other things :P
Next, we need a service file. For systemd users (I'm using Raspbian at the moment), I have an apt package:
sudo apt install fabio-systemd
With this installed, we need to create a (very) minimal configuration file. Here it is:
proxy.addr = :80;proto=http
proxy.auth = name=admin;type=basic;file=/etc/fabio/auth.admin.htpasswd
Pretty short, right? This does 2 things:
- Tells Fabio to listen on port 80 for HTTP requests (we'll be tackling HTTPS in a separate post - we need Vault for that)
- Tells Fabio about the admin auth realm and where it can find the .htpasswd file that corresponds with it
Fabio's password authentication uses HTTP Basic Auth, which is insecure over unencrypted HTTP. We'll be working towards improving the situation here, and I'll insert a reminder to change all your passwords when we get there - but there are quite a number of obstacles between here and there that we have to deal with first.
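To see why plain HTTP is a problem here: Basic Auth just base64-encodes your credentials into a request header, so anyone who can sniff the traffic can decode them instantly. A quick sketch of what's actually sent over the wire (the username and password here are placeholders):

```shell
# HTTP Basic Auth sends "username:password" base64-encoded in a header.
# base64 is an encoding, NOT encryption - it's trivially reversible.
creds="admin:s3cret"
header="Authorization: Basic $(printf '%s' "$creds" | base64)"
echo "$header"
# ...and anyone on the wire can decode it straight back:
printf '%s' "$creds" | base64 | base64 -d
```

This is why we'll be layering HTTPS on top later - the auth realm setup is still worth doing now though, so everything is in place when we get there.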
With this in mind, take a copy of the above Fabio config file and write it to /etc/fabio/fabio.properties. Next, we need to generate that htpasswd file we reference in the config file. There are many tools out there that can be used for this purpose - for example the htpasswd tool from the apache2-utils package:

sudo htpasswd -c /etc/fabio/auth.admin.htpasswd username
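If you don't fancy installing apache2-utils just for this, openssl can generate a compatible apr1-md5 entry too. A sketch - the username and password here are placeholders:

```shell
# Generate an htpasswd-style entry without installing apache2-utils.
# 'admin' / 's3cret' are example credentials - pick your own!
entry="admin:$(openssl passwd -apr1 's3cret')"
echo "$entry"
# append the printed line to /etc/fabio/auth.admin.htpasswd
```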
I like this authentication setup for Fabio, as it allows one to have a single easily configurable set of realms for different purposes if desired.
If you're setting up Fabio on multiple servers, you'll want to put your config file in your shared NFS storage and create a symlink at /etc/fabio/fabio.properties instead. Do that like this:

sudo ln -s /mnt/shared/config/fabio/fabio.properties /etc/fabio/fabio.properties

...adjusting the /mnt/... path accordingly. Don't forget to adjust the /etc/fabio/auth.admin.htpasswd path in fabio.properties as well.
Now that we've got the configuration file out of the way, we can start Fabio for the first time! Do that like this:
sudo systemctl start fabio.service
sudo systemctl enable fabio.service
Don't forget to punch a hole in the firewall:
sudo ufw allow 80/tcp comment fabio
Fabio is running - but it's not particularly useful, as we haven't configured any routes! Let's add some routes now. The first few routes we're going to add will be manual routes, which allow us to tell Fabio about a static route we want it to add to its routing table.
Fabio itself actually has a web interface, which will make a good first target for testing out our new cool toy. I mentioned earlier that Fabio gets its configuration from Consul - and it's now that we're going to take advantage of that. Consul isn't just a service discovery tool you see - it's a shared configuration manager too via a fancy hierarchical distributed key-value data store.
In this datastore, Fabio looks in particular at the keys in the fabio directory. Create a new key under here with the Consul CLI like so:
consul kv put "fabio/fabio" 'route add fabio fabio.bobsrockets.com/ http://NODE_NAME.node.mooncarrot.space:9998 tags "mission-control" opts "auth=admin"'
Don't forget to replace NODE_NAME with the name of the node you're running Fabio on, and the domain names with ones you've bought. Once done, update your DNS config to point fabio.bobsrockets.com at the node that's running Fabio (you might want to refer back to my earlier post on Unbound - don't forget to restart Unbound with sudo systemctl restart unbound).
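In case it's helpful, a record for Fabio in Unbound looks something like this - the domain and IP address here are placeholders, so adjust them to match your own setup:

```
# inside the server: block of your unbound config
server:
  local-data: "fabio.bobsrockets.com. IN A 10.1.2.3"
```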
When you have your DNS server updated, you should be able to point your browser at
fabio.bobsrockets.com. No reloading of Fabio is needed - it picks up changes dynamically and automagically! It should prompt you for your password, and then you should see the Fabio web interface. It should look something like this:
As you can see, I've got a number of services running - including a few that I'm going to be blogging about soon-ish, such as Vault (but I haven't yet learnt how to use it :P) and Docker Registry UI (which is useful but has some issues - I'm going to see if HTTPS helps fix some of them up as I'm getting some errors in the dev tools about the SubtleCrypto API, which is only available in secure contexts).
Those services with IP addresses as the destination are defined through Nomad, and auto-update based on the host upon which they are running.
In the web interface, you can click on overrides on the top bar to view and edit the configuration for the static routes you've got configured. You can't create new ones though, which is a shame.
Using the same technique as described above, you can create manual routes for Nomad and Consul - as they have web interfaces too! If you haven't already, you'll need to enable them by setting ui = true in the Nomad and Consul server configuration files respectively. For example, you could use these definitions:
route add nomad nomad.seanssatellites.io/ http://nomad.service.seanssatellites.io:4646 tags "mission-control" opts "auth=admin"
route add consul consul.billsboosters.space/ http://consul.service.billsboosters.space:8500 tags "mission-control" opts "auth=admin"
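To load these definitions into Fabio, put each one in its own key under the fabio/ prefix, just like before. These commands assume you've got a Consul agent reachable locally (the key names under fabio/ are arbitrary - I've just matched them to the services here):

```shell
consul kv put "fabio/nomad" 'route add nomad nomad.seanssatellites.io/ http://nomad.service.seanssatellites.io:4646 tags "mission-control" opts "auth=admin"'
consul kv put "fabio/consul" 'route add consul consul.billsboosters.space/ http://consul.service.billsboosters.space:8500 tags "mission-control" opts "auth=admin"'
# read one back to check it took:
consul kv get "fabio/nomad"
```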
If you do the Consul one first, you can use the web interface to create the definition for Nomad :D
It's perhaps worth making a quick note of some parts of the above route definitions:
- opts "auth=admin": This bit activates HTTP Basic Auth with the specified realm
- consul.billsboosters.space/: This is the domain through which outside users will access the service. The trailing slash is very important.
From here, the last item on the list for this post is automatic routes via Nomad jobs. Since the Docker Registry is the only job we've got running on Nomad so far, let's use that as an example. Adding a Fabio route in this manner requires 3 steps:
- Find the service stanza in your Docker Registry Nomad job file, and edit the tags list to include a pair of tags something like urlprefix-registry.tillystelescopes.fr/ and auth=admin (again, the trailing slash is important, and the urlprefix- bit instructs Fabio that it's the domain name to route traffic from to the container).
- Save the edits to the Nomad job file and re-run it with nomad job run path/to/file.nomad
- Update your DNS with a new record pointing registry.tillystelescopes.fr at the IP address(es) of the node(s) running Fabio
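For reference, the edited service stanza from step 1 might look something like this - the service name, port label, and domain here are placeholders from my hypothetical setup, so yours will differ:

```hcl
service {
  name = "docker-registry"
  port = "registry" # a port label defined elsewhere in the job file

  tags = [
    # tells Fabio which domain to route to this container - trailing slash matters!
    "urlprefix-registry.tillystelescopes.fr/",
    # put the route behind the admin auth realm we defined earlier
    "auth=admin"
  ]
}
```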
Also pretty simple to get used to, right? From here, step 4 of the official quickstart guide is useful. It explains the different service tags (like the auth=admin one we created above) that are supported. Apparently raw TCP forwarding is also supported - though personally I'm eagerly awaiting UDP forwarding for some services I'd like to run.
The rest of the Fabio docs are a bit of a mess, but I've found them more understandable than that of Traefik - the solution I investigated before turning to Fabio upon a recommendation from someone over in the r/selfhosted subreddit in frustration (whoever says "Traefik is simple!" is lying - I can't make sense of anything - it might as well be written in hieroglyphs.....).
Looking into the future, our path is diverging into 2 clear routes:
- Getting services up and running on our new cluster
- Securing said cluster to avoid attack
While relatively separate goals, they do intertwine at intervals. Moving forwards, we're going to be oscillating between these 2 goals. Likely topics include Vault (though it'll take several blog posts to realise any benefit from it at this point), and getting some Docker container infrastructure set up.
Speaking of Docker container infrastructure, if anyone has any ideas as to how to auto-rebuild docker containers and/or auto-restart Nomad jobs to keep them up-to-date, I'd love to know in a comment below. I'm currently scratching my head over that one....
Found this interesting? Got an idea that would improve on my setup? Confused about something? Comment below!