Cluster, Part 9: The Border Between | Load Balancing with Fabio

Hello again! It's been a while since the last one (mainly since I've been unsure about a few architectural things), but I'm now ready to continue writing about my setup. Before we continue, here's a refresher of everything we've done so far:

In this post, we're going to look at tying off our primary pipeline. So far, we've got job scheduling with Nomad, (superglue!) service discovery with Consul, and shared storage backed with NFS (although I'm going to revisit this eventually), with everything underpinned by a WireGuard mesh VPN with wesher.

In order to allow people to interact with services that are running on the cluster, we need something that will translate from the weird and strange world of services running anywhere and everywhere (and moving around in-between) into something that makes sense from an outside perspective. We want a single gateway through which we can control and manage access.

It is for these purposes that we're going to add Fabio to our stack. Its configuration is backed by Consul, and it is relatively simple and easy to understand. Having the config backed by Consul nets us multiple benefits:

  • It can run anywhere on the cluster we like in a pinch
  • We can configure new routes directly from a Nomad job spec file (although we still need to update the Unbound config)
  • The configuration of Fabio gains additional data redundancy by being stored on multiple nodes in the cluster

Like in previous parts of this series, Fabio isn't available to install with apt directly, so I've packaged it into my apt repository. If you haven't yet set up my apt repository, up-to-date instructions on how to do so can be found at the top of its main page - just click the aforementioned link (I'm not going to include instructions here, as they may go out of date at a later time).

Once you've set up my apt repository (or downloaded the Fabio binary manually, though I don't recommend that as it's more difficult to keep up-to-date), we can install Fabio like so:

sudo apt install fabio

This should be done on your primary (controller) node in your cluster. You can do it on a secondary node too if you'd like to increase redundancy. To do this, just follow these instructions on both nodes one at a time. I'll be doing this soon myself: I've just been distracted with other things :P

Next, we need a service file. For systemd users (I'm using Raspbian at the moment), I have an apt package:

sudo apt install fabio-systemd

With this installed, we need to create a (very) minimal configuration file. Here it is:

proxy.addr = :80;proto=http
proxy.auth = name=admin;type=basic;file=/etc/fabio/auth.admin.htpasswd

Pretty short, right? This does 2 things:

  1. Tells Fabio to listen on port 80 for HTTP requests (we'll be tackling HTTPS in a separate post - we need Vault for that)
  2. Tells Fabio about the admin auth realm and where it can find the .htpasswd file that corresponds with it

Fabio's password authentication uses HTTP Basic Auth - which is insecure over unencrypted HTTP. Note that we'll be working towards improving the situation here, and I'll insert a reminder to change all your passwords when we get there - but there are quite a number of obstacles between here and there that we have to deal with first.

With this in mind, take a copy of the above Fabio config file and write it to /etc/fabio/fabio.properties. Next, we need to generate that htpasswd file we reference in the config file. There are many tools out there that can be used for this purpose - for example the htpasswd tool in the apache2-utils package:

sudo htpasswd -c /etc/fabio/auth.admin.htpasswd username

(the -c flag creates the file if it doesn't already exist - leave it off when adding further users to the same file later)

I like this authentication setup for Fabio, as it allows one to have a single easily configurable set of realms for different purposes if desired.

If you're setting up Fabio on multiple servers, you'll want to put your config file in your shared NFS storage and create a symlink at /etc/fabio/fabio.properties instead. Do that like this:

sudo ln -s /mnt/shared/config/fabio/fabio.properties /etc/fabio/fabio.properties

....update the /mnt/... path accordingly. Don't forget to adjust the /etc/fabio/auth.admin.htpasswd path in fabio.properties as well.

Now that we've got the configuration file out of the way, we can start Fabio for the first time! Do that like this:

sudo systemctl start fabio.service
sudo systemctl enable fabio.service

Don't forget to punch a hole in the firewall:

sudo ufw allow 80/tcp comment fabio

Fabio is running - but it's not particularly useful, as we haven't configured any routes! Let's add some routes now. The first few routes we're going to add will be manual routes, which allow us to tell Fabio about a static route we want it to add to its routing table.

Fabio itself actually has a web interface, which will make a good first target for testing out our new cool toy. I mentioned earlier that Fabio gets its configuration from Consul - and it's now that we're going to take advantage of that. Consul isn't just a service discovery tool you see - it's a shared configuration manager too via a fancy hierarchical distributed key-value data store.

In this datastore Fabio looks in particular at the keys in the fabio directory. Create a new key under here with the Consul CLI like so:

consul kv put "fabio/fabio" 'route add fabio fabio.bobsrockets.com/ http://NODE_NAME.node.mooncarrot.space:9998 tags "mission-control" opts "auth=admin"'

Replace NODE_NAME with the name of the node you're running Fabio on, and bobsrockets.com with a domain name you've bought. Once done, update your DNS config to point fabio.bobsrockets.com to the node that's running Fabio (you might want to refer back to my earlier post on Unbound - don't forget to restart Unbound with sudo systemctl restart unbound).
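If you'd like to double-check that the route definition actually made it into Consul before testing, the Consul CLI can read the key straight back out:

consul kv get fabio/fabio

On the DNS side, with the Unbound setup from part 3 the new name is just another local-data line in /etc/unbound/unbound.conf. Assuming (purely as an example) that Fabio is running on controller.mooncarrot.space at 172.16.230.100, it might look like this:

local-data: "fabio.bobsrockets.com.  IN A 172.16.230.100"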

When you have your DNS server updated, you should be able to point your browser at fabio.bobsrockets.com. No reloading of Fabio is needed - it picks up changes dynamically and automagically! It should prompt you for your password, and then you should see the Fabio web interface. It should look something like this:

The Fabio web interface

As you can see, I've got a number of services running - including a few that I'm going to be blogging about soon-ish, such as Vault (but I haven't yet learnt how to use it :P) and Docker Registry UI (which is useful but has some issues - I'm going to see if HTTPS helps fix some of them up as I'm getting some errors in the dev tools about the SubtleCrypto API, which is only available in secure contexts).

Those services with IP addresses as the destination are defined through Nomad, and auto-update based on the host upon which they are running.

In the web interface you can click on overrides on the top bar to view and edit the configuration for the static routes you've got configured. You can't create new ones though, which is a shame.

Using the same technique as described above, you can create manual routes for Nomad and Consul - as they have web interfaces too! If you haven't already, you'll need to enable them with ui = true in the Nomad and Consul server configuration files respectively. For example, you could use these definitions:

route add nomad nomad.seanssatellites.io/ http://nomad.service.seanssatellites.io:4646 tags "mission-control" opts "auth=admin"
route add consul consul.billsboosters.space/ http://consul.service.billsboosters.space:8500 tags "mission-control" opts "auth=admin"
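Pushing these into Consul uses the same consul kv put trick as before - the key names under fabio/ are arbitrary, so something like this should do (as usual, swap the example domains for your own):

consul kv put "fabio/nomad" 'route add nomad nomad.seanssatellites.io/ http://nomad.service.seanssatellites.io:4646 tags "mission-control" opts "auth=admin"'
consul kv put "fabio/consul" 'route add consul consul.billsboosters.space/ http://consul.service.billsboosters.space:8500 tags "mission-control" opts "auth=admin"'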

If you do the Consul one first, you can use the web interface to create the definition for Nomad :D

It's perhaps worth making a quick note of some parts of the above route definitions:

  • opts "auth=admin": This bit activates HTTP Basic Auth with the specified realm
  • consul.billsboosters.space/: This is the domain through which outside users will access the service. The trailing slash is very important.

From here, the last item on the list for this post is automatic routes via Nomad jobs. Since the Docker registry is the only job we've got running on Nomad so far, let's use that as an example. Adding a Fabio route in this manner requires 3 steps:

  1. Find the service stanza in your Docker Registry Nomad job file, and edit the tags list to include a pair of tags - something like urlprefix-registry.tillystelescopes.fr/ and auth=admin (again, the trailing slash is important, and the urlprefix- bit tells Fabio which domain name to route traffic from to the container). See the sketch after this list for an example.
  2. Save the edits to the Nomad job file and re-run it with nomad job run path/to/file.nomad
  3. Update your DNS with a new record pointing registry.tillystelescopes.fr at the IP address(es) of the node(s) running Fabio
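For reference, here's a rough sketch of what the relevant bit of the Docker Registry job's service stanza might end up looking like after step 1 (the names and port label here are illustrative - keep whatever your existing job file already uses):

service {
  name = "registry"
  port = "registry"
  tags = [
    "urlprefix-registry.tillystelescopes.fr/",
    "auth=admin"
  ]
}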

Also pretty simple to get used to, right? From here, step 4 of the official quickstart guide is useful. It explains the different service tags (like the urlprefix- and auth=admin ones we created above) that are supported. Apparently raw TCP forwarding is also supported - though personally I'm waiting eagerly on UDP forwarding for some services I would like to run.

The rest of the Fabio docs are a bit of a mess, but I've found them more understandable than those of Traefik - the solution I investigated in frustration before turning to Fabio upon a recommendation from someone over in the r/selfhosted subreddit (whoever says "Traefik is simple!" is lying - I can't make sense of anything - it might as well be written in hieroglyphs.....).

Looking into the future, our path is diverging into 2 clear routes:

  • Getting services up and running on our new cluster
  • Securing said cluster to avoid attack

While relatively separate goals, they do intertwine at intervals. Moving forwards, we're going to be oscillating between these 2 goals. Likely topics include Vault (though it'll take several blog posts to realise any benefit from it at this point), and getting some Docker container infrastructure setup.

Speaking of Docker container infrastructure, if anyone has any ideas as to how to auto-rebuild docker containers and/or auto-restart Nomad jobs to keep them up-to-date, I'd love to know in a comment below. I'm currently scratching my head over that one....

Found this interesting? Got an idea that would improve on my setup? Confused about something? Comment below!

Cluster, Part 5: Staying current | Automating apt updates and using apt-cacher-ng

Hey there! Welcome to another cluster blog post. In the last post, we looked at setting up a WireGuard mesh VPN as a trusted private network for management interfaces and inter-cluster communication. As a refresher, here's a list of all the parts in this series so far:

Before we get to the Hashicorp stack though (next week, I promise!), there's an important housekeeping matter we need to attend to: managing updates.

In Debian-based Linux distributions such as Raspbian (that I'm using on my Raspberry Pis), updates are installed through apt - and this covers everything from the kernel to the programs we have installed - hence the desire to package everything we're going to be using to enable easy installation and updating.

There are a number of different related command-line tools, but the ones we're interested in are apt (the easy-to-use front-end CLI) and apt-get (the original tool for installing updates).

There are 2 key issues we need to take care of:

  1. Automating the installation of package updates
  2. Caching the latest packages locally to avoid needless refetching

Caching package updates locally with apt-cacher-ng

Issue #2 here is slightly easier to tackle, surprisingly enough, so we'll do that first. We want to cache the latest packages locally, because if we have lots of machines in our cluster (I have 5, all running Raspbian), then when they update they all have to download the package lists and the packages themselves from the remote sources each time. Not only is this bandwidth-inefficient, but it takes longer and puts more strain on the remote servers too.

For apt, this can be achieved through the use of apt-cacher-ng. Other distributions require different tools - in particular I'll be researching and tackling Alpine Linux's apk package manager in a future post in this series - since I intend to use Alpine Linux as my primary base image for my Docker containers (I also intend to build my own Docker containers from scratch too, so that will be an interesting experience that will likely result in a few posts too).

Anyway, installation is pretty simple:

sudo apt install apt-cacher-ng

Once done, there's a little bit of tuning we need to attend to. apt-cacher-ng by default listens for HTTP requests on TCP port 3142 and has an administrative interface at /acng-report.html. This admin interface is not, by default, secured - so this is something we should do before opening a hole in the firewall.

This can be done by editing the /etc/apt-cacher-ng/security.conf configuration file. It should read something like this:


# This file contains confidential data and should be protected with file
# permissions from being read by untrusted users.
#
# NOTE: permissions are fixated with dpkg-statoverride on Debian systems.
# Read its manual page for details.

# Basic authentication with username and password, required to
# visit pages with administrative functionality. Format: username:password

AdminAuth: username:password

....you may need to use sudo to view and edit it. Replace username and password with your own username and a long unguessable password that's not similar to any existing passwords you have (especially since it's stored in plain text!).

Then we can (re)start apt-cacher-ng:

sudo systemctl enable apt-cacher-ng
sudo systemctl restart apt-cacher-ng

The last thing we need to do here is to punch a hole through the firewall, if required. As I explained in the previous post, I'm using a WireGuard mesh VPN and I'm allowing all traffic on that interface (for reasons that will - eventually - become clear), so I don't need to open a separate hole in my firewall unless I want other devices on my network to use it too (which wouldn't be a bad idea, all things considered).

Anyway, ufw can be configured like so:

sudo ufw allow 3142/tcp comment apt-cacher-ng

With the apt-cacher server installed and configured, you can now get apt to use it:

echo 'Acquire::http { Proxy "http://X.Y.Z.W:3142"; }' | sudo tee -a /etc/apt/apt.conf.d/proxy

....replacing X.Y.Z.W with the IP address (or hostname!) of your apt-cacher-ng server. Note that it will get upset if you use https anywhere in your apt sources, so you'll have to inspect /etc/apt/sources.list and all the files in /etc/apt/sources.list.d/ manually and update them.
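As a quick sanity check (entirely optional), you can ask apt which proxy settings it has actually picked up, and then watch the cache do its thing:

# Show the proxy configuration apt has loaded
apt-config dump | grep -i proxy
# Pull the package lists through the cache
sudo apt update

....after which hit/miss statistics should start appearing on the apt-cacher-ng report page at http://X.Y.Z.W:3142/acng-report.html.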

Automatic updates with unattended-upgrades

Next on the list is installing updates automatically. This is useful because we don't want to have to manually install updates every day on every node in the cluster. There are positives and negatives to installing updates automatically - I recommend giving the top of this article a read.

First, we need to install unattended-upgrades:

sudo apt install unattended-upgrades

Then, we need to edit the /etc/apt/apt.conf.d/50unattended-upgrades file - don't forget to sudo.

Unfortunately, I haven't yet automated this process (or properly developed a replacement configuration file that can be automatically placed on a target system by a script), so for now we'll have to do this manually (the mssh command might come in handy).

First, find the line that starts with Unattended-Upgrade::Origins-Pattern, and uncomment the lines that end in -updates, label=Debian, label=Debian-Security. For Raspberry Pi users, add the following lines in that bit too:

"origin=Raspbian,codename=${distro_codename},label=Raspbian";

// Additionally, for those running Raspbian on a Raspberry Pi,
// match packages from the Raspberry Pi Foundation as well.
"origin=Raspberry Pi Foundation,codename=${distro_codename},label=Raspberry Pi Foundation";

unattended-upgrades will only install packages that are matched by a list of origins. Unfortunately, the way that you specify which updates to install is a total mess, and it's not obvious how to configure it. I did find an Ask Ubuntu answer that explains how to get unattended-upgrades to install updates. If anyone knows of a cleaner way of doing this - I'd love to know.

The other decision to make here is whether you'd like your hosts to automatically reboot. This could be disruptive, so only enable it if you're sure that it won't interrupt any long-running tasks.

To enable it, find the line that starts with Unattended-Upgrade::Automatic-Reboot and set it to true (uncommenting it if necessary). Then find the Unattended-Upgrade::Automatic-Reboot-Time setting and set it to a time of day you're ok with it rebooting at - e.g. 03:00 for 3am - but take care to avoid all your servers rebooting at the same time, as this might cause issues later.
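For reference, once uncommented and edited, those 2 settings should end up looking something like this (the time here is just an example):

Unattended-Upgrade::Automatic-Reboot "true";
Unattended-Upgrade::Automatic-Reboot-Time "03:00";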

A few other settings need to be updated too. Here they are, with their correct values:

APT::Periodic::Update-Package-Lists "1";
APT::Periodic::Download-Upgradeable-Packages "1";
APT::Periodic::Unattended-Upgrade "1";
APT::Periodic::AutocleanInterval "7";

Make sure you find the existing settings and update them, because otherwise if you just paste these in, they may get overridden. In short these settings:

  • Enable automatic updates to the package metadata indexes
  • Download upgradeable packages
  • Install downloaded updates
  • Automatically clean the cache up every 7 days

Once done, save and close that file. Finally, we need to enable and start the unattended-upgrades service:

sudo systemctl enable unattended-upgrades
sudo systemctl restart unattended-upgrades
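If you'd like to check that your origin patterns actually match something without waiting a day to find out, unattended-upgrades has a dry-run mode that prints what it would do without installing anything (note the missing s - the package is unattended-upgrades, but the binary is unattended-upgrade):

sudo unattended-upgrade --dry-run --debug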

To learn more about automatic upgrades, these guides might help shed some additional light on the subject:

Conclusion

In this post, we've taken a look at apt package caching and unattended-upgrades. In the future, I'm probably going to have to sit down and either find an alternative to unattended-upgrades that's easier to configure, or rewrite the entire configuration file and create my own version. Comments and suggestions on this are welcome in the comments.

In the next post, we'll be finally getting to the Hashicorp stack by installing and configuring Consul. Hold on to your hats - next week's post is significantly complicated.

Edit 2020-05-09: Add missing instructions on how to get apt to actually use an apt-cacher-ng server

Cluster, Part 4: Weaving Wormholes | Peer-to-Peer VPN with WireGuard

(Above: The WireGuard and wesher logos. Background photo: from Unsplash by Clint Adair.)

Hey - welcome back! Last week, we set Unbound up as our primary DNS server for our network. We also configured cluster member devices to use it for DNS resolution. As a recap, here are links to all the posts in this series so far:

In this part, we're going to set up a WireGuard peer-to-peer VPN. This is a good idea for several reasons:

  • It provides defence-in-depth
  • It protects non-encrypted / unprotected private services from the rest of the network

The latter point here is particularly important - especially if you've got other devices on your network like I have. If you're somehow following along with this series with devices fancy enough to have multiple network interfaces, you can connect the 2nd network interface of every server to a separate switch that doesn't connect to anywhere else. Don't forget that you'll need to setup a DHCP server on this new mini-network (or configure static IPs manually on each device, which I don't recommend) - but this is out-of-scope of this article.

In the absence of such an opportunity, a peer-to-peer VPN should do the trick just as well. We're going to be using WireGuard, which I discovered recently. It's very cool indeed - and it's apparently special enough to be merged directly into the Linux kernel as of version 5.6! It's been given high praise from security experts too.

What I like most is its simplicity. It follows the UNIX philosophy, and as such, while it's very simple in the way it works, it can be used and applied in so many different ways to solve so many different problems.

With this in mind, let's get to installing it! If you're on an Ubuntu or Debian-based machine, then you should just be able to install it directly:

sudo apt install wireguard

Note that you might have to have your kernel's development headers installed if you experience issues. For Raspbian users (like me), installation is slightly more complicated. We'll need to setup the debian-backports apt repository to pull it in, since the Debian developers have backported it to the latest stable version of Debian (e.g. like a hotfix) - but Raspbian hasn't yet pulled it in. This is done in 2 steps:

# Add the debian-backports GPG key
sudo apt-key adv --keyserver keyserver.ubuntu.com --recv-keys 04EE7237B7D453EC 648ACFD622F3D138
# Add the debian-backports apt repo
echo 'deb http://httpredir.debian.org/debian buster-backports main contrib non-free' | sudo tee /etc/apt/sources.list.d/debian-backports.list

Now, we should be able to update to download the new repository metadata:

sudo apt update

Next, we need to install the Raspberry Pi Linux kernel headers. Unlike other distributions (which use the linux-headers-generic package), this is done in a slightly different way:

sudo apt install raspberrypi-kernel-headers

This might take a while. Once done, we can now install WireGuard itself:

sudo apt install wireguard

Again, this will take a while. Don't forget to pay close attention to the output - I've noticed that it's fond of throwing error messages, but not actually counting it as an error and ultimately claiming it completed successfully.

With this installed, we can now setup our WireGuard peer-to-peer VPN. WireGuard itself works on a public-private keypair per-device setup. A device first generates a keypair, and then the public key thereof needs copying to all other devices it wants to connect to. In this fashion, both a peer-to-peer setup (like we're after), and a client-server setup (like a more traditional VPN such as IPSec or OpenVPN) can be configured.

The overhead of configuring such a peer-to-peer WireGuard VPN is considerable though, since every time a device is added to the VPN every existing device needs updating to add the public key thereof to establish communications between the 2.
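For the curious, this is the per-device keypair generation that the tool we're about to meet automates away. Done by hand with the stock WireGuard tooling, it looks like this:

# Generate a private key, and derive the matching public key from it
wg genkey | tee privatekey | wg pubkey > publickey

....the contents of publickey being what you'd then have to copy to every other peer manually.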

While researching an easier solution to the problem, I came across wesher, which does much of the heavy-lifting for you. It does of course come at the cost of slightly reduced security (since the entire VPN network is protected by a single pre-shared key) and reduced configurability, but from my experiences so far it works like a charm for my purposes - and it eases management too.

It is distributed as a single Go binary, and uses the Raft Consensus Algorithm (the same library that Nomad and Consul use, actually) to manage cluster membership and provide self-healing properties. This makes it very easy for me to package into an apt package, which I've added to my apt repository.

The instructions to add my apt repository can be found on its page, but here they are directly:

# Add the repository
echo "deb https://apt.starbeamrainbowlabs.com/ ./ # apt.starbeamrainbowlabs.com" | sudo tee /etc/apt/sources.list.d/sbrl.list
# Import the signing key
wget -q https://apt.starbeamrainbowlabs.com/aptosaurus.asc -O- | sudo apt-key add -
# Alternatively, import the signing key from a keyserver:
sudo apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 --recv-keys D48D801C6A66A5D8
# Update apt's cache
sudo apt update

Don't forget that if you're using a caching server like apt-cacher-ng (which we'll be setting up in the next post in this series), you'll probably want to change the https in the first command there to regular http in order for the caching to take effect. Note that this doesn't affect the security of the downloaded packages, since apt will verify the integrity and GPG signature of all packages downloaded. It only affects the privacy of the connection used to download the packages themselves (more on this in a future post).

Once setup, install wesher like so:

sudo apt install wesher

If you've got a systemd-based system (as I have sadly with Raspbian), I provide a systemd service file in a separate package:

sudo apt install wesher-systemd

Don't forget to perform these steps on all the machines you want to enter the cluster. Next, we need to configure our firewall. I'm using UFW - I hope you set up something similar when you first configured the servers you're clustering (I recommend UFW. Note also that this tutorial series will not cover basics like this - instead, I'll link to other tutorials and such as I think of them). For UFW users, do this:

sudo ufw allow 7946 comment wesher-gossip
sudo ufw allow 51820/udp comment wesher-wireguard

Wesher requires 2 ports: 1 for the clustering traffic for managing cluster membership, and another for the WireGuard traffic itself. Now, we can initialise the cluster. This has to be done on an interactive terminal for the first time, to avoid leaking the cluster's pre-shared key to log files. Do it like this in a terminal:

sudo wesher

It should tell you that it's generated a new cluster key - it will save it in a private configuration directory. Save this somewhere safe, such as in your password manager. Now, you can press Ctrl + C to quit, and start the systemd service:

sudo systemctl enable --now wesher.service

It's perhaps good practice to check that the service has started successfully:

sudo systemctl status wesher.service

Having 1 node setup is nice, but not very useful. Adding additional nodes to the cluster is a bit different. Follow this tutorial up to and including the installation of the wesher and wesher-systemd packages, and then instead of just doing sudo wesher, do this:

sudo wesher --cluster-key CLUSTER_KEY_HERE --join IP_OF_ANOTHER_NODE --overlay-net 172.31.250.0/16 --log-level info

...replacing CLUSTER_KEY_HERE with your cluster key (don't forget to prefix the entire command with a space to prevent it from entering your shell history file), and IP_OF_ANOTHER_NODE with the IP address of another node in the cluster on the external untrusted network. Note that the --overlay-net there is required because of the way I wrote the systemd service file in the wesher-systemd package. I did this:

[Unit]
Description=wesher - wireguard mesh builder
After=network-online.target
After=syslog.target
After=rsyslog.service

[Service]
EnvironmentFile=-/etc/default/wesher
ExecStart=/usr/local/sbin/wesher --overlay-net 172.31.250.0/16 --log-level info
Restart=on-failure
Type=simple
StandardOutput=syslog
StandardError=syslog
SyslogIdentifier=wesher

[Install]
WantedBy = multi-user.target

I explicitly specify the subnet of the VPN network here to avoid clashes with other networks in the 10.0.0.0/8 range. I don't have a network in that range, but I know that others do. Unfortunately there's a known bug that means that IP address collisions may occur, and the cluster can't currently sort them out. So you need to use a pretty large subnet for now to avoid such collisions (here's hoping this one is patched soon).

Note that if you want to set additional configuration options, you can do so in /etc/default/wesher in the format VAR_NAME=VALUE - 1 per line. A full reference of the supported environment variables can be found here.

Anyway, once wesher has joined the cluster on the new node, press Ctrl + C to exit and then start and enable the systemd service as before (note that wesher saves all the configuration details to a configuration directory, so the cluster key doesn't need to be provided more than once):

sudo systemctl enable --now wesher.service
sudo systemctl status wesher.service

With this, you should have a fully-functional WireGuard peer-to-peer VPN setup with wesher. You can ask a node what IP address it has on the VPN by using the following command:

ip address

The IP address should be shown next to the wgoverlay network interface.
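To check that the mesh is actually carrying traffic, try pinging another node's overlay IP address (replace the example address below with whatever the other node shows against its own wgoverlay interface):

ping -c 3 172.31.250.2

If you get replies back, the VPN is working.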

The last thing to do here is to configure our firewall. In my case, I'm using UFW, so the instructions I include here will be specific to UFW (if you use another firewall, translate these commands for your firewall, and comment below!).

In my specific case, I want to take the unusual step of allowing all traffic in on the VPN. The reason for this will become apparent in a future post, but in short Nomad dynamically allocates outward-facing ports for services. There's a good reason for this I'll get into at the time, but for now, this is how you'd do that:

sudo ufw allow in on wgoverlay

Of course, we can add override rules here that block traffic if we need to. Note that this only allows in all traffic on the wgoverlay network interface and not all network interfaces as my previous blog about UFW would have done - i.e. like this:

sudo ufw allow 1234/tcp

In the next part, we'll take a look at setting up an apt caching server to improve performance and reduce Internet data usage when downloading system updates.

Found this useful? Spotted a mistake? Confused about something? Comment below! It really helps motivate me to write these posts.

Cluster, Part 3: Laying groundwork with Unbound as a DNS server

Looking up at the canopy of some trees

(Above: A picture from my wallpapers folder. If you took this, comment below and I'll credit you)

Welcome to another blog post about my cluster! Although I had to replace the ATX PSU I was planning on using to power the thing with a USB power supply instead, I've got all the Pis powered up and networked together now - so it's time to finally start on the really interesting bit!

In this post, I'm going to show you how to install Unbound as a DNS server. This will underpin the entire stack we're going to be setting up - and over the next few posts in this series, I'll take you through the entire process.

Starting with DNS is a logical choice, which comes with several benefits:

  • We get a minor performance boost by caching DNS queries locally
  • We get to configure our DNS server to make encrypted DNS-over-TLS queries on behalf of the entire network (even if some of the connected devices - e.g. phones don't support doing so)
  • If we later change our minds and want to shuffle around the IP addresses, it's not such a big deal as if we used IP addresses in all our config files

With this in mind, I'm starting with DNS before moving on to Docker and the Hashicorp stack:

A picture of the homepages of the Hashicorp stack

Before we begin, let's set out our goals:

  • We want a caching resolver, to avoid repeated requests across the Internet for the same query
  • We want to encrypt queries that leave the network via DNS-over-TLS
  • We want to be able to add our own custom DNS records for a domain of our choosing, for internal resolution only.

The last point there is particularly important. We want to resolve something like server1.bobsrockets.com to 172.16.230.100 internally, but not externally outside the network. This way we can include server1.bobsrockets.com in config files, and if the IP changes then we don't have to go back and edit all our config files - just reload or restart the relevant services.

Without further delay, let's start by installing unbound:

sudo apt install unbound

If you're on another system, translate this for your package manager. See this amazing wiki page for help on translating package manager commands :-)

Next, we need to write the config file. The default config file for me is located at /etc/unbound/unbound.conf:

# Unbound configuration file for Debian.
#
# See the unbound.conf(5) man page.
#
# See /usr/share/doc/unbound/examples/unbound.conf for a commented
# reference config file.
#
# The following line includes additional configuration files from the
# /etc/unbound/unbound.conf.d directory.
include: "/etc/unbound/unbound.conf.d/*.conf"

There's not a lot in the /etc/unbound/unbound.conf.d/ directory, so I'm going to be extending /etc/unbound/unbound.conf. First, we're going to define a section to get Unbound to forward requests via DNS-over-TLS:

forward-zone:
    name: "."
    # Cloudflare DNS
    forward-addr: 1.0.0.1@853
    # DNSlify - ref https://www.dnslify.com/services/resolver/
    forward-addr: 185.235.81.1@853
    forward-ssl-upstream: yes

The . name there simply means "everything". If you haven't seen it before, the fully-qualified domain name for seanssatellites.io for example is as follows:

seanssatellites.io.

Notice the extra trailing dot . there. That's really important, as it signifies the DNS root (not sure on its technical name - comment if you know it, and I'll update this). The io bit is the top-level domain (commonly abbreviated as TLD). seanssatellites is the actual domain bit that you buy.

It's a hierarchical structure, and apparently used to be inverted here in the UK before the formal standard was defined by the IETF (Internet Engineering Task Force) - of which RFC 1034 was a part.

Anyway, now that we've told Unbound to forward queries, the next order of business is to define a bunch of server settings to get it to behave the way we want it to:

server:
    interface: 0.0.0.0
    interface: ::0

    ip-freebind: yes

    # Access control - default is to deny everything apparently

    # The local network
    access-control: 172.16.230.0/24 allow
    # The docker interface
    access-control: 172.17.0.1/16 allow

    username: "unbound"

    harden-algo-downgrade: yes
    unwanted-reply-threshold: 10000000


    prefetch: yes

There's a lot going on here, so let's break it down.

  • interface: Tells unbound what interfaces to listen on. In this case I tell it to listen on all interfaces on both IPv4 and IPv6.
  • ip-freebind: Tells it to try and listen on interfaces that aren't yet up. You probably don't need this, so you can remove it. I'm including it here because I'm currently unsure whether unbound will start before docker, which I'm going to be using extensively. In the future I'll probably test this and remove this directive.
  • access-control: unbound has an access control system, which functions rather like a firewall from what I can tell. I haven't had the time yet to experiment (you'll be seeing that a lot), but once I've got my core cluster up and running I intend to experiment and play with it extensively, so expect more in the future from this.
  • username: The name of the user on the system that unbound should run as.
  • harden-algo-downgrade: Protect against downgrade attacks when making encrypted connections. For some reason the default is to set this to no, so I enable it here.
  • unwanted-reply-threshold: Another security-hardening directive. If this many DNS replies are received that unbound didn't ask for, then take protective actions such as emptying the cache, just in case of a DNS cache poisoning attack.
  • prefetch: Causes unbound to prefetch updated DNS records for cache entries that are about to expire. Should improve performance slightly.

If you have a flaky Internet connection, you can also get Unbound to return stale DNS cache entries if it can't reach the remote DNS server. Do that like this:

server:
    # Service expired cached responses, but only after a failed 
    # attempt to fetch from upstream, and 10 seconds after 
    # expiration. Retry every 10s to see if we can get a
    # response from upstream.
    serve-expired: yes
    serve-expired-ttl: 10
    serve-expired-ttl-reset: yes

With this, we should have a fully-functional DNS server. Enable it to start on boot and (re)start it now:

sudo systemctl enable unbound.service
sudo systemctl restart unbound.service

If it's not already started, the restart action will start it.
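It's worth giving the new DNS server a quick test from another machine on the network at this point. The dig utility (found in the dnsutils package on Debian-based distros) makes this easy:

dig @172.16.230.100 example.com

....replacing 172.16.230.100 with the IP address of the machine you've just installed Unbound on. You should get a NOERROR response back with an answer section - if you get a timeout or REFUSED instead, double-check the access-control directives above.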

Internal DNS records

If you're like me and want some custom DNS records, then continue reading. Unbound has a pretty nifty way of declaring custom internal DNS records. Let's enable that now. First, you'll need a domain name that you want to return custom internal DNS records for. I recommend buying one - don't use an unregistered one, just in case someone else comes along and registers it.

Gandi are a pretty cool registrar - I can recommend them. Cloudflare are also cool, but they don't allow you to register several years at once yet - so they are probably best set as the name servers for your domain (that's a free service), leaving your domain name with a registrar like Gandi.

To return custom DNS records for a domain name, we need to tell unbound that it may contain private DNS records. Let's do that now:

server:
    private-domain: "mooncarrot.space"

This of course goes in /etc/unbound/unbound.conf, as before. See the bottom of this post for the completed configuration file.

Next, we need to define some DNS records:

server:
    local-zone: "mooncarrot.space." transparent
    local-data: "controller.mooncarrot.space.   IN A 172.16.230.100"
    local-data: "piano.mooncarrot.space.        IN A 172.16.230.101"
    local-data: "harpsichord.mooncarrot.space.  IN A 172.16.230.102"
    local-data: "saxophone.mooncarrot.space.    IN A 172.16.230.103"
    local-data: "bag.mooncarrot.space.          IN A 172.16.230.104"

    local-data-ptr: "172.16.230.100 controller.mooncarrot.space."
    local-data-ptr: "172.16.230.101 piano.mooncarrot.space."
    local-data-ptr: "172.16.230.102 harpsichord.mooncarrot.space."
    local-data-ptr: "172.16.230.103 saxophone.mooncarrot.space."
    local-data-ptr: "172.16.230.104 bag.mooncarrot.space."

The local-zone directive tells it that we're defining custom DNS records for the given domain name. The transparent bit tells it that if it can't resolve using the custom records, to forward it out to the Internet instead. Other interesting values include:

  • deny: Serve local data (if any), otherwise drop the query.
  • refuse: Serve local data (if any), otherwise reply with an error.
  • static: Serve local data, otherwise reply with an nxdomain or nodata answer (similar to the responses you'd expect from a DNS server that's authoritative for the domain).
  • transparent: Respond with local data, but resolve other queries normally if the answer isn't found locally.
  • redirect: Serves the zone data for any subdomain in the zone.
  • inform: The same as transparent, but logs client IP addresses.
  • inform_deny: Drops queries and logs client IP addresses.

Adapted from /usr/share/doc/unbound/examples/unbound.conf, the example Unbound configuration file.

Others exist too if you need even more control, like always_refuse (which always responds with an error message).

The local-data directives define the custom DNS records we want Unbound to return, in DNS records syntax (again, if there's an official name for the syntax leave a comment below). The local-data-ptr directive is a shortcut for defining PTR, or reverse DNS records - which resolve IP addresses to their respective domain names (useful for various things, but also commonly used as a step to verify email servers - comment below and I'll blog on lots of other shady and not so shady techniques used here).
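Once you've restarted Unbound again with sudo systemctl restart unbound.service, you can check that both the forward and reverse records are being served as expected with dig (the -x flag performs a reverse lookup):

# Forward lookup: name to IP
dig @localhost controller.mooncarrot.space
# Reverse lookup: IP to name
dig @localhost -x 172.16.230.100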

With that, our configuration file is complete. Here's the full configuration file in its entirety:

# Unbound configuration file for Debian.
#
# See the unbound.conf(5) man page.
#
# See /usr/share/doc/unbound/examples/unbound.conf for a commented
# reference config file.
#
# The following line includes additional configuration files from the
# /etc/unbound/unbound.conf.d directory.
include: "/etc/unbound/unbound.conf.d/*.conf"

server:
    interface: 0.0.0.0
    interface: ::0

    ip-freebind: yes

    # Access control - default is to deny everything apparently

    # The local network
    access-control: 172.16.230.0/24 allow
    # The docker interface
    access-control: 172.17.0.1/16 allow

    username: "unbound"

    harden-algo-downgrade: yes
    unwanted-reply-threshold: 10000000

    private-domain: "mooncarrot.space"

    prefetch: yes

    # ?????? https://www.internic.net/domain/named.cache

    # Service expired cached responses, but only after a failed 
    # attempt to fetch from upstream, and 10 seconds after 
    # expiration. Retry every 10s to see if we can get a
    # response from upstream.
    serve-expired: yes
    serve-expired-ttl: 10
    serve-expired-ttl-reset: yes

    local-zone: "mooncarrot.space." transparent
    local-data: "controller.mooncarrot.space.   IN A 172.16.230.100"
    local-data: "piano.mooncarrot.space.        IN A 172.16.230.101"
    local-data: "harpsichord.mooncarrot.space.  IN A 172.16.230.102"
    local-data: "saxophone.mooncarrot.space.    IN A 172.16.230.103"
    local-data: "bag.mooncarrot.space.          IN A 172.16.230.104"

    local-data-ptr: "172.16.230.100 controller.mooncarrot.space."
    local-data-ptr: "172.16.230.101 piano.mooncarrot.space."
    local-data-ptr: "172.16.230.102 harpsichord.mooncarrot.space."
    local-data-ptr: "172.16.230.103 saxophone.mooncarrot.space."
    local-data-ptr: "172.16.230.104 bag.mooncarrot.space."

    fast-server-permil: 500

forward-zone:
    name: "."
    # Cloudflare DNS
    forward-addr: 1.0.0.1@853
    # DNSlify - ref https://www.dnslify.com/services/resolver/
    forward-addr: 185.235.81.1@853
    forward-ssl-upstream: yes

Where do we go from here?

Remember that it's important to not just copy and paste a configuration file, but to understand what every single line of it does.

In a future post in this series, we'll be revising this to forward requests for *.service.mooncarrot.space to Consul, a clustered service discovery system that keeps track of what is running where and presents a DNS server as an interface (there are others).

In the next post, we'll (probably) be looking at setting up Consul - unless something else happens first :P Nomad should be next after that, followed closely by Vault.

Once I've got all that set up (which is a blog post series in and of itself!), I'll then look at encrypting all communications between all nodes in the cluster. After that, we'll (finally?) get to Docker and my plans there. Other things include apt and apk (the Alpine Linux package manager) caching servers - which will have to be tackled separately.

Oh yeah, and I want to explore Traefik, which is apparently like Nginx, but intended for containerised infrastructure.

All this is definitely going to keep me very busy!

Found this interesting? Got a suggestion? Comment below!

Sources and Further Reading

Demystificating VPNs

After seeing yet another article that misunderstands and misrepresents VPNs, I just had to make a post about it. This post actually started life as a reddit comment, but I decided to expand on it and make it a full post here on my blog.

VPNs are a technology that simply sends your traffic through an encrypted tunnel that pops out somewhere else.

For the curious, they do this by creating what's known as a virtual tunnel interface on your computer (on Linux-based machines this is often tun0) and alter your machine's routing table to funnel all your network traffic destined for the Internet into the tunnel interface.

The tunnel interface actually encrypts your data and streams it to a VPN server (though there are exceptions) that then forwards it on to the wider Internet for you.

This is great if:

  • You live in a country that censors your Internet connection
  • You don't trust your ISP
  • You are connected to a public open WiFi hotspot
  • You need to access resources on a remote network that only allow those physically present to use them

By using a VPN, you can make your device appear as though it is somewhere else. You can also hide your Internet traffic from the rest of your network that you are connected to.

However, VPNs are not a magic bullet. They are not so great at:

  • Blocking trackers
  • Blocking Ads
  • Blocking mining scripts that suck up your CPU
  • Limiting the amount of your data online services get a hold of

This is because it simply makes you appear as though you are somewhere else - it doesn't block or alter any of the traffic coming and going from your device - online services can still see all your personal data. All that's changed when you use a VPN is that your data is going via a waypoint on its journey to its final destination.

All hope is not lost, however - for there are steps you can take to deal with these issues. Try these steps instead:

You may already be aware of these points - but in particular multi-account containers are quite interesting. By using an extension like the one I link to above, you can in effect have multiple browsers open at the same time. By this, I mean that you can have multiple 'sandboxes' - and the site data (e.g. cookies etc) will not cross over from 1 sandbox to another.

This gives websites the illusion of being loaded in multiple different environments - with limited options to figure out that they are in fact on the same machine - especially when combined with other measures.

Hopefully this clears up some of the confusion. If you know anyone else who's confused about VPNs, please share a link to this post with them! The fewer people who get the wrong idea about VPNs, the better.

Found this interesting? Have another privacy related question? Found an error in this post? Comment below!

Setting up a Mosquitto MQTT server

I recently found myself setting up a mosquitto instance (yep, for this) due to a migration we're in the middle of doing and it got quite interesting, so I thought I'd post about it here. This post is also partly documentation of what I did and why, just in case future people come across it and wonder how it's setup, though I have tried to make it fairly self-documenting.

At first, I started by doing sudo apt install mosquitto and seeing if it would work. I can't remember if it did or not, but it certainly didn't after I played around with the configuration files. To this end, I decided that enough was enough and I turned the entire configuration upside-down. First up, I needed to disable the existing sysV init-based service that ships with the mosquitto package:

sudo systemctl stop mosquitto # Just in case
sudo systemctl disable mosquitto

Next, I wrote a new systemd service file:

[Unit]

Description=Mosquitto MQTT Broker
After=syslog.target rsyslog.target network.target

[Service]
Type=simple
PIDFile=/var/run/mosquitto/mosquitto.pid
User=mosquitto

PermissionsStartOnly=true
ExecStartPre=-/bin/mkdir /run/mosquitto
ExecStartPre=/bin/chown -R mosquitto:mosquitto /run/mosquitto

ExecStart=/usr/sbin/mosquitto --config-file /etc/mosquitto/mosquitto.conf
ExecReload=/bin/kill -s HUP $MAINPID

StandardOutput=syslog
StandardError=syslog
SyslogIdentifier=mosquitto


[Install]
WantedBy=multi-user.target

This is broadly similar to the service file I developed in my earlier tutorial post, but it's slightly more complicated.

For one, I use PermissionsStartOnly=true and a series of ExecStartPre directives to allow mosquitto to create a PID file in a directory in /run. /run is a special directory on Linux for PID files and other such things, but normally only root can modify it. mosquitto will be running under the mosquitto user (surprise surprise), so we need to create a subdirectory for it and chown it so that it has write permissions.

A PID file is just a regular file on disk that contains the PID (Process IDentifier) number of the primary process of a system service. System service managers such as systemd and OpenRC use this number to manage the health of the service while it's running and send it various signals (such as to ask it to reload its configuration file).

With this in place, I then added an rsyslog definition at /etc/rsyslog.d/mosquitto.conf to tell it where to put the log files:

if $programname == 'mosquitto' then /var/log/mosquitto/mosquitto.log
if $programname == 'mosquitto' then stop

Thinking about it, I should probably check that a log rotation definition file is also in place.
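In case it's useful, a minimal sketch of what such a definition at /etc/logrotate.d/mosquitto might look like is below - I haven't tested this particular one, so adjust the rotation schedule to taste (the postrotate helper here is the same one Debian's stock rsyslog logrotate definition uses to get rsyslog to reopen its log files):

/var/log/mosquitto/mosquitto.log {
    weekly
    rotate 4
    compress
    delaycompress
    missingok
    notifempty
    postrotate
        /usr/lib/rsyslog/rsyslog-rotate
    endscript
}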

Just in case, I then chowned the pre-existing log files to ensure that rsyslog could read & write to them:

sudo chown -R syslog: /var/log/mosquitto

Then, I filled out /etc/mosquitto/mosquitto.conf with a few extra directives and restarted the service. Here's the full configuration file:

# Place your local configuration in /etc/mosquitto/conf.d/
#
# A full description of the configuration file is at
# /usr/share/doc/mosquitto/examples/mosquitto.conf.example

# NOTE: We can't use tab characters here, as mosquitto doesn't like it.

pid_file /run/mosquitto/mosquitto.pid

# Persistence configuration
persistence true
persistence_location /var/lib/mosquitto/


# Not a file today, thanks
# Log files will actually end up at /var/log/mosquitto/mosquitto.log, but will go via syslog
# See /etc/rsyslog.d/mosquitto.conf
#log_dest file /var/log/mosquitto/mosquitto.log
log_dest syslog


include_dir /etc/mosquitto/conf.d


# Documentation: https://mosquitto.org/man/mosquitto-conf-5.html

# Require a username / password to connect
allow_anonymous false
# ....which are stored in the following file
password_file /etc/mosquitto/mosquitto_users

# Make a log entry when a client connects & disconnects, to aid debugging
connection_messages true

# TLS configuration
# Disabled at the moment, since we don't yet have a letsencrypt cert
# NOTE: I don't think that the sensors currently connect over TLS. We should probably fix this.
# TODO: Point these at letsencrypt
#cafile /etc/mosquitto/certs/ca.crt
#certfile /etc/mosquitto/certs/hostname.localdomain.crt
#keyfile /etc/mosquitto/certs/hostname.localdomain.key

As you can tell, I've still got some work to do here - namely the TLS setup. It's a bit of a chicken-and-egg problem, because I need the domain name to be pointing at the MQTT server in order to get a Let's Encrypt TLS certificate, but that'll break all the sensors using the current one..... I'm sure I'll figure it out.

But wait! We forgot the user accounts. Before I started the new service, I added some user accounts for client applications to connect with:

sudo mosquitto_passwd /etc/mosquitto/mosquitto_users username1
sudo mosquitto_passwd /etc/mosquitto/mosquitto_users username2

The mosquitto_passwd program prompts for a password - that way you don't end up with the passwords in your ~/.bash_history file.

With all that taken care of, I started the systemd service:

sudo systemctl daemon-reload
sudo systemctl start mosquitto-broker.service

Of course, I ended up doing a considerable amount of debugging in between all this - I've edited it down to make it more readable and fit better in a blog post :P

Lastly, because I'm paranoid, I double-checked that it was running with htop and netstat:


sudo netstat -peanut | grep -i mosquitto
tcp        0      0 0.0.0.0:1883            0.0.0.0:*               LISTEN      112        2676558    5246/mosquitto      
tcp        0      0 x.y.z.w:1883           x.y.z.w:54657       ESTABLISHED 112        2870033    1234/mosquitto      
tcp        0      0 x.y.z.w:1883           x.y.z.w:39365       ESTABLISHED 112        2987984    1234/mosquitto      
tcp        0      0 x.y.z.w:1883           x.y.z.w:58428       ESTABLISHED 112        2999427    1234/mosquitto      
tcp6       0      0 :::1883                 :::*                    LISTEN      112        2676559    1234/mosquitto      

...no idea why it wants to connect to itself, but hey! Whatever floats its boat.
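For a more end-to-end test than netstat, the mosquitto-clients package provides mosquitto_sub and mosquitto_pub, which can talk to the broker using one of the accounts created earlier (the username and password here are placeholders, of course):

# Terminal 1: subscribe to a test topic
mosquitto_sub -h localhost -t test/topic -u username1 -P 'PASSWORD_HERE'
# Terminal 2: publish a message to it
mosquitto_pub -h localhost -t test/topic -u username1 -P 'PASSWORD_HERE' -m "hello, world"

If the message shows up in the first terminal, then authentication and message routing are both working.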

Ensure your SSH server is secure with SSH Check

We've got ssllabs.com for testing HTTPS servers to ensure they are setup to be secure, and personally I've been using it for years now (psst, starbeamrainbowlabs.com gets an A+!).

SSH servers are a very different story, however. While I've blogged about them before, I mainly focused on preventing unauthorised access to a server by methods such as password cracking attacks.

Now that I'm coming to the end of my MSc in Security and Distributed Computing, however, I've realised there's a crucial element missing here: the security of the connection itself. HTTPS isn't the only protocol with a complicated set of supported cipher suites that need configuring correctly.

The solution here is to check the SSH server in the same way that we do for a HTTPS web server. For this though we need a tool to do this for us and tell us what's good and what's not about our configuration - which is where SSH Check comes in.

I discovered it recently, and it pretends to connect to an SSH server to gauge its configuration - after which it quickly disconnects before the remote server asks it for credentials to login.

A screenshot of a test of the example ssh server

Because SSH allows for every stage of the encryption process to be configured individually, SSH Check tests 4 main areas:

  • The key exchange algorithm (the algorithm used to exchange the secret key for symmetric encryption going forwards)
  • The algorithms used in the server's host SSH keys (the key whose ID is shown to you when you connect asking you if you want to continue)
  • The encryption algorithm (the symmetrical encryption algorithm used after key exchange)
  • The MAC algorithm (the Message Authentication Code algorithm - used to ensure integrity of messages)

It displays whether each algorithm is considered safe or not, and which ones are widely considered to be either deprecated or contain backdoors. In addition, it also displays the technical names of each one so that you can easily reconfigure your SSH server to disable unsafe algorithms, which is nice (good luck deciphering the SSL Labs encryption algorithms list and matching it up to the list already configured in your web server......).
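For reference, those 4 areas map onto a handful of directives in /etc/ssh/sshd_config on OpenSSH servers. Here's a hedged sketch of what tightening them up might look like - don't copy it blindly, as the right choice depends on what the checker recommends and what your clients support (and don't forget to restart sshd afterwards):

KexAlgorithms curve25519-sha256,curve25519-sha256@libssh.org
HostKeyAlgorithms ssh-ed25519,rsa-sha2-512,rsa-sha2-256
Ciphers chacha20-poly1305@openssh.com,aes256-gcm@openssh.com,aes128-gcm@openssh.com
MACs hmac-sha2-512-etm@openssh.com,hmac-sha2-256-etm@openssh.com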

It also presents a bunch of other interesting information too, which is nice. It identified a number of potential issues with the way that I had SSH setup for starbeamrainbowlabs.com along with some suggested improvements, which I've now fixed.

If you have a server that you access via SSH, I recommend checking it with SSH Check - especially if you expose SSH publicly over the Internet.

Found this interesting? Got another testing tool you'd like to share? Comment below!

Own your code, Part 2: The curious case of the unreliable webhook

In the last post, I talked about how to setup your own Git server with Gitea. In this one, I'm going to take bit of a different tack - and talk about one of the really annoying problems I ran into when setting up my continuous integration server, Laminar CI.

Since I wanted to run the continuous integration server on a different machine to the Gitea server itself, I needed a way for the Gitea server to talk to the CI server. The natural choice here is, of course, a Webhook-based system.

After installing and configuring Webhook on the CI server, I set to work writing a webhook receiver shell script (more on this in a future post!). Unfortunately, it turned out that Gitea didn't like sending to my CI server very much:

A ton of failed attempts at sending a webhook to the CI server

Whether it succeeded or not was random. If I hit the "Test Delivery" button enough times, it would eventually go through. My first thought was to bring up the Gitea server logs to see if it would give any additional information. It claimed that there was an i/o timeout communicating with the CI server:

Delivery: Post https://ci.bobsrockets.com/hooks/laminar-config-check: read tcp 5.196.73.75:54504->x.y.z.w:443: i/o timeout

Interesting, but not particularly helpful. If that's the case, then I should be able to get the same error with curl on the Gitea server, right?

curl https://ci.bobsrockets.com/hooks/testhook

.....wrong. It worked flawlessly. Every time.

Not to be beaten by such an annoying issue, I moved on to my next suspicion. Since my CI server is unfortunately behind NAT, I checked the NAT rules on the router in front of it to ensure that it was being exposed correctly.

Unfortunately, I couldn't find anything wrong here either! By this point, it was starting to get really rather odd. As a sanity check, I decided to look at the server logs on the CI server, since I'm running Webhook behind Nginx (as a reverse proxy):

5.196.73.75 - - [04/Dec/2018:20:48:05 +0000] "POST /hooks/laminar-config-check HTTP/1.1" 408 0 "-" "GiteaServer"

Now that's weird. Nginx has recorded an HTTP 408 error. Looking it up reveals that it's a Request Timeout error, which has the following definition:

The server did not receive a complete request message within the time that it was prepared to wait.

Wait what? Sounds to me like there's an argument going on between the 2 servers here - in which each server is claiming that the other didn't send a complete request or response.

At this point, I blamed this on a faulty HTTP implementation in Gitea, and opened an issue.

As a workaround, I ended up configuring Laminar to use a Unix socket on disk (as opposed to an abstract socket), forwarding it over SSH, and using a git hook to interact with it instead (more on how I managed this in a future post. There's a ton of shell scripting that I need to talk about first).
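For the curious, the general shape of that workaround looks something like this. It's a sketch only: the socket paths, hostnames, and job name are hypothetical, and the exact laminarc / LAMINAR_HOST syntax depends on your Laminar version - the git hook itself will get its own post.

# On the Gitea server: forward a local Unix socket to Laminar's on-disk socket on the CI server
# (OpenSSH has supported Unix socket forwarding with -L since version 6.7)
ssh -N -L /tmp/laminar-proxy.sock:/var/run/laminar/laminar.sock ci@ci.bobsrockets.com &

# A git hook can then queue a job by pointing the Laminar client at the forwarded socket
LAMINAR_HOST=unix:/tmp/laminar-proxy.sock laminarc queue some-job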

This isn't the end of this tale though! A month or two after I opened the issue, I wound up in a situation whereby I wanted to connect a GitHub repository to my CI server. Since I don't have shell access on github.com, I had to use the webhook.

When I did though, I got a nasty shock: the webhook deliveries exhibited the exact same random failures I'd seen with the Gitea webhook. That cleared the name of Gitea's HTTP implementation - and I'd already verified the Webhook server - so what else could be causing the problem?

At this point, I can only begin to speculate what the issue is. Personally, I suspect that it's a bug in the port-forwarding logic of my router, whereby it drops the first packet from a new IP address while it sets up a new NAT session to forward the packets to the CI server or something - so subsequent requests will go through fine, so long as they are sent within the NAT session timeout and from the same IP. If you've got a better idea, please comment below!
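One way to gather evidence for (or against) a theory like this is to hammer the endpoint from a host outside the NAT and tally up the failures. A rough sketch, reusing the hypothetical URL from above:

# Fire 20 requests from outside the NAT and count the HTTP status codes returned
# (curl prints 000 for requests that time out)
for i in $(seq 1 20); do
    curl -s -o /dev/null -w "%{http_code}\n" --max-time 5 https://ci.bobsrockets.com/hooks/testhook
done | sort | uniq -c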

Of course, I really wanted to get the GitHub repository connected to my CI server, and if the only way I could do this was with a webhook, it was time for some request-wrangling.

My solution: A PHP proxy script running on the same server as the Gitea server (since it has a PHP-enabled web server set up already). If said script eats the request and emits a 202 Accepted immediately, then it can continue trying to get a hold of the webhook on the CI server 'till the cows come home - and GitHub will never know! Genius.

PHP-FPM (the fastcgi process manager; great alongside Nginx) makes this possible with the fastcgi_finish_request() method, which both flushes the buffer and ends the request to the client, but doesn't kill the PHP script - allowing for further processing to take place without the client having to wait.

Extreme caution must be taken with this approach however, as it can easily lead to a situation where all the PHP-FPM processes are busy waiting on replies from the CI server, leaving no room for other requests to be fulfilled and causing a big messy pile-up in the queue behind them.

Warnings aside, here's what I came up with:

<?php

// Tweak these settings to match your setup
$settings = [
    "target_url" => "https://ci.bobsrockets.com/hooks/laminar-git-repo",
    "response_message" => "Processing laminar job proxy request.",
    "retries" => 3,
    "attempt_timeout" => 2 // in seconds, for a single attempt
];

$headers = "host: ci.starbeamrainbowlabs.com\r\n";
foreach(getallheaders() as $key => $value) {
    if(strtolower($key) == "host") continue;
    $headers .= "$key: $value\r\n";
}
$headers .= "\r\n";

// Grab the request body so that we can forward it on verbatim
$request_content = file_get_contents("php://input");

// --------------------------------------------

// Reply to the client (GitHub) immediately.....
http_response_code(202);
header("content-type: text/plain");
header("content-length: " . strlen($settings["response_message"]));
echo($settings["response_message"]);

// .....then end the request, while keeping this script running in the background
fastcgi_finish_request();

// --------------------------------------------

// Append a line to the log file so we can keep track of successes and failures
function log_message($msg) {
    file_put_contents("ci-requests.log", $msg . "\n", FILE_APPEND);
}

// Keep trying until the request gets through to the CI server, or we run out of retries
for($i = 0; $i < $settings["retries"]; $i++) {
    $start_time = microtime(true);

    // Forward the original headers & body on to the CI server
    $context = stream_context_create([
        "http" => [
            "header" => $headers,
            "method" => "POST",
            "content" => $request_content,
            "timeout" => $settings["attempt_timeout"]
        ]
    ]);

    $result = file_get_contents($settings["target_url"], false, $context);

    if($result !== false) {
        log_message("[" . date("r") . "] Queued laminar job in " . (microtime(true) - $start_time)*1000 . "ms");
        break;
    }

    log_message("[" . date("r") . "] Failed to queue laminar job after " . (microtime(true) - $start_time)*1000 . "ms.");
}

I've named it autowrangler.php. A few things of note here:

  • php://input is a special virtual file that's mapped internally by PHP to the client's request. By eating it with file_get_contents(), we can get the entire request body that the client has sent to us, so that we can forward it on to the CI server.
  • getallheaders() lets us get a hold of all the headers sent to us by the client for later forwarding
  • I use log_message() to keep a record of successes and failures in a log file. So far I've seen a ~32% failure rate, but never more than 1 failure in a row - lending some credence to the theory I talked about above.

This ends the tale of the recalcitrant and unreliable webhook. Hopefully you've found this an interesting read. In future posts, I want to look at how I configured Webhook, the inner workings of the git hook I mentioned above, and the collection of shell scripts I've cooked up that make my CI server tick in a way that makes it easy to add new projects quickly.

Found this interesting? Run into this issue yourself? Found a better solution or workaround? Comment below!

Powahroot: Client and Server-side routing in Javascript

The powahroot logo, which is a 16x16 pixel-art image and looks like a purple-red carrot with bright orange stripes and yellow light lines coming out of the sides

If I want to really understand something, I usually end up implementing it myself. This is the case with my latest library, powahroot - but also because I didn't really like the way any of the alternatives functioned, and because I'm picky.

Originally I wrote it for this project (although it's actually for a little satellite project that isn't open-source unfortunately - maybe at some point in the future!) - but I liked it so much that I decided that I had to turn it into a full library that I could share here.

In short, a routing framework helps you get requests handled in the right places in your application. I've actually blogged about this before, so I'd recommend you go and read that post first before continuing with this one.

For all the similarities between the server side (as mentioned in my earlier post) and the client side, the 2 environments are different enough that they warrant having 2 distinctly separate routers. In powahroot, I provide both a ServerRouter and a ClientRouter.

The ServerRouter is designed to handle Node.js HTTP request and response objects. It provides shortcut methods .get(), .post(), and others to quickly create routes for different request types - and also supports middleware to enable logical separation of authentication, request processing, and response generation.

The ClientRouter, on the other hand, is essentially a stripped-down version of the ServerRouter that's tailored to functioning in a browser environment. It doesn't support middleware (yet?), but it does support the pushstate mechanism that's part of the History API.

I've also published it on npm, so you can install it like this:

npm install --save powahroot

Then you can use it like this:

// On the server:
import ServerRouter from 'powahroot/Server.mjs';

// ....

const router = new ServerRouter();
router.on_all(async (context, next) => { console.debug(context.url); await next(); });
router.get("/files/::filepath", (context, _next) => context.send.plain(200, `You requested ${context.params.filepath}`));
// .....

// On the client:
import ClientRouter from 'powahroot/Client.mjs';

// ....

const router = new ClientRouter({
    // Options object. Default settings:
    verbose: false, // Whether to be verbose in console.log() messages
    listen_pushstate: true, // Whether to react to browser pushstate events (excluding those generated by powahroot itself, because that would cause an infinite loop :P)
});

As you can see, powahroot uses ES6 Modules, which makes it easy to split up your code into separate independently-operating sections.

In addition, I've also generated some documentation with the documentation tool on npm. It details the API available to you, and should serve as a good reference when using the library.

You can find that here: https://starbeamrainbowlabs.com/code/powahroot/docs/

It's automatically updated via continuous integration and continuous deployment, which I really do need to get around to blogging about (I've spent a significant amount of time setting up the base system upon which powahroot's CI and CD works. In short, I use Laminar CI and a GitHub webhook, but there are a lot of complicated details).
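In case you want to generate similar documentation for your own project, the documentation.js CLI boils it down to a single command. Something along these lines (the input path here is a guess, not powahroot's actual file layout):

npm install --save-dev documentation
# Generate HTML documentation from the JSDoc comments in the source
npx documentation build src/** -f html -o docs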

Found this interesting? Used it in your own project? Got an idea to improve powahroot? Comment below!

How to quickly run TUI programs via SSH

Hello, and welcome to another blog post! I hope everyone had a lovely and restful Easter.

Very often, I want to run a command on a remote machine via SSH and leave it in a terminal in 1 corner of my screen whilst I work in another terminal on that same machine.

Up until now, I've always SSHed into the machine in question and then run the command manually:

user@local:~$ ssh bob@bobsrockets.com
# .....
bob@bobsrockets.com:~$ sudo htop

This is fine, but it takes a moment to connect and set up the terminal on the remote end. What if there was a way to specify the command to run remotely?

Well, it turns out there is. SSH lets you specify the command to run on the remote server instead of the default shell:

ssh sean@seanssatellites.io apt search beanstalk

Sadly, this doesn't always yield the results expected. Colour disappears from the output, and sometimes things like htop (ssh bill@billsboosters.co.uk htop) and sudo (ssh edgar@edsengineering.eu sudo apt update) break altogether:

Error opening terminal: unknown.

I can't remember how I figured it out, but I discovered that when you specify a command instead of letting the default shell initialise, SSH treats it as a sort of 'script mode' and doesn't allocate a pseudo-terminal on the remote machine.

Thankfully, there's a way to force it to allocate a pseudo-terminal. This is done with the -t flag:

ssh -t bob@bobsrockets.com sudo htop

This then enables interactive commands to work as intended, and causes colour to be displayed again :D
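If you find yourself running the same remote command a lot, wrapping it in an alias or a tiny shell function saves even more typing. A quick sketch, reusing the hypothetical hostnames from above:

# In your ~/.bashrc or similar:
alias rockets-htop='ssh -t bob@bobsrockets.com htop'

# Or more generally: the first argument is the host, the rest is the command to run
sshrun() {
    ssh -t "$1" "${@:2}"
}
# Usage: sshrun bill@billsboosters.co.uk sudo htop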

Found this useful? Got another great SSH tip? Comment below!
