
Using whiptail for text-based user interfaces

One of my ongoing projects is to implement a Bash-based Raspberry Pi provisioning system for hosts in my Raspberry Pi cluster. This is particularly important given that Debian 11 bullseye was released a number of months ago, and while it is technically possible to upgrade a host in-place from Debian 10 buster to Debian 11 bullseye, this is a lot of work that I'd rather avoid.

In implementing a Bash-based provisioning system, I'll have a system that allows me to rapidly provision a brand-new DietPi (or potentially other OSes in the future, but that's out-of-scope of version 1) automatically. Once the provisioning process is complete, I need only reboot it and potentially set a static IP address on my router and I'll then have a fully functional cluster host that requires no additional intervention (except to update it regularly of course).

The difficulty here is that I don't yet have enough hosts in my cluster to have a clear server / worker division, since my Hashicorp Nomad and Consul clusters both have 3 server nodes for redundancy rather than 1. It's for this reason that my provisioning system needs to be able to ask me what configuration I want a new host to have.

To do this, I rediscovered the whiptail command, which is installed by default on pretty much every system I've encountered so far, and it allows you to develop surprisingly flexible text-based user interfaces with relatively little effort, so I wanted to share it here.

Unfortunately, while it's very cool and also relatively easy to use, it also has a lot of options and can result in command invocations like this:

whiptail --title "Some title" --inputbox "Enter a hostname:" 10 40 "default_value" 3>&1 1>&2 2>&3;

...and it only gets more complicated from here. In particular, the 3>&1 1>&2 2>&3 bit there is a fancy way of flipping the standard output and standard error around.
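The reason for this is that whiptail draws its interface on the standard output (which needs to stay attached to the terminal) and writes the user's answer to the standard error, so the swap is what lets command substitution capture the answer while the dialog still renders. A minimal sketch (the default value there is just a placeholder):

# whiptail renders the dialog on stdout and prints the answer on stderr,
# so we swap the two to capture the answer while still displaying the UI
answer="$(whiptail --title "Some title" --inputbox "Enter a hostname:" 10 40 "default_value" 3>&1 1>&2 2>&3)";
echo "You entered: ${answer}";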

I thought to myself that surely there must be a way that I can simplify this down to make it easier to use, so I implemented a number of wrapper functions:

# Asks the user a simple yes/no question.
# $1    The question to ask.
# Returns an exit code of 0 if the user answered yes, or 1 otherwise.
ask_yesno() {
    local question="$1";

    # step_current and step_max are globals that track progress through the provisioning process
    whiptail --title "Step ${step_current} / ${step_max}" --yesno "${question}" 8 40;
    return "$?"; # Not actually needed, but best to be explicit
}

This first one asks a simple yes/no question. Use it like this:

if ask_yesno "Some question here"; then
    echo "Yep!";
else
    echo "Nope :-/";
fi

Next up, to ask the user for a string of text:

# Asks the user for a string of text.
# $1    The window title.
# $2    The question to ask.
# $3    The default text value.
# Returns the answer as a string on the standard output.
ask_text() {
    local title="$1";
    local question="$2";
    local default_text="$3";
    whiptail --title "${title}" --inputbox "${question}" 10 40 "${default_text}" 3>&1 1>&2 2>&3;
    return "$?"; # Not actually needed, but best to be explicit
}

# Asks the user for a password.
# $1    The window title.
# $2    The question to ask.
# $3    The default text value.
# Returns the answer as a string on the standard output.
ask_password() {
    local title="$1";
    local question="$2";
    local default_text="$3";
    whiptail --title "${title}" --passwordbox "${question}" 10 40 "${default_text}" 3>&1 1>&2 2>&3;
    return "$?"; # Not actually needed, but best to be explicit
}

These both work in the same way - it's just that ask_password displays asterisks instead of the characters the user types, so as to hide their input. Use them like this:

new_hostname="$(ask_text "Provisioning step 1 / 4" "Enter a hostname:" "${HOSTNAME}")";
sekret="$(ask_password "Provisioning step 2 / 4" "Enter a sekret:")";

The default value there is of course optional, since in Bash if a variable does not hold a value it is simply considered to be empty.

Finally, I needed a mechanism to ask the user to choose at most 1 value from a predefined list:

# Asks the user to choose at most 1 item from a list of items.
# $1        The window title.
# $2..$n    The items that the user must choose between.
# Returns the chosen item as a string on the standard output.
ask_multichoice() {
    local title="$1"; shift;
    local args=();
    while [[ "$#" -gt 0 ]]; do
        # whiptail expects each menu entry as a tag + item pair; since we pass
        # --notags below, we simply use the same string for both
        args+=("$1");
        args+=("$1");
        shift;
    done
    whiptail --nocancel --notags --menu "${title}" 15 40 5 "${args[@]}" 3>&1 1>&2 2>&3;
    return "$?"; # Not actually needed, but best to be explicit
}

This one is a bit special, as it stores the items in an array before passing them to whiptail. This works because of word splitting, which is when the shell substitutes a variable with its contents before splitting the arguments up. Here's how you'd use it:

choice="$(ask_multichoice "How should I install Consul?" "Don't install" "Client mode" "Server mode")";
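Since we add each item as both the tag and the display text, the value whiptail hands back is the item's text itself, so handling the user's choice is just a case statement away. For example (the echo calls here stand in for whatever logic you actually need):

case "${choice}" in
    "Don't install" )
        echo "Skipping Consul";
        ;;
    "Client mode" )
        echo "Installing Consul in client mode";
        ;;
    "Server mode" )
        echo "Installing Consul in server mode";
        ;;
esac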

As an aside, the underlying mechanics as to why this works are best explained by example. Consider the following:

oops="a value with spaces";

node src/index.mjs --text $oops;

Here, we store the value we want to pass to the --text argument in a variable. Unfortunately, we didn't quote $oops when we passed it to our fictional Node.js script, so the shell actually interprets that Node.js call like this:

node src/index.mjs --text a value with spaces;

That's not right at all! Without the quotes around a value with spaces there, process.argv will actually look like this:

[
    '/usr/local/lib/node/bin/node',
    '/tmp/test/src/index.mjs',
    '--text',
    'a',
    'value',
    'with',
    'spaces'
]

The a value with spaces there has been considered by the Node.js subprocess as 4 different values!

Now, if we include the quotes there instead like so:

oops="a value with spaces";

node src/index.mjs --text "$oops";

...the shell will correctly expand it to look like this:

node src/index.mjs --text "a value with spaces";

... which then looks like this to our Node.js subprocess:

[
    '/usr/local/lib/node/bin/node',
    '/tmp/test/src/index.mjs',
    '--text',
    'a value with spaces'
]

Much better! This is important to understand, as when we start talking about arrays in Bash things start to work a little differently. Consider this example:

items=("an apple" "a banana" "an orange")

/tmp/test.mjs --text "${items[@]}"

Can you guess what process.argv will look like? The result might surprise you:

[
    '/usr/local/lib/node/bin/node',
    '/tmp/test.mjs',
    '--text',
    'an apple',
    'a banana',
    'an orange'
]

Each element of the Bash array has been turned into a separate item - even when we quoted it and the items themselves contain spaces! What's going on here?

In this case, we used [@] when addressing our items Bash array, which causes Bash to expand it like this:

/tmp/test.mjs --text "an apple" "a banana" "an orange"

....so it quotes each item in the array separately. If we forgot the quotes instead like this:

/tmp/test.mjs --text ${items[@]}

...we would get this in process.argv:

[
    '/usr/local/lib/node/bin/node',
    '/tmp/test.mjs',
    '--text',
    'an',
    'apple',
    'a',
    'banana',
    'an',
    'orange'
]

Here, Bash still expands each element separately, but does not quote each item. Because each item isn't quoted, when the command is actually executed, it splits everything a second time!

As a side note, if you want all the items in a Bash array in a single quoted item, you need to use an asterisk * instead of an at-sign @ like so:

/tmp/test.mjs --text "${items[*]}";

....which would yield the following process.argv:

[
    '/usr/local/lib/node/bin/node',
    '/tmp/test.mjs',
    '--text',
    'an apple a banana an orange'
]
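You don't need a Node.js script to test this out either - printf makes for a handy pure-Bash demonstration, since it repeats its format string once for every remaining argument it's given:

items=("an apple" "a banana" "an orange");
printf "%s\n" "${items[@]}"     # 3 lines: 1 per element
printf "%s\n" "${items[*]}"     # 1 line: all elements joined together
printf "%s\n" ${items[@]}       # 6 lines: unquoted, so everything gets split on spaces again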

With that, we have a set of functions that make whiptail much easier to use. Once it's finished, I'll write a post on my Bash-based cluster host provisioning script and explain my design philosophy behind it and how it works.

Switching from XFCE4 to KDE Plasma

While I use Unity (7.5) and Ubuntu on my main laptop, on my travel laptop I instead use Artix Linux. Recently, I've been experiencing an issue where, when I log in at the lock screen after resuming the device from sleep, I get a black screen.

Rather than digging around endlessly attempting to fix the issue (I didn't even know where to start), I've been meaning to try out KDE Plasma, which is 1 of a number of popular desktop environments available. To this end, I switched from XFCE (version 4) to KDE Plasma (5.24 as of the time of typing). This ultimately did end up fixing my issue (my travel laptop would win a prize for the most unusual software setup, as it originated as a Manjaro OpenRC machine).

Now that I've completed that switch (I'm typing this now in Atom running in the KDE Plasma desktop environment!), I thought I'd write up a quick post about the two desktop environments and my first impressions of KDE as compared to XFCE.

(Above: My KDE desktop environment, complete with a desktop background taken from CrossCode. The taskbar is at the top because this is how I had it configured in XFCE.)

The best way I suppose to describe the difference between XFCE and KDE is jumping from your garden pond into the local canal. While XFCE is fairly customisable, KDE is much more so - especially when it comes to desktop effects and the look and feel. I really appreciate the ability to customise the desktop effects to tune them to match what I've previously been used to in Unity (though I still use Unity on both my main laptop and my Lab PC at University) and XFCE.

One such example of this is the workspaces feature. You can customise the number of workspaces and also have them in a grid (just like Unity), which the GNOME desktop that comes with Ubuntu by default doesn't allow for. You can even tune the slide animation between desktops which I found helpful as the default animation was too slow for me.

It also has an enormous library of applications that complement the KDE desktop environment, with everything from your staples such as the terminal, an image viewer, and a file manager to more niche and specialised use-cases like a graph calculator and a colour contrast checker. While these can of course be installed in other desktop environments, it's cool to see such an expansive suite of programs for every conceivable use-case right there.

Related to this, there also appears to be a substantial number of widgets that you can add to your desktop. Like XFCE, KDE has a concept of panels which can hold 1 or more widgets in a line. This is helpful for monitoring system resources, for example. While these are for the most part just as customisable as the main desktop environment, I wish that their dependencies were more clearly defined. On more than 1 occasion I found I was missing a dependency for some widget to work that wasn't mentioned in the documentation. For example, upowerd is required for the battery indicator to work (it wasn't running due to a bug caused by a package name change from the great migration of Manjaro back in 2017), and the plasma-nm pacman package is required for the network / WiFi indicator to work, but isn't specified as a dependency when you install the plasma-desktop package. Clearly some work is needed in this area (though, to be fair, as I mentioned earlier I have a very strange setup indeed).

I'm continuing to find 1000 little issues with it that I'm fixing 1 by 1 - just while writing this post I found that dolphin doesn't support jumping to the address bar if you start typing a forward slash / (or maybe it was another related issue? I can't remember), which is really annoying as I do this all the time - but this is a normal experience when switching desktop environments (or, indeed, machines) - at least for me.

On the whole though, KDE feels like a more modern take on XFCE. With fancier graphics and desktop effects and what appears to be a larger community (measuring such things can be subjective though), I'm glad that I made the switch from XFCE to KDE - even if it was just to fix a bug at first (I would never have considered switching otherwise). As a desktop environment, I think it's comfortable enough that I'll be using KDE on a permanent basis on my travel laptop from now on.

systemquery, part 2: replay attack

Hey there! As promised I'll have my writeup about AAAI-22, but in the meantime I wanted to make a quick post about a replay attack I found in my systemquery encryption protocol, and how I fixed it. I commented quickly about this on the last post in this series, but I thought that it warranted a full blog post.

In this post, I'm going to explain the replay attack I discovered, how replay attacks work, and how I fixed it. It should be noted though that at this time my project systemquery is not being used in production (it's still under development), so there is no real-world impact to this particular bug. However, it can still serve as a useful reminder as to why implementing your own crypto / encryption protocols is a really bad idea.

As I explained in the first blog post in this series, the systemquery protocol is based on JSON messages. These messages are not just sent in the clear though (much though that would simplify things!), as I want to ensure they are encrypted with authenticated encryption. To this end, I have devised a 3 layer protocol:

Objects are stringified to JSON, before being encrypted (with a cryptographically secure random IV that's different for every message) and then finally packaged into what I call a framed transport - in essence a 4 byte unsigned integer which represents the length in bytes of the block of data that immediately follows.

The encryption algorithm itself is provided by tweetnacl's secretbox() function, which provides authenticated encryption. It's also been independently audited and has 16 million weekly downloads, so it should be a good choice here.

While this protocol I've devised looks secure at first glance, all is not as it seems. As I alluded to at the beginning of this post, it's vulnerable to a replay attack. This attack is perhaps best explained with the aid of a diagram:

Let's imagine that Alice has an open connection to Bob, and is sending some messages. To simplify things, we will only consider 1 direction - but remember that in reality such a connection is bidirectional.

Now let's assume that there's an attacker with the ability to listen to our connection and insert bogus messages into our message stream. Since the messages are encrypted, our attacker can't read their contents - but they can copy and store messages and then insert them into the message stream at a later date.

When Bob receives a message, they will decrypt it and then parse the JSON message contained within. Should Bob receive a bogus copy of a message that Alice sent earlier, Bob will still be able to decrypt it as a normal message, and won't be able to tell it apart from a genuine message! Should our attacker figure out what a message's function is, they could do all kinds of unpleasant things.

Not to worry though, as there are multiple solutions to this problem:

  1. Include a timestamp in the message, which is then checked later
  2. Add a sequence counter to keep track of the ordering of messages

In my case, I've decided to go with the latter option here, as given that I'm using TCP I can guarantee that the order I receive messages in is the order in which I sent them. Let's take a look at what happens if we implement such a sequence counter:

When sending a message, Alice adds a sequence counter field that increments by 1 for each message sent. At the other end, Bob increments their sequence counter by 1 every time they receive a message. In this way, Bob can detect if our attacker attempts a replay attack, because the sequence number on the message they copied will be out of order.

To ensure there aren't any leaks here, should the sequence counter overflow (unlikely), we need to also re-exchange the session key that's used to encrypt messages. In doing so, we can avoid a situation where the sequence number has rolled over but the session key is the same, which would give an attacker an opportunity to replay a message.

With that, we can prevent replay attacks. The other thing worth mentioning here is that the sequence numbering needs to be done in both directions - so Alice and Bob will have both a read sequence number and a write sequence number which are incremented independently of one another whenever they receive and send a message respectively.

Conclusion

In this post, we've gone on a little bit of a tangent to explore replay attacks and how to mitigate them. In the next post in this series, I'd like to talk about the peer-to-peer swarming algorithm I've devised - both the parts thereof I've implemented, and those that I have yet to implement.


systemquery, part 1: encryption protocols

Unfortunately, my autoplant project is taking longer than I anticipated to set up and debug. In the meantime, I'm going to talk about systemquery - another (not so) little project I've been working on in my spare time.

As I've acquired more servers of various kinds (mostly consisting of Raspberry Pis), I've found myself with an increasing need to get a high-level overview of the status of all the servers I manage. At the moment, this need is satisfied by the web-based dashboard of my collectd monitoring system (while I haven't blogged about my collectd setup directly, I have posted about it here and here), which is called Collectd Graph Panel (sadly now abandonware, but still very useful):

This is great and valuable, but if I want to ask questions like "are all apt updates installed?", or "what's the status of this service on all hosts?", or "which hosts haven't I upgraded to Debian bullseye yet?", or "is this mount still working?", I currently have to SSH into every host to find the information I'm looking for.

To solve this problem, I discovered the tool osquery. Osquery is a tool to extract information from a network of hosts with SQL-like queries. This is just what I'm looking for, but unfortunately it does not support the armv7l architecture - which most of my cluster currently runs on - thereby making it rather useless to me.

Additionally, from looking at the docs it seems to be extremely complicated to set up. Finally, it does not seem to have a web interface. While not essential, that's a nice-to-have.

To this end, I decided to implement my own system inspired by osquery, and I'm calling it systemquery. I have the following goals:

  1. Allow querying all the hosts in the swarm at once
  2. Make it dead-easy to install and use (just like Pepperminty Wiki)
  3. Make it peer-to-peer and decentralised
  4. Make it tolerate random failures of nodes participating in the systemquery swarm
  5. Make it secure, such that any given node must first know a password before it is allowed to join the swarm, and all network traffic is encrypted

As a stretch goal, I'd also like to implement a mesh message routing system too, so that it's easy to connect multiple hosts in different networks and monitor them all at once.

Another stretch goal I want to work towards is implementing a nice web interface that provides an overview of all the hosts in a given swarm.

Encryption Protocols

With all this in mind, the first place to start is to pick a language and platform (Javascript + Node.js) and devise a peer-to-peer protocol by which all the hosts in a given swarm can communicate. My vision here is to encrypt everything using a join secret. Such a secret would lend itself rather well to a symmetrical encryption scheme, as it could act as a pre-shared key.

A number of issues stood in the way of actually implementing this though. At first, I thought it best to use Node.js' built-in TLS-PSK (stands for Transport Layer Security - Pre-Shared Key) implementation. Unlike regular TLS which uses asymmetric cryptography (which works best in client-server situations), TLS-PSK uses a pre-shared key and symmetrical cryptography.

Unfortunately, although Node.js advertises support for TLS-PSK, it isn't actually implemented or is otherwise buggy. This not only leaves me with the issue of designing an encryption protocol, but also:

  1. The problem of transferring binary data
  2. The problem of perfect forward secrecy
  3. The problem of actually encrypting the data

Problem #1 here turned out to be relatively simple. I ended up abstracting away a raw TCP socket into a FramedTransport class, which implements a simple protocol that sends and receives messages in the form <length_in_bytes><data....>, where <length_in_bytes> is a 32 bit unsigned integer.

With that sorted and the nasty buffer manipulation safely abstracted away, I could turn my attention to problems 2 and 3. Let's start with problem 3 here. There's a saying when programming things relating to cryptography: never roll your own. Existing implementations are often much more rigorously checked for security flaws than anything you could write yourself.

In the spirit of this, I sought out an existing implementation of a symmetric encryption algorithm, and found tweetnacl. Security audited, it provides what looks to be a secure symmetric encryption API, which is the perfect foundation upon which to build my encryption protocol. My hope is that by simply exchanging messages I've encrypted with a secure existing algorithm, I can reduce the risk of a security flaw.

This is a good start, but there's still the problem of forward secrecy to tackle. To explain: perfect forward secrecy means that should an attacker be listening to your conversation and later learn your encryption key (in this case the join secret), they are still unable to decrypt your data.

This is achieved by using session keys and a key exchange algorithm. Instead of encrypting the data with the join secret directly, we use it only to encrypt the initial key-exchange process, which then allows 2 communicating parties to exchange a session key, which is then used to encrypt all data from then on. By re-running the key-exchange process and generating new session keys at regular intervals, forward secrecy can be achieved: even if the attacker learns a session key, it does not help them to obtain any other session keys, because even knowledge of the key exchange algorithm messages is not enough to derive the resulting session key.

Actually implementing this in practice is another question entirely however. I did some research though and located a pre-existing implementation of JPAKE on npm: jpake.

With this in hand, the problem of forward secrecy was solved for now. The jpake package provides a simple API by which a key exchange can be done, so then it was just a case of plugging it into the existing system.

Where next?

After implementing an encryption protocol as above (please do comment below if you have any suggestions), the next order of business was to implement a peer-to-peer swarm system where agents connect to the network and share peers with one another. I have the basics of this implemented already: I just need to test it a bit more to verify it works as I intend.

It would also be nice to refactor this system into a standalone library for others to use, as it's taken quite a bit of effort to implement. I'll be holding off on doing this until it's more stable though, as refactoring it now would just slow down development.

On top of this system, the plan is to implement a protocol by which any peer can query any other peer for system information, and then create a command-line interface for easily querying it.

To make querying flexible, I plan on utilising some form of in-memory database that is populated with queries to other hosts based on the tables mentioned in the user's query. SQLite3 is the obvious choice here, but I'm reluctant to choose it as it requires compilation upon installation - and given that I have experienced issues with this in the past, I feel this has the potential to limit compatibility with some system configurations. I'm going to investigate some other in-memory database libraries for Javascript - giving preference to those which are both light and devoid of complex installation requirements (pure JS is best if I can manage it, I think). If you know of a pure Javascript in-memory database that has a query syntax, do let me know in the comments below!

As for querying system information directly, that's an easy one. I've previously found systeminformation - which seems to have an API to fetch pretty much anything you'd ever want to know about the host system!


Cluster, Part 12: TLS for Breakfast | Configuring Fabio for HTTPS

Hey there, and happy new year 2022! It's been a little while, but I'm back now with another blog post in my cluster series. In this shorter post, I'm going to show you how I've configured my Fabio load balancer to serve HTTPS.

Before we get started though, I can recommend visiting the series list to check out all the previous parts in this series, as a number of them give useful context for this post.

In the last post, I showed you how to set up certbot / Let's Encrypt in a Docker container. Building on this, we can now reconfigure Fabio (which we set up in part 9) to take in the TLS certificates we are now generating. I'll be assuming that the certificates are stored on the NFS share you've got set up (see part 8) for this post. In the future I'd love to use Hashicorp Vault for storing these certificates, but as of now I've found Hashicorp Vault to be far too complicated to set up, so I'll be using the filesystem instead.

Configuring Fabio to use HTTPS is actually really quite simple. Open /etc/fabio/fabio.properties for editing, and at the beginning insert a line like this:

proxy.cs = cs=some_name_here;type=file;cert=/absolute/path/to/fullchain.pem;key=/absolute/path/to/privkey.pem

cs stands for certificate store, and this tells Fabio about where your certificates are located. some_name_here is a name you'd like to assign to your certificate store - this is used to reference it elsewhere in the configuration file. /absolute/path/to/fullchain.pem and /absolute/path/to/privkey.pem are the absolute paths to the fullchain.pem and privkey.pem files from Let's Encrypt. These can be found in the live directory in the Let's Encrypt configuration directory, in the subdirectory for the domain in question.

Now that Fabio knows about your new certificates, find the line that starts with proxy.addr. In the last tutorial, we configured this to have a value of :80;proto=http. proxy.addr can take a comma-separated list of ports to listen on, so append the following to the existing value:

:443;proto=https;cs=some_name_here;tlsmin=tls12

This tells Fabio to listen on TCP port 443 for HTTPS requests, and also tells it which certificate store to use for encryption. We also set the minimum TLS version supported to TLS 1.2 - but you should set this value to 1 version behind the current latest version (check this page for that). For those who want extra security, you can also add the tlsciphers="CIPHER,LIST" argument too (see the official documentation for more information - cross-referencing it with ssl-config.mozilla.org is a good idea).
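For reference, after appending that the full proxy.addr line should end up looking something like this (with some_name_here swapped for whatever you named your certificate store):

proxy.addr = :80;proto=http,:443;proto=https;cs=some_name_here;tlsmin=tls12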

Now that we have this configured, this should be all you need to enable HTTPS! That was easy, right?

We still have a little more work to do though to make HTTPS the default and to redirect all HTTP requests to HTTPS. We can do this by adding a route to the Consul key-value store under the path fabio/config. You can do this either through the web interface, by creating a new key under fabio/config and pasting the following in & saving it:

route add route_name_here example.com:80 https://example.com$path opts "redirect=308"

Alternatively, through the command line:

consul kv put fabio/config/some_name_here 'route add some_name_here example.com:80 https://example.com$path opts "redirect=308"'

No need to restart fabio - it should pick new routes up automatically. I have found however that I do occasionally need to restart it if it doesn't pick up some changed routes as fast as I'd like.
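If you want to double-check that the route actually made it into the key-value store, you can read the key back out again from the command line:

consul kv get fabio/config/some_name_here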

With this, we now have automatic HTTPS setup and configured! Coming up in this series:

  • Using Caddy as an entrypoint for port forwarding on my router (status: implemented; there's an awesome plugin for single sign-on, and it's amazing in other ways too) - this replaces the role HAProxy was going to play that I mentioned in part 11
  • Password protecting Docker, Nomad, and Consul (status: on the todo list)
  • Semi-automatic docker image rebuilding with Laminar CI (status: implemented)


Cluster Series List

Hey there - I hope you've had a happy Christmas and a great new year! I'm writing this before both, so I'm still looking forward to it :D

It's been a while since I did a series list here. This time, it's for my Cluster series of blog posts, which has reached 11 parts so far! Despite publishing this series list, this series is not over! I intend to post more parts to this series in the future. I'm just posting this as a convenient place to point people interested in my cluster setup.

For those not in the know, I have a cluster of Raspberry Pis which provide my primary source of compute power for continually running services on my home network - and this series documents how I've got it setup. I also have a NAS which provides redundant (and backed up, of course) storage (see here for the setup, and here for the backups), and finally a GPU server which when I look back at the archives I apparently have yet to blog about - oops!

Here's the full series list - I'll update this list as I post more parts to this series.

If I forget to update this post, please do get in touch by leaving a comment below. You can also find all the posts related to my cluster by looking at the cluster tag here on my blog.

Backing up with tar, curl, and SFTP with key-based authentication

I have multiple backup strategies, from restic (which was preceded by duplicity) to btrfs snapshots that I sync over ssh. You can never have too many backups though (especially for your most valuable data that can't be easily replaced), so in this post I want to share another of the mechanisms I employ.

Backup systems have to suit the situation at hand, and in this case I have a personal git server which I backup daily to Backblaze B2. In order to be really absolutely sure that I don't lose it though, I also back it up to my home NAS (see also the series that I wrote on it). As you might have guessed from the title of this post, it takes backups using tar. I have recently upgraded it to transfer these backups over SFTP (SSH File Transfer Protocol).

Given that the sftp command exists, one might wonder why I use curl instead. Unfortunately, sftp as far as I can tell does not support uploading a file passed in through stdin - which is very useful when you have limited disk space on the source host! But using curl, we can pipe the output of tar directly to curl without touching the disk.

Documentation is sadly rather sparse on using curl to upload via SFTP, so it took some digging to figure out how to do it using SSH keys. SSH keys are considerably more secure than using a password (and a growing number of my systems are setup to disallow password authentication altogether), so I'll be using SSH key based authentication in this post.

To start, you'll need to generate a new SSH keypair. I like to use ed25519:

ssh-keygen -t ed25519

When prompted, choose where you want to save it to (preferably with a descriptive name), and then do not put a password on it. This is important, because at least in my case I want this to operate completely autonomously without any user input.

Then, copy the public SSH key to your remote server (I strongly recommend using an account that is locked to be SFTP-only and no shell access - this tutorial seems to be good at explaining the steps involved in doing this), and then on the device doing the backing up do a test to both make sure it works and add the remote server to the known_hosts file:

sudo -u backupuser bash
ssh -i path/to/keyfile -T remoteuser@remotehost

Now we've got our SSH / SFTP setup done, we can do the backup itself:

ionice -c Idle nice -n20 tar --create --exclude-tag .BACKUP_IGNORE --gzip --file - path/to/dir_to_backup | curl -sS --user "remoteuser:" --key "path/to/sshkey_ed25519" --pubkey "path/to/sshkey_ed25519.pub" -T - "sftp://example.com/path/on/remote/upload_filename.tar.gz"

Let's break this down a bit:

  • ionice -c Idle nice -n20: Push the backup job into the background - both for the CPU and disk priorities. Optional.
  • tar --create --exclude-tag .BACKUP_IGNORE --gzip --file - path/to/dir_to_backup: An example tar command that writes the archive to its standard output. Use whatever you want here.
  • --user "remoteuser:": The remoteuser bit there is the user to login to the remote host with. The bit after the colon is technically the password, but we're leaving that blank 'cause we're using SSH keys instead.
  • --key "path/to/sshkey_ed25519": The path to the SSH private key.
  • -T -: Upload the standard input instead of a file on disk
  • --pubkey "path/to/sshkey_ed25519.pub": The path to the SSH public key.
  • sftp://example.com/path/on/remote/upload_filename.tar.gz: The host to upload to and path thereon to upload the standard input to. If you need to specify a custom port here, do sftp://example.com:20202/path/blah/.... instead, where 20202 is your custom port number.

Personally, I'm using this technique with an SSH tunnel, so my variant of the above command looks a bit like this (extra bits around the edges stripped away for clarity):

git_backup_user="sftpbackups";
git_backup_location="sftp://localhost:20204/git-backups";
git_backup_key="path/to/sshkey_ed25519";
upload_filename="git-$(date +"%Y-%m-%d").tar.gz";

nice -n20 tar --create --exclude-tag .BACKUP_IGNORE --gzip --file - git/{data,gitea,repos}/ www/blog | curl -sS --user "${git_backup_user}:" --key "${git_backup_key}" --pubkey "${git_backup_key}.pub" -T - "${git_backup_location}/${upload_filename}"
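For completeness, restoring from one of these backups is just the reverse of the above: curl fetches the tarball over SFTP and pipes it straight into tar. Something along these lines should do the trick (the filename and target directory are placeholders, of course):

curl -sS --user "${git_backup_user}:" --key "${git_backup_key}" --pubkey "${git_backup_key}.pub" "${git_backup_location}/git-2022-01-01.tar.gz" | tar --extract --gzip --file - --directory path/to/restore/to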

That's it for this post. If you've got any questions or comments, please post them below.

lnav basics tutorial

Last year, I blogged about lnav. lnav is a fantastic tool for analysing log files, and after getting a question from CrimsonTome I thought I'd write up a longer-form tutorial on the basics of using it, as I personally find it exceedingly useful.

A screenshot of lnav at work

I'll be using an Ubuntu Server 20.04 instance for this tutorial, but anything Linuxy will work just fine. As mentioned in my previous post, it's available in the default repositories for your distribution. For apt-based systems, install like so:

sudo apt install lnav

Adjust for your own package manager. For example, pacman-based distributions should do this:

sudo pacman -S lnav

lnav operates on 1 or more input files. It's common to use logrotate to rotate log files, so this is what I'd recommend to analyse all your logs of a particular type in 1 go (here I analyse generic syslog logs):

lnav /var/log/syslog*

On your system you may need to sudo that. Once you've got lnav started, you may need to wait a moment for it to parse all the log files - especially if you have multi-million line logfiles.

After it's finished loading, we can get to analysing the logs at hand. The most recent logs appear at the bottom, and you'll notice that lnav will have coloured various parts of each log message - the reason for this will become apparent later on. lnav will also livestream new log lines from disk as they are written.

Use the arrow keys or scroll up / down to navigate log messages.

lnav operates via a command palette system, which if you use GitHub's Atom IDE (https://atom.io/) or Sublime Text (which is apparently where the feature originated) may already be familiar to you. In lnav's case, it's also crossed with a simple shell. Let's start with the most important command: :filter-out.

To execute a command, simply start typing. Commands in lnav are prefixed with a colon :. :filter-out takes a regular expression as its only argument, and filters out and hides all log lines which match the given regular expression. Sticking with our earlier syslog theme, here's an example:

:filter-out kernel:

You'll notice that once you've finished typing :filter-out, lnav will show you some help in a pane at the bottom of the screen showing you how to use that command.

:filter-out has a twin that's also useful to remember: :filter-in. Unlike :filter-out, :filter-in does the opposite - anything that doesn't match the specified pattern is hidden from view. Very useful if you know what kind of log messages you're looking for, and they are a (potentially very small) subset of a much larger and more unstructured log file.

:filter-in dovecot:

To delete all existing filters and reset the view, hit Ctrl + R.

lnav has many other built-in commands. Check out the full reference here: https://docs.lnav.org/en/latest/commands.html.

The other feature that lnav comes with is also the most powerful: SQLite3 support. By parsing common log file formats (advanced users can extend lnav by defining their own custom formats, but the specifics of how to do this are best left to the lnav documentation), it can enable you to query your log files by writing arbitrary SQLite queries!

To understand how to query a file, first hit the p key. This will show you how lnav has parsed the log line at the top of the screen (scroll as normal to look at different lines, and hit p again to hide). Here's an example:

Using this information, we can then make an SQL query against the data. Press semicolon ; to open the SQL query prompt, and then enter something like this:

SELECT * FROM syslog_log WHERE log_procname = 'gitea';

....hit the enter key when you're done composing your query, and the results should then appear! You can scroll through them just like you do with the regular log viewer - you just can't use :filter-in and :filter-out until you leave the query results window with the q key (this would be a really useful feature though!).

If you're running lnav on your Nginx logs (located in /var/log/nginx/ by default), then I find this query to be of particular use:

SELECT COUNT(cs_referer) AS count, cs_referer FROM access_log GROUP BY cs_referer ORDER BY COUNT(cs_referer) DESC

That concludes this basic tutorial on lnav. There are many more features that lnav offers:

  • :filter-expr for filtering the main view by SQL query
  • Analysing files on remote hosts over SSH
  • Search logs for a given string (press / and start typing)
  • Too many others to list here

Check out the full documentation here: https://docs.lnav.org/

Tips for training (large numbers of) AI models

As part of my PhD, I'm training AI models. The specifics as to what for don't particularly matter for this post (though if you're curious I recommend my PhD update blog post series). Over the last year or so, I've found myself training a lot of AI models, and dealing with a lot of data. In this post, I'm going to talk about some of the things I've found helpful and some of the things that are best avoided. Note that this is just a snapshot of my current practices - they will probably gradually change over time.

I've been working with Tensorflow.js and Tensorflow for Python on various Linux systems. If you're on another OS or not working with AI then what I say here should still be somewhat relevant.

Datasets

First up: a quick word on datasets. While this post is mainly about AI models, datasets are important too. Keeping them organised is vitally important. Keeping all the metadata that is associated with them is also vitally important. Keeping a good directory hierarchy is the best way to achieve this.

I also recommend sticking with a standard format that's easy to parse using your preferred language - and preferably lots of other languages too. JSON Lines is my personal favourite format for data - potentially compressed with Gzip if the filesize is very large.
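A nice side benefit of this combination is that a gzipped JSON Lines file can be streamed and inspected directly from the command line without decompressing it to disk first. For example (the filename and property name here are purely illustrative):

zcat dataset.jsonl.gz | head -n5 | jq .                 # pretty-print the first 5 records
zcat dataset.jsonl.gz | jq -r .label | sort | uniq -c   # count how many records have each label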

AI Models

There are multiple facets to the problem of wrangling AI models:

  1. Code that implements the model itself and supporting code
  2. Checkpoints from the training process
  3. Analysis results from analysing such models

All of these are important for different reasons - and are also affected by where it is that you're going to be training your model.

By far the most important thing I recommend doing is using Git with a remote such as GitHub and committing regularly. I can't stress enough how critical this is - it's the best way to both keep a detailed history of the code you've written and keep a backup at the same time. It also makes working on multiple computers easy. Getting into the habit of using Git for any project (doesn't matter what it is) will make your life a lot easier. At the beginning of a programming session, pull down your changes. Then, as you work, commit your changes and describe them properly. Finally, push your changes to the remote after committing to keep them backed up.
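In practice, that session workflow boils down to just a handful of commands (the paths and commit message are obviously just examples):

git pull                                # start of session: pull down the latest changes
# ....do some work....
git add path/to/changed_files
git commit -m "Add data augmentation to the input pipeline"
git push                                # keep the remote backup up to date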

Coming in at a close second is implementing a command line interface with the ability to change the behaviour of your model. This includes:

  • Setting input datasets
  • Specifying output directories
  • Model hyperparameters (e.g. input size, number of layers, number of units per layer, etc)

This is invaluable for running many different variants of your model quickly to compare results. It is also very useful when training your model in headless environments, such as on High Performance Computers (HPCs) like Viper, the one my University has.

For HPCs that use Slurm, a great tip here is that when you call sbatch on your job file (e.g. sbatch path/to/jobfile.job), it will preserve your environment. This lets you pass in job-specific parameters by writing a script like this:

#!/usr/bin/env bash
#SBATCH -J TwImgCCT
#SBATCH -N 1
#SBATCH -n 4
#SBATCH --gres=gpu:1
#SBATCH -o %j.%N.%a.out
#SBATCH -e %j.%N.%a.err
#SBATCH -p gpu05,gpu
#SBATCH --time=5-00:00:00
#SBATCH --mem=25600
# 25600 = 25GiB memory required

# Viper use Trinity ClusterVision: https://clustervision.com/trinityx-cluster-management/ and https://github.com/clustervision/trinityX
module load utilities/multi
module load readline/7.0
module load gcc/10.2.0
module load cuda/11.5.0

module load python/anaconda/4.6/miniconda/3.7

echo ">>> Installing requirements";
conda run -n py38 pip install -r requirements.txt;
echo ">>> Training model";
/usr/bin/env time --verbose conda run -n py38 src/my_model.py ${PARAMS}
echo ">>> exited with code $?";

....which you can call like so:

PARAMS="--size 4 --example 'something else' --input path/to/file --output outputs/20211002-resnet" sbatch path/to/jobfile.job

You may end up finding you have rather a lot of code behind your model - especially for data preprocessing depending on your dataset. To handle this, I go by 2 rules of thumb:

  1. If a source file of any language is more than 300 lines long, it should be split into multiple files
  2. If a collection of files do a thing together rather nicely, they belong in a separate Git repository.

To elaborate on these, having source code files become very long makes them difficult to maintain, understand, and re-use in future projects. Splitting them up makes your life much easier.

Going further, modularising your code is also an amazing paradigm to work with. I've broken many parts of my various codebases I've implemented for my PhD out as open-source projects on npm (the Node Package Manager) - most notably applause-cli, terrain50, terrain50-cli, nimrod-data-downloader, and twitter-academic-downloader.

By making them open-source, I'm not only making my research and methods more transparent and easier for others to independently verify, but I'm also allowing others to benefit from them (and potentially improve them) too! As they say, there's no need to re-invent the wheel.

Eventually, I will be making the AI models I'm implementing for my PhD open-source too - but this will take some time as I want to ensure that the models actually work before doing so (I've got 1 model I implemented fully and documented too, but in the end it has a critical bug that means the whole thing is useless.....).

Saving checkpoints from the training process of your model is also essential. I recommend doing so at the end of each epoch. As part of this, it's also useful to have a standard format for your output artefacts from the training process. Ideally, these artefacts can be used to identify precisely what dataset and hyperparameters that model and checkpoints were trained with.

At the moment, my models output something like this:

+ output_dir/
    + summary.txt       Summary of the layers of the model and their output shapes
    + metrics.tsv       TSV file containing training/validation loss/accuracy and epoch numbers
    + settings.toml     The TOML settings that the model was trained with
    + checkpoints/      Directory containing the checkpoints - 1 per epoch
        + checkpoint_e1_val_acc0.699.hdf5   Example checkpoint filename [Tensorflow for Python]
        + 0/            OR, if using Tensorflow.js instead of Tensorflow for Python, 1 directory per checkpoint
    + this_run.log      Logfile for this run [depends on where the program is being executed]

settings.toml leads me on to settings files. Personally I use TOML for mine, and I use 2 files:

  • settings.default.toml - Contains all the default values of the settings, and is located alongside the code for my model
  • example.toml - Contains custom settings that override values in the default settings file; it's passed in at runtime using my standard --config CLI argument.

Having a config file is handy when you have multiple dataset input files that rarely change. Generally speaking you want to ensure that you minimise the number of CLI arguments that you have to specify when running your model, as then it reduces cognitive load when you're training many variants of a model at once (I've found that wrangling dozens of different dataset files and model variants is hard enough to focus on and keep organised :P).

Analysis results are the final aspect here that it's important to keep organised - and the area in which I have the least experience. I've found it's important to keep track of which model checkpoint it was that the analysis was done with and which dataset said model was trained on. Keeping the entire chain of dataflow clear and easy to follow is difficult because the analysis one does is usually ad-hoc, and often has to be repeated many times on different model variants.

For this, so far I generate statistics and some graphs on the command line. If you're not already familiar with the terminal / command line of your machine, I can recommend checking out my earlier post Learn Your Terminal, which has a bunch of links to tutorials for this. In addition, jq is an amazing tool for manipulating JSON data. It's not installed by default on most systems, but it's available in most default repositories and well worth the install.
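As an example of the kind of thing jq is good for: if an analysis step emits a JSON Lines file of per-sample predictions, summary statistics are a one-liner away (the filename and property names here are hypothetical - adjust them to match your own output format):

jq -s 'map(.accuracy) | add / length' predictions.jsonl     # mean accuracy across all samples
jq -s 'map(.loss) | min' predictions.jsonl                  # lowest loss seen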

For some graphs, I use Gnuplot. Usually though this is only for more complex plots, as it takes a moment to write a .plt file to generate the graph I want in it.

I'm still looking for a good tool that makes it easy to generate basic graphs from the command line, so please get in touch if you've found one.

I'm also considering integrating some of the basic analysis into my model training program itself, such that it generates e.g. confusion matrices automatically as part of the training process. matplotlib seems to do the job here for plotting graphs in Python, but I have yet to find an equivalent library for Javascript. Again, if you've found one please get in touch by leaving a comment below.

Conclusion

In this post, I've talked about some of the things I've found helpful so far while I've been training models. From using Git and keeping output artefacts organised, to implementing command line interfaces and wrangling datasets, implementing the core AI model itself is actually only a very small part of an AI project.

Hopefully this post has given you some insight into the process of developing an AI model / AI-powered system. While I've been doing some of these things since before I started my PhD (like Git), others have taken me a while to figure out - so I've noted them down here so that you don't have to spend ages figuring out the same things!

If you've got some good tips you'd like to share on developing AI models (or if you've found the tips here in this blog post helpful!), please do share them below.

Securing your port-forwarded reverse proxy

Recently, I answered a question on Reddit about reverse proxies, and said answer was long enough and interesting enough to be tidied up and posted here.

The question itself concerns port-forwarded reverse proxies and internal services:

Hey everyone, I've been scratching my head over this for a while.

If I have internal services which I've mapped a subdomain like dashboard.domain.com through NGINX but haven't enabled the CNAME on my DNS which would map my dashboard.domain.com to my DDNS.

To me this seems like an external person can't access my service because dashboard.domain.com wouldn't resolve to an IP address but I'm just trying to make sure that this is the case.

For my internal access I have a local DNS that maps my dashboard.domain.com to my NGINX.

Is this right?

--u/Jhonquil

So to answer this question, let's first consider an example network architecture:

So we have a router sitting between the Internet and a server running Nginx.

Let's say you've port forwarded to your Nginx instance on 80 & 443, and Nginx serves 2 domains: wiki.bobsrockets.com and dashboard.bobsrockets.com. wiki.bobsrockets.com might resolve both internally and externally for example, while dashboard.bobsrockets.com may only resolve internally.

In this scenario, you might think that dashboard.bobsrockets.com is safe from people accessing it outside, because you can't enter dashboard.bobsrockets.com into a web browser from outside to access it.

Unfortunately, that's not true. Suppose an attacker catches wind that you have an internal service called dashboard.bobsrockets.com running (e.g. through crt.sh, which makes certificate transparency logs searchable). With this information, they could for example modify the Host header of an HTTP request with curl, like this:

curl --header "Host: dashboard.bobsrockets.com" http://wiki.bobsrockets.com/

....which would cause Nginx to serve dashboard.bobsrockets.com to the external attacker! The same can also be done with HTTPS with a bit more work.

That's no good. To rectify this, we have 2 options. The first is to run 2 separate reverse proxies, with all the internal-only content on the first and the externally-viewable stuff on the second. Most routers that offer the ability to port forward also offer the ability to do transparent port translation too, so you could run your external reverse proxy on ports 81 and 444 for example.

This can get difficult to manage though, so I recommend the following:

  1. Force redirect to HTTPS
  2. Then, use HTTP Basic Authentication like so:
server {
    # ....
    satisfy any;
    allow   192.168.0.0/24; # Your internal network IP address block
    allow   10.31.0.0/16; # Multiple blocks are allowed
    deny    all;
    auth_basic              "Example";
    auth_basic_user_file    /etc/nginx/.passwds;

    # ....
}

This allows connections from your local network through no problem, but requires a username / password for access from outside.
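If you haven't got a password file yet, the htpasswd utility from the apache2-utils package will create one for you (the username here is just an example):

sudo apt install apache2-utils
sudo htpasswd -c /etc/nginx/.passwds bob    # -c creates the file; omit it when adding more users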

For your internal services, note that you can get a TLS certificate for HTTPS for services that run inside by using Let's Encrypt's DNS-01 challenge. No outside access is required for your internal services, as the DNS challenge is completed by automatically setting (and then removing again afterwards) a DNS record, which proves that you have ownership of the domain in question.
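With certbot, requesting a certificate via the DNS-01 challenge looks something like this (shown with the manual plugin for simplicity - in practice a DNS provider plugin is much nicer, as it automates creating and removing the record so renewals work unattended):

sudo certbot certonly --manual --preferred-challenges dns -d dashboard.bobsrockets.com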

Just because a service is running on your internal network doesn't mean to say that running HTTPS isn't a good idea - defence in depth is absolutely a good idea.
