Goodbye, Flash.

On the 25th January 2017, Adobe announced the end of Flash Player (and I blogged about it). A few days ago, it all officially ended. With everything else going on right now you might not have noticed, but a crucial piece of the web's history has been shut down.

For those not in the know, before web browsers were as capable as they are today (before we had the HTML5 Canvas and all the other modern APIs), Adobe put together a browser plugin called Flash Player.

This plugin - at the time - revolutionised the way content was displayed in your browser. Far from static pages containing animated GIFs or a little bit of Javascript to add the odd effect, Flash allowed developers to deliver interactive content in a way that wasn't possible before - and all you had to do was install a simple browser plugin.

I'm quite sure that sites like Newgrounds, Orisinal, and Nitrome wouldn't have existed if it weren't for Flash.

It was recognised that an update to the HTML 4.01 standard (and associated standards, such as CSS and Javascript) was clearly needed. Fast forward a few years, and HTML5 was born (it was about this time that my programming journey started, and although I played with the Flex SDK and ActionScript 3 - the language Flash Player content was written in - I ultimately decided to learn Javascript instead).

With it came all sorts of Javascript APIs such as WebGL and the HTML5 Canvas - and the beginning of the end for Flash. Suddenly, Flash didn't seem necessary any more - since browsers had become capable enough to do what Flash did on their own, and libraries such as three.js and game engines such as Phaser were invented to take advantage of these new capabilities.

Naturally, Flash's demise was sure to follow. I agree that turning Flash off is the best thing to do. At the same time though I can't help but feel that there should have been more fanfare about it, and the legacy that Flash is leaving behind.

Thankfully, due to the amazing efforts of BlueMaxima's Flashpoint, over 70 thousand games have been preserved for all to see and play well into the future.

Goodbye, Flash. I'll miss you - you were the first chapter in a much longer tale.

NAS, Part 2: Assembly and Installation

Welcome back! This is part 2 of a series of posts about my new NAS (network attached storage) device I'm building. If you haven't read it yet, I recommend you go back and read part 1, in which I talk about the hardware I'm using.

Since the Fractal Design Node 804 case came first, I was able to install the parts into it as they arrived. First up was the motherboard (an ASUS PRIME B450M-A) and CPU (an AMD Athlon 3000G).

The motherboard was a pain. As I'd read beforehand, the middle panel of the case has some flex in it, so you've got to hold it in place with one hand while you're screwing the motherboard in. This in and of itself wasn't an issue at all, but the screws for the motherboard were really stiff. I think this was just my particular motherboard, but it was annoying.

Thankfully I managed it though, and then set to work installing the CPU. This went well - the CPU came with thermal paste on top already, so I didn't need to buy my own. The installation process for the stock CPU heatsink + fan was unfamiliar though, so it took me a moment to decipher how the mechanism worked.

Following this, I connected the front ports from the case up to the motherboard (consulting my motherboard's documentation showed me where I needed to plug these in - I remember this being something I struggled with when I last built an (old) PC when doing some IT technician work experience some years ago). The RAM - while a little stiff (to be expected) - went in fine too. I might buy another stick later if I run into memory pressure, but I thought a single 8GB stick would be a good place to start.

The case came with a dedicated fan controller board that has a high / medium / low switch on the back too, so I wired up the 3 included Noctua case fans to this instead of the slots on the motherboard. The CPU fan (nothing special yet - just the stock fan that came with the CPU) went into the motherboard though, as the fan controller didn't have room - and I thought that the motherboard would be better placed to control the speed of that one.

The inside of the 2 sides of the case.

(Above: The inside of the 2 sides of the case. Left: The 'hot' side, Right: The 'cold' side.)

The case is split into 2 sides: 1 for 'hot' components (e.g. the motherboard and CPU), and another for 'cold' components (e.g. the HDDs and PSU). Next up were the hard disks - so I mounted the SSD for the operating system to the base of the case in the 'hot' side, as the carriage in the cold side fits only 3.5 inch disks, and my SSD is a 2.5 inch disk. While this made the cabling slightly awkward, it all worked out in the end.

For the 3.5 inch HDDs (for data storage), I found I was unable to mount them with the included pieces of bracket metal that allow you to put screws into the bottom set of holes - as the screws wouldn't fit through the top holes. I just left the metal bracket pieces out and mounted the HDDs directly into the carriage, and it seems to have worked well so far.

The PSU was uneventful too. It fit nicely into the space provided, and the semi-modular nature of the cables provided helped tremendously to avoid a mess of cables all over the place as I could remove the cables I didn't need.

Finally, the DVD writer had some stiff screws, but it seemed to mount well enough (just a note: I've been having an issue I need to investigate with this DVD drive whereby I can't take a copy of a disk - e.g. the documentation CD that came with my motherboard - with dd, as it reports an IO error. I need to investigate this further, so more on that in a later post).

The installation of the DVD drive completed the assembly process. To start it up for the first time, I connected my new NAS to my television temporarily so that I could see the screen. The machine booted fine, and I dove straight into the BIOS.

The BIOS that comes with the ASUS motherboard I bought

(Above: The BIOS that comes with the ASUS motherboard, before the clock was set by Ubuntu Server 20.04 - which I had yet to install)

Unlike the one in my new laptop, the BIOS that comes with the ASUS motherboard is positively delightful. It has all the features you'd need, laid out in a friendly interface. I observed some minor input lag, but considering this is a BIOS we're talking about here I can definitely overlook that. It even has an online update feature, where you can plug in an Ethernet cable and download + install BIOS updates from the Internet.

I tweaked a few settings here, and then rebooted into my flash drive - onto which I loaded an Ubuntu Server 20.04 ISO. It booted into this without complaint (unlike a certain laptop I'm rather unhappy with at the moment), and then I selected the appropriate ISO and got to work installing the operating system (want your own multiboot flash drive? I've blogged about that already! :D).

In the next post, I'm going to talk about the filesystem I ultimately chose. I'm also going to show and discuss some performance tests I ran using fio following this Ars Technica guide.
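
If you'd like to experiment while you wait, here's a rough sketch of the kind of fio invocation that guide walks through - the job name, block size, and file size below are just placeholder values to tweak for your own setup:

fio --name=random-write --ioengine=posixaio --rw=randwrite \
    --bs=4k --size=1g --numjobs=1 --iodepth=1 \
    --runtime=60 --time_based --end_fsync=1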

Cluster, Part 10: Dockerisification | Writing Dockerfiles

Hey there - welcome to 2021! I'm back with another cluster post. In double digits too! I think this is the longest series yet on my blog. Before we start, here's a list of all the posts in the series so far:

We've got a pretty cool setup going so far! With Nomad for task scheduling (part 7), Consul to keep track of what's running where (part 6), and wesher keeping communications secured (part 4, although defence in depth says that we'll be returning later to shore up some stuff here), we have a solid starting point to work from. And it's only taken 9 blog posts to get to this point :P

In this post, we'll be putting all our hard work to use by looking at the basics of writing Dockerfiles. It's taken me quite a while to get my head around them, so I want to take a moment here to document some of the things I've learnt. A few other things that I want to talk about soon are Hashicorp Vault (it's still giving me major headaches trying to understand the Nomad integration though, so this may be a while), obtaining TLS certificates, and tying in with the own your code series by showing off the Docker image management script setup I've worked into my Laminar CI instance, which makes it easy to rebuild images and all their dependents.

Anyway, Dockerfiles. First question: what? A Dockerfile is essentially a file - usually named Dockerfile - written in a domain-specific language that defines how a Docker image can be built. Note that I use the term image here and not container:

  • Image: A template containing a bunch of files and directories, from which containers can be run
  • Container: A copy of an image that is currently running on a host system.

In short: A container is a running image, and a Docker image is the bit that a container spins up from.

Second question: why? The answer is a few different reasons. Although it adds another layer of indirection and complication, it also allows us to square applications away such that we don't care about what host they run on (too much).

A great example here would be a static file web server. In our case, this is particularly useful because Fabio - as far as I know - isn't actually capable of serving files from disk. Personally I have a fork of a rather nice dashboard I'd like to have running for my cluster too, so I found that it fits perfectly to test the waters.

Next question: how? Well, let's break the process down:

  1. Install Node.js
  2. Install the serve npm package

Thankfully, I've recently packaged Node.js in my apt repository (finally! It's only taken me multiple years.....). Since we might want to build lots of different Node.js based container images, it makes sense to make Node.js its own separate container. I'm also using my apt repository in other container images too which don't necessarily need Node.js, so I've opted to put my apt repository into my base image (If I haven't mentioned it already, I'm using minideb as my base image - which I build with a patch to make it support Raspbian - which is now called Raspberry Pi OS. It's confusing).

To better explain the plan, let's use a diagram:

(Above: A diagram I created. Link to editing file - don't forget this blog is licenced under CC-BY-SA.)

Docker images are always based on another Docker image. The node-serve Docker image we intend to create will be based on a minideb-node Docker image (which we'll also be creating), which itself will be based on the minideb base image. Base images are special, as they don't have a parent image. They are usually imported via a .tar.gz image for example, but that's a story for another time (also for another time are images based on scratch, a special image that's completely empty).

We'll then push the final node-serve Docker image to a Docker registry. I'm running my own private Docker registry, but you can use the Docker Hub or set up a private Docker registry of your own.
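
If you fancy running a private registry yourself to experiment with, the official registry image is the easiest route I know of. Something along these lines spins up a basic, unauthenticated, HTTP-only instance for testing (the /srv/registry host path is just an example - and don't expose a setup like this to the Internet as-is):

docker run -d --restart=always --name registry \
    -p 5000:5000 \
    -v /srv/registry:/var/lib/registry \
    registry:2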

With this in mind, let's start with a Docker image for Node.js:

ARG REPO_LOCATION

FROM ${REPO_LOCATION}minideb

RUN install_packages libatomic1 nodejs-sbrl

Let's talk about each of the above commands in turn:

  1. ARG REPO_LOCATION: This brings in an argument which is specified at build time. Here we want to allow the user to specify the location of a private Docker registry to pull the base (or parent) image from to begin the build process with.
  2. FROM ${REPO_LOCATION}minideb: This specifies the base (or parent) image to start the build with.
  3. RUN install_packages libatomic1 nodejs-sbrl: The RUN command runs the specified command inside the Docker container, saving a new layer in the process (more on those later). In this case, we call the install_packages command, which is a helper script provided by minideb to make package installation easier.

Pretty simple! This assumes that the minideb base image you're using has my apt repository set up already, which may not be the case. To this end, we'd like to set that up automatically. To do this, we'll need to use an intermediate image. This took me some time to get my head around, so if you're unsure about anything, please comment below.

Let's expand on our earlier attempt at a Dockerfile:

ARG REPO_LOCATION

FROM ${REPO_LOCATION}minideb AS builder

RUN install_packages curl ca-certificates

RUN curl -o /srv/sbrl.asc https://apt.starbeamrainbowlabs.com/aptosaurus.asc

FROM ${REPO_LOCATION}minideb

COPY --from=builder /srv/sbrl.asc /etc/apt/trusted.gpg.d/sbrl-aptosaurus.asc

RUN echo "deb https://apt.starbeamrainbowlabs.com/ /" > /etc/apt/sources.list.d/sbrl.list && \
    install_packages libatomic1 nodejs-sbrl;

This one is more complicated, so let's break it down. Here, we have an intermediate Docker image (which we name builder via the AS builder bit at the end of the 1st FROM) in which we download and install curl (the 1st RUN command there), followed by a second image in which we copy the file we downloaded from the first Docker image and place it in a specific place in the second (the COPY directive).

Docker always reads Dockerfiles from top to bottom and executes them in sequence, so it will assume that the last image created is the final one - i.e. from the last FROM directive. Every FROM directive starts afresh from a brand-new copy of the specified parent image.

We've also expanded the RUN directive at the end of the file there to echo the apt sources list file out for my apt repository. We've done it like this in a single RUN command and not 2, because every time you add another directive to a Dockerfile (except ARG and FROM), it creates a new layer in the resulting Docker image. Minimising the number of layers in a Docker image is important for performance, hence the obscurity here in chaining commands together. To build our new Dockerfile, save it to a new empty directory. Then, execute this:

cd path/to/directory/containing_the_dockerfile;
docker build --pull --tag "minideb-node" .

If you're using a private registry, add --build-arg "REPO_LOCATION=registry.example.com:5000/" just before the . there at the end of the command and prefix the tag with registry.example.com:5000/. If you're developing a new Docker image and having trouble with the cache (Docker caches the result of directives when building images), add --no-cache.
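
Put together, the private-registry version of the build command ends up looking something like this (using the same placeholder registry address as above):

docker build --pull \
    --build-arg "REPO_LOCATION=registry.example.com:5000/" \
    --tag "registry.example.com:5000/minideb-node" .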

Then, push it to the Docker registry like so:

docker push "minideb-node"

Again, prefix minideb-node there with registry.example.com:5000/ should you be using a private Docker registry.
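
For example, if you tagged the image without the prefix earlier, you can re-tag it and then push it to the placeholder registry address from above like so:

docker tag minideb-node registry.example.com:5000/minideb-node
docker push registry.example.com:5000/minideb-node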

Now, you should be able to start an interactive session inside your new Docker container:

docker run -it --rm minideb-node

As before, prefix minideb-node there with registry.example.com:5000/ if you're using a private Docker registry.

Now that we've got our Docker image for Node.js, we can write another Dockerfile for serve, our static file HTTP server. Let's take a look:

ARG REPO_LOCATION

FROM ${REPO_LOCATION}minideb-node

RUN npm install --global serve && rm -rf "$(npm get cache)";

VOLUME [ "/srv" ]

USER 80:80

ENV NODE_ENV production
WORKDIR /srv
ENTRYPOINT [ "serve", "-l", "5000" ]

This looks similar to the previous Dockerfile, but with a few extra bits added on. Firstly, we use a RUN directive to install the serve npm package and delete the NPM cache in a single command (since we don't want the npm cache sticking around in the final Docker image).

We then use a VOLUME declaration to tell Docker that we expect the /srv directory to have a volume mounted to it. A volume here is a directory from the host system that will be mounted into the Docker container before it starts running. In this case, it's the web root that we'll be serving files from.

A USER directive tells Docker what user and group IDs we want to run all subsequent commands as. This is important, as it's a bad idea to run Docker containers as root.

The ENV directive there is just to tell Node.js it should run in production mode. Some Node.js applications have some optimisations they enable when this environment variable is set.

The WORKDIR directive defines the current working directory for future commands. It functions like the cd command in your terminal or command line. In this case, the serve npm package always serves from the current working directory - hence we set the working directory here.

Finally, the ENTRYPOINT directive tells Docker what command to execute by default. The ENTRYPOINT can get quite involved and complex, but we're keeping it simple here and telling it to execute the serve command (provided by the serve npm package, which we installed globally earlier in the Dockerfile). We also specify the port number we want serve to listen on with -l 5000 there.

That completes the Dockerfile for the serve npm package. Build it as before, and then you should be able to run it like so:

docker run -it --rm -v /absolute/path/to/local_dir:/srv node-serve

As before, prefix node-serve with the address of your private Docker registry if you're using one. The -v bit above defines the Docker volume that mounts the webroot directory inside the Docker container.

Then, you should be able to find the IP address of the Docker container and enter it into your web browser to connect to the running server!

The URL should be something like this: http://IP_ADDRESS_HERE:5000/.
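
One way of finding that IP address - assuming the container is attached to the default bridge network - is to ask docker inspect, replacing CONTAINER_NAME_HERE with the name docker ps reports for your running container:

docker inspect --format '{{ range .NetworkSettings.Networks }}{{ .IPAddress }}{{ end }}' CONTAINER_NAME_HERE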

If you're not running Docker on the same machine as your web browser is running on, then you'll need to do some fancy footwork to get it to display. It's at this point that I write a Nomad job file, and wire it up to Fabio my load balancer.

In the next post, we'll talk more about Fabio. We'll also look at the networking and architecture that glues the whole system together. Finally, we'll look at setting up HTTPS with Let's Encrypt and the DNS-01 challenge (which I found relatively simple - but only once I'd managed to install a new enough version of certbot - which was a huge pain!).

Happy Christmas 2020!

I hope you had a nice Christmas, and I wish you a more restful and brighter new year.

I'm taking a much needed break at the moment, so the normal regularly scheduled blog posts will resume next week :-)

Things hopefully coming up soon in blog posts include 2 cluster posts (1 about Docker, and another about HTTPS in Fabio), and another post in my NAS series - as I'm now set up with btrfs in raid1 mode, which is working rather well (I'm just in the process of moving everything over at the moment).

A Christmas tree

Digitising old audio CDs on a Linux Server

A number of people I know own quite a few audio / music CDs. This is great, but unfortunately laptops increasingly aren't coming with an optical drive any more, which makes listening to said CDs challenging. To this end, making a digital copy to add to their personal digital music collections would be an ideal solution.

Recently, I built a new storage NAS (which I'm still in the process of deciding on a filesystem for, but I think I might be going with btrfs + raid1), and the Fractal Design Node 804 case I used has a dedicated space for a slimline DVD writer (e.g. like the one you might find in a car). I've found this to be rather convenient for making digital copies of old audio CDs, and wanted to share the process by which I do it in case you'd like to do it too.

To start, I'm using Ubuntu Server 20.04. This may work on other distributions too, but there are a whole bunch of packages you'll need to install - the names and commands for which you may need to convert for your distribution.

To make the digital copies, we'll be using abcde. I can't find an updated website for it, but it stands for "A Better CD Encoder". It neatly automates much of the manual labor of digitising CDs - including the downloading of metadata from the Internet. To tidy things up after abcde has run to completion, we'll be using ffmpeg for conversion and eyeD3 for mp3 metadata manipulation.

To get started, let's install some stuff!

sudo apt install --no-install-recommends abcde
sudo apt install ffmpeg mkcue eyed3 flac glyrc cdparanoia imagemagick

Lots of dependencies here. Many of them are required by abcde for various features we'll be making use of.

Next, insert the audio CD into the DVD drive. abcde assumes your DVD drive is located at /dev/sr0 I think, so if it's different you'll have to adjust the flags you pass to it.
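
If I remember rightly, the flag in question is -d - so something along these lines should point abcde at a different optical drive (adjust the device path to match your system):

abcde -d /dev/sr1 -o flac -B -b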

Once done, we can call abcde and get it to make a digital copy of our CD. I recommend here that you cd to a new blank directory, as abcde creates 1 subdirectory of the current working directory for each album it copies. When you're ready, start abcde:

abcde -o flac -B -b

Here, we call abcde and ask it to save the digital copy as flac files. The reason we do this and not mp3 directly is that I've observed abcde gets rather confused with the metadata that way. By saving to flac files first, we can ensure the metadata is saved correctly.

The arguments above do the following:

  • -o flac: Save to flac files
  • -B: Automatically embed the album art into the saved music files if possible
  • -b: Preserve the relative volume differences between tracks in the album (if replaygain is enabled, which by default I don't think it is)

It will ask you a number of questions interactively. Once you've answered them, it will get to work copying the audio from the CD.

When it's done, everything should be good to go! However flac files can be large, so something more manageable is usually desired. For this, we can mass-convert our flac files to MP3. This can be done like so:

find -iname '*.flac' -type f -print0 | nice -n20 xargs -P "$(nproc)" --null --verbose -n1 -I{} sh -c 'old="{}"; new="${old%.*}.mp3"; ffmpeg -i "${old}" -ab 320k -map_metadata 0 -id3v2_version 3 "${new}";';

There's a lot to unpack here! Before I do though, let's turn it into a bash function real quick which we can put in ~/.bash_aliases for example to make it easy to invoke in the future:

# Usage:
#   flac2mp3
#   flac2mp3 path/to/directory
flac2mp3() {
    dir="${1}";
    if [[ -z "${dir}" ]]; then dir="."; fi
    find "${dir}" -iname '*.flac' -type f -print0 | nice -n20 xargs -P "$(nproc)" --null --verbose -n1 -I{} sh -c 'old="{}"; new="${old%.*}.mp3"; ffmpeg -i "${old}" -ab 320k -map_metadata 0 -id3v2_version 3 "${new}";';
}

Ah, that's better. Now, let's deconstruct it and figure out how it works. First, we have a dir variable which, by default, is set to the current working directory.

Next, we use the one-liner from before to mass-convert all flac files in the target directory recursively to mp3. It's perhaps easier to digest if we separate it out into multiple lines:

find "${dir}" -iname '*.flac' -type f -print0   # Recursively find all flac files, delimiting them with NULL (\0) characters
    | nice -n20 # Run the job at the lowest CPU priority, so it doesn't hog the machine
        xargs # for each line of input, execute a command
            --null # Lines are delimited by NULL (\0) characters
            --verbose # Print the command that is about to be executed
            -P "$(nproc)" # Parallelise across as many cores as the machine has
            -n1 # Only pass 1 line to the command to be executed
            -I{} # Replace {} with the filename in question
            sh -c ' # Run this command
                old="{}"; # The flac filename
                new="${old%.*}.mp3"; # Replace the .flac file extension with .mp3
                ffmpeg # Call ffmpeg to convert it to mp3
                    -i "${old}" # Input the flac file
                    -ab 320k # Encode at 320kbps, the maximum bitrate the mp3 format supports
                    -map_metadata 0 # Copy all the metadata
                    -id3v2_version 3 # Set the metadata tags version (may not be necessary)
                    -c:v copy -disposition:v:0 attached_pic # Copy the album art if it exists
                    "${new}"; # Output to mp3
            '; # End of command to be executed

Obviously it won't actually work when exploded and commented like this, but hopefully it gives a sense of how it functions.

I recommend checking that the album art has been transferred over. The -c:v copy -disposition:v:0 attached_pic bit in particular is required to ensure this happens (see this Unix Stack Exchange answer to a question I asked).
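
A quick way to spot-check a file is to ask eyeD3 to print its tags - if the art made it across, the output should mention a FRONT_COVER image (the file path here is just an example):

eyeD3 "path/to/album_dir/01 - Some Track.mp3"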

Sometimes abcde is unable to locate the album art too, so you may need to find and download it yourself. If so, then this one-liner may come in handy:

find . -type f -iname '*.mp3' -print0 | xargs -0 -P "$(nproc)" eyeD3 --add-image "path/to/album_art.jpeg:FRONT_COVER:";

Replace path/to/album_art.jpeg with the path to the album art. Wrapping it in a bash function ready for ~/.bash_aliases makes it easier to use:

mp3cover() {
    cover="${1}";
    dir="${2}";

    if [[ -z "${cover}" ]] || [[ -z "${dir}" ]]; then
        echo "Usage:" >&2;
        echo "    mp3cover path/to/cover_image.jpg path/to/album_dir" >&2;
        return 1;
    fi

    find "${dir}" -type f -iname '*.mp3' -print0 | xargs -0 -P "$(nproc)" eyeD3 --add-image "${cover}:FRONT_COVER:"
}

Use it like this:

mp3cover path/to/cover_image.jpg path/to/album_dir

By this point, you should have successfully managed to make a digital copy of an audio CD. If you're experiencing issues, comment below and I'll try to help out.

Note that if you experience any issues with copy protection (I think this is only DVDs / films and not audio CDs, which I don't intend to investigate), I can't and won't help you, because it's there for a reason (even if I don't like it) and it's illegal to remove it - so please don't comment in this specific case.

NAS, Part 1: We need a bigger rocket

In my cluster series of posts, I've been talking about how I've built a Raspberry Pi-based cluster for running compute tasks (latest update: I've got Let's Encrypt working with the DNS-01 challenge, stay tuned for a post on that soon). Currently, this has been backed by a Raspberry Pi 3 with a 1TB WD PiDrive attached. This has a number of issues:

  • The Raspberry Pi 3 has a 100Mbps network port
  • It's not redundant
  • I'm running out of storage space

I see 2 ways of solving these issues:

  1. Building a clustered file system, with 1 3.5 inch drive per Pi (or Odroid HC2 perhaps)
  2. Building a more traditional monolithic NAS

Personally, my preference here is option #2, but unfortunately due to some architectural issues in my house (read: the wiring needs redoing by an electrician) I don't actually have access to the number of wall sockets I'd need to put together a clustered setup. If I get those issues sorted, I'll certainly take a look at upgrading - but for now I've decided that I'm going to put together a more traditional monolithic NAS (maybe it can become the backup device in future, who knows) as it will only require a single wall socket (the situation is complicated. Let's just move on).

To this end, I decided to start with a case and go from there. Noise is a big concern for me, so I chose the Fractal Design Node 804, as it has a number of key features:

  • It has lots of space for disks
  • It comes with some quiet fans
  • The manufacturer appears to be quite popular and reputable

From here, I picked the basic components for the system using PC Part Picker. I haven't actually built an amd64 system from scratch before - I use laptops as my main device (see my recent review of the PC Specialist Proteus VIII), and Raspberry Pis (and an awesome little 2nd hand Netgear GS116v2 switch) currently form the backbone of my server setup.

These components included:

  • An ASUS PRIME B450M-A motherboard: 6 x SATA ports, AM4 CPU socket
  • An AMD Athlon 3000G: I don't need much compute horsepower in this build, since it's for storage (I would have got an Athlon 200GE instead as it's cheaper, but they were all out of stock)
  • 8GB Corsair Vengeance LPX DDR4 2666MHz RAM: The highest frequency the CPU supports - I got a single stick here to start with. I'll add additional sticks as and when I need them.
  • 120GB Gigabyte SSD: For the OS. Don't need a lot of storage here, since all the data is going to be on 3.5 inch HDDs instead
  • 3 x 4TB WD Red Plus WD40EFRX (CMR): These are my main data storage drives. I'm starting with 3 4TB drives, and I'll add more as I need them. The Node 804 case (mentioned above) supports up to 10 disks, apparently - so I should have plenty of space.
  • SeaSonic CORE GM 500 W 80+ Gold PSU: The most efficient PSU I could afford. I would have loved an 80+ titanium (apparently they are at least 94% efficient at 50% load), but at £250+ it's too much for my budget.
  • LG GS40N DVD writer: Apparently the Node 804 case has a slimline DVD drive slot (i.e. like one you might find in a car). It wasn't too expensive, and being able to ingest CDs / DVDs is appealing.

For the storage there, in particular my (initial) plan is to use OpenZFS in RAIDZ mode, which has a minimum requirement of 3 drives. Using an online calculator suggests that with the above drives I'll have 8TB of usable capacity. Initial research does suggest though that expanding a ZFS storage pool may not be as easy as I thought it was (related, see also), so more research is definitely needed before I commit to a single filesystem / set of settings there.
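
That 8TB figure comes from the usual back-of-the-envelope RAIDZ1 rule: one drive's worth of space goes to parity, and the rest is usable (ignoring metadata and slop space overhead). A quick shell calculation confirms it:

# RAIDZ1 usable capacity ~= (number of drives - 1) * capacity per drive
echo "$(( (3 - 1) * 4 ))TB" # prints 8TB for 3 x 4TB drives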

I've heard of BTRFS too, but I've also heard of some stability and data loss issues with it. That was several years ago though, so I'll be reviewing its suitability again before making a decision here.

In future posts, I'm going to talk about my experience assembling the build. I'm also going to look at how I eventually setup the filesystem (as of yet which filesystem I'll choose is still undecided). I'll also be running some tests on the setup to evaluate how well it performs and handles failure. Finally, I may make a bonus post in this series about the challenges I encounter migrating my existing (somewhat complicated) data storage setup to the new NAS I build.

Found this interesting? Got a suggestion? Comment below!

Resizing Encrypted LVM Partitions on Linux

I found recently that I needed to resize some partitions on my new laptop as the Ubuntu installer helpfully decided to create only a 1GB swap partition, which is nowhere near enough for hibernation (you need a swap partition that's at least as big as your computer's RAM in order to hibernate). Unfortunately resizing my swap partition didn't allow me to hibernate successfully in the end, but I thought I'd still document the process here for future reference should I need to do it again in the future.

The key problem with resizing one's root partition is that you can't resize it without unmounting it, and you can't unmount it without turning off your computer. To get around this, we need to use a live distribution of Ubuntu. It doesn't actually matter how you boot into this - personally my preferred method is by using a multiboot USB flash drive, but you could just as well flash the latest ubuntu ISO to a flash drive directly.

Before you start though, it's worth mentioning that you really should have a solid backup strategy. While everything will probably be fine, there is a chance that you'll make a mistake and wind up losing a lot of data. My favourite website that illustrates this is The Tao of Backup. Everyone who uses a computer (technically minded or not) should read it. Another way to remember it is the 3-2-1 rule: 3 backups, in 2 locations, with 1 off-site (i.e. in a different physical location).

Anyway, once you've booted into a live Ubuntu environment, open the terminal, and start a root shell. Your live distribution should come with LUKS and LVM already, but just in case it doesn't execute the following:

sudo apt update && sudo apt install -y lvm2 cryptsetup

I've talked about LVM recently when I was setting up an LVM-managed partition on an extra data hard drive for my research data. If you've read that post, then the process here may feel a little familiar to you. In this case, we're interacting with a pre-existing LVM setup that's encrypted with LUKS instead of setting up a new one. The overall process looks a bit like this:

A flowchart showing the process we're going to follow. In short: open luks → LVM up → make changes → LVM down → close luks → reboot

With this in mind, let's get started. The first order of business is unlocking the LUKS encryption on the drive. This is done like so:

sudo modprobe dm-crypt
sudo cryptsetup luksOpen /dev/nvme0n1p3 crypt1

The first command there ensures that the LUKS kernel module is loaded if it isn't already, and the second unlocks the LUKS-encrypted drive. Replace /dev/nvme0n1p3 with the path to your LVM partition - e.g. /dev/sda1 for instance. The second command will prompt you for the password to unlock the drive.
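
If you're not sure which physical partition holds the LUKS container, lsblk can help - the partition you're after should show a filesystem type of crypto_LUKS (this is a generic listing command, nothing specific to my setup):

sudo lsblk -f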

It's worth mentioning here, before continuing, the difference between physical partitions and LVM partitions. Physical partitions are those found in the partition table on the physical disk itself - the kind you'd see in a partition manager like GParted.

LVM partitions - for the purpose of this blog post - are those exposed by LVM. They are virtual partitions that don't have a physical counterpart on disk and are handled internally by LVM. As far as I know, you can't ask LVM easily where it stores them on disk - this is calculated and managed automatically for you.

In order to access our logical LVM partitions, the next step is to bring up LVM. To do this, we need to get LVM to re-scan the available physical partitions since we've just unlocked the one we want it to use:

sudo vgscan --mknodes

Then, we activate it:

sudo vgchange -ay

At this point, we can now do our maintenance and make any changes we need to. A good command to remember here is lvdisplay, which lists all the available LVM partitions and their paths:

sudo lvdisplay

In my case, I have /dev/vgubuntu/root and /dev/vgubuntu/swap_1. tldr-pages (for which I'm a maintainer) has a number of great LVM-related pages that were contributed relatively recently which are really helpful here. For example, to resize a logical LVM partition to be a specific size, do something like this:

sudo lvresize -L 32G /dev/vgubuntu/root

To extend a partition to fill all the remaining available free space, do something like this:

sudo lvextend -l +100%FREE /dev/vgubuntu/root

After resizing a partition, don't forget to run resize2fs. It ensures that the ext4 filesystem on top matches the same size as the logical LVM partition:

sudo resize2fs /dev/vgubuntu/root

In all of the above, replace /dev/vgubuntu/root with the path to your logical LVM partition in question of course.
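
As an aside, recent versions of LVM can resize the filesystem for you in the same step via the --resizefs flag. I stuck with the 2-step approach above and haven't tested this on my setup, so treat it as an alternative to explore rather than a recommendation:

sudo lvresize --resizefs -L 32G /dev/vgubuntu/root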

Once you're done making changes, we need to stop LVM and close the LUKS encrypted disk to ensure all the changes are saved properly and to avoid any issues. This is done like so:

sudo vgchange -an
sudo cryptsetup luksClose crypt1

With that, you're done! You can now reboot / shutdown from inside the live Ubuntu environment and boot back into your main operating system. All done!

Found this helpful? Encountering issues? Comment below! It really helps my motivation.

The Spaceship | Why open-source is important

It was a day like any other: Captain Ralph of the small cargo ship SS carryalot was taking a shipment of bavarian noodles to Epsilon V in time for their annual noodle festival with his partner, Egbert the Engineer.

Unfortunately though, he was having some FTL drive trouble. An hour or two previously, he had docked with the layover station Alpha 9 to refuel, yet the FTL drive abruptly cut out - leaving them stranded in space - and showed an ominous-looking triangle logo on the maintenance screen.

After trying the steps in the troubleshooting guide, he and his engineer were no further forward, so they called the official helpline of Shade Co, the manufacturer of his engine, and explained the problem. The Shade Co representative offered to organise a tow to the nearest official Shade Co. repair station to investigate the issue, but noted that they would charge a fee for this service.

Grumbling about delays, Ralph accepted the offer. Once they reached the repair station, a Shade Co engineer investigated the issue. A quick fix was not to be found though, as the Shade Co engineer said that there wasn't anything wrong with the engine mechanically, and that Ralph must have installed some unofficial firmware.

In fact, Ralph and Egbert had not installed any unofficial firmware, as a Shade Co software developer later verified. After a number of days of debugging and testing by an official Shade Co software developer (who had to be flown in all the way from Delta IV), it was discovered that the engine had actually been infected with ransomware that had taken over and encrypted all the engine's hyperparameters through a bug in the firmware that was introduced 5 years prior - yet nobody had noticed the issue previously.

Since the Shade Co engine was configured to only accept official firmware signed with the Shade Co firmware signing key, the software developer had to get the engineer to extract the engine, and send it back to HQ for analysis.

Fed up with the situation with Shade Co, Ralph got a new Teklacki engine fitted on the double. The new engine had the open-source light-rider firmware installed, and he took out a support contract with Teklacki - just in case anything went wrong.

Thankfully, Ralph and Egbert were able to deliver their cargo of bavarian noodles a few days late, but in time for the finale of the noodle festival.

5 copies of a pixel art spaceship launching from a launch pad, acting as a horizontal rule. Artwork by me - ask for permission before reproducing

It sounds like a problem straight out of a science-fiction novel, but Ralph's issue is one that is unfortunately all too common. I can't remember who said it, but I'll always remember this quote I heard while doing my undergraduate Computer Science course:

All software that is sufficiently complex to be useful, will contain bugs

This is true - but in the case of closed-source software (as in the fictional Shade Co's FTL drive firmware), nobody else is able to audit the software and check it for security issues - or any other kinds of bugs, for that matter. Open software (and open hardware) is critically important, and something I'm rather passionate about.

Of course, open-source software can contain bugs too (e.g. the heartbleed bug in OpenSSL), but the difference is both in the probability that such bugs will be found, and in the choice and control that users have over how they use and configure such software.

Support contracts, which are popular with commercial closed-source software, are essential to businesses. They don't have to be exclusive to closed-source software though: indeed, Canonical for example provide a commercial support contract for Ubuntu (a distribution of Linux) called Landscape.

Another example of this problem is the recent set of issues with the Intel Management Engine. For those not in the know, Intel CPUs run a complex piece of closed-source firmware called the Intel Management Engine. Bugs in this mean that a clever attacker can access everything running on your computer at all times - yet for all the security audits I'm sure they will have done, nobody else (such as independent security researchers) can see the source code. This leaves it in a race between malicious entities and security researchers doing black-box testing to see who finds a flaw first.

The same goes with hardware as well as software. If the design of the hardware is open, then it makes it possible for others to analyse it, find and fix bugs, and contribute new features.

These are just some of the reasons that I really believe that open-source is the answer to a number of key issues with software and hardware today. It's not a perfect silver bullet - unmaintained open-source projects can be a nightmare (just not as much of a nightmare as an abandoned closed-source program), and governance of an open-source project can be really challenging (thanks so much to @waldyrious for teaching me about this while I've been a maintainer for tldr-pages!).

All things considered though, I believe it's a better approach than locking software behind paywalls and hoping that nobody will discover a flaw in it. After all, security through obscurity is never a good idea.

Pixel art by me: Ask before copying or reproducing.

Running multiple local versions of CUDA on Ubuntu without sudo privileges

I've been playing around with Tensorflow.js for my PhD (see my PhD Update blog post series), and I had some ideas that I wanted to test out on my own that aren't really related to my PhD. In particular, I've found this blog post to be rather inspiring - where the author sets up a character-based recurrent neural network to generate text.

The idea of transcoding all those characters to numerical values and back seems like too much work and too complicated just for a quick personal project though, so my plan is to try and develop a byte-based network instead, in the hopes that I can not only teach it to generate text as in the blog post, but valid Unicode as well.

Obviously, I can't really use the University's resources ethically for this (as it's got nothing to do with my University work) - so since I got a new laptop recently with an Nvidia GeForce RTX 2060, I thought I'd try and use it for some machine learning instead.

The problem here is that Tensorflow.js requires CUDA 10.0 specifically, but since I'm running Ubuntu 20.10 with all the latest patches installed, I have CUDA 11.1. A quick search of the apt repositories on my system reveals nothing that suggests I can install older versions of CUDA alongside the newer one, so I had to devise another plan.

I discovered some months ago (while working with Viper - my University's HPC - for my PhD) that you can actually extract - without sudo privileges - the contents of the CUDA .run installers. By then fiddling with your PATH and LD_LIBRARY_PATH environment variables, you can get any program you run to look for the CUDA libraries elsewhere instead of loading the default system libraries.

Since this is the second time I've done this, I thought I'd document the process for future reference.

First, you need to download the appropriate .run installer for the CUDA libraries. In my case I need CUDA 10.0, so I've downloaded mine from here:

https://developer.nvidia.com/cuda-10.0-download-archive?target_os=Linux&target_arch=x86_64&target_distro=Ubuntu&target_type=runfilelocal

Next, we need to create a new subdirectory and extract the .run file into it. Do that like so:

cd path/to/runfile_directory;
mkdir cuda-10.0
./cuda_10.0.130_410.48_linux.run --extract=${PWD}/cuda-10.0/

Make sure that the path to the current working directory contains no spaces, and preferably no other special characters either. Also, adjust the file and directory names to suit your situation.

Once done, this will have extracted 3 sub-files - which also have the suffix .run. We're only interested in CUDA itself, so we only need to extract the one that starts with cuda-linux. Do that like so (adjusting file/directory names as before):

cd cuda-10.0;
./cuda-linux.10.0.130-24817639.run -noprompt -prefix=$PWD/cuda;
rm *.run;
mv cuda/* .;
rmdir cuda;

If you run ./cuda-linux.10.0.130-24817639.run --help, it's actually somewhat deceptive - since there's a typo in the help text! I corrected it for this above though. Once done, this should leave the current working directory containing the CUDA libraries - that is a subdirectory next to the original .run file:

+ /path/to/some_directory/
    + ./cuda_10.0.130_410.48_linux.run
    + cuda-10.0/
        + version.txt
        + bin/
        + doc/
        + extras/
        + ......

Now, it's just a case of fiddling with some environment variables and launching your program of choice. You can set the appropriate environment variables like this:

export PATH="/absolute/path/to/cuda-10.0/bin:${PATH}";
if [[ ! -z "${LD_LIBRARY_PATH}" ]]; then
    export LD_LIBRARY_PATH="/absolute/path/to/cuda-10.0/lib64:${LD_LIBRARY_PATH}";
else
    export LD_LIBRARY_PATH="/absolute/path/to/cuda-10.0/lib64";
fi

You could save this to a shell script (putting #!/usr/bin/env bash before it as the first line, and then running chmod +x path/to/script.sh), and then execute it in the context of the current shell for example like so:

source path/to/activate-cuda-10.0.sh

Many deep learning applications that use CUDA also use CuDNN, a deep learning library provided by Nvidia for accelerating deep learning applications. The archived versions of CuDNN can be found here: https://developer.nvidia.com/rdp/cudnn-archive

When downloading (you need an Nvidia developer account, but thankfully this is free), pay attention to the version being requested in the error messages generated by your application. Also take care to download the version of CUDA you're using, and match the CuDNN version appropriately.
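
If you've forgotten exactly which CUDA version you extracted earlier, the version.txt file from the directory listing above records it:

cat /absolute/path/to/cuda-10.0/version.txt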

When you download, select the "cuDNN Library for Linux" option. This will give you a tarball, which contains a single directory cuda. Extract the contents of this directory over the top of your CUDA directory from following my instructions above, and it should work as intended. I used my graphical archive manager for this purpose.
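
If you'd rather do the extraction from the terminal than a graphical archive manager, something like this ought to do the same job - the tarball filename is just an example (substitute whatever you actually downloaded), and --strip-components=1 drops the leading cuda/ directory so the contents land directly on top of your CUDA directory:

tar -xzf cudnn-10.0-linux-x64-vX.Y.Z.tgz --strip-components=1 -C /absolute/path/to/cuda-10.0/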

PhD Update 6: The road ahead

Hey, I'm back early with another post in my PhD series! Turns out there was a bit of mix up last time, and I misplaced update 4 - so I've renamed the duplicate update 4 to update 5. Before we continue, here are all the posts in this series so far:

Earlier today, I had my PhD panel 2. For those reading who don't know, the primary assessments for a PhD (at least in my case) take the form of 6 panels - in which you write a big report, and then your supervisors get together with you and discuss it. The even numbered ones at the end of each year are the more important ones, I'm led to believe - so I was understandably nervous.

Thankfully, it went well, and I ended up writing a report that's a third longer than my undergraduate dissertation at ~9k words O.o Anyway, I wanted to make another post in this series, as the process of writing the report (and research plan) for my panel has made me look at the bigger picture of where my PhD is going, and how it's all going to tie together - and I wanted to share this here.

Temporal CNN

In the last few posts, I've given some insights into the process I've been working through to train a Temporal CNN to predict the output of HAIL-CAESAR. This is starting to reach its conclusion - though there are a number of tasks I have yet to complete to close this chapter of the story. In particular, I've (finally!) got the results from the hyperparameter optimisation I've been doing. Let's check out a heatmap:

The aforementioned heatmap - explained below.

I plotted the above with GNUPlot, but ran into a number of issues (of course) while generating it, which I'd like to briefly discuss here. If you're interested in a more detailed discussion of this, please get in touch (I'll also need to know your real name and where you're working / studying, just in case) via a private communication method - such as my email address on my website.

Firstly, those who are particularly perceptive will notice the rather strange filename for the above image. As indicated, I ran into some instability in my early stopping algorithm. For each epoch, the algorithm I implemented looked at the error from 3 epochs ago - and if that was greater than the error value for the epoch that had just finished, it allowed training to continue. Unfortunately, due to the nature of the task at hand, this sometimes caused it to stop training too soon. Further analysis of the results revealed that it sometimes allowed it to continue training too long as well (which is reflected in the above chart), but I'm baffled on that one.

For this reason, all hyperparameter combinations that didn't train for quite long enough were omitted in the above graph.

Secondly, there's a large area of the chart that isn't filled in. This is because both increasing the number of filters and the temporal depth increases memory usage - and the area that's unfilled is the area in which it failed to train because it ran out of GPU memory (I used 4 x Nvidia GeForce V100 - thanks very much to my department for providing this!).

Finally, the most important thing to note is that after reviewing other similar models, I discovered that this really isn't the best way to evaluate the performance of this kind of model. Rather, it would be better to treat it as a classification-based task instead. This can be achieved by creating a number of different bins for the water depth values, and assigning a probability to each pixel for each water depth bin. The technical name for this is cross-entropy loss as far as I'm aware, which in my case I've been tending to prefix with pixel-based.
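
For reference, this is just the standard categorical cross-entropy loss applied per pixel and averaged over all of them - a sketch of the maths rather than my final implementation, with N pixels, C water depth bins, one-hot ground truth y, and predicted probabilities y-hat:

L = -\frac{1}{N} \sum_{p=1}^{N} \sum_{c=1}^{C} y_{p,c} \log \hat{y}_{p,c}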

In this fashion, I can use things like a confusion matrix (I can't find a good explanation of this, so I'll talk about it in a future blog post) to evaluate the performance of the model - which allows me to see the model's strengths and weaknesses. What are its strengths? What are its weaknesses? I hope to find out - though I'm a little nervous about it, as I haven't yet looked into how to generate one with Tensorflow.js, or tested my initial cross-entropy loss implementation with my kind of dataset.

Having a probability-based system like this should also make the output more useful. By this, I mean that having probabilities assigned to different water depths should be easier to interpret than a single value, where nobody knows how the model came to that conclusion or how uncertain it is about the result.

Let's take a look at 1 more set of graphs before we move on:

4 graphs showing the training / validation root mean squared error - see below for explanation.

In this set of 4 graphs, I've taken the training and validation root mean squared error from 4 different combinations of hyperparameters. These rather effectively illustrate the early stopping stability here (top left), and they also demonstrate some instability in the training process (all graphs). I'm guessing the latter is probably caused by insufficient training data - this I can remedy quite easily as I have plenty more besides the 5K time steps I trained on (~1.5M time steps to be precise), but in the interests of time I limited the training dataset so that I could train a variety of different models.

It also shows the shortcomings of the root mean-squared-error approach I've been using so far - is an average error of 2 biased towards deeper or shallower water? What can it predict well, and what does it struggle with?

My supervisor also wants me to write a publication on the Temporal CNN stuff I've done when it's complete, so that's going to be an interesting new experience for me too (I'm both excited and terribly nervous at the same time).

Social Media Analysis

Moving forwards, I'm hoping to complement my Temporal CNN model with some social media analysis (subject to ethical approval of course - filling out the paperwork for this is part of my task list for the rest of this week). If I get the go-ahead, my intention is to bring some natural language processing AI model to the subject of flood mapping using social media - as I've noticed that there's.... limited existing material on this so far.

I have yet to read up properly on the subject, but most recently I've found this paper rather interesting. It uses an unsupervised model to identify the topic(s) a tweet is talking about, which they then follow up with some statistical analysis.

I'm a little fuzzy on how they identify the locations tweets are talking about (especially since this paper mentions that only a very small percentage of tweets have a geotag attached, and even of these a fraction will be wrong or misleading for 1 reason or another), but the researchers put together a rather nice map by chaining together several statistical techniques I'm currently unfamiliar with. It shows hot spots and cold spots, which indicate where the damage from an earthquake has occurred.

If possible, it would be very nice indeed to plot a flood on a map (stretch goal: in real-time!). I anticipate this to be a key issue I'll need to pay attention to.

In terms of my most immediate starting point, I'm going to be doing some more reading on the subject, and then have a chat with my supervisor about the next steps (she's particularly knowledgable about natural language processing :D).

Conclusion

In conclusion, I'm starting to come to the end of the Temporal CNN chapter of my PhD, although I suspect it's going to keep coming back over and over again to say hello. Moving forwards, I'm hoping to complement work I've done so far with some social media analysis (subject to ethical approval) using AI-based Natural Language Processing - which will probably consist of improving an existing model or something more directly computer science related (my existing work is classified more as a contribution to environmental sciences, apparently).

Look out for more posts in this series in the future, as I'm sure I'll have plenty to talk about soon (I might do another one in a month's time, or it might end up being nearer 2 months depending on circumstances).
