NAS Backups, Part 1: Overview

After building my nice NAS, the next thing on my sysadmin todo list was to ensure it is backed up. In this miniseries (probably 2 posts), I'm going to talk about the backup NAS that I've built. As of the time of typing, it is successfully backing up my Btrfs subvolumes.

In this first post, I'm going to give an overview of the hardware I'm using and the backup system I've put together. In future posts, I'll go into detail as to how the system works, and which areas I still need to work on moving forwards.

Personally, I find that the 3-2-1 backup strategy is a good rule of thumb:

  • 3 copies of the data
  • 2 different media
  • 1 off-site

What this means is that you should have 3 copies of your data: 2 local copies, and 1 remote copy in a different geographical location. To achieve this, I have first solved the problem of the local backup copy, since it's a lot easier and less complicated than the remote one. Although I've talked about backups before (see also), in this case my solution is slightly different - partly due to the amount of data involved, and partly due to additional security measures I'm trying to get into place.

Hardware

For hardware, I'm using a Raspberry Pi 4 with 4GB RAM (the same as the rest of my cluster), along with 2 x 4 TB USB 3 external hard drives. This is a fairly low-cost and low-performance solution. I don't particularly care how long the backup takes to complete, and the Pi is relatively cheap to replace if it fails - without actually being unreliable (Raspberry Pis are tough little things).

Here's a diagram of how I've wired it up:

(Can't see the above? Try a direct link to the SVG. Created with drawio.)

I use USB Y-cables to power the hard drives directly from the USB power supply, as the Pi is unlikely to be able to supply enough power for multiple external hard drives on its own.

Important Note: As I've discovered with a different unrelated host on my network, if you do this you can back-power the Pi through the USB Y cable, and potentially corrupt the microSD card by doing so. Make sure you switch off the entire USB power supply at once, rather than unplug just the Pi's power cable!

For a power supply, I'm using an Anker 10 port device (though I bought through Amazon, since I wasn't aware that Anker had their own website) - the same one that powers my Pi cluster.

Strategy

To do the backup itself, I'm taking advantage of the fact that I store my data in Btrfs subvolumes, using the btrfs send / btrfs receive commands to send those subvolumes to the remote backup host over SSH (there's a sketch of the mechanism after the list below). This has a number of benefits:

  1. The host doing the backing up has no access to the resulting backups (so if it gets infected it can't also infect the backups)
  2. The backups are read-only Btrfs snapshots (so if the backup NAS gets infected my backups can't be altered without first creating a read-write snapshot)
  3. Backups are done incrementally to save time, but a full backup is done automatically on the first run (or if the local metadata is missing)
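
To illustrate the mechanism (these aren't my actual scripts - those are coming in the next post), an incremental send of a subvolume to another host over SSH looks roughly like the following sketch, where the snapshot names, paths, and hostname are all placeholders:

# Take a new read-only snapshot of the subvolume we want to back up
sudo btrfs subvolume snapshot -r path/to/subvolume path/to/.snapshots/new_snapshot
# Send only the difference between the previous snapshot and the new one to the backup host
sudo btrfs send -p path/to/.snapshots/previous_snapshot path/to/.snapshots/new_snapshot | ssh backup-host sudo btrfs receive /path/to/backup/destination

Omitting the -p flag results in a full (rather than incremental) send - which is effectively what happens on the first run, or when the local metadata is missing.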

While my previous backup solution using Restic for the server that sent you this web page has point #3 on my list above, it doesn't have points 1 and 2.

Restic does encrypt backups at rest though, which the system I'm setting up doesn't do unless you use LUKS to encrypt the underlying disks that Btrfs stores its data on. More on that in the future, as I have tentative plans to deal with my off-site backup problem using a technique similar to the one I've used here that also encrypts data at rest when a backup isn't taking place.

In the next post, I'll be diving into the implementation details for the backup system I've created and explaining it in more detail - including sharing the pair of scripts that I've developed that do the heavy lifting.

Automatically downloading emails and extracting their attachments

I have an all-in-one printer that's also a scanner - specifically the Epson Ecotank 4750 (though annoyingly the automated document feeder doesn't support duplex). While it's a great printer (very eco-friendly, and the inks last for ages!), my biggest frustration with it is that it doesn't scan directly to an SMB file share (i.e. a Windows file share). It does support SANE though, which allows you to use it through a computer.

This is ok, but the ability to scan directly from the device itself without needing to use a computer would be very convenient, so I set out to remedy this. The printer does have a cloud feature they call "Epson Connect", which allows one to upload to various cloud services such as Google Drive and Box, but I don't want to upload potentially sensitive data to such services.

Fortunately, there's a solution at hand - email! The printer in question also supports scanning to an email address. Once the scanning process is complete, it sends an email to the preconfigured email address with the scanned page(s) attached. It's been far too long since my last post about email too, so let's do something about that.

Logging in to my email account just to pick up a scan is clunky and annoying though, so I decided to automate the process to resolve the issue. The plan is as follows:

  1. Obtain a fresh email address
  2. Use IMAP IDLE to instantly download emails
  3. Extract attachments and save them to the output directory
  4. Discard the email - both locally and remotely

As some readers may be aware, I run my own email server - hence the reason why I wrote this post about email previously, so I reconfigured it to add a new email address. Many other free providers exist out there too - just make sure you don't use an account you might want to use for anything else, since our script will eat any emails sent to it.

Steps 2, 3, and 4 there took some research and fiddling about, but in the end I cooked up a shell script solution that uses fetchmail, procmail (which is apparently unmaintained, so I should consider looking for alternatives), inotifywait, and munpack. I've also packaged it into a Docker container, which I'll talk about later in this post.

To illustrate how all of these fit together, let's use a diagram:

A diagram showing how the whole process fits together - explanation below.

fetchmail uses IMAP IDLE to hold a connection open to the email server. When it receives notification of a new email, it instantly downloads it and spawns a new instance of procmail to handle it.

procmail writes the email to a temporary directory structure, which a separate script is watching with inotifywait. As soon as procmail finishes writing the new email to disk, inotifywait triggers and the email is unpacked with munpack. Any attachments found are moved to the output directory, and the original email discarded.

With this in mind, let's start drafting up a script. The first order of the day is configuring fetchmail. This is done using a .fetchmailrc file - I came up with this:

poll bobsrockets.com protocol IMAP port 993
    user "user@bobsrockets.com" with pass "PASSWORD_HERE"
    idle
    ssl

...where user@bobsrockets.com is the email address you want to watch, bobsrockets.com is the domain part of said email address (everything after the @), and PASSWORD_HERE is the password required to login.

Save this somewhere safe with tight file permissions for later.
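
fetchmail is picky about the permissions on its rc file and will complain if it's readable by other users, so "tight permissions" here means something along these lines (the path is a placeholder for wherever you saved it):

chmod 0600 path/to/.fetchmailrc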

The other configuration file we'll need is one for procmail. Let's do that one now:

CORRECTHOME=/tmp/maildir
MAILDIR=$CORRECTHOME/

:0
Mail/

Replace /tmp/maildir with the temporary directory you want to use to hold emails in. Save this as procmail.conf for later too.

Now that we have the mail config files written, we need to install some software. I'm using apt on Debian (a minideb Docker container actually), so you'll need to adapt this for your own system if required.

sudo apt install ca-certificates fetchmail procmail inotify-tools mpack
# or, if you're using minideb:
install_packages ca-certificates fetchmail procmail inotify-tools mpack

fetchmail is for some strange reason extremely picky about the user account it runs under, so let's update the pre-created fetchmail user account to make it happy:

groupadd --gid 10000 fetchmail
usermod --uid 10000 --gid 10000 --home /srv/fetchmail fetchmail
mkdir -p /srv/fetchmail
chown fetchmail:fetchmail /srv/fetchmail

fetchmail now needs that config file we created earlier. Let's update the permissions on that:

chown 10000:10000 path/to/.fetchmailrc

If you're running on bare metal, move it to the /srv/fetchmail directory now. If you're using Docker, keep reading, as I recommend that this file is mounted using a Docker volume to make the resulting container image more reusable.

Now let's start drafting a shell script to pull everything together. Let's start with some initial setup:

#!/usr/bin/env bash

if [[ -z "${TARGET_UID}" ]]; then
    echo "Error: The TARGET_UID environment variable was not specified.";
    exit 1;
fi
if [[ -z "${TARGET_GID}" ]]; then
    echo "Error: The TARGET_GID environment variable was not specified.";
    exit 1;
fi
if [[ "${EUID}" -ne 0 ]]; then
    echo "Error: This Docker container must run as root because fetchmail is a pain, and to allow customisation of the target UID/GID (although all possible actions are run as non-root users)";
    exit 1;
fi

dir_mail_root="/tmp/maildir";
dir_newmail="${dir_mail_root}/Mail/new";
target_dir="/mnt/output";

fetchmail_uid="$(id -u "fetchmail")";
fetchmail_gid="$(id -g "fetchmail")";

temp_dir="$(mktemp --tmpdir -d "imap-download-XXXXXXX")";
on_exit() {
    rm -rf "${temp_dir}";
}
trap on_exit EXIT;

log_msg() {
    echo "$(date -u +"%Y-%m-%d %H:%M:%S") imap-download: $*";
}

This script will run as root, while fetchmail runs as UID 10000 and GID 10000. The reasons for this are complicated (and mostly have to do with my weird network setup). We look for the TARGET_UID and TARGET_GID environment variables, as these define the uid:gid we'll be setting files to before writing them to the output directory.

We also determine the fetchmail UID/GID dynamically here, and create a second temporary directory to work with too (the reasons for which will become apparent).

Before we continue, we need to create the directory procmail writes new emails to. Not because procmail won't create it on its own (because it will), but because we need it to exist up-front so we can watch it with inotifywait:

mkdir -p "${dir_newmail}";
chown -R "${fetchmail_uid}:${fetchmail_gid}" "${dir_mail_root}";

We're running as root, but we'll want to spawn fetchmail (and other things) as non-root users. Technically, I don't think you're supposed to use sudo in non-interactive scripts, and it's also not present in my Docker container image. The alternative is the setpriv command, but using it is rather complicated and annoying.

It's more powerful than sudo, as it allows you to specify not only the UID/GID a process runs as, but also the capabilities the process will have too (e.g. binding to low port numbers). There's a nasty bug one has to work around if one is using Docker too, so given all this I've written a wrapper function that abstracts all of this complexity away:

# Runs a process as another user.
# Ref https://github.com/SinusBot/docker/pull/40
# $1    The UID to run the process as.
# $2    The GID to run the process as.
# $3-*  The command (including arguments) to run
run_as_user() {
    run_as_uid="${1}"; shift;
    run_as_gid="${1}"; shift;
    if [[ -z "${run_as_uid}" ]]; then
        echo "run_as_user: No target UID specified.";
        return 1;
    fi
    if [[ -z "${run_as_gid}" ]]; then
        echo "run_as_user: No target GID specified.";
        return 2;
    fi

    # Ref https://github.com/SinusBot/docker/pull/40
    # WORKAROUND for `setpriv: libcap-ng is too old for "all" caps`, previously "-all" was used here
    # create a list to drop all capabilities supported by current kernel
    cap_prefix="-cap_";
    caps="$cap_prefix$(seq -s ",$cap_prefix" 0 "$(cat /proc/sys/kernel/cap_last_cap)")";

    setpriv --inh-caps="${caps}" --reuid "${run_as_uid}" --clear-groups --regid "${run_as_gid}" "$@";
    return "$?";
}

With this in hand, we can now wrap fetchmail and procmail in a function too:

do_fetchmail() {
    log_msg "Starting fetchmail";

    while :; do
        run_as_user "${fetchmail_uid}" "${fetchmail_gid}" fetchmail --mda "/usr/bin/procmail -m /srv/procmail.conf";

        exit_code="$?";
        if [[ "$exit_code" -eq 127 ]]; then
            log_msg "setpriv failed, exiting with code 127";
            exit 127;
        fi 

        log_msg "Fetchmail exited with code ${exit_code}, sleeping 60 seconds";
        sleep 60
    done
}

In short this spawns fetchmail as the fetchmail user we configured above, and also restarts it if it dies. If setpriv fails, it returns an exit code of 127 - so we catch that and don't bother trying again, as the issue likely needs manual intervention.

To finish the script, we now need to setup that inotifywait loop I mentioned earlier. Let's setup a shell function for that:


do_attachments() {
    while :; do # : = infinite loop
        # Wait for an update
        # inotifywait's non-0 exit code forces an exit for some reason :-/
        inotifywait -qr --event create --format '%:e %f' "${dir_newmail}";

        # Process new email here
    done
}

Processing new emails is not particularly difficult, but requires a sub loop because:

  • More than 1 email could be written at a time
  • Additional emails could slip through when we're processing the last one

while read -r filename; do

    # Process each email

done < <(find "${dir_newmail}" -type f);

Finally, we need to process each email we find in turn. Let's outline the steps we need to take:

  1. Move the email to that second temporary directory we created above (since the procmail directory might not be empty)
  2. Unpack the attachments
  3. chown the attachments to the target UID/GID and move them to the output directory

Let's do this in chunks. First, let's move it to the temporary directory:

log_msg "Processing email ${filename}";

# Move the email to a temporary directory for processing
mv "${filename}" "${temp_dir}";

The filename variable there holds the absolute path to the email in question, since we used find and passed it an absolute directory to list the contents of (as opposed to a relative path).

To find the filepath we moved it to, we need to do this:

filepath_temp="${temp_dir}/$(basename "${filename}")"

This is important for the next step, where we unpack it:

# Unpack the attachments
munpack -C "${temp_dir}" "${filepath_temp}";

Now that we've unpacked it, let's do a bit of cleaning up, by deleting the original email file and the .desc description files that munpack also generates:

# Delete the original email file and any description files
rm "${filepath_temp}";
find "${temp_dir}" -iname '*.desc' -delete;

Great! Now that we have the attachments sorted, all we need to do is chown them to the target UID/GID and move them to the right place.

chown -R "${TARGET_UID}:${TARGET_GID}" "${temp_dir}";
chmod -R a=rX,ug+w "${temp_dir}";

I also chmod the temporary directory to make sure that the permissions are correct, because otherwise the mv command is unable to read the directory's contents.

Now to actually move all the attachments:

# Move the attachment files to the output directory
while read -r attachment; do
    log_msg "Extracted attachment ${attachment}";
    chmod 0775 "${attachment}";
    run_as_user "${TARGET_UID}" "${TARGET_GID}" mv "${attachment}" "${target_dir}";
done < <(find "${temp_dir}" -type f);

This is rather overcomplicated because of an older design, but it does the job just fine.

With that done, we've finished the script. I'll include the whole script at the bottom of this post.

Dockerification

If you're running on bare metal, then you can skip to the end of this post. Because I have a cluster, I want to be able to run this thereon. Since said cluster works with Docker containers, it's natural to Dockerise this process.

The Dockerfile for all this is surprisingly concise:

(Can't see the above? View it on my personal Git server instead)

To use this, you'll need the following files alongside it:

It exposes the following Docker volumes:

  • /mnt/fetchmailrc: The fetchmailrc file
  • /mnt/output: The target output directory

All these files can be found in this directory on my personal Git server.
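
As a hypothetical example of building and running the resulting container (the image name imap-attachments is a placeholder for whatever you tag your build as, and the UID/GID/paths will need adjusting for your setup):

docker build -t imap-attachments .
docker run -d --name imap-attachments \
    -e TARGET_UID=1000 -e TARGET_GID=1000 \
    -v /absolute/path/to/.fetchmailrc:/mnt/fetchmailrc:ro \
    -v /absolute/path/to/scans:/mnt/output \
    imap-attachments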

Conclusion

We've strung together a bunch of different programs to automatically download emails and extract their attachments. This is very useful for ingesting all sorts of different files. Things I haven't covered:

  • Restricting it to certain source email addresses to handle spam
  • Restricting the file types accepted (the file command is probably your friend)
  • Disallowing large files (most 3rd party email servers do this automatically, but in my case I don't have a limit that I know of other than my hard disk space)

As always, this blog post is both a reference for my own use and a starting point for you if you'd like to do this for yourself.

If you've found this useful, please comment below! I find it really inspiring / motivating to learn how people have found my posts useful and what for.

Sources and further reading

run.sh script

(Can't see the above? Try this link, or alternatively this one (bash))

Servers demystified

Something I see a lot of around the Internet are people who think that you need to purchase a big (often rack-mounted) "server" in order to host things like websites, email, game servers, and more (exhibit a). Quite often, they turn to ebay to purchase used enterprise rack mounted servers too.

I want to take a moment here to write up my thoughts on why that is almost never the correct approach for a home user to take to host such applications at home, and what the (much better) alternatives are, to serve as a reference post I can direct people to who need educating about this important issue.

What is a "server"?

A server can mean 2 things: a physical computer whose primary role is to act as a server, and server applications, which serve content to other users elsewhere - be it phones, laptops, desktops, etc.

A lot of people new to the field don't realise it, but any computer can take on the role of a server - you don't need any fancy hardware. The things that a computer does are defined by the software it runs - not the hardware that it is built from.

Does a server need a graphics card (GPU)?

No. It really doesn't. It's extremely unlikely that for a general purpose server you would need a GPU. Another related myth here is that you need a GPU in your server if you're running a game server. This is also false. Most of the time a server is going to be running headlessly (i.e. without a monitor) - so it really doesn't need a GPU in order to function effectively.

The following tasks however may require a GPU:

  • Serious Machine Learning / Artificial Intelligence workloads
  • 3D Rendering (e.g. Blender)
  • Live video streaming (video transcoding does not always utilise the GPU, as far as I can tell - make sure you check the documentation for your video editing software before buying any hardware)

Web servers, game servers, email servers, and other application servers do not use and cannot make use of a GPU. Programs need to be specially designed to support GPUs.

I need to purchase a license for Windows Server. Windows 10 isn't enough.

This is false. If you prefer Windows, then a regular old Windows 10 machine will be just fine for most home server use-cases. Windows Server provides additional features for enterprise that you are unlikely to need.

Personally though, I recommend running a distribution of Linux, such as Ubuntu Server.

The problems with used hardware

Of particular frustration is the purchasing of old used (often rack mountable) servers from eBay and other auction sites. The low prices might be attractive, but such servers will nearly always have a number of issues:

  1. The CPU and other components will frequently be 10+ years old, and draw lots of electricity
  2. The fans will be very loud - sounding like a jet is taking off inside your house
  3. They often don't come with hard drives, and often have custom drive bays that require purchasing expensive drives to fill

Awkward issues to be sure! Particularly of note here is the electricity problem. Very old devices draw orders of magnitude more power than newer ones - leading to a big electricity bill. It will practically always be cheaper to purchase a newer more expensive machine - it'll pay for itself in dramatically lower electricity bills.

What are the alternatives?

Many far more suitable alternatives exist. They fall into 2 categories:

  1. Renting from a hosting company
  2. Buying a physical device

I'll be talking through both of these options below.

Renting from a hosting company

If you'd rather not have any hardware of your own locally, you can always rent a server from a hosting company. These come in 2 flavours:

  • Virtual Private Servers (VPS): A virtual machine running on the hosting company's infrastructure. Often easier to scale to multiple machines.
  • Dedicated servers: Bare-metal hardware running in a hosting company's datacentre somewhere. Useful if you've outgrown a VPS.

Example providers include OVH, Kimsufi (dedicated servers), Digital Ocean, and many more.

Things to watch out for when choosing one include:

  • How can you get support if you have an issue?
  • What network speeds are provided? Are there any data caps?
  • How much hard drive space do they come with? You often can't get any additional hard drive space once you've bought it without switching to a new host.
  • How many CPU cores does it have (or, if you want to run a game server, what's the clock speed)?
  • How much RAM does it have?
  • How much is it per month?

Buying a physical device

If you'd rather buy a physical device (beware that email servers cannot be effectively hosted on a residential Internet connection), then I can recommend either looking into one of these 2:

  1. An Intel NUC or other Mini PC in the same form factor
  2. A Raspberry Pi (or, for more advanced users, I've heard good things about a Rock Pi, but haven't tried it myself)

Both options are quiet, reasonably priced, and will draw orders of magnitude less power than a big rack mounted server.

A notable caveat here is that if you intend to run a game server, you'll want to check the CPU architecture it runs on, as it may not be compatible with the Raspberry Pi (which has an ARM chip built in - which can be either arm64 or armv7l - I use the official Debian CPU architecture codes here to avoid ambiguity).
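
If you're not sure what architecture a given machine is running, you can check from a terminal. A quick example (the second command applies to Debian-based systems):

# Kernel architecture name (e.g. x86_64, aarch64, armv7l)
uname -m
# Debian architecture code (e.g. amd64, arm64, armhf)
dpkg --print-architecture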

Other alternatives here include old laptops and desktops you already have lying around at home. Make sure they aren't too old though, because otherwise you'll run afoul of point #1 in my list of problems there above.

Conclusion

In this post, I've busted some common myths about servers. I've also taken a quick look at some appropriate hardware that you can buy or rent to use as a server.

If you're in the market for a server, don't be fooled by low prices for used physical servers. Rather, either rent one from a hosting company, or buy a Mini PC or Raspberry Pi instead. It'll run quieter and use less power too.

Other common questions I see are how to get started with running various different applications on a server. This is out of scope of this article, but there are plenty of tutorials out there on how to do this.

Often you'll need some basic Linux terminal skills to follow along though - I've written a blog post about how you can get started with the terminal already. I also on occasion post tutorials here on this blog on how to setup various applications - these are usually tagged with tutorial and server.

Other sites have excellent tutorials on how to setup all manner of different applications - I'll leave a bunch of links at the end of this post.

If this post has helped demystify servers for you, please consider sharing it with others to clear up their misconceptions too.

Sources and Further Reading

Installing Python, Keras, and Tensorflow from source

I found myself in the interesting position recently of needing to compile Python from source. The reasoning behind this is complicated, but it boils down to a need to use Python with Tensorflow / Keras for some natural language processing AI, as Tensorflow.js isn't going to cut it for the next stage of my PhD.

The target upon which I'm aiming to be running things currently is Viper, my University's high-performance computer (HPC). Unfortunately, the version of Python on said HPC is rather old, which necessitated obtaining a later version. Since I obviously don't have sudo permissions on Viper, I couldn't use the default system package manager. Incredibly, pre-compiled Python binaries are not distributed for Linux either, which meant that I ended up compiling from source.

I am going to be assuming that you have a directory at $HOME/software in which we will be working. In there, there should be a number of subdirectories:

  • bin: For binaries, already added to your PATH
  • lib: For library files - we'll be configuring this correctly in this guide
  • repos: For git repositories we clone

Make sure you have your snacks - this was a long ride to figure out and write - and it's an equally long ride to follow. I recommend reading this all the way through before actually executing anything to get an overall idea as to the process you'll be following and the assumptions I've made to keep this post a reasonable length.

Setting up

Before we begin, we need some dependencies:

  • gcc - The compiler
  • git - For checking out the cpython git repository
  • readline - An optional dependency of cpython (presumably for the REPL)

On Viper, we can load these like so:

module load utilities/multi
module load gcc/10.2.0
module load readline/7.0

Compiling openssl

We also need to clone the openssl git repo and build it from source:

cd ~/software/repos
git clone git://git.openssl.org/openssl.git;    # Clone the git repo
cd openssl;                                     # cd into it
git checkout OpenSSL_1_1_1-stable;              # Checkout the latest stable branch (do git branch -a to list all branches; Python will complain at you during build if you choose the wrong one and tell you what versions it supports)
./config;                                       # Configure openssl ready for compilation
make -j "$(nproc)"                              # Build openssl

With openssl compiled, we need to copy the resulting binaries to our ~/software/lib directory:

cp lib*.so* ~/software/lib;
# We're done, cd back to the parent directory
cd ..;

To finish up openssl, we need to update some environment variables to let the C++ compiler and linker know about it, but we'll talk about those after dealing with another dependency that Python requires.

Compiling libffi

libffi is another dependency of Python that's needed if you want to use Tensorflow. To start, go to the libffi GitHub releases page in your web browser, and copy the URL for the latest release file. It should look something like this:

https://github.com/libffi/libffi/releases/download/v3.3/libffi-3.3.tar.gz

Then, download it to the target system:

cd ~/software/lib
curl -OL URL_HERE

Note that we do it this way, because otherwise we'd have to run the autogen.sh script which requires yet more dependencies that you're unlikely to have installed.

Then extract it and delete the tar.gz file:

tar -xzf libffi-3.3.tar.gz
rm libffi-3.3.tar.gz

Now, we can configure and compile it:

cd libffi-3.3
./configure --prefix=$HOME/software
make -j "$(nproc)"

Before we install it, we need to create a quick alias:

cd ~/software;
ln -s lib lib64;
cd -;

libffi for some reason likes to install to the lib64 directory, rather than our pre-existing lib directory, so creating an alias makes it so that it installs to the right place.
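
The install step itself isn't shown above; assuming the standard autotools target, it looks like this (run from the libffi source directory):

# Install libffi into the --prefix we configured earlier (~/software)
make install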

Updating the environment

Now that we've dealt with the dependencies, we now need to update our environment so that the compiler knows where to find them. Do that like so:

export LD_LIBRARY_PATH="$HOME/software/lib${LD_LIBRARY_PATH:+:$LD_LIBRARY_PATH}";
export LDFLAGS="-L$HOME/software/lib -L$HOME/software/include $LDFLAGS";
export CPPFLAGS="-I$HOME/software/include -I$HOME/software/repos/openssl/include -I$HOME/software/repos/openssl/include/openssl $CPPFLAGS"

It is also advisable to update your ~/.bashrc with these settings, as you may need to come back and recompile a different version of Python in the future.

Personally, I have a file at ~/software/setup.sh which I run with source $HOME/software/setup.sh in my ~/.bashrc file to keep things neat and tidy.

Compiling Python

Now that we have openssl and libffi compiled, we can turn our attention to Python. First, clone the cpython git repo:

git clone https://github.com/python/cpython.git
cd cpython;

Then, checkout the latest tag. This essentially checks out the latest stable release:

git checkout "$(git tag | grep -ivP '[ab]|rc' | tail -n1)"

Important: If your intention is to use Tensorflow, check the Tensorflow Install page for supported Python versions. It's probable that it doesn't yet support the latest version of Python, so you might need to checkout a different tag here. For some reason, Python is really bad at propagating new versions out to the community quickly.

Before we can start the compilation process, we need to configure it. We're going for performance, so execute the configure script like so:

./configure --with-lto --enable-optimizations --with-openssl=/absolute/path/to/openssl_repo_dir

Replace /absolute/path/to/openssl_repo_dir with the absolute path to the above openssl repo.

Now, we're ready to compile Python. Do that like so:

make -j "$(nproc)"

This will take a while, but once it's done it should have built Python successfully. For a sanity check, we can also test it like so:

make -j "$(nproc)" test

The Python binary compiled should be called simply python, and be located in the root of the git repository. Now that we've compiled it, we need to make a few tweaks to ensure that our shell uses our newly compiled version by default and not the older version from the host system. Personally, I keep my ~/bin folder under version control, so I install host-specific software to ~/software, and put ~/software/bin in my PATH like so:

export PATH=$HOME/software/bin:$PATH

With this in mind, we need to create some symbolic links in ~/software/bin that point to our new Python installation:

cd $HOME/software/bin;
ln -s relative/path/to/python_binary python
ln -s relative/path/to/python_binary python3
ln -s relative/path/to/python_binary python3.9

Replace relative/path/to/python_binary with the relative path to the Python binary we compiled above.

To finish up the Python installation, we need to get pip up and running, the Python package manager. We can do this using the inbuilt ensurepip module, which can bootstrap a pip installation for us:

python -m ensurepip --user

This bootstraps pip into our local user directory. This is probably what you want, since if you try and install directly the shebang incorrectly points to the system's version of Python, which doesn't exist.

Then, update your ~/.bash_aliases and add the following:

export LD_LIBRARY_PATH=/absolute/path/to/openssl_repo_dir/lib:$LD_LIBRARY_PATH;
alias pip='python -m pip'
alias pip3='python -m pip'

...replacing /absolute/path/to/openssl_repo_dir with the path to the openssl git repo we cloned earlier.

The next stage is to use virtualenv to locally install our Python packages that we want to use for our project. This is good practice, because it keeps our dependencies locally installed to a single project, so they don't clash with different versions in other projects.

Before we can use virtualenv though, we have to install it:

pip install virtualenv

Unfortunately, Python / pip is not very clever at detecting the actual Python installation location, so in order to actually use virtualenv, we have to use a wrapper script - because the shebang in the main ~/.local/bin/virtualenv entrypoint does not use /usr/bin/env to auto-detect the python binary location. Save the following to ~/software/bin (or any other location that's in your PATH ahead of ~/.local/bin):

#!/usr/bin/env bash

exec python ~/.local/bin/virtualenv "$@"

For example:

# Write the script to disk
nano ~/software/bin/virtualenv;
# chmod it to make it executable
chmod +x ~/software/bin/virtualenv

Installing Keras and tensorflow-gpu

With all that out of the way, we can finally use virtualenv to install Keras and tensorflow-gpu. Let's create a new directory and create a virtual environment to install our packages in:

mkdir tensorflow-test
cd tensorflow-test;
virtualenv "$PWD";
source bin/activate;

Now, we can install Tensorflow & Keras:

pip install tensorflow-gpu

It's worth noting here that Keras is a dependency of Tensorflow.

Tensorflow has a number of alternate package names you might want to install instead depending on your situation:

  • tensorflow: Stable tensorflow without GPU support - i.e. it runs on the CPU instead.
  • tf-nightly-gpu: Nightly tensorflow for the GPU. Useful if your version of Python is newer than the version of Python supported by Tensorflow

Once you're done in the virtual environment, exit it like this:

deactivate

Phew, that was a huge amount of work! Hopefully this sheds some light on the maddeningly complicated process of compiling Python from source. If you run into issues, you're welcome to comment below and I'll try to help you out - but you might be better off asking the Python community instead, as they've likely got more experience with Python than I have.

Sources and further reading

Saving power in Linux Systems

Hey there! It's an impromptu blog post. Originally I wrote this in response to this Reddit post, but it got rather longer than I anticipated, so I ended up expanding on it just a teensy bit more and turning it into this blog post.

Saving power in a Linux system can be necessary for a number of reasons, from reducing one's electricity bill to extending battery life.

There are a number of different factors to consider to reduce power usage, which I'll be talking about in this blog post. I will be assuming a headless Linux server for the purposes of this blog post, but these suggestions can be applicable to other systems too (if there's the demand I may write a follow up specifically about Arduino and ESP-based systems, as there are a number of tricks that can be applied there that don't work the same way for a full Linux system).

Of course, power usage is highly situationally dependent, and it's all about trade-offs: less convenience, increased complexity, and so on. The suggestions below are rules of thumb that may or may not be applicable to your specific situation.

Hardware: Older hardware is less power efficient than newer hardware. So while using that 10yr old desktop as a server sounds like a great idea to reduce upfront costs, if your electricity is expensive it might be more cost-effective to buy a newer machine such as an Intel NUC or Raspberry Pi.

Even within the realms of Raspberry Pis, not every Raspberry Pi is created equal. If you need a little low-power outpost for counting cows in a field with LoRa, then something like a Raspberry Pi Zero as a base might be more suitable than a full Raspberry Pi 4B+, for example.

CPU architecture: Different CPU architectures have different performance / watt ratios. For example, AMD CPUs are - on the whole - more efficient than Intel CPUs as of 2021. What really matters here is the manufacturing size and density - e.g. a 7nm chip will be more power efficient than a 12nm or 14nm one.

ARM CPUs (e.g. Raspberry Pi and friends) are more efficient again (though the rule-of-thumb about manufacturing size & density does not hold true here). If you haven't yet bought any hardware for your next project, this is definitely worth considering.

Auto-on: Depending on your task, you might only need your device on for a short time each day. Most BIOSes will have a setting to automatically power on at a set time, so you could do this and then set the server to automatically power off when it has completed its task.

Another consideration is automatically entering standby. This can be done with the rtcwake command. While not as power efficient as turning completely off, it should still net measurable power savings.
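
As a quick illustrative example (the exact sleep mode your hardware supports may vary), suspending to RAM and automatically waking up an hour later with rtcwake looks like this:

# Suspend to RAM ("mem") and set the RTC to wake the machine after 3600 seconds
sudo rtcwake -m mem -s 3600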

Firmware: Tools such as powertop (sudo apt install powertop on Debian-based systems) can help apply a number of optimisations. In the case of powertop, don't forget to add the optimisations you choose to your /etc/rc.local to auto-apply them on boot. Example things that you can optimise using powertop include:

  • Runtime power management for WiFi / Bluetooth
  • SATA power management
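
As a simple illustration, powertop can apply all of its suggested tunables in one go - a line like the following in /etc/rc.local (before any exit 0 line) would re-apply them on every boot, though you may prefer to toggle individual tunables instead:

powertop --auto-tune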

Disk activity: Again situationally dependent, but if you have a lot of disks attached to your server, reducing writes can have a positive impact on power usage. Tuning this is generally done with the hdparm command (sudo apt install hdparm). See this Unix Stack Exchange question, and also this Ask Ubuntu answer for more details on how this is done.
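
As a hypothetical example of the kind of tuning hdparm can do (the device name and values are placeholders - check the hdparm man page and the links above before applying anything):

# Allow /dev/sdX to spin down after 10 minutes of inactivity (-S 120 = 120 x 5 seconds),
# and set an APM level that permits spin-down (values of 127 or below allow it)
sudo hdparm -S 120 -B 127 /dev/sdX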

Software: Different applications will use different amounts of system resources, which in turn will consume different amounts of power. For example, GitLab is rather resource inefficient, but Gitea is much more efficient with resources. Objectively evaluating multiple possible candidate programs that solve your given problem is important if power savings are critical to your use-case.

Measuring resource usage over time (e.g. checking the CPU Time column in htop for example) is probably the most effective way of measuring this, though you'd want to devise an experiment where you run each candidate program in turn for a defined length of time and measure a given set of metrics - e.g. CPU time.

Measurement: Speaking of metrics, it's worth noting that while all these suggestions are interesting, you should absolutely measure the real power savings you get from implementing these suggestions. Some will give you more of a net gain for less work than others.

The best way I know of to do this is to use a power monitor like this one that I've bought previously, plug your device into it, and then come back a given amount of time later to record the total number of watt hours of electricity used. For USB devices such as the Raspberry Pi, if I remember rightly I purchased this device a while back, and it works rather well.

This will definitively tell you whether implementing a given measure will net you a significant decrease in power usage or not, which you can then weight against the effort required.

Users and access control in the Mosquitto MQTT server

A while ago, I blogged about how to setup an MQTT server with Mosquitto. In this one, I want to talk about how to setup multiple user accounts and how to implement access control.

In this post, I'll assume that you've already followed my previous post to which I've linked above.

User accounts

User accounts are a great security measure, as they prevent anyone without a password from accessing your MQTT server. Thankfully, they are pretty easy to do too - you just need a user / password file, and a directive in the main mosquitto.conf file to get it to read from it.

First, let's create a new users file:

sudo touch /etc/mosquitto/mosquitto_users
sudo chown mosquitto:mosquitto /etc/mosquitto/mosquitto_users
sudo chmod 0640 /etc/mosquitto/mosquitto_users

Then you can create new users like this:

sudo mosquitto_passwd /etc/mosquitto/mosquitto_users new_username_1

...replacing new_username_1 with the username of the new account you want to create. Upon executing the above, it will prompt you to enter a new password. Personally I use Keepass2 for this purpose, but you can create good passwords on the command line directly too:

dd if=/dev/urandom bs=1 count=20 | base64 | tr -d '+/='

Now that we have a users file, we can tell mosquitto about it. Add the following to your /etc/mosquitto/mosquitto.conf file:

# Require a username / password to connect
allow_anonymous false
# ....which are stored in the following file
password_file /etc/mosquitto/mosquitto_users

This disables anonymous access, and tells mosquitto where the username / password file is located.

In future if you want to delete a user, do that like this:

sudo mosquitto_passwd /etc/mosquitto/mosquitto_users -D new_username_1
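
To check that authentication is working, a quick hypothetical test with mosquitto_pub (from the mosquitto-clients package) might look like this - the hostname and topic are placeholders:

mosquitto_pub -h mqtt.example.com -u new_username_1 -P 'PASSWORD_HERE' -t test/hello -m "hello, world"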

Access control

Access control is similar to user accounts. First, we need an access control file - which describes who can access what - and then we need a directive in the mosquitto.conf file to tell Mosquitto about it. Let's start with that access control file. Mine is located at /etc/mosquitto/mosquitto_acls.

# Directives here affect anonymous users, but we've disabled anonymous access

user username_here
topic readwrite foo/#

user bob
topic read rockets/status

There are 2 parts to the ACL file. First, the user directive sets the current user for which any following topic directives apply.

The topic directive allows the current user to read, write, or readwrite (both at the same time) a given topic. MQTT as a protocol is built on the idea of publishing (writing) to or subscribing (reading from) topics. Mosquitto assumes that a user has no access at all unless 1 or more topic directives are present to allow access.

The topic directive is comprised of 3 parts. First, the word topic is the name of the directive.

Next, any 1 of the following words declares what kind of access is being granted:

  • read: Read-only access
  • write: Write-only access
  • readwrite: Both read and write access

Finally, the name of the topic that is being affected by the access rule is given. This may include a hash symbol (#) as a wildcard. For example, rockets/status would affect only that specific topic, but space/# would affect all topics that start with space/.

Here are some more examples:

# Allow read access to "my_app/news"
topic read my_app/news

# Allow write access to "rockets/status"
topic write rockets/status

# Allow read and write access to everything under "another_app/"
topic readwrite another_app/#

Once you've created your ACL file, add this to your mosquitto.conf (being careful to put it before any listener directives if you have TLS / MQTTS support enabled):

acl_file /etc/mosquitto/mosquitto_acls

This will tell Mosquitto about your new access control file.

Reloading changes

After making changes above, you'll want to tell Mosquitto to reload the configuration file. Do that like this:

sudo systemctl reload mosquitto-mqtt.service

If your systemd service file doesn't support reloading, then a restart will do. Alternatively, add this to your systemd service file to the [Service] section:

ExecReload=/bin/kill -s HUP $MAINPID

Conclusion

In this tutorially-kinda post, I've talked through how to manage user accounts for the Mosquitto MQTT server. I've also talked about how to enable and manage access control lists too.

This should make your MQTT server more secure. The other thing you can do to make your MQTT server more secure is enable TLS encryption. I'm going to hold off on showing that in this post because I'm still unsure about the best way of doing it (getting Mosquitto to do it vs using Nginx as a reverse proxy - I'm currently testing the former), but if there's the demand I'll post about it in the future.

NAS, Part 4: Time machines | Automatic snapshotting with btrfs-snapshot

In the last part in this series, I compared ZFS with Btrfs. I ended up choosing Btrfs because it was easier to install and came with a number of advantages. Since last time, I've put Btrfs to work and have ~1.3 TiB of data stored in it (much of which is from various devices across the network automatically backing up to it). Before we continue, here's a list of the parts in the series so far:

In this post, I'm going to talk about the automatic snapshotting I've setup. Btrfs supports creating snapshots, which are defined as subvolumes that are seeded with data from another subvolume (boundaries between subvolumes are not crossed). Most of the time, these are created to be read-only. In addition because of the copy-on-write system Btrfs uses, a snapshot takes no disk space on its own (other than that required to store the fact that it exists) - it only starts to consume disk space when files that it contains are modified in the original subvolume.

To this end, we can efficiently keep a rotating series of snapshots to serve as an initial safety net should someone accidentally delete a file. Of course, we can't assume that snapshots will be ok as the only backup (I use Restic for that - I'm in the process of reconfiguring it for my new setup) - but they are still useful things to have.

To take a Btrfs snapshot, you can do this:

sudo btrfs subvolume snapshot -r path/to/source_subvolume path/to/target

The problem here, of course, is that you also need a way to delete old snapshots too. While I could roll my own solution for this, I figured that someone has already solved this problem - so it might save me some effort if I look for a pre-existing solution first.

After doing a bit of searching without success, I asked on Reddit, and the helpful folks there gave me a number of suggestions:

Of these 3, snapper seemed to be the most popular. From some reading, it appeared to be powerful and flexible - at the cost of being harder to understand. btrbk seemed to be feature-packed too, but in the end I decided on btrfs-snapshot.

btrfs-snapshot is designed to be used with cron. For example, I have something like this for one of my subvolumes in the root user's crontab:

0 * * * *       /root/btrfs-snapshot-rotation/btrfs-snapshot path/to/subvolume path/to/subvolume/.snapshots hourly 8
0 2 * * *       /root/btrfs-snapshot-rotation/btrfs-snapshot path/to/subvolume path/to/subvolume/.snapshots daily 4
0 2 * * 7       /root/btrfs-snapshot-rotation/btrfs-snapshot path/to/subvolume path/to/subvolume/.snapshots weekly 4

Given a subvolume at path/to/subvolume, it creates the following snapshots in a nested subvolume in path/to/subvolume/.snapshots (which needs to be created manually: sudo btrfs subvolume create path/to/subvolume/.snapshots):

  • 8 x hourly snapshots
  • 4 x daily snapshots
  • 4 x weekly snapshots

I find the system so beautifully simple and easy to understand. This is important for me in a system like this, as it has to be easy for me to understand when I inevitably come back to it months or even years later when I've forgotten how it works. The arguments to btrfs-snapshot are easy to understand, and are in the form path/to/source path/to/target tag_name number_of_snapshots_to_keep.

This has the added bonus that if a user deletes a file accidentally in our shared drive, they can retrieve it on their own from the .snapshots directory - without my intervention.
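
For example, restoring an accidentally deleted file is just a copy out of the relevant read-only snapshot - the snapshot name below is a placeholder, as the exact naming depends on the tag and timestamp btrfs-snapshot uses:

# See which snapshots are available
ls path/to/subvolume/.snapshots
# Copy the file back out of the chosen snapshot
cp path/to/subvolume/.snapshots/SNAPSHOT_NAME/path/to/deleted_file path/to/subvolume/path/to/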

With this in place and the data (mostly) moved over, my NAS project is almost complete. The final task I have left to do is to setup a proper backup system with Restic to either a remote (e.g. Backblaze B2) or offline location (such as an external HDD).

The latter might prove to be a problem though, since the maximum amount of data I can store right now is 5.5 TiB and is only going to grow from there. Portable external hard drives I've seen online don't appear to go up that high, so I suspect I'll need to choose another plan.

Should I encounter some interesting issues when setting this final backup step up, I'll make an additional post in this series. If not though, this will probably be the last entry in this series. If you have any questions about my setup, please comment below! I'll do my best to answer any questions.

NAS, Part 3: Decisions | Choosing a Filesystem

It's another entry in my NAS series! It's still 2020 for me as I type this, but I hope that 2021 is going well. Before we continue, I recommend checking out the previous posts in this series:

Part 1 in particular is useful for context as to the hardware I'm using. Part 2 is a review of my experience assembling the system. In this part, we're going to look at my choice of filesystem and OS.

I left off in the last post after I'd booted into the installer for Ubuntu Server 20.04. After running through that installer, I performed my collection of initial setup tasks for any server I manage:

  • Setup an SSH server
  • Enable UFW
  • Setup my personal ~/bin folder
  • Assign a static IP address (why won't you let me choose an IP, Netgear RAX120? Your UI lets me enter a custom IP, but devices don't ultimately end up with the IP I tell you to assign to them...)
  • Setup Collectd
  • A number of other tasks I forget

With my basic setup completed, I also setup a few things specific to devices that have SMART-enabled storage devices:

  • Setup an email relay (via autossh) for mail delivery
  • Installed smartd (which sends you emails when there's something wrong with 1 of your disks)
  • Installed and configured hddtemp, and integrated it with collectd (a topic for another post, I did this for the first time)

With these out of the way and after making a mental note to sort out backups, I could now play with filesystems with a view to making a decision. The 2 contenders:

  • (Open)ZFS
  • Btrfs

Both of these filesystems are designed to be spread across multiple disks in what's known as a pool thereof. The idea behind them is to enable multiple disks to be presented to the user as a single big directory, with the complexities of which disk(s) a file is stored on handled behind the scenes. They also come with extra nice features, such as checksumming (which allows them to detect corruption), snapshotting (taking snapshots of what the filesystem looks like at a given point in time), automatic data deduplication, compression, snapshot send / receiving, and more!

Overview: ZFS

ZFS is a filesystem originally developed by Sun Microsystems in 2001. Since then, it has been continually developed and improved. After Oracle bought Sun Microsystems in 2010, the source code for ZFS was closed - hence the OpenZFS fork was born. It's licenced under the CDDL, which isn't compatible with the GPLv2 used by the Linux Kernel. This causes some minor installation issues.

As a filesystem, it seems to be widely accepted to be rock solid and mature. It's used across the globe by home users and businesses both large and small to store huge volumes of data. Given its long history, it has proven its capability to store data safely.

It does however have some limitations. For one, it only has limited support for adding drives to a zpool (a pool of disks in the ZFS world), which is a problem for me - as I'd prefer to have the ability to add drives 1 at a time. It also has limited support for changing key options such as the compression algorithm later, as this will only affect new files - and the only way to recompress old files is to copy them in and out of the disk again.

Overview: Btrfs

Btrfs, or B-Tree File System, is a newer filesystem whose development began in 2007; it was accepted into the Linux Kernel in 2009 with the release of version 1.0. It's licenced under the GPLv2, the same licence as the Linux Kernel. As of 2020, many different distributions of Linux ship with btrfs installed by default - even if it isn't the default filesystem (that's ext4 in most cases).

Unlike ZFS, Btrfs isn't as well-tested in production settings. In particular, its raid5 and raid6 modes of operation are not well tested (though this isn't a problem, since raid1 operates at file/block level and not disk level as it does with ZFS, which enables us to use interesting setups like raid1 striped across 3 disks). Despite this, it does look to be stable enough - particularly as openSUSE has set it to be the default filesystem.

It has a number of tempting features over ZFS too. For example, it supports adding drives 1 at a time, and you can even convert your entire pool from 1 raid level to another dynamically while it's still mounted! The same goes for converting between compression algorithms - it's all done using a generic filter system.

Such a system is useful when adding new disks to the pool too, as it can be used to rebalance data across all the disks present - allowing for new disks to be accounted for and faulty disks to be removed, preserving the integrity of the data while a replacement disk is ordered, for example.

While btrfs does have a bold list of features that they'd like to implement, they haven't gotten around to all of them yet (the status of existing features can be found here). For example, while ZFS can use an SSD as a dedicated caching device, btrfs doesn't yet have this ability - and nobody appears to have claimed the task on the wiki.

Performance

Inspired by a recent Ars Technica article, I'd like to test the performance of the 2 filesystems at hand. I ran the following tests for reading and writing separately:

  • 4k-random: Single 4KiB random read/write process
  • 64k-random-16p: 16 parallel 64KiB random read/write processes
  • 1m-random: Single 1MiB random read/write process

I did this for both ZFS in raid5 mode, and Btrfs in raid5 (though if I go with btrfs I'll be using raid1, as I later discovered - which I theorise would yield a minor performance improvement). I tested ZFS twice: once with gzip compression, and again with zstd compression. As far as I can tell, Btrfs doesn't have compression enabled by default. Other than the compression mode, no other tuning was done - all the settings were left at their defaults. Both filesystems were completely empty aside from the test files, which were created automatically in a chowned subdirectory by fio.
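
For reference, a fio invocation approximating the single-process 4KiB random read test looks something like this - an illustrative sketch rather than the exact job definitions I used:

fio --name=4k-random-read --directory=/path/to/test/directory \
    --rw=randread --bs=4k --size=1g --numjobs=1 \
    --time_based --runtime=60 --group_reporting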

Graphs showing the results of the above tests. See the discussion below.

The graph uses a logarithmic scale. My initial impressions are that ZFS benefits from parallelisation to a much greater extent than btrfs - though I suspect that I may be CPU bound here, which is an unexpected finding. I may also be RAM-bound too, as I observed a significant increase in RAM usage when both filesystems were under load. Buying another 8GB would probably go a long way to alleviating that issue.

Other than that, zstd appears to provide a measurable performance improvement over gzip compression. Btrfs also appears to benefit from writing larger blocks over smaller ones.

Overall, some upgrades to my NAS are on the cards should I be unsatisfied with the performance in future:

  • More RAM would assist in heavy i/o loads
  • A better CPU would probably raise the peak throughput speeds - if I can figure out what to do with the old one

But for now, I'm perfectly content with these speeds. Especially since I have a single gigabit ethernet port on my storage NAS, I'm not going to need anything above 1000Mbps - which is 119.2 MiB/s if you'd like to compare against the graph above.
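
For those wondering where that figure comes from, the conversion is simply:

1000 Mbps ÷ 8 bits/byte = 125 MB/s
125,000,000 bytes/s ÷ 1,048,576 bytes/MiB ≈ 119.2 MiB/s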

Conclusion

As for my final choice of filesystem, I think I'm going to go with btrfs. While I'm aware that it isn't as 'proven' as ZFS - and slightly less performant too - I have a number of reasons for this decision:

  1. Btrfs allows you to add disks 1 at a time, and ZFS makes this difficult
  2. Btrfs has the ability to convert to a different raid level at a later date if I change my mind
  3. Btrfs is easier to install, since it's already built-in to Ubuntu Server 20.04 (see the sketch below).
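
For the curious, the setup I have in mind looks something like the following sketch - the device names and mount point are placeholders, so don't copy them verbatim:

    # Create a btrfs filesystem with raid1 data and metadata across all 3 disks
    sudo mkfs.btrfs -L storage -d raid1 -m raid1 /dev/sdX /dev/sdY /dev/sdZ

    # Mount it (any one of the member devices will do) with zstd compression enabled
    sudo mount -o compress=zstd /dev/sdX /mnt/storage

Because Btrfs raid1 works at the block level, the 2 copies of each block can live on any 2 of the 3 disks - which is what makes an odd number of drives work.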

NAS, Part 2: Assembly and Installation

Welcome back! This is part 2 of a series of posts about the new NAS (network attached storage) device I'm building. If you haven't read it yet, I recommend you go back and read part 1, in which I talk about the hardware I'm using.

Since the Fractal Design Node 804 case came first, I was able to install the parts into it as they arrived. First up was the motherboard (an ASUS PRIME B450M-A) and CPU (an AMD Athlon 3000G).

The motherboard was a pain. As I'd read beforehand, the middle panel of the case has some flex in it, so you've got to hold it in place with one hand while you're screwing the motherboard in. This in and of itself wasn't an issue at all, but the screws for the motherboard were really stiff. I think this was just my particular motherboard, but it was annoying.

Thankfully I managed it though, and then set to work installing the CPU. This went well - the CPU came with thermal paste pre-applied, so I didn't need to buy my own. The installation process for the stock CPU heatsink + fan was unfamiliar though, so it took me a moment to decipher how the mechanism worked.

Following this, I connected the front ports from the case up to the motherboard (consulting my motherboard's documentation showed me where I needed to plug these in - I remember this being something I struggled with when I last built an (old) PC during some IT technician work experience years ago). The RAM - while a little stiff (to be expected) - went in fine too. I might buy another stick later if I run into memory pressure, but I thought a single 8GB stick would be a good place to start.

The case came with a dedicated fan controller board that has a high / medium / low switch on the back too, so I wired up the 3 included Noctua case fans to this instead of the slots on the motherboard. The CPU fan (nothing special yet - just the stock fan that came with the CPU) went into the motherboard though, as the fan controller didn't have room - and I thought that the motherboard would be better placed to control the speed of that one.

The inside of the 2 sides of the case.

(Above: The inside of the 2 sides of the case. Left: The 'hot' side, Right: The 'cold' side.)

The case is split into 2 sides: 1 for 'hot' components (e.g. the motherboard and CPU), and another for 'cold' components (e.g. the HDDs and PSU). Next up were the drives. I mounted the SSD for the operating system to the base of the case on the 'hot' side, as the carriage on the 'cold' side only fits 3.5 inch disks, and my SSD is a 2.5 inch disk. While this made the cabling slightly awkward, it all worked out in the end.

For the 3.5 inch HDDs (for data storage), I found I was unable to mount them with the included pieces of bracket metal that allow you to put screws into the bottom set of holes - as the screws wouldn't fit through the top holes. I just left the metal bracket pieces out and mounted the HDDs directly into the carriage, and it seems to have worked well so far.

The PSU was uneventful too. It fit nicely into the space provided, and its semi-modular design helped tremendously in avoiding a mess of cables all over the place, as I could leave out the cables I didn't need.

Finally, the DVD writer had some stiff screws, but it mounted well enough (just a note: I've been having an issue with this DVD drive whereby I can't take a copy of a disc - e.g. the documentation CD that came with my motherboard - with dd, as it reports an IO error. I need to investigate this further, so more on that in a later post).
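
For context, the sort of command I mean is just a plain dd from the optical drive to an image file - the device node and output filename here are assumptions, but this is roughly where the IO error appears:

    # Image the disc currently in the drive to an ISO file
    sudo dd if=/dev/sr0 of=motherboard-docs.iso bs=2048 status=progress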

The installation of the DVD drive completed the assembly process. To start it up for the first time, I connected my new NAS to my television temporarily so that I could see the screen. The machine booted fine, and I dove straight into the BIOS.

The BIOS that comes with the ASUS motherboard I bought

(Above: The BIOS that comes with the ASUS motherboard, before the clock was set by Ubuntu Server 20.04 - which I had yet to install)

Unlike the one on my new laptop, the BIOS that comes with the ASUS motherboard is positively delightful. It has all the features you'd need, laid out in a friendly interface. I observed some minor input lag, but considering this is a BIOS we're talking about, I can definitely overlook that. It even has an online update feature, where you can plug in an Ethernet cable and download + install BIOS updates from the Internet.

I tweaked a few settings here, and then rebooted into my flash drive - onto which I loaded an Ubuntu Server 20.04 ISO. It booted into this without complaint (unlike a certain laptop I'm rather unhappy with at the moment), and then I selected the appropriate ISO and got to work installing the operating system (want your own multiboot flash drive? I've blogged about that already! :D).

In the next post, I'm going to talk about the filesystem I ultimately chose. I'm also going to show and discuss some performance tests I ran using fio following this Ars Technica guide.

NAS, Part 1: We need a bigger rocket

In my cluster series of posts, I've been talking about how I've built a Raspberry Pi-based cluster for running compute tasks (latest update: I've got Let's Encrypt working with the DNS-01 challenge, stay tuned for a post on that soon). Currently, this is backed by a Raspberry Pi 3 with a 1TB WD PiDrive attached. This setup has a number of issues:

  • The Raspberry Pi 3 has a 100Mbps network port
  • It's not redundant
  • I'm running out of storage space

I see 2 ways of solving these issues:

  1. Building a clustered file system, with 1 3.5 inch drive per Pi (or Odroid HC2 perhaps)
  2. Building a more traditional monolithic NAS

Personally, my preference here is option #1, but unfortunately due to some architectural issues in my house (read: the wiring needs redoing by an electrician) I don't actually have access to the number of wall sockets I'd need to put together a clustered setup. If I get those issues sorted, I'll certainly take a look at upgrading - but for now I've decided that I'm going to put together a more traditional monolithic NAS (maybe it can become the backup device in future, who knows), as it will only require a single wall socket (the situation is complicated. Let's just move on).

To this end, I decided to start with a case and go from there. Noise is a big concern for me, so I chose the Fractal Design Node 804, as it has a number of key features:

  • It has lots of space for disks
  • It comes with some quiet fans
  • The manufacturer appears to be quite popular and reputable

From here, I picked the basic components for the system using PC Part Picker. I haven't actually built an amd64 system from scratch before - I use laptops as my main device (see my recent review of the PC Specialist Proteus VIII), and Raspberry Pis (and an awesome little 2nd hand Netgear GS116v2 switch) currently form the backbone of my server setup.

These components included:

  • An ASUS PRIME B450M-A motherboard: 6 x SATA ports, AM4 CPU socket
  • An AMD Athlon 3000G: I don't need much compute horsepower in this build, since it's for storage (I would have got an Athlon 200GE instead as it's cheaper, but they were all out of stock)
  • 8GB Corsair Vengeance LPX DDR4 2666MHz RAM: The highest frequency the CPU supports - I got a single stick here to start with. I'll add additional sticks as and when I need them.
  • 120GB Gigabyte SSD: For the OS. Don't need a lot of storage here, since all the data is going to be on 3.5 inch HDDs instead
  • 3 x 4TB WD Red Plus WD40EFRX (CMR): These are my main data storage drives. I'm starting with 3 4TB drives, and I'll add more as I need them. The Node 804 case (mentioned above) supports up to 10 disks, apparently - so I should have plenty of space.
  • SeaSonic CORE GM 500 W 80+ Gold PSU: The most efficient PSU I could afford. I would have loved an 80+ titanium (apparently they are at least 94% efficient at 50% load), but at £250+ it's too much for my budget.
  • LG GS40N DVD writer: Apparently the Node 804 case has a slimline DVD drive slot (i.e. like one you might find in a car). It wasn't too expensive, and being able to ingest CDs/DVDs is appealing.

For the data storage itself, my (initial) plan is to use OpenZFS in RAIDZ mode, which requires a minimum of 3 drives. An online calculator suggests that with the above drives I'll have 8TB of usable capacity. Initial research does suggest though that expanding a ZFS storage pool may not be as easy as I thought it was (related, see also), so more research is definitely needed before I commit to a single filesystem / set of settings.
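
If I do go down the OpenZFS route, creating the pool would look something like this sketch - the pool name and device paths are placeholders, and in practice you'd want to use the stable /dev/disk/by-id/ names instead:

    # Create a raidz1 (single-parity) pool from the 3 x 4TB drives
    sudo zpool create -o ashift=12 tank raidz1 /dev/sdX /dev/sdY /dev/sdZ

    # Enable lz4 compression across the whole pool
    sudo zfs set compression=lz4 tank

With raidz1, one disk's worth of capacity goes to parity, which is where that 8TB usable figure (2 x 4TB) comes from.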

I've heard of Btrfs too, but I've also heard of some stability and data loss issues. Those reports were from several years ago though, so I'll be reviewing its suitability again before making a decision here.

In future posts, I'm going to talk about my experience assembling the build. I'm also going to look at how I eventually set up the filesystem (which filesystem I'll choose is as yet undecided). I'll also be running some tests on the setup to evaluate how well it performs and how it handles failure. Finally, I may make a bonus post in this series about the challenges I encounter migrating my existing (somewhat complicated) data storage setup to the new NAS.

Found this interesting? Got a suggestion? Comment below!

Art by Mythdael