Running multiple local versions of CUDA on Ubuntu without sudo privileges

I've been playing around with Tensorflow.js for my PhD (see my PhD Update blog post series), and I had some ideas that I wanted to test out on my own that aren't really related to my PhD. In particular, I've found this blog post to be rather inspiring - where the author sets up a character-based recurrent neural network to generate text.

The idea of transcoding all those characters to numerical values and back seems like too much work and too complicated just for a quick personal project though, so my plan is to try and develop a byte-based network instead, in the hopes that I can not only teach it to generate text as in the blog post, but valid Unicode as well.

Obviously, I can't really use the University's resources ethically for this (as it's got nothing to do with my University work) - so since I got a new laptop recently with an Nvidia GeForce RTX 2060, I thought I'd try and use it for some machine learning instead.

The problem here is that Tensorflow.js requires CUDA 10.0 specifically, but since I'm running Ubuntu 20.10 with all the latest patches installed, I have CUDA 11.1. A quick search of the apt repositories on my system reveals nothing that suggests I can install older versions of CUDA alongside the newer one, so I had to devise another plan.

I discovered some months ago (while working with Viper - my University's HPC - for my PhD) that you can actually extract - without sudo privileges - the contents of the CUDA .run installers. By then fiddling with your PATH and LD_LIBRARY_PATH environment variables, you can get any program you run to look for the CUDA libraries elsewhere instead of loading the default system libraries.

Since this is the second time I've done this, I thought I'd document the process for future reference.

First, you need to download the appropriate .run installer for the CUDA libraries. In my case I need CUDA 10.0, so I've downloaded mine from here:

https://developer.nvidia.com/cuda-10.0-download-archive?target_os=Linux&target_arch=x86_64&target_distro=Ubuntu&target_type=runfilelocal

Next, we need to create a new subdirectory and extract the .run file into it. Do that like so:

cd path/to/runfile_directory;
mkdir cuda-10.0
./cuda_10.0.130_410.48_linux.run --extract=${PWD}/cuda-10.0/

Make sure that the path to the current working directory contains no spaces - and preferably no other special characters either. Also, adjust the file and directory names to suit your situation.

Once done, this will have extracted 3 sub-files - which also have the suffix .run. We're only interested in CUDA itself, so we only need to extract the one that starts with cuda-linux. Do that like so (adjusting file/directory names as before):

cd cuda-10.0;
./cuda-linux.10.0.130-24817639.run -noprompt -prefix=$PWD/cuda;
rm *.run;
mv cuda/* .;
rmdir cuda;

If you run ./cuda-linux.10.0.130-24817639.run --help, it's actually somewhat deceptive - since there's a typo in the help text! I've corrected it in the command above though. Once done, this should leave the CUDA libraries in the current working directory - that is, in a subdirectory next to the original .run file:

+ /path/to/some_directory/
    + ./cuda_10.0.130_410.48_linux.run
    + cuda-10.0/
        + version.txt
        + bin/
        + doc/
        + extras/
        + ......

Now, it's just a case of fiddling with some environment variables and launching your program of choice. You can set the appropriate environment variables like this:

export PATH="/absolute/path/to/cuda-10.0/bin:${PATH}";
if [[ ! -z "${LD_LIBRARY_PATH}" ]]; then
    export LD_LIBRARY_PATH="/absolute/path/to/cuda-10.0/lib64:${LD_LIBRARY_PATH}";
else
    export LD_LIBRARY_PATH="/absolute/path/to/cuda-10.0/lib64";
fi

You could save this to a shell script (putting #!/usr/bin/env bash as the first line, and then running chmod +x path/to/script.sh), and then execute it in the context of the current shell like so:

source path/to/activate-cuda-10.0.sh
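For reference, the complete activation script might look something like this (just the snippet above with a shebang and a variable for the path - adjust it to wherever you extracted CUDA to):

#!/usr/bin/env bash
# activate-cuda-10.0.sh: point this shell at the locally extracted CUDA 10.0
cuda_root="/absolute/path/to/cuda-10.0"; # adjust me

export PATH="${cuda_root}/bin:${PATH}";
if [[ ! -z "${LD_LIBRARY_PATH}" ]]; then
    export LD_LIBRARY_PATH="${cuda_root}/lib64:${LD_LIBRARY_PATH}";
else
    export LD_LIBRARY_PATH="${cuda_root}/lib64";
fi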

Many deep learning applications that use CUDA also use CuDNN, a deep learning library provided by Nvidia for accelerating deep learning applications. The archived versions of CuDNN can be found here: https://developer.nvidia.com/rdp/cudnn-archive

When downloading (you need an Nvidia developer account, but thankfully this is free), pay attention to the version being requested in the error messages generated by your application. Also take care to pick the CuDNN build that matches the version of CUDA you're using.

When you download, select the "cuDNN Library for Linux" option. This will give you a tarball, which contains a single directory cuda. Extract the contents of this directory over the top of your CUDA directory from following my instructions above, and it should work as intended. I used my graphical archive manager for this purpose.
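If you'd rather do it from the terminal instead, something like this should also do the trick (the tarball filename here is just an example - yours will depend on the CuDNN version you downloaded):

cd path/to/downloads;
# Example filename - adjust to match the CuDNN version you downloaded
tar -xzf cudnn-10.0-linux-x64-v7.6.5.32.tgz;
# The tarball contains a single directory called cuda - copy its contents over the extracted CUDA directory
cp -r cuda/* /absolute/path/to/cuda-10.0/;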

Proteus VIII Laptop from PC Specialist in Review

Recently I bought a new laptop from PC Specialist. Unfortunately I've lost the original quote / specs that were sent to me, but it was a Proteus VIII. It has the following specs:

  • CPU: Intel i7-10875H
  • RAM: 32 GiB DDR4 2666MHz
  • Disk: 1 TiB SSD (M.2; nvme)
  • GPU: Nvidia GeForce RTX 2060

In this post, I want to give a review now that I've had the device for a short while. I'm still experiencing some teething issues (more on those later), but I've experienced enough of the device to form an opinion on it. This post will also serve as a sort-of review of the installation process of Ubuntu too.

It arrived in good time - thankfully I didn't have any issues with their choice of delivery service (DPD in my area have some problems). I did have to wait a week or 2 for them to build the system, but I wasn't in any rush so this was fine for me. The packaging it arrived in was ok. It came in a rather large cardboard box, inside which there was some plastic padding (sad face), inside which there was another smaller cardboard box. Work to be done in the eco-friendly department, but on the whole good here.

I ordered without an operating system, as my preferred operating system is Ubuntu (the latest version is currently 20.10 Groovy Gorilla). The first order of business was the OS installation. This went fine - but only after I could actually get the machine to boot! It turns out that despite it appearing to have support for booting from USB flash drives as advertised in the boot menu, this feature doesn't actually work. I tried the following:

  • The official Ubuntu ISO flashed to a USB 3 flash drive
  • A GRUB installation on a USB 3 flash drive
  • A GRUB installation on a USB 2 flash drive
  • Ubuntu 20.10 burned to a DVD in an external DVD drive (ordered with the laptop)

....and only the last one worked. I've worked with a diverse range of different devices, but never have I encountered one that completely refused to boot at all from a USB drive. Clearly some serious work is required on the BIOS. The number of different settings in the BIOS was also somewhat limited compared to other systems I've poked around on, but I can't give any specific examples of things that were missing (other than a setting to toggle the virtualisation extensions, which was on by default) - so I guess it doesn't matter all that much. The biggest problem is the lack of USB flash drive boot support - that was really frustrating.

When installing Ubuntu this time around, I decided to try enabling LVM (Logical Volume Management, it's very cool I've discovered) and a LUKS encrypted hard drive. Although I've encountered these technologies before, this will be my first time using them regularly myself. Thankfully, the Ubuntu installer did a great job of setting this up automatically (except the swap partition, which was too small to hibernate, but I'll talk about that in a moment).

Once installed, I got to doing the initial setup. I'm particularly picky here - I use the Unity 7.5 Desktop (yes, I know Ubuntu now uses the GNOME shell, and no I haven't yet been able to get along with it). I'll skip over the details of the setup here, as it's not really relevant to the review. I will mention though that I'm also using X11, not Wayland at the moment - and that I have the proprietary Nvidia driver installed (version 450 at the time of typing).

Although I've had a discrete graphics card before (most recently an AMD Radeon R7 M445, and an Nvidia 525M), this is the first time I've had one that's significantly more powerful than the integrated graphics built into the CPU. My experience with this so far is mostly positive (it's rather good at rendering in Blender, but I have yet to stress it significantly), and in some graphical tests it gives significantly higher frame rates than the integrated graphics. If you use the proprietary graphics drivers, I recommend going into the Nvidia X server settings (accessed through the launcher) → PRIME Profiles, changing it to "On-Demand", and then rebooting. This will prolong your battery life and reduce the noise from the fans by using the integrated graphics by default, but allow you to run select applications on the GPU (see my recent post on how to do this).

It's not without its teething issues though. I think I'm just unlucky, but I have yet to setup a system with an Nvidia graphics card where I haven't had some kind of problem. In this case, it's screen flickering. To alleviate this somewhat, I found and followed the instructions in this Ask Ubuntu Answer. I also found I had to enable the Force synchronization between X and GLX workaround (and maybe another one as well, I can't remember). Even with these enabled, sometimes I still get flickering after it resumes from suspension / stand by.

Speaking of stand by mode, I've found that this laptop does not like hibernation at all. I'm unsure as to whether this is just because I'm using LVM + LUKS, or whether it's an issue with the device more generally, but if I try sudo pm-hibernate from the terminal, the screen flashes a bit, the mouse cursor disappears, and then the fan spins up - with the screen still on and all my windows apparently still open.

I haven't experimented with the quirks / workarounds provided yet, but I guess this ties into the earlier problems with the BIOS - there are some clear issues there that need to be resolved.

This hibernation issue also ties into the upower subsystem, in that even if you tell it (in both the Unity and GNOME desktop shells) to "do nothing" on low battery, it will forcefully turn the device off - even if you're in the middle of typing a sentence! I think this is because upower doesn't seem to have an option for suspend or "do nothing" in /etc/UPower/UPower.conf or something? I'm still investigating this issue (if you have any suggestions, please do get in touch!).
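For anyone else digging into this, I believe the relevant keys in /etc/UPower/UPower.conf look something like this (the values shown are illustrative defaults, not necessarily what your system ships with - the point being that none of the supported critical power actions amount to "do nothing"):

[UPower]
UsePercentageForPolicy=true
PercentageLow=10
PercentageCritical=3
PercentageAction=2
# Supported values are PowerOff, Hibernate, and HybridSleep - there's no suspend or "do nothing" option here
CriticalPowerAction=HybridSleep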

Despite these problems, the build quality seems good. It's certainly nice having a metal frame, as it feels a lot more solid than my previous laptop. The keyboard feels great too - the feedback from pressing the keys enhances the feeling of a solid frame. The keyboard is backlit too, which makes for a more pleasant experience in dimly lit rooms (though proper lighting is a must in any workspace).

The layout of the keyboard feels a little odd to me. It's a UK keyboard, yes (I use a UK keyboard myself), but it doesn't have dedicated Home / End / Page Up / Page Down keys - these are built into the number pad at the right hand side of the keyboard. Toggling the number lock every time I want to use these keys has taken some getting used to, and it increases cognitive load.

It does have a dedicated SysRq key though (which my last laptop didn't have), so now I can follow articles like this one and use the SysRq feature to talk to the Linux Kernel directly in case of a lock-up or crash (I have had the screen freeze on me once or twice - I later discovered this was because it had attempted to hibernate and failed, and I also ran into this problem, which I have yet to find a resolution to), or in case I accidentally set off a program that eats all of the available RAM.

The backlight of the keyboard goes from red at the left-hand side to green in the middle, and blue at the right-hand side. According to the PC Specialist forums, there's a driver that you can install to control this, but the installation seems messy - and it would probably need recompiling every time you install a new kernel, since DKMS (Dynamic Kernel Module Support) isn't used. I'm ok with the default for now, so I haven't bothered with this.

The touchpad does feel ok. It supports precision scrolling, has a nice feel to it, and isn't too small, so I can't complain about it.

The laptop doesn't have an inbuilt optical drive, which is another first for me. I don't use optical disks often, but it was nice having a built-in drive for this in previous laptops. An external one just feels clunky - but I guess I can't complain too much because of the extra components and power that are built-in to the system.

The airflow of the system - as far as I can tell so far - is very good. Air comes in through the bottom, and is then pushed out again through the back and the rear of the sides by 2 different fans. These fans are, however, rather noisy at times - and have taken some getting used to, as my previous Dell laptop's fans were near silent until I started to stress the system. The noise they make is also slightly higher pitched, which makes it more noticeable - and sounds like a jet engine when at full blast (though I admit I've never heard a real one in person, and I'm also somewhat hypersensitive to sound). Curiously, there's a dedicated key on the keyboard that - as far as I can tell - toggles between the normal on-demand fan mode and locking the fans at full blast. Great for quickly cooling down the system if the fans haven't kicked in yet, but not so great for your ears!

I haven't tested the speakers much, but from what I can tell they are appropriately placed in front of the keyboard just before the hinge for the screen - which is a much better placement than on the underside at the front in my last laptop! Definitely a positive improvement there.

I wasn't sure based on the details on the PC Specialist website, but the thickness of the base is 17.5mm at the thickest point, and 6mm for the screen - making ~23.5mm in total (although my measurements may not be completely accurate).

To summarise, the hardware I received was great - overlooking a few pain points such as the BIOS and poor keyboard layout decisions. Some work is still needed on environmental issues and sustainability, but packaging was on the whole ok. Watch out for the delivery service, as my laptop was delivered by DPD who don't have a great track record in my area.

Overall, the hardware build quality is excellent. I'm not sure if I can recommend them yet, but if you want a new PC or laptop they are certainly not a bad place to look.

Found this helpful? Got a suggestion? Want to say hi? Comment below!

Run a Program on your dedicated Nvidia graphics card on Linux

I've got a new laptop, and in it I have an Nvidia graphics card. The initial operating system installation went ok (Ubuntu 20.10 Groovy Gorilla is my Linux distribution of choice here), but after I'd done a bunch of configuration and initial setup tasks it inevitably came around to the point in time that I wanted to run an application on my dedicated Nvidia graphics card.

Doing so is actually really quite easy - it's figuring out the how that was the hard bit! To that end, this is a quick post to document how to do this so I don't forget next time.

Before you continue, I'll assume here that you're using the proprietary Nvidia drivers. If not, consult your distribution's documentation on how to install and enable them.

With Nvidia cards, it depends greatly on what you want to run as to how you go about doing this. For CUDA applications, you need to install the nvidia-cuda-toolkit package:

sudo apt install nvidia-cuda-toolkit

...and then there will be an application-specific menu you'll need to navigate to tell it which device to use.

For traditional graphical programs, the process is different.

In the NVIDIA X Server Settings, go to PRIME Profiles, and then ensure NVIDIA On-Demand mode is selected. If not, select it and then reboot.

The NVIDIA X Server Settings dialog showing the correct configuration as described above.

Then, to launch a program on your Nvidia dedicated graphics card, do this:

__NV_PRIME_RENDER_OFFLOAD=1 __GLX_VENDOR_LIBRARY_NAME=nvidia command_name arguments

In short, the __NV_PRIME_RENDER_OFFLOAD environment variable must be set to 1, and the __GLX_VENDOR_LIBRARY_NAME environment variable must be set to nvidia. If you forget the latter here, or try using DRI_PRIME=1 instead of both of these (as I advised in my previous post about AMD dedicated graphics cards), you'll get an error like this:

libGL error: failed to create dri screen
libGL error: failed to load driver: nouveau

The solution is to set the above environment variables instead of DRI_PRIME=1. Thanks to this Ask Ubuntu post for the answer!

Given that I know I'll be wanting to use this regularly, I've created a script in my bin folder for it. It looks like this:

#!/usr/bin/env bash
# Tell the graphics stack to offload rendering to the Nvidia card
export __NV_PRIME_RENDER_OFFLOAD=1;
export __GLX_VENDOR_LIBRARY_NAME=nvidia;

# Run the requested command, passing any arguments through
"$@";

Save the above to somewhere in your PATH (e.g. your bin folder, if you have one). Don't forget to run chmod +x path/to/file to mark it executable.
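For example, assuming you've saved it as nvidia-run (just a name I've picked for illustration - use whatever you like), you can check that the offloading is working with glxinfo from the mesa-utils package:

# Should mention the Nvidia card in the output, rather than the integrated graphics
nvidia-run glxinfo | grep "OpenGL renderer"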

Found this useful? Does this work on other systems and setups too? Comment below!

Stitching videos from frames with ffmpeg (and audio/video editing tricks)

ffmpeg is awesome. In case you haven't heard, ffmpeg is a command-line tool for editing and converting audio and video files. It's taken me a while to warm up to it (the command-line syntax is pretty complicated), but since I've found myself returning to it again and again I thought I'd blog about it so you can use it too (and also for my own reference :P).

I have 2 common tasks I perform with ffmpeg:

  1. Converting audio/video files to a more efficient format
  2. Stitching PNG images into a video file

I've found that most tasks will fall into 1 of the 2 boxes. Sometimes I want to do something more complicated, but I'll usually just look that up and combine the flags I find there with the ones I usually use for one of the 2 above tasks.

horizontal rule made up of an orange rotating maze cube

Stitching PNG images into a video

I don't often render an animation, but when I do I always render to individual images - that way if it crashes half way through I can more easily restart where I left off (and also divide the workload more easily across multiple machines if I need it done quickly).

Individual image files are all very well, but they have 2 problems:

  1. They take up lots of space
  2. You can't easily watch them like a video

Solving #1 is relatively easy with optipng (sudo apt install optipng), for example:

find . -iname "*.png" -print0 | xargs -0 -P4 -n1 optipng -preserve

I cooked that one up a while ago and I've had it saved in my favourites for ages. It finds all png images in the current directory (recursively) and optimises them with optipng.

To resolve #2, we can use ffmpeg as I mentioned earlier in this post. Here's one I use:

ffmpeg -r 24 -i path/to/frame_%04d.png -c:v libvpx-vp9 -b:v 2000k -crf 31 path/to/output.webm

A few points on this one:

  • -r 24: This sets the frames per second.
  • -i path/to/frame_%04d.png: The path to the png images to stitch. %04d refers to 4 successive digits in a row that count up from 1 - e.g. 0001, 0002, etc. %d would mean 1, 2, ...., and %02d for example would mean 01, 02, 03....
  • -c:v libvpx-vp9: The codec. We're exporting to webm, and vp9 is the latest available codec for webm video files.
  • -b:v 2000k: The bitrate. Higher values mean more bits will be used per second of video to encode detail. Raise to increase quality.
  • -crf 31: The encoding quality. Should be a number between 0 and 63 - higher means lower quality and a lower filesize.

Read more about encoding VP9 webm videos here: Encode/VP9 - ffmpeg
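The page linked above also describes two-pass encoding, which squeezes a little more quality out of the same bitrate at the cost of encoding everything twice. If I've understood it correctly, it looks something like this:

# Pass 1: analyse only - no audio, no output file
ffmpeg -r 24 -i path/to/frame_%04d.png -c:v libvpx-vp9 -b:v 2000k -crf 31 -pass 1 -an -f null /dev/null
# Pass 2: do the actual encode using the stats gathered in pass 1
ffmpeg -r 24 -i path/to/frame_%04d.png -c:v libvpx-vp9 -b:v 2000k -crf 31 -pass 2 path/to/output.webm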

Converting audio / video files

I've talked about downmuxing audio before, so in this post I'm going to be focusing on video files instead. The above command for encoding to webm can be used with minimal adjustment to transcode videos from 1 format to another:

ffmpeg -hide_banner -i "path/to/input.avi" -c:v libvpx-vp9 -c:a libopus -crf 31 -b:v 0 "path/to/output/webm";

In this one, we handle the audio as well as the video all in 1 go, encoding the audio to use the opus codec, and the video as before. This is particularly useful if you've got a bunch of old video files generated by an old camera. I've found that old cameras often like to save videos as raw uncompressed AVI files, and that I can reclaim a significant amount of disk space if I transcode them to something more efficient.

Naturally, I cooked up a one-liner that finds all relevant video files recursively in a given directory and transcodes them to 1 standard efficient format:

find . -iname "*.AVI" -print0 | nice -n20 xargs --verbose -0 -n1 -I{} sh -c 'old="{}"; new="${old%.*}.webm"; ffmpeg -hide_banner -i "${old}" -c:v libvpx-vp9 -c:a libopus -crf 30 -b:v 0 "${new}" && rm "${old}"';

Let's break this down. First, let's look at the commands in play:

  • find: Locates the target files to transcode
  • nice: Runs the following command at a lower scheduling priority, so it doesn't get in the way of anything else I'm doing
  • xargs: Runs the following command over and over again for every line of input
  • sh: A shell, to execute multiple commands in sequence
  • ffmpeg: Does the transcoding
  • rm: Deletes the old file if transcoding completes successfully

The old="{}"; new="${old%.*}.webm"; bit is a bit of gymnastics to pull in the path to the target file (the -I {} tells xargs where to substitute it in) and determine the new filepath by replacing the file extension with .webm.

I've found that this does the job quite nicely, and I can set it running and go off to do something else while it happily sits there and transcodes all my old videos, reclaiming lots of disk space in the process.

Found this useful? Got a cool command for processing audio/video/image files in bulk? Comment below!

Monitoring latency / ping with Collectd and Bash

I use Collectd as the monitoring system for the devices I manage. As part of this, I use the Ping plugin to monitor latency to a number of different hosts, such as GitHub, the raspberry pi apt repo, 1.0.0.1, and this website.

I've noticed for a while that the ping plugin doesn't always work: Even when I check to ensure that a host is pingable before I add it to the monitoring list, Collectd doesn't always manage to ping it - showing it as NaN instead.

Yesterday I finally decided that enough was enough, and that I was going to do something about it. I've blogged about using the exec plugin with a bash script before in a previous post, in which I monitor HTTP response times with curl.

Using that script as a base, I adapted it to instead parse the output of the ping command and translate it into something that Collectd understands. If you haven't already, you'll want to go and read that post before continuing with this one.

The first order of business is to sort out the identifier we're going to use for the data in question. Collectd assigns an identifier to all the data coming in, and it uses this to determine where it is stored on disk - and subsequently which graph it will appear in on screen in the front-end.

Such identifiers follow this pattern:

host/plugin-instance/type-instance

This can be broken down into the following parts:

  • host: The hostname of the machine from which the data was collected
  • plugin: The name of the plugin that collected the data (e.g. memory, disk, thermal, etc.)
  • plugin instance: The instance name of the plugin, if the plugin is enabled multiple times
  • type: The type of reading that was collected
  • type instance: If multiple readings for a given type are collected, this instance differentiates between them.

Of note specifically here is the type, which must be one of a number of pre-defined values that can be found in a text file located at /usr/share/collectd/types.db. In my case, my types.db file contains the following definitions for ping:

  • ping: The average latency
  • ping_droprate: The percentage of packets that were dropped
  • ping_stddev: The standard deviation of the latency (lower = better; a high value here indicates potential network instability and you may encounter issues in voice / video calls for example)

To this end, I've decided on the following identifier strings:

HOSTNAME_HERE/ping-exec/ping-TARGET_NAME
HOSTNAME_HERE/ping-exec/ping_droprate-TARGET_NAME
HOSTNAME_HERE/ping-exec/ping_stddev-TARGET_NAME

I'm using exec for the first instance here to cause it to store my ping results separately from the internal ping plugin. The 2nd instance is the name of the target that is being pinged, resulting in multiple lines on the same graph.

To parse the output of the ping command, I've found it easiest if I push the output of the ping command to disk first, and then read it back afterwards. To do that, a temporary directory is needed:

temp_dir="$(mktemp --tmpdir="/dev/shm" -d "collectd-exec-ping-XXXXXXX")";

on_exit() {
    rm -rf "${temp_dir}";
}
trap on_exit EXIT;

This creates a new temporary directory in /dev/shm (shared memory in RAM), and automatically deletes it when the script terminates by scheduling an exit trap.

Then, we can create a temporary file inside the new temporary directory and call the ping command:

tmpfile="$(mktemp --tmpdir="${temp_dir}" "ping-target-XXXXXXX")";
ping -O -c "${ping_count}" "${target}" >"${tmpfile}";

There are a number of variables in that second command. Let me break them down:

  • ${ping_count}: The number of pings to send (e.g. 3)
  • ${target}: The target to ping (e.g. starbeamrainbowlabs.com)
  • ${tmpfile}: The temporary file to which to write the output

For reference, the output of the ping command looks something like this:

PING starbeamrainbowlabs.com (5.196.73.75) 56(84) bytes of data.
64 bytes from starbeamrainbowlabs.com (5.196.73.75): icmp_seq=1 ttl=55 time=28.6 ms
64 bytes from starbeamrainbowlabs.com (5.196.73.75): icmp_seq=2 ttl=55 time=15.1 ms
64 bytes from starbeamrainbowlabs.com (5.196.73.75): icmp_seq=3 ttl=55 time=18.9 ms

--- starbeamrainbowlabs.com ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2002ms
rtt min/avg/max/mdev = 15.145/20.886/28.574/5.652 ms

We're only interested in the last 2 lines of output. Since this is a long-running script that is going to be executing every 5 minutes, to minimise load (it will be running on a rather overburdened Raspberry Pi 3B+ :P), we will be minimising the number of subprocesses we spawn. To read in the last 2 lines of the file into an array, we can do this:

mapfile -s "$((ping_count+3))" -t file_data <"${tmpfile}"

The -s here tells mapfile (a bash built-in) to skip a given number of lines before reading from the file. Since we know the number of ping requests we sent and that there are 3 additional lines that don't contain the ping request output before the last 2 lines that we're interested in, we can calculate the number of lines we need to skip here.

Next, we can now parse the last 2 lines of the file. The read command (which is also a bash built-in, so it doesn't spawn a subprocess) is great for this purpose. Let's take it 1 line at a time:

read -r _ _ _ _ _ loss _ < <(echo "${file_data[0]}")
loss="${loss/\%}";

Here the read command splits the input on whitespace into multiple different variables. We are only interested in the packet loss here. While the other values might be interesting, Collectd (at least by default) doesn't have a definition in types.db for them and I don't see any huge benefits from adding them anyway, so I use an underscore _ to indicate, by common convention, that I'm not interested in those fields.

We then strip the percent sign % from the end of the packet loss value here too.

Next, let's extract the statistics from the very last line:

read -r _ _ _ _ _ _ min avg max stdev _ < <(echo "${file_data[1]//\// }");

Here we replace all forward slashes in the input with a space to allow read to split it properly. Then, we extract the 4 interesting values (although we can't actually log min and max).

With the values extracted, we can output the statistics we've collected in a format that Collectd understands:

echo "PUTVAL \"${COLLECTD_HOSTNAME}/ping-exec/ping_droprate-${target}\" interval=${COLLECTD_INTERVAL} N:${loss}";
echo "PUTVAL \"${COLLECTD_HOSTNAME}/ping-exec/ping-${target}\" interval=${COLLECTD_INTERVAL} N:${avg}";
echo "PUTVAL \"${COLLECTD_HOSTNAME}/ping-exec/ping_stddev-${target}\" interval=${COLLECTD_INTERVAL} N:${stdev}";

Finally, we mustn't forget to delete the temporary file:

rm "${tmpfile}";

Those are the major changes I made from the earlier HTTP response time monitor. The full script can be found at the bottom of this post. The settings that control the operation of the script are at the top, which allow you to change the list of hosts to ping, and the number of ping requests to make.

Save it to something like /etc/collectd/collectd-exec-ping.sh (don't forget to sudo chmod +x /etc/collectd/collectd-exec-ping.sh it), and then append this to your /etc/collectd/collectd.conf:

<Plugin exec>
        Exec    "nobody:nogroup"        "/etc/collectd/collectd-exec-ping.sh"
</Plugin>

Final script

#!/usr/bin/env bash
set -o pipefail;

# Variables:
#   COLLECTD_INTERVAL   Interval at which to collect data
#   COLLECTD_HOSTNAME   The hostname of the local machine

declare targets=(
    "starbeamrainbowlabs.com"
    "github.com"
    "reddit.com"
    "raspbian.raspberrypi.org"
    "1.0.0.1"
)
ping_count="3";

###############################################################################

# Pure-bash alternative to sleep.
# Source: https://blog.dhampir.no/content/sleeping-without-a-subprocess-in-bash-and-how-to-sleep-forever
snore() {
    local IFS;
    [[ -n "${_snore_fd:-}" ]] || exec {_snore_fd}<> <(:);
    read ${1:+-t "$1"} -u $_snore_fd || :;
}

# Source: https://github.com/dylanaraps/pure-bash-bible#split-a-string-on-a-delimiter
split() {
    # Usage: split "string" "delimiter"
    IFS=$'\n' read -d "" -ra arr <<< "${1//$2/$'\n'}"
    printf '%s\n' "${arr[@]}"
}

# Source: https://github.com/dylanaraps/pure-bash-bible#use-regex-on-a-string
regex() {
    # Usage: regex "string" "regex"
    [[ $1 =~ $2 ]] && printf '%s\n' "${BASH_REMATCH[1]}"
}

# Source: https://github.com/dylanaraps/pure-bash-bible#get-the-number-of-lines-in-a-file
# Altered to operate on the standard input.
count_lines() {
    # Usage: count_lines <"file"
    mapfile -tn 0 lines
    printf '%s\n' "${#lines[@]}"
}

# Source https://github.com/dylanaraps/pure-bash-bible#get-the-last-n-lines-of-a-file
tail() {
    # Usage: tail "n" "file"
    mapfile -tn 0 line < "$2"
    printf '%s\n' "${line[@]: -$1}"
}

###############################################################################

temp_dir="$(mktemp --tmpdir="/dev/shm" -d "collectd-exec-ping-XXXXXXX")";

on_exit() {
    rm -rf "${temp_dir}";
}
trap on_exit EXIT;

# $1 - target name
# $2 - url
check_target() {
    local target="${1}";

    tmpfile="$(mktemp --tmpdir="${temp_dir}" "ping-target-XXXXXXX")";

    ping -O -c "${ping_count}" "${target}" >"${tmpfile}";

    # readarray -t result < <(curl -sS --user-agent "${user_agent}" -o /dev/null --max-time 5 -w "%{http_code}\n%{time_total}\n" "${url}"; echo "${PIPESTATUS[*]}");
    mapfile -s "$((ping_count+3))" -t file_data <"${tmpfile}"

    read -r _ _ _ _ _ loss _ < <(echo "${file_data[0]}")
    loss="${loss/\%}";
    read -r _ _ _ _ _ _ min avg max stdev _ < <(echo "${file_data[1]//\// }");


    echo "PUTVAL \"${COLLECTD_HOSTNAME}/ping-exec/ping_droprate-${target}\" interval=${COLLECTD_INTERVAL} N:${loss}";
    echo "PUTVAL \"${COLLECTD_HOSTNAME}/ping-exec/ping-${target}\" interval=${COLLECTD_INTERVAL} N:${avg}";
    echo "PUTVAL \"${COLLECTD_HOSTNAME}/ping-exec/ping_stddev-${target}\" interval=${COLLECTD_INTERVAL} N:${stdev}";

    rm "${tmpfile}";
}

while :; do
    for target in "${targets[@]}"; do
        # NOTE: We don't use concurrency here because that spawns additional subprocesses, which we want to try & avoid. Even though it looks slower, it's actually more efficient (and we don't potentially skew the results by measuring multiple things at once)
        check_target "${target}"
    done

    snore "${COLLECTD_INTERVAL}";
done

Partitioning and mounting a new disk using LVM

As I've been doing my PhD, I've been acquiring quite a lot of data that needs storing. To this end, I have acquired a new 2 TiB hard drive for my Lab PC. Naturally, this necessitates formatting it so that I can use it. Since I've been using LVM (Logical Volume Management) for my OS disk, I decided to use it for my new disk too.

Unfortunately, I don't currently have GUI access to my Lab PC - instead for the past few months I've been limited to SSH access (which is still much better than nothing at all), so I can't really use any GUI tool to do this for me.

This provided me with a perfect opportunity to get into LVM through the terminal instead. As it turns out, it's not actually that bad. In this post, I'm going to take you through the process of formatting a fresh disk: from creating a partition table to mounting the LVM logical volume.

Let's start by partitioning the disk. For this, we'll use the fdisk CLI tool (install it on Debian-based systems with sudo apt install fdisk if it's not available already). It should be obvious, but for this tutorial root access is required to execute pretty much all the commands we'll be using.

Start fdisk like so:

sudo fdisk /dev/sdX

Replace X with the index of your disk (try lsblk - no sudo required - to list your disks). fdisk works a bit like a shell: you enter letters (or short sequences) followed by hitting enter to give it commands. Enter the following sequence of commands:

  • g: Create new GPT partition table
  • n: Create new partition (allow it to fill the disk)
  • t: Change partition type
    • L: List all known partition types
    • 31: Change to a Linux LVM partition
  • p: Preview final partition setup
  • w: Write changes to disk and exit

Some commands need additional information - fdisk will prompt you here.

With our disk partitioned, we now need to get LVM organised. LVM has 3 different key concepts:

  • Physical Volumes: The physical disk partitions it should use as a storage area (e.g. we created an appropriate partition type above)
  • Volume Groups: Groups of 1 or more Physical Volumes
  • Logical Volumes: The volumes that you use, format, and mount - they are stored in Volume Groups.

To go with these, there are also 3 different classes of commands that LVM exposes: pv* commands for Physical Volumes, vg* for Volume Groups, and lv* for Logical Volumes.

With respect to Physical Volumes, these are physical partitions on disk. For legacy MSDOS partition tables, these must have a partition type of 8e. For newer GPT partition tables (such as the one we created above), these need the partition id 31 (Linux LVM) - as described above.
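If you want, you can initialise the partition as a Physical Volume explicitly and inspect the result - though as far as I'm aware, recent versions of vgcreate will do this for you automatically in the next step anyway:

sudo pvcreate /dev/sdXY
sudo pvdisplay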

Next, we create a new volume group that holds our physical volume:

sudo vgcreate vg-pool-name /dev/sdXY

....replacing /dev/sdXY with the partition you want to add (again, lsblk is helpful here). Don't forget to change the name of the Volume Group to something more descriptive than vg-pool-name - though keeping the vg prefix is recommended.

If you like, you can display the current Volume Groups with the following command:

sudo vgdisplay

Then, create a new logical volume that uses all of the space in the new volume group:

sudo lvcreate -l 100%FREE -n lv-rocket-blueprints vg-pool-name

Again, replace vg-pool-name with the name of your Volume Group, and lv-rocket-blueprints with the desired name of your new logical volume. tldr (for which I review pull requests) has a nice page on lvcreate. If you have a tldr client installed, simply do this to see it:

tldr lvcreate

With our logical volume created, we can now format it. I'm going to format it as ext4, but you can format it as anything you like.

sudo mkfs.ext4 /dev/vg-pool-name/lv-rocket-blueprints

As before, replace vg-pool-name and lv-rocket-blueprints with your Volume Group and Logical Volume names respectively.

Finally, mount the newly formatted partition:

sudo mkdir /mnt/rocket-blueprints
sudo mount /dev/vg-pool-name/lv-rocket-blueprints /mnt/rocket-blueprints

You can mount it anywhere - though I'd recommend mounting it to somewhere in /mnt.
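To check that it's mounted and see how much space you've got to play with, df comes in handy:

df -h /mnt/rocket-blueprints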

Auto-mounting LVM logical volumes

A common thing many (myself included) want to do is automatically mount an LVM partition on boot. This is fairly trivial to do with /etc/fstab.

To start, find the logical volume's id with the following command:

sudo blkid

It should be present as UUID="THE_UUID_HERE". Pick out only the UUID of the logical volume you want to automount here. As a side note, using the UUID is generally a better idea than the name, because the name of the partition (whether it's an LVM partition or a physical /dev/sdXY partition) might change, while the UUID always stays the same.

Before continuing, ensure that the partition is unmounted:

sudo umount /mnt/rocket-blueprints

Now, edit /etc/fstab (e.g. with sudo nano /etc/fstab), and append something like this following to the bottom of the file:

UUID=THE_UUID_YOU_FOUND_HERE    /mnt/rocket-blueprints  ext4    defaults,noauto 0   2

Replace THE_UUID_YOU_FOUND_HERE with the UUID you located with blkid, and /mnt/rocket-blueprints with the path to where you want to mount it to. If an empty directory doesn't already exist at the target mount point, don't forget to create it (e.g. with sudo mkdir /mnt/rocket-blueprints).

Save and close /etc/fstab, and then try mounting the partition using the /etc/fstab definition:

sudo mount /mnt/rocket-blueprints

If it works, edit /etc/fstab again and replace noauto with auto to automatically mount it on boot.

That's everything you need to know to get up and running with LVM. I've included my sources below - in particular check out the howtogeek.com tutorial, as it's not only very detailed, but it also has a cheat sheet containing most of the different LVM commands that are available.

Found this useful? Still having issues? Got a suggestion? Comment below!

Sources and further reading

Cluster, Part 9: The Border Between | Load Balancing with Fabio

Hello again! It's been a while since the last one (mainly since I've been unsure about a few architectural things), but I'm now ready to continue writing about my setup. Before we continue, here's a refresher of everything we've done so far:

In this post, we're going to look at tying off our primary pipeline. So far, we've got job scheduling with Nomad, (superglue!) service discovery with Consul, and shared storage backed with NFS (although I'm going to revisit this eventually), with everything underpinned by a WireGuard mesh VPN with wesher.

In order to allow people to interact with services that are running on the cluster, we need something that will translate from the weird and strange world of anything running somewhere anywhere, and everywhere in-between into something that makes sense from an outside perspective. We want to have a single gateway by which we can control and manage access.

It is for these purposes that we're going to add Fabio to our stack. Its configuration is backed by Consul, and it is relatively simple and easy to understand. Having the config backed by Consul nets us multiple benefits:

  • It can run anywhere on the cluster we like in a pinch
  • We can configure new routes directly from a Nomad job spec file (although we still need to update the Unbound config)
  • The configuration of Fabio gains additional data redundancy, being stored on multiple nodes in the cluster

Like in previous parts of this series, Fabio isn't available to install with apt directly, so I've packaged it into my apt repository. If you haven't yet set up my apt repository, up-to-date instructions on how to do so can be found at the top of its main page - just click the aforementioned link (I'm not going to include instructions here, as they may go out of date at a later time).

Once you've set up my apt repository (or downloaded the Fabio binary manually, though I don't recommend that as it's more difficult to keep up-to-date), we can install Fabio like so:

sudo apt install fabio

This should be done on your primary (controller) node in your cluster. You can also do it on a secondary node too if you like to increase redundancy. To do this, just follow these instructions on both nodes one at a time. I'll be doing this soon myself: I've just been distracted with other things :P

Next, we need a service file. For systemd users (I'm using Raspbian at the moment), I have an apt package:

sudo apt install fabio-systemd

With this installed, we need to create a (very) minimal configuration file. Here it is:

proxy.addr = :80;proto=http
proxy.auth = name=admin;type=basic;file=/etc/fabio/auth.admin.htpasswd

Pretty short, right? This does 2 things:

  1. Tells Fabio to listen on port 80 for HTTP requests (we'll be tackling HTTPS in a separate post - we need Vault for that)
  2. Tells Fabio about the admin auth realm and where it can find the .htpasswd file that corresponds with it

Fabio's password authentication uses HTTP Basic Auth - which is insecure over unencrypted HTTP. We'll be working towards improving the situation here (I'll insert a reminder to change all your passwords when we get there), but there are quite a number of obstacles between here and there that we have to deal with first.

With this in mind, take a copy of the above Fabio config file and write it to /etc/fabio/fabio.properties. Next, we need to generate that htpasswd file we reference in the config file. There are many tools out there that can be used for this purpose - for example the htpasswd tool in the apache2-utils package (the -c flag tells it to create the file, since it doesn't exist yet):

htpasswd -c /etc/fabio/auth.admin.htpasswd username

I like this authentication setup for Fabio, as it allows one to have a single easily configurable set of realms for different purposes if desired.

If you're setting up Fabio on multiple servers, you'll want to put your config file in your shared NFS storage and create a symlink at /etc/fabio/fabio.properties instead. Do that like this:

sudo ln -s /mnt/shared/config/fabio/fabio.properties /etc/fabio/fabio.properties

....update the /mnt/... path accordingly. Don't forget to adjust the /etc/fabio/auth.admin.htpasswd path too in fabio.properties as well.

Now that we've got the configuration file out of the way, we can start Fabio for the first time! Do that like this:

sudo systemctl start fabio.service
sudo systemctl enable fabio.service

Don't forget to punch a hole in the firewall:

sudo ufw allow 80/tcp comment fabio

Fabio is running - but it's not particularly useful, as we haven't configured any routes! Let's add some routes now. The first few routes we're going to add will be manual routes, which will allow us to tell Fabio about a static route we want it to add to its routing table.

Fabio itself actually has a web interface, which will make a good first target for testing out our new cool toy. I mentioned earlier that Fabio gets its configuration from Consul - and it's now that we're going to take advantage of that. Consul isn't just a service discovery tool you see - it's a shared configuration manager too via a fancy hierarchical distributed key-value data store.

In this datastore Fabio looks in particular at the keys in the fabio directory. Create a new key under here with the Consul CLI like so:

consul kv put "fabio/fabio" 'route add fabio fabio.bobsrockets.com/ http://NODE_NAME.node.mooncarrot.space:9998 tags "mission-control" opts "auth=admin"'

Replace NODE_NAME with the name of the node you're running Fabio on, and the bobsrockets.com / mooncarrot.space domains with ones you control. Once done, update your DNS config to point fabio.bobsrockets.com to the node that's running Fabio (you might want to refer back to my earlier post on Unbound - don't forget to restart unbound with sudo systemctl restart unbound).

When you have your DNS server updated, you should be able to point your browser at fabio.bobsrockets.com. No reloading of Fabio is needed - it picks up changes dynamically and automagically! It should prompt you for your password, and then you should see the Fabio web interface. It should look something like this:

The Fabio web interface

As you can see, I've got a number of services running - including a few that I'm going to be blogging about soon-ish, such as Vault (but I haven't yet learnt how to use it :P) and Docker Registry UI (which is useful but has some issues - I'm going to see if HTTPS helps fix some of them up as I'm getting some errors in the dev tools about the SubtleCrypto API, which is only available in secure contexts).

Those services with IP addresses as the destination are defined through Nomad, and auto-update based on the host upon which they are running.

In the web interface you can click on overrides on the top bar to view and edit the configuration for the static routes you've got configured. You can't create new ones though, which is a shame.

Using the same technique as described above, you can create manual routes for Nomad and Consul - as they have web interfaces too! If you haven't already, you'll need to enable them with ui = true in the Nomad and Consul server configuration files respectively. For example, you could use these definitions:

route add nomad nomad.seanssatellites.io/ http://nomad.service.seanssatellites.io:4646 tags "mission-control" opts "auth=admin"
route add consul consul.billsboosters.space/ http://consul.service.billsboosters.space:8500 tags "mission-control" opts "auth=admin"

If you do the Consul one first, you can use the web interface to create the definition for Nomad :D

It's perhaps worth making a quick note of some parts of the above route definitions:

  • opts "auth=admin": This bit activates HTTP Basic Auth with the specified realm
  • consul.billsboosters.space/: This is the domain through which outside users will access the service. The trailing slash is very important.

From here, the last item on the list for this post are automatic routes via Nomad jobs. Since it's the only job we've got running on Nomad so far, let's use that as an example. Adding a Fabio route in this manner requires 3 steps:

  1. Find the service stanza in your Docker Registry Nomad job file, and edit the tags list to include a pair of tags something like urlprefix-registry.tillystelescopes.fr/ and auth=admin (again, the trailing slash is important, and the urlprefix- bit instructs Fabio that this is the domain name to route traffic from to the container) - see the sketch just after this list.
  2. Save the edits to the Nomad job file and re-run it with nomad job run path/to/file.nomad
  3. Update your DNS with a new record pointing registry.tillystelescopes.fr at the IP address(es) of the node(s) running Fabio
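To give a rough idea, the edited service stanza might end up looking something like this (the service name and port label are assumptions here - yours will depend on how your Docker Registry job is actually defined):

service {
  name = "registry"
  port = "http"
  tags = [
    "urlprefix-registry.tillystelescopes.fr/",
    "auth=admin"
  ]
}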

Also pretty simple to get used to, right? From here, step 4 of the official quickstart guide is useful. It explains about the different service tags (like the urlprefix- and auth=admin ones we created above) that are supported. Apparently raw TCP forwarding is also supported - though personally I'm waiting eagerly on UDP forwarding myself for some services I would like to run.

The rest of the Fabio docs are a bit of a mess, but I've found them more understandable than that of Traefik - the solution I investigated before turning to Fabio upon a recommendation from someone over in the r/selfhosted subreddit in frustration (whoever says "Traefik is simple!" is lying - I can't make sense of anything - it might as well be written in hieroglyphs.....).

Looking into the future, our path is diverging into 2 clear routes:

  • Getting services up and running on our new cluster
  • Securing said cluster to avoid attack

While relatively separate goals, they do intertwine at intervals. Moving forwards, we're going to be oscillating between these 2 goals. Likely topics include Vault (though it'll take several blog posts to realise any benefit from it at this point), and getting some Docker container infrastructure setup.

Speaking of Docker container infrastructure, if anyone has any ideas as to how to auto-rebuild docker containers and/or auto-restart Nomad jobs to keep them up-to-date, I'd love to know in a comment below. I'm currently scratching my head over that one....

Found this interesting? Got an idea that would improve on my setup? Confused about something? Comment below!

Making an auto-updated downmuxed copy of my music

I like to buy and own music. That way, if the service goes down, I still get to keep both my music and the rights thereto that I've paid for.

To this end, I maintain an offline collection of music tracks that I've purchased digitally. Recently, it's been growing quite large (~15GiB at the moment) - which is quite a bit of disk space. While this doesn't matter too much on my laptop, on my phone it's quite a different story.

For this reason, I wanted to keep a downmuxed copy of my music collection on my Raspberry Pi 3B+ file server that I can sync to my phone. Said Raspberry Pi already has ffmpeg installed, so I decided to write a script to automate the process. In this blog post, I'm going to walk you through the script itself and what it does - and how you can use it too.

I've decided on a standard downmuxed format of 256kbps MP3. You can choose anything you like - you just need to tweak the appropriate lines in the script.

First, let's outline what we want it to do:

  1. Convert anything that isn't an mp3 to 256kbps mp3 (e.g. ogg, flac)
  2. Downmux mp3 files that are at a bitrate higher than 256kbps
  3. Leave mp3s that are at a bitrate lower than (or equal to) 256kbps alone
  4. Convert and optimise album art to 256x256
  5. Copy any unknown files as-is
  6. If the file exists in the target directory already, don't re-convert it again
  7. Max out the system resources when downmuxing to get it done as fast as possible

With this in mind, let's start outlining a script:

#!/usr/bin/env bash

input="${DIR_INPUT:-/absolute/path/to/Music}"
output="${DIR_OUTPUT:-/absolute/path/to/Music-Portable}"

export input;
export output;

temp_dir="$(mktemp --tmpdir -d "portable-music-copy-XXXXXXX")";

on_exit() {
    rm -rf "${temp_dir}";
}
trap on_exit EXIT;

# Library functions go here

# $1    filename
process_file() {
    filename="${1}";
    extension="${filename##*.}";

    # Process file here
}

export temp_dir;
export -f process_file;

cd "${input}" || { echo "Error: Failed to cd to input directory"; exit 1; };
find  -type f -print0 | nice -n20 xargs -P "$(nproc)" -0 -n1 -I{} bash -c 'process_file "{}"';

# Cleanup here....

Very cool. At the top of the script, we define the input and output directories we're going to work on. We use the ${VARIABLE_NAME:-default_value} syntax to allow for changing the input and output directories on the fly with the DIR_INPUT and DIR_OUTPUT environment variables.

Next, we create a temporary directory, and define an exit trap to ensure it gets deleted when the script exits (regardless of whether the exit is clean or not).

Then, we define the main driver function that will process a single file. This is called by xargs a little further down - which takes the file list in from a find call. The cd is important there, because we want the file paths from find to be relative to the input directory for easier mangling later. The actual process_file call is wrapped in bash -c '', because being a bash function it can't be called by xargs directly - so we have to export -f it and wrap it as shown.

Next, we need to write some functions to handle converting different file types. First, let's write a simple copy function:

# $1    Source
# $2    Target
do_copy() {
    source="${1}";
    target="${2}";

    echo -n "cp ";
    cp "${source}" "${target}";
}

All it does is call cp, but it's nice to abstract like this so that if we wanted to add extra features (e.g. uploading via sftp or something) later, it's not as much of a bother.

We also need to downmux audio files and convert them to mp3. Let's write a function for that too:


# $1    Source
# $2    Target
do_downmux() {
    source="${1}";
    target="${2}";

    set +e;
    ffmpeg -hide_banner -loglevel warning -nostats -i "${source}" -vn -ar 44100 -b:a 256k -f mp3 "${target}";
    exit_code="${?}";
    if [[ "${exit_code}" -ne 0 ]] && [[ -f "${target}" ]]; then
        rm "${target}";
    fi
    return "${exit_code}";
}

It's got the same argument signature as do_copy, but it downmuxes instead of copying directly. The ffmpeg call is the line that does the magic. It looks complicated, but it's actually pretty logical. Let's break down all those arguments:

  • -hide_banner: Hides the really rather wordy banner at the top when ffmpeg starts up
  • -loglevel warning: Hides everything but warning messages to avoid too much unreadable output when converting many tracks at once
  • -nostats: As above
  • -i "${source}": Specifies the input file
  • -vn: Strips any video tracks found
  • -ar 44100: Forces the sampling rate to 44.1KHz, just in case it's sampled higher
  • -b:a 256k: Sets the output bitrate to 256kbps (change this bit if you like)
  • -f mp3: Output as mp3
  • "${target}": Write the output to the target location

That's not so bad, right? After calling it, we also need to capture the exit code. If it's not 0, then ffmpeg encountered some kind of issue. If so, we delete any output files it creates and return the same exit code - which we handle elsewhere.

Finally, we need a function to optimise images. For this I'm using optipng and jpegoptim to handle optimising JPEGs and PNGs respectively, and ImageMagick for the resizing operation.


# $1    Source
compress_image() {
    source="${1}";

    temp_file_png="$(mktemp --tmpdir="${temp_dir}" XXXXXXX.png)";
    temp_file_jpeg="$(mktemp --tmpdir="${temp_dir}" XXXXXXX.jpeg)";

    convert "${source}" -resize 256x256\> "${temp_file_jpeg}" >&2 &
    convert "${source}" -resize 256x256\> "${temp_file_png}" >&2 &
    wait

    jpegoptim --quiet --all-progressive --preserve "${temp_file_jpeg}" >&2 &
    optipng -quiet -fix -preserve "${temp_file_png}" >&2 &
    wait

    read -r size_png _ < <(wc --bytes "${temp_file_png}");
    read -r size_jpeg _ < <(wc --bytes "${temp_file_jpeg}");
    if [[ "${size_png}" -gt "${size_jpeg}" ]]; then
        # JPEG is smaller
        rm -rf "${temp_file_png}";
        echo "${temp_file_jpeg}";
    else
        # PNG is smaller
        rm -rf "${temp_file_jpeg}";
        echo "${temp_file_png}";
    fi
}

Unlike the previous functions, this one only takes a source file in. It converts it using that temporary directory we created earlier, and echoes the filename of the smallest format found.

It's done in 2 stages. First, the source file is resized to 256x256 (maintaining aspect ratio, and avoiding upscaling smaller images) and written as both a JPEG and a PNG.

Then, jpegoptim and optipng are called on the resulting files. Once done, the filesizes are compared and the filepath to the smallest of the 2 is echoed.

With these in place, we can now write the glue that binds them to the xargs call by filling out process_file. Before we do though, we need to tweak the export statements from earlier to export the library functions we've just written - otherwise process_file won't be able to access them, since it's wrapped in bash -c '' and called via xargs. Here's the full list of export directives (these go directly below the end of process_file):

export temp_dir;
export -f process_file;
export -f compress_image;
export -f do_downmux;
export -f do_copy;
# $1    filename
process_file() {
    filename="${1}";
    extension="${filename##*.}";

    orig_destination="${output}/${filename}";
    destination="${orig_destination}";

    echo -n "[file] ${filename}: ";

    do_downmux=false;
    # Downmux, but only if the bitrate is above 256k
    if [[ "${extension}" == "flac" ]] || [[ "${extension}" == "ogg" ]] || [[ "${extension}" == "mp3" ]]; then
        probejson="$(ffprobe -hide_banner -v quiet -show_format -print_format json "${filename}")";
        is_above_256k="$(echo "${probejson}" | jq --raw-output '(.format.bit_rate | tonumber) > 256000')";
        exit_code="${?}";
        if [[ "${exit_code}" -ne 0 ]]; then
            echo -n "ffprobe failed; falling back on ";
            do_downmux=false;
        elif [[ "${is_above_256k}" == "true" ]]; then
            do_downmux=true;
        fi
    fi

    if [[ "${do_downmux}" == "true" ]]; then
        echo -n "downmuxing/";
        destination="${orig_destination%.*}.mp3";
    fi

    # ....
}

We use 2 variables to keep track of the destination location here, because we may or may not successfully manage to convert any given input file to a different format with a different file extension.

We also use ffprobe (part of ffmpeg) and jq (a JSON query and manipulation tool) on audio files to detect their bitrate, so that we can avoid re-encoding files that are already at 256kbps or below. Once we've determined that a file does need downmuxing, we rewrite the destination filename to use the extension .mp3.

Next, we need to deal with the images. We do this in a preprocessing step that comes next:

case "${extension}" in 
    png|jpg|jpeg|JPG|JPEG )
        compressed_image="$(compress_image "${filename}")";
        compressed_extension="${compressed_image##*.}";
        destination="${orig_destination%.*}.${compressed_extension}";
        ;;
esac

If the file is an image, we run it through the image optimiser. Then we look at the file extension of the optimised image, and alter the destination filename accordingly.

if [[ -f "${destination}" ]] || [[ -f "${orig_destination}" ]]; then
    echo "exists in destination; skipping";
    return 0;
fi

destination_dir="$(dirname "${destination}")";
if [[ ! -d "${destination_dir}" ]]; then
    mkdir -p "${destination_dir}";
fi

Next, we look to see if there's a file in the destination already. If so, then we skip out and don't continue processing the file. If not, we make sure that the parent directory exists to avoid issues later.

case "${extension}" in
    flac|mp3|ogg )
        # Use ffmpeg, but only if necessary
        if [[ "${do_downmux}" == "false" ]]; then
            do_copy "${filename}" "${orig_destination}";
        else
            echo -n "ffmpeg ";
            do_downmux "${filename}" "${destination}";
            exit_code="$?";
            if [[ "${exit_code}" -ne 0 ]]; then
                echo "failed, exit code ${exit_code}: falling back on ";
                do_copy "${filename}" "${orig_destination}";
            fi
        fi
        ;;

    png|jpg|jpeg|JPG|JPEG )
        mv "${compressed_image}" "${destination}";
        ;;

    * )
        do_copy "${filename}" "${destination}";
        ;;
esac
echo "done";

Finally, we get to the main case statement that handles the different files. If it's an audio file, we run it through do_downmux (which we implemented earlier) - but only if it would benefit us. If it's an image, we move the converted image from the temporary directory that was optimised earlier, and if we can't tell what it is, then we just copy it over directly.

That's process_file completed. Now all we're missing are a few clean-up tasks that make it more cron friendly:

echo "[ ${SECONDS} ] Setting permissions";
chown -R root:root "${output}";
chmod -R 0644 "${output}";
chmod -R ugo+X "${output}";

echo "[ ${SECONDS} ] Portable music copy update complete";

This goes at the end of the file, and it resets the permissions on the output directory to avoid issues. This ensures that everyone can read it, but only root can write to it - as any modifications should be made to the original version, and not the portable copy.

That completes this script. By understanding how it works, hopefully you'll be able to apply it to your own specific circumstances.

For example, you could call it via cron. Edit your crontab:

sudo crontab -e

...and paste in something like this:

5 4 * * *   /absolute/path/to/script.sh

This won't work if your device isn't turned on at the time, however. In that case, there is an alternative. Simply drop the script (without an extension) into /etc/cron.daily or /etc/cron.weekly and mark it executable, and anacron will run your job every day or week respectively.
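For example, installing it for anacron might look something like this (the name portable-music-copy is purely illustrative):

# Copy the script into place without a file extension, then mark it executable
sudo cp /absolute/path/to/script.sh /etc/cron.daily/portable-music-copy
sudo chmod +x /etc/cron.daily/portable-music-copy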


Automatically organising & optimising photos and videos with Bash

As I promised recently, this post is about a script I implemented a while back that automatically organises and optimises the photos and videos I take. Since I've been using it for a while now and it seems stable, I thought I'd share it here in the hopes that it might be useful to someone else too.

I take quite a few photos and the odd video or two with my phone. These are automatically uploaded to a Raspberry Pi 3B+ that's acting as a file server on my home network with FolderSync (yes, it has ads, but it's the best I could find that does the job). Once uploaded to a folder, I then wanted a script that would automatically sort the uploaded images and videos into folders by year and month according to their date taken.

To do this, I implemented a script that uses exiftool (sudo apt install libimage-exiftool-perl I believe) to pull out the date taken from JPEGs and sort based on that. For other formats that don't support EXIF data, I take the last modified time with the date command and use that instead.
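To illustrate the idea, here's a simplified sketch of that sorting logic - it's not the actual handle_file implementation from the script, and ${filename} is just a stand-in:

# Try the EXIF DateTimeOriginal tag first, formatted as year/month-name
dir_name="$(exiftool -s3 -d '%Y/%m-%B' -DateTimeOriginal "${filename}")";
if [[ -z "${dir_name}" ]]; then
    # No EXIF date available - fall back on the file's last modified time
    dir_name="$(date --reference="${filename}" '+%Y/%m-%B')";
fi
mkdir -p "${dir_name}";
mv "${filename}" "${dir_name}/";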

Before I do this though, I run my images through a few preprocessing tools:

  • PNGs are optimised with optipng (sudo apt install optipng)
  • JPEGs are optimised with jpegoptim (sudo apt install jpegoptim)
  • JPEGs are additionally automatically reoriented with mogrify -auto-orient from ImageMagick, as many cameras will set an EXIF tag for the rotation of an image without bothering to physically rotate the image itself

It's worth noting here that these preprocessing optimisation steps are lossless. In other words, no quality is lost by performing these actions - they simply encode the images more efficiently such that they use less disk space.
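To give a rough idea of what that preprocessing might look like, here's a sketch that assumes the uploaded files sit directly in the current directory - the real script's structure may differ:

# Losslessly optimise PNGs and JPEGs; -preserve/--preserve keep the
# modification times intact, so the fallback sorting still works
find . -maxdepth 1 -type f -iname '*.png' -print0 | xargs -0 --no-run-if-empty optipng -quiet -preserve;
find . -maxdepth 1 -type f \( -iname '*.jpg' -o -iname '*.jpeg' \) -print0 | xargs -0 --no-run-if-empty jpegoptim --quiet --preserve --all-progressive;
# Physically rotate JPEGs that only have an EXIF orientation tag set
find . -maxdepth 1 -type f \( -iname '*.jpg' -o -iname '*.jpeg' \) -print0 | xargs -0 --no-run-if-empty mogrify -auto-orient;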

Once all these steps are complete, images and videos are sorted according to their date taken / last modified time as described above. That should end up looking a bit like this:

images
    + 2019
        + 07-July
            + image1.jpeg
    + 2020
        + 05-May
            + image2.png
            + image3.jpeg
        + 06-June
            + video1.mp4

Now that I've explained how it works, I can show you the script itself:

(Can't see the above? Check out the script directly on GitLab here: organise-photos)

The script operates on the current working directory. All images directly in the working directory will be sorted as described above. Once you've put it in a directory that is in your PATH, simply call it like this:

organise-photos

The script can be divided up into 3 distinct sections:

  1. The setup and initialisation
  2. The function that sorts individual files themselves into the right directory (handle_file - it's about half-way down)
  3. The preprocessing steps and the driver code that calls the above function.

So far, I've found that it's been working really rather well. During development and testing I did encounter a number of issues with the sorting system in handle_file that caused it to sort files into the wrong directory - which took me a while to finally squash.

I'm always tweaking and improving it though. To that end, I have several plans to improve it.

Firstly, I want to optimise videos too. I'd like to store them in a standard format if possible. It's not that simple though, because some videos don't take well to being transcoded to a different format - indeed, they can even end up taking more space than they did previously! In those cases, it's probably worth discarding the transcoded copy if it turns out to be larger than the original file.
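As a rough sketch of that discard-if-larger idea (the filenames, codec, and quality settings here are illustrative assumptions - this isn't something the script currently does):

ffmpeg -hide_banner -loglevel warning -i input.mp4 -c:v libx264 -preset slow -crf 24 -c:a copy transcoded.mp4;

# Keep the transcoded copy only if it's actually smaller than the original
if [[ "$(stat --printf='%s' transcoded.mp4)" -ge "$(stat --printf='%s' input.mp4)" ]]; then
    rm transcoded.mp4;
fi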

I'd also like to reverse-geocode (see also the usage policy) the (latitude, longitude) geotags in my images to the name of the place that I took them, and append this data to the comment EXIF tag. This will make it easier to search for images based on location, rather than having to remember when I took them.

Finally, I'd also like to experiment with some form of AI object recognition with a similar goal as reverse-geocoding. By detecting the objects in my images and appending them to the comment EXIF tag, I can do things like search for "cat", and return all the images of cats I've taken so far.

I haven't started to look into AI much yet, but initial search results indicate that I might have an interesting time locating an AI that can identify a large number of different objects.

Anyway, my organise-photos script is available on GitLab in my personal bin folder that I commit to git if you'd like to take a closer look - suggestions and merge requests are welcome if you've got an idea that would make it even better :D


Website change detection with headless Firefox and ImageMagick

This wasn't the script I had in mind in the previous blog post (so you can look forward to another blog post about it), but have you ever wanted to know when a web page changes? And if it does change, it can be surprisingly hard to tell exactly where on the page the change happened. Recently, I was thinking about the problem, and realised a few things:

  • Firefox can be operated headlessly (with --headless) to take screenshots
  • ImageMagick is advanced enough to diff images (both ideas are sketched below)
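To illustrate those two building blocks, here's a minimal sketch with placeholder filenames and URL - it isn't the script itself:

# Take a screenshot of a page headlessly with Firefox
firefox --headless --screenshot new.png "https://example.com/";

# Diff it against a previous screenshot with ImageMagick; compare writes the
# number of differing pixels (the AE metric) to stderr and saves a visual diff
changed_pixels="$(compare -metric AE old.png new.png diff.png 2>&1)";
echo "Changed pixels: ${changed_pixels}";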

With this in mind, I set about implementing a script. Before we continue, here's an example diff image:

It's rather tall because of the webpage I chose, but the bits that have changed appear in red. The script I've written also generates an animated PNG showing the difference too:

Again, it's very tall because of the page I tested with, but I think it's pretty cool!

If you'd like to check the script out for yourself, you find it in the following git repository: sbrl/url-diff

For the curious, the script in question is written in Bash. It uses apcalc (available in Debian / Ubuntu based Linux distributions with sudo apt install apcalc) to crunch the numbers, and headless Firefox + ImageMagick as described above to take the screenshots and do the image processing. It should in theory work on Windows, but you'll need to jump through a number of hoops:

  • Install Git Bash, and call url-diff.sh from a Git Bash terminal
  • Install ImageMagick and make sure the binaries are in your PATH
  • Install Firefox and make sure firefox is in your PATH
  • Explicitly set the URLDIFF_STORAGE_DIR environment variable when calling the script (do this by prefixing the command at the bottom of this post with URLDIFF_STORAGE_DIR=path/to/directory)
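For instance, a call with that environment variable set might look something like this (the storage directory path is purely illustrative):

URLDIFF_STORAGE_DIR=path/to/directory ./url-diff.sh check URL_HERE path/to/output_diff.png path/to/output.apng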

With my fancy new embed system, I can show you the code behind it:

(Can't see the above? Check it out in the git repository.)

I'm working on line numbers (sadly the author of highlight.js doesn't like them, so an alternative solution is required).

Anyway, the basic layout of the script is as follows:

  1. First, the settings are read in and the default values set
  2. Then, I define some utility functions.
    • The calculate_percentage_colour function is integral to the image change detection algorithm. It counts the percentage of an image that is a given colour.
  3. Next, the help text is displayed if necessary
  4. The case statement that follows allows multiple subcommands to be implemented. Currently I only have a check subcommand, but you never know!
  5. Inside this case statement, the screenshots are taken and compared.
    • A new screenshot is taken with headless Firefox
    • If we don't have a screenshot stored away already, we stash the new screenshot and exit
    • If we do have a pre-existing screenshot, we continue with the comparison, starting by generating a diff image where pixels that have changed are given 1 colour, and pixels that haven't changed another
    • It's at this point that calculate_percentage_colour is called to calculate how much of the image has changed - the diff image is passed in and the changed pixels are counted
    • If more than 2% (by default) has changed, then we continue on to generate the output images
    • The first output image consists of the new screenshot with the diff image overlaid - this is generated with some ImageMagick wizardry: -compose over -composite
    • The second is an animated PNG comprised of the old and new screenshots. This is generated with ffmpeg - which supports animated PNGs
    • Finally, the old screenshot that we have stored away is replaced with the new one

It sounds more complicated than it is - hopefully my above explanation makes sense (post a comment below if you're confused about something!).

You can call the script like so:

git clone https://git.starbeamrainbowlabs.com/sbrl/url-diff.git
cd url-diff;
./url-diff.sh check URL_HERE path/to/output_diff.png path/to/output.apng

....replacing URL_HERE with the URL to check, and the paths with the places you'd like to write the output images to.
