
Manjaro OpenRC Cheat Sheet

Amidst preparations for my third year at university, I've put together a sort of reference sheet to help me remember all the common commands needed when using Manjaro with OpenRC. It's not complete, but I'll continue to update it with various useful commands I stumble upon. You can find it below.

If you have any that you find useful, post a comment below! I'd love to see what you come up with - I might even add it to this list (crediting you of course)!

Cast List

  • sudo pacman: Main package manager
  • yaourt: pacman wrapper, also has AUR support. Swap out pacman for yaourt to include the AUR.
  • sudo rankmirrors: Finds and selects the fastest repository mirrors.
  • sudo rc-update: Enable and disable services
  • sudo service: Start, stop, and query the status of services

Commands

Package Management

  • rankmirrors -i -m rank -d: Interactively re-find the fastest mirrors
  • pacman -Sy: Synchronise local repository metadata
  • pacman -Syy: Redownload all repository metadata
  • pacman -Syua: Sync with mirrors and update all packages
  • pacman -Fs filename: Search repositories for packages that contain filename
  • pacman -Ss search_string: Search repositories for package names or descriptions that contain a search string
  • pacman -S package_name: Install package_name and any dependencies required
  • pacman -Rs package_name: Remove package_name and all dependencies not needed by anything else
  • pacman -Dk: Check that all required dependencies are installed
  • pacman -Q: List all installed packages and their versions
  • pacman -Qe: List all packages that were installed manually
  • pacman -Qd: List all packages that were installed automatically
  • pacman -Sii package_name: See which packages require package_name to be installed

Services

  • rc-update: List all services and their runlevels
  • rc-update add service_name default: Add service_name to the default runlevel
  • rc-update delete service_name default: Remove service_name from the default runlevel
  • service service_name start: Start service_name
  • service service_name stop: Stop service_name
  • service service_name status: Query the status of service_name
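
To put those together, here's a quick worked example of installing a service and enabling it under OpenRC. It's just a sketch - the openssh package and sshd service names are examples, so check what your repositories actually call them:

# Install the package (package / service names are examples - adjust to taste)
sudo pacman -S openssh
# Enable the service at the default runlevel so it starts on boot
sudo rc-update add sshd default
# Start it now and check that it's running
sudo service sshd start
sudo service sshd status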

Sources and Further Reading

Run a program on your dedicated AMD graphics card on Linux

I've recently figured out how to run a program on my dedicated AMD R7 M445 graphics card in Ubuntu 17.04, and since it's taken me far too long to get around to figuring it out, I thought I'd note it down here for future reference - if it helps you too, let me know in the comments below!

It's actually really simple. First, check that your dedicated AMD graphics card shows up with lspci:

lspci

If it's anything like my setup, you'll get a pair of rows like this (though they might not be next to each other):

00:02.0 VGA compatible controller: Intel Corporation HD Graphics 620 (rev 02)
01:00.0 Display controller: Advanced Micro Devices, Inc. [AMD/ATI] Topaz XT [Radeon R7 M260/M265 / M340/M360 / M440/M445] (rev c3)

Thankfully, my dedicated AMD card is showing (better than it did in previous versions of Ubuntu, too, which thought it was an M225!). Next, we need to check that the amdgpu kernel module is loaded with a quick lsmod:

lsmod | grep -i amd

On my laptop, I get this:

amdkfd                139264  1
amd_iommu_v2           20480  1 amdkfd
amdgpu               1564672  1
i2c_algo_bit           16384  2 amdgpu,i915
ttm                    98304  1 amdgpu
drm_kms_helper        151552  2 amdgpu,i915
drm                   352256  9 amdgpu,i915,ttm,drm_kms_helper

Yay! It's loaded. Now to do a test to see if we can run anything on it:

glxinfo | grep "OpenGL renderer"
DRI_PRIME=1 glxinfo | grep "OpenGL renderer"

The above runs glxinfo twice: once on the integrated graphics card, and once on the dedicated graphics card. The key here is the DRI_PRIME=1 environment variable - it tells the amdgpu driver that this process should run on the dedicated graphics card rather than the integrated one. On my machine, I get this output:

OpenGL renderer string: Mesa DRI Intel(R) HD Graphics 620 (Kabylake GT2) 
OpenGL renderer string: Gallium 0.4 on AMD ICELAND (DRM 3.9.0 / 4.10.0-33-generic, LLVM 4.0.0)

As you can see, the latter invocation of the command ran on the dedicated AMD graphics card, and the former on the integrated graphics. So simple!

Now that we've verified that it works, we can do it with any program:

DRI_PRIME=1 inkscape
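
If you find yourself doing this a lot, you can wrap it up in a tiny script somewhere in your PATH. Here's a minimal sketch (the name prime-run is just something I've made up - call it whatever you like):

#!/usr/bin/env bash
# prime-run: run the given program on the dedicated graphics card
# Usage: prime-run program [arguments...]
DRI_PRIME=1 exec "$@"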

Did you find this helpful? Did it work (or not)? Let me know in the comments!

Sources

Deep dive: Email, Trust, DKIM, SPF, and more

Lots of parcels (Above: Lots of parcels. Hopefully you won't get this many through the door at once..... Source)

Now that I'm on holiday, I've got some time to write a few blog posts! As I've promised a few people a post on the email system, that's what I'll look at in this post. I'm going to take you on a deep dive through the email system and trust. We'll be journeying through the fields of DKIM signatures, and climb the SPF mountain. We'll also investigate why the internet needs to take this journey in the first place, and look at some of the challenges one faces when setting up their own mail server.

Hang on to your hats, ladies and gentlemen! If you get to the end, give yourself a virtual cookie :D

Before we start though, I'd like to mention that I'll be coming at this from the perspective of my own email server that I set up myself. Let me introduce to you the cast: Postfix (the SMTP MTA), Dovecot (the IMAP MDA), rspamd (the spam filter), and OpenDKIM (the thing that deals with DKIM signatures).

With that out of the way, let's begin! We'll start off by mapping out the journey a typical email undertakes.

The path a typical email takes. See the explanation below.

Let's say Bob Kerman wants to send Bill an email. Here's what happens:

  1. Bob writes the email and hits send. His email client connects to his email server, logs in, and asks the server to deliver a message for him.
  2. The server takes the email and reads the To header (in this case it's bill@billsboosters.com), figures out where Bill's mail server is located, connects to it, and asks it to deliver Bob's message to Bill. mail.billsboosters.com takes the email and files it in Bill's inbox.
  3. Bill connects to his mail server and retrieves Bob's message.

Of course, this is simplified in several places. mail.bobsrockets.com will obviously need to do a few DNS lookups to find billsboosters.com's mail server and fiddle with the headers of Bob's message a bit (such as adding a Received header etc.), and smtp.billsboosters.com won't just accept the message for delivery without checking out the server it came from first. How does it check, though? What's preventing seanssatellites.net from pretending to be bobsrockets.com and sending an imposter message?

Until relatively recently, the answer was, well, nothing really. Anyone could send an email to anyone else without having to prove that they could indeed send email in the name of a domain. Try it out for yourself by telnetting to a mail server on port 25 (unencrypted SMTP) and typing in something like this:

HELO mail.bobsrockets.com
MAIL FROM: <frank@franksfuel.io>
RCPT TO: <bill@billsboosters.com>
DATA
From: sean@seanssatellites.net
To: bill@billsboosters.com

Hello! This is an email to remind you.....
.
QUIT

Oh, my! Frank at franksfuel.io can connect to any mail server and pretend that sean@seanssatellites.net is sending a message to bill@billsboosters.com! Mail servers that allow this are called open relays, and today they usually find themselves on several blacklists within minutes. Ploys like these are easy to foil, thankfully (by only accepting mail for your own domains), but that still leaves the problem of random people connecting to your mail server and delivering spam that claims to be from domains they aren't supposed to be sending mail for.

In response, some mail servers demanded that the IP address connecting to send an email must reverse-resolve to the domain name it wants to send email from. Clever, but when you remember that anyone can change their own PTR records, you realise that it's just a minor annoyance to the determined spammer, and another hurdle for the legitimate person setting up their own mail server!

Clearly, a better solution is needed. Time to introduce our first destination: SPF. SPF stands for Sender Policy Framework, and defines a mechanism by which a mail server can determine which IP addresses a domain allows mail to be sent from in its name. It's a TXT record that sits at the root of a domain. It looks something like this:

v=spf1 a mx ptr ip4:5.196.73.75 ip6:2001:41d0:e:74b::1 a:starbeamrainbowlabs.com a:mail.starbeamrainbowlabs.com -all

The above is my SPF TXT record for starbeamrainbowlabs.com. It's quite simple, really - let's break it down.

v=spf1

This just defines the version of the SPF standard. There's only one version so far, so we include this to state that this record is an SPF version 1 record.

a mx ptr

This says that the domain the sender claims to be from must have an a or an mx record that matches the IP address that's sending the email. It also says that the ptr record associated with the sender's IP must resolve to the domain the sender claims to be sending from, as described above (it does help with dealing with infected machines and such).

ip4:5.196.73.75 ip6:2001:41d0:e:74b::1

This bit says that the IP addresses 5.196.73.75 and 2001:41d0:e:74b::1 are explicitly allowed to send mail in the name of starbeamrainbowlabs.com.

a:starbeamrainbowlabs.com a:mail.starbeamrainbowlabs.com

After all of the above, this bit isn't strictly necessary, but it says that all the IP addresses found in the a records for starbeamrainbowlabs.com and mail.starbeamrainbowlabs.com are allowed to send mail in the name of starbeamrainbowlabs.com.

-all

Lastly, this says that if you're not on the list, then your message should be rejected! Other variants on this include ~all (which says "put it in the spam box instead"), and +all (which says "accept it anyway", though I can't see how that's useful :P).

As you can see, SPF allows a mail server to verify if a given client is indeed allowed to send an email in the name of any particular domain name. For a while, this worked a treat - until a new problem arose.
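
Incidentally, you can look up any domain's SPF record yourself with dig - handy for checking that your own record has propagated correctly. Something like this should do the trick (the SPF record will be amongst the TXT records returned):

dig +short TXT starbeamrainbowlabs.com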

Many of the mail servers on the internet didn't (and probably still don't!) support encryption when connecting to one another and delivering mail, as certificates were expensive and difficult to get hold of (nowadays we've got LetsEncrypt, who give out certificates for free!). The encryption used when mail servers connect to one another is practically identical to that used in HTTPS - so if done correctly, the identity of the remote server can be verified and the emails exchanged encrypted - provided the world's certificate authorities aren't corrupted, of course.

Since most emails weren't encrypted when in transit, a new problem arose: man-in-the-middle attacks, whereby an email is altered by one or more servers in the delivery chain. Thinking about it - this could still happen today even with encryption, if any one server along an email's route is compromised. To this end, another mechanism was desperately needed - one that would allow the receiving mail server to verify that an email's content / headers hadn't been surreptitiously altered since it left the origin mail server - potentially preventing awkward misunderstandings.

Enter stage left: DKIM! DKIM stands for Domain Keys Identified Mail - which, in short, means that it provides a method by which a receiving mail server can cryptographically prove that a message hasn't been altered during transit.

It works by having a public-private keypair, in which the public key can only decrypt things, but the private key is capable of encrypting things. A hash of the email's headers / content is computed and encrypted with the private key. Then the encrypted hash is attached to the email in the DKIM-Signature header.

The receiving mail server does a DNS lookup to find the public key, and decrypts the hash. It then computes its own hash of the email headers / content, and compares it against the decrypted hash. If they match, then the email hasn't been fiddled with along the way!

Of course, not all the headers in the email are hashed - only a specific subset are included in the hash, since some headers (like Received and X-Spam-Result) are added and altered during transit. If you're interested in implementing DKIM yourself - DigitalOcean have a smashing tutorial on the subject, which should adapt easily to whatever system you're running yourself.
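
If you're curious, you can also fetch a domain's DKIM public key with dig, provided you know the selector the sender uses (it's named in the s= tag of the DKIM-Signature header). The selector below is just an example:

dig +short TXT selector._domainkey.starbeamrainbowlabs.com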

With both of those in place, billsboosters.com's mail server can now verify that mail.bobsrockets.com is allowed to send the email on behalf of bobsrockets.com, and that the message content hasn't been tampered with since it left mail.bobsrockets.com. mail.billsboosters.com can also catch franksfuel.io in the act of trying to deliver spam from seanssatellites.net!

There is, however, one last piece of the puzzle left to reveal. With all this in place, how do you know if your mail was actually delivered? Is it possible to roll SPF and DKIM out gradually so that you can be sure you've done it correctly? This can be a particular issue for businesses and larger email server setups.

This is where DMARC comes in. It's a standard that lets you specify an email address you'd like to receive DMARC reports at, which contain statistics as to how many messages receiving mail servers got that claimed to be from you, and what they did with them. It also lets you specify what percentage of messages should be subject to DMARC filtering, so you can roll everything out slowly. Finally, it lets you specify what should happen to messages that fail either SPF, DKIM, or both - whether they should be allowed anyway (for testing purposes), quarantined, or rejected.

DMARC policies get specified (yep, you guessed it!) in a DNS record. Unlike SPF though, they go in _dmarc.megsmicroprocessors.org as a TXT record, substituting your own domain name for megsmicroprocessors.org. Here's an example:

v=DMARC1; p=none; rua=mailto:dmarc@megsmicroprocessors.org

This is just a simple example - you can get much more complex ones than this! Let's go through it step by step.

v=DMARC1;

Nothing to see here - just a version number as in SPF.

p=none;

This is the policy of what should happen to messages that fail. In this example we've used none, so messages that fail will still pass right on through. You can set it to quarantine or even reject as you gain confidence in your setup.

rua=mailto:dmarc@megsmicroprocessors.org

This specifies where you want DMARC reports to be sent. Each mail server that receives mail from your mail server will bundle up statistics and send them once a day to this address. The format is in XML (which won't be particularly easy to read), but there are free DMARC record parsers out there on the internet that you can use to decode the reports, like dmarcian.
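
As with SPF and DKIM, you can check that a DMARC record is in place with a quick dig:

dig +short TXT _dmarc.megsmicroprocessors.org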

That completes the puzzle. If you're still reading, then congratulations! Post in the comments and say hi :D We've climbed the SPF mountain and discovered how email servers validate who is allowed to send mail in the name of another domain. We've visited the DKIM signature fields and seen how the content of email can be checked to see if it's been altered during transit. Lastly, we took a stroll down DMARC lane to see how it's possible to be sure what other servers are doing with your mail, and how a large email server setup can implement DMARC, DKIM, and SPF more easily.

Of course, I'm not perfect - if there's something I've missed or got wrong, please let me know! I'll try to correct it as soon as possible.

Lastly, this is, as always, a starting point - not an ending point. An introduction if you will - it's up to you to research each technology more thoroughly - especially if you're thinking of implementing them yourself. I'll leave my sources at the bottom of this post if you'd like somewhere to start looking :-)

Sources and Further Reading

Unmounting NFS Shares on Shutdown in OpenRC Manjaro

A cool SVG of a server. (Above: A clipart image of a server. Source)

Since I've been using Manjaro with OpenRC when I'm out and about, I've been steadily fixing little issues and niggles I've been encountering one by one (such as finding the option to let you move the windows on the taskbar panel around yourself).

One of the first issues I encountered was that OpenRC would generously take the network down before my NFS (network file system) shares had been unmounted. This results in lengthy delays when shutting down, as each of the components of the NFS mounting system has to be waited upon by OpenRC and finally killed after taking too long to shut down.

Initially I attempted to investigate reordering the shutdown process, but that quickly grew out of hand as I was investigating, and I discovered that it was not a particularly practical or, indeed, stable solution to my particular problem. Next, I found autofs, which looked like it would solve the problem by automatically mounting and unmounting my NFS shares as and when they are needed, but despite assistance from someone far more experienced in the Manjaro world than I am (thank you!), I couldn't get it to work reliably. In addition, it started exhibiting some odd behaviour, like hiding all my other mounts in my /media folder, so I went on the hunt for a better solution.

Quite by chance (all thanks to Duck Duck Go Instant Answers!) I stumbled upon NetworkManager dispatcher scripts. NetworkManager is the service / application that manages, surprisingly, the network connections on several major linux distributions - including Ubuntu (which I've used before), and, crucially, Manjaro. Although the answer said that the functionality I wanted had been removed, upon looking into the matter it appeared to be an artifact of the way systemd shuts down the system, so I gave it a whirl anyway just to see if it would work.

Thankfully it did end up working! To that end, I thought I'd (re)post the solution I found here for future reference, and in case it helps anyone else :-)

Assuming you already have your shares set up and working in your /etc/fstab, you can create a file in the folder /etc/NetworkManager/dispatcher.d/pre-down.d with contents something like this:

#!/bin/sh

logger "Unmounting NFS shares gracefully before the network goes down...";

umount /media/bob/rocket-diagrams-nas;
umount /media/sean/satellite-schematics;

logger "Unmounted NFS shares successfully.";

Once done, you'll need to make it executable with a quick sudo chmod +x, and try rebooting to test it!
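
For example, if you saved the script as 10-unmount-nfs (the filename is just an example - anything sensible will do):

sudo chmod +x /etc/NetworkManager/dispatcher.d/pre-down.d/10-unmount-nfs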

In theory, this could be used to do other things that need to be done before the network is taken down, like making a sekret tracking request to your web server for anti-theft purposes, or uploading a backup of your laptop's /etc directory automagically in case it comes to a sticky end.

Sources and Further Reading

Semi-automated backups with duplicity and an external drive

A bunch of hard drives. (Above: A bunch of hard drives. The original can be found here.)

Since I've recently got myself a Raspberry Pi to act as a server, I naturally needed a way to back it up. Not seeing anything completely to my tastes, I ended up putting something together that did the job for me. For this I used an external hard drive, duplicity, sendxmpp (sudo apt install sendxmpp), and a bit of bash.

Since it's gone rather well for me so far, I thought I'd write a blog post on how I did it. It still needs some tidying up, of course - but it works in its current state, and perhaps it will help someone else put together their own system!

Step 1: Configuring the XMPP server

I use XMPP as my primary instant messaging server, so it's only natural that I'd want to integrate the system in with it to remind me when to plug in the external drive, and so that it can tell me when it's done and what happened. Since I use prosody as my XMPP server, I can execute the following on the server:

sudo prosodyctl adduser rasperrypi@bobsrockets.com

...and then enter a random password for the new account. From there, I set up a new private persistent multi-user chatroom for the messages to filter into, and set my client to always notify when a message is posted.

After that, it was a case of creating a new config file in a format that sendxmpp will understand:

rasperrypi@bobsrockets.com:5222 thesecurepassword
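
Since that config file contains a password in plain text, it's worth making sure that only your user can read it (I believe sendxmpp may complain about overly permissive config files anyway). The path below is just wherever you saved the file:

chmod 600 /path/to/sendxmpp-config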

Step 2: Finding the id of the drive partition

With the XMPP side of things configured, next I needed a way to detect whether the drive was plugged in or not. Thankfully, all partitions have a unique id built in, which you can use to see if the drive is present. It's easy to find, too:

sudo blkid

The above will list all available partitions and their UUID - the unique id I mentioned. With that in hand, we can now check if it's plugged in or not with a cleverly crafted use of the readlink command:

readlink "/dev/disk/by-uuid/${partition_uuid}" >/dev/null 2>&1;
partition_found=$?
if [[ "${partition_found}" -eq "0" ]]; then
    echo "It's plugged in!";
else
    echo "It's not plugged in :-(";
fi

Simple, right? readlink has an exit code of 0 if it managed to read the symbolic link in /dev/disk/by-uuid ok, and 1 if it didn't. The symbolic links in /dev/disk/by-uuid are helpfully created automatically for us :D From here, we can take it a step further to wait until the drive is plugged in:

# Wait until the drive is available
while true
do
    readlink "${partition_uuid}";

    if [[ "$?" -eq 0 ]]; then
        break
    fi

    sleep 1;
done

Step 3: Mounting and unmounting the drive

Raspberry Pis don't mount drives automatically, so we'll have to do that ourselves. Thankfully, it's not so tough:

# Create the folder to mount the drive into
mkdir -p ${backup_drive_mount_point};
# Mount it in read-write mode
mount "/dev/disk/by-uuid/${partition_uuid}" "${backup_drive_mount_point}" -o rw;

# Do backup thingy here

# Sync changes to disk
sync
# Unmount the drive
umount "${backup_drive_mount_point}";

Make sure you've got the ntfs-3g package installed if you want to back up to an NTFS volume (Raspberry Pis don't come with it by default!).

Step 4: Backup all teh things!

There are more steps involved in getting to this point than I thought there were, but if you've made it this far, then congrats! Have a virtual cookie :D 🍪

The next part is what you probably came here for: duplicity itself. I've had an interesting time getting this to work so far, actually. It's probably easier if I show you the duplicity commands I came up with first.

# Create the archive & temporary directories
mkdir -p /mnt/data_drive/.duplicity/{archives,tmp}/{os,data_drive}
# Do a new backup
PASSPHRASE=${encryption_password} duplicity --full-if-older-than 2M --archive-dir /mnt/data_drive/.duplicity/archives/os --tempdir /mnt/data_drive/.duplicity/tmp/os --exclude /proc --exclude /sys --exclude /tmp --exclude /dev --exclude /mnt --exclude /var/cache --exclude /var/tmp --exclude /var/backups / file://${backup_drive_mount_point}/duplicity-backups/os/
PASSPHRASE=${data_drive_encryption_password} duplicity --full-if-older-than 2M --archive-dir /mnt/data_drive/.duplicity/archives/data_drive --tempdir /mnt/data_drive/.duplicity/tmp/data_drive /mnt/data_drive --exclude '**.duplicity/**' file://${backup_drive_mount_point}/duplicity-backups/data_drive/

# Remove old backups
PASSPHRASE=${encryption_password} duplicity remove-older-than 6M --force --archive-dir /mnt/data_drive/.duplicity/archives/os file:///${backup_drive_mount_point}/duplicity-backups/os/
PASSPHRASE=${data_drive_encryption_password} duplicity remove-older-than 6M --force --archive-dir /mnt/data_drive/.duplicity/archives/data_drive file:///${backup_drive_mount_point}/duplicity-backups/data_drive/

Path names have been altered for privacy reasons. The first duplicity command in the above was fairly straightforward - back up everything, except a few folders with cache files / temporary / weird stuff in them (like /proc).

I ended up having to specify the archive and temporary directories here to be on another disk, because the Raspberry Pi I'm running this on has a rather... limited capacity on its internal micro SD card, so the default location for both isn't a good idea.

The second duplicity call is a little more complicated. It backs up the data disk I have attached to my Raspberry Pi to the external drive I've got plugged in that we're backing up to. The awkward bit comes when you realise that the archive and temporary directories are located on this same data disk that we're trying to back up. To this end, I eventually found (through lots of fiddling) that you can exclude a folder in duplicity via the --exclude '**.duplicity/**' syntax. I've no idea why it's different when you're not backing up the root of the filesystem, but it is (--exclude ./.duplicity/ didn't work, and neither did /mnt/data_drive/.duplicity/).

The final two duplicity calls just clean up and remove old backups that are older than 6 months, so that the drive doesn't fill up too much :-)

Step 5: What? Where? Who?

We've almost got every piece of the puzzle, but there's still one left: letting us know what's going on! This is a piece of cake in comparison to the above:

function xmpp_notify {
        echo "$1" | sendxmpp --file "${xmpp_config_file}" --resource "${xmpp_resource}" --tls --chatroom "${xmpp_target_chatroom}"
}

Easy! All we have to do is point sendxmpp at our config file we created waaay in step #1, and tell it where the chatroom is that we'd like it to post messages in. With that, we can put all the pieces of the puzzle together:

#!/usr/bin/env bash

source .backup-settings

function xmpp_notify {
    echo "$1" | sendxmpp --file "${xmpp_config_file}" --resource "${xmpp_resource}" --tls --chatroom "${xmpp_target_chatroom}"
}

xmpp_notify "Waiting for the backup disk to be plugged in.";

# Wait until the drive is available
while true
do
    readlink "${backup_drive_dev}";

    if [[ "$?" -eq 0 ]]; then
        break
    fi

    sleep 1;
done

xmpp_notify "Backup disk detected - mounting";

mkdir -p ${backup_drive_mount_point};

mount "${backup_drive_dev}" "${backup_drive_mount_point}" -o rw

xmpp_notify "Mounting complete - performing backup";

# Create the archive & temporary directories
mkdir -p /mnt/data_drive/.duplicity/{archives,tmp}/{os,data_drive}

echo '--- Root Filesystem ---' >/tmp/backup-status.txt
# Create the archive & temporary directories
mkdir -p /mnt/data_drive/.duplicity/{archives,tmp}/{os,data_drive}
# Do a new backup
PASSPHRASE=${encryption_password} duplicity --full-if-older-than 2M --archive-dir /mnt/data_drive/.duplicity/archives/os --tempdir /mnt/data_drive/.duplicity/tmp/os --exclude /proc --exclude /sys --exclude /tmp --exclude /dev --exclude /mnt --exclude /var/cache --exclude /var/tmp --exclude /var/backups / file://${backup_drive_mount_point}/duplicity-backups/os/ >>/tmp/backup-status.txt 2>&1
echo '--- Data Disk ---' >>/tmp/backup-status.txt
PASSPHRASE=${data_drive_encryption_password} duplicity --full-if-older-than 2M --archive-dir /mnt/data_drive/.duplicity/archives/data_drive --tempdir /mnt/data_drive/.duplicity/tmp/data_drive /mnt/data_drive --exclude '**.duplicity/**' file://${backup_drive_mount_point}/duplicity-backups/data_drive/ >>/tmp/backup-status.txt 2>&1

xmpp_notify "Backup complete!"
cat /tmp/backup-status.txt | sendxmpp --file "${xmpp_config_file}" --resource "${xmpp_resource}" --tls --chatroom "${xmpp_target_chatroom}"
rm /tmp/backup-status.txt

xmpp_notify "Performing cleanup."

PASSPHRASE=${encryption_password} duplicity remove-older-than 6M --force --archive-dir /mnt/data_drive/.duplicity/archives/os file:///${backup_drive_mount_point}/duplicity-backups/os/
PASSPHRASE=${data_drive_encryption_password} duplicity remove-older-than 6M --force --archive-dir /mnt/data_drive/.duplicity/archives/data_drive file:///${backup_drive_mount_point}/duplicity-backups/data_drive/

sync;
umount "${backup_drive_mount_point}";

xmpp_notify "Done! Backup completed. You can now remove the backup disk."

I've tweaked a few of the pieces to get them to work better together, and created a separate .backup-settings file to store all the settings in.
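
In case you're wondering what goes in .backup-settings, it's just a collection of variable definitions that the main script sources at the top. A minimal sketch might look something like this - every value here is a placeholder, of course:

# .backup-settings: configuration sourced by the backup script
# The device node of the backup drive (find the UUID with sudo blkid)
backup_drive_dev="/dev/disk/by-uuid/your-partition-uuid-here";
# Where to mount the backup drive
backup_drive_mount_point="/media/backup-drive";
# Encryption passphrases for the duplicity backups
encryption_password="changeme";
data_drive_encryption_password="changeme-too";
# sendxmpp settings
xmpp_config_file="/home/pi/.sendxmpp-backups";
xmpp_resource="backup-script";
xmpp_target_chatroom="backups@chat.bobsrockets.com";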

That completes my backup script! Found this useful? Got an improvement? Use a different strategy? Post a comment below!

The other side of the fence: A Manjaro review

One of the default Manjaro wallpapers. (Above: One of the default Manjaro wallpapers.)

Sorry for the delay! I've had rather a lot to do recently - including set up the machine I'm using to write this blog post.

For a while now, I've been running Ubuntu on my main laptop. After making the switch from Windows 7, I haven't looked back. Recently though, a friend of mine suggested I check out Manjaro - another distribution of Linux based on Arch Linux. After setting it up on a secondary machine and playing around with it, I rather like it, actually - and I've decided to write a post about my experiences coming from Ubuntu.

Like most things, I've got multiple different reasons for playing around with Manjaro. Not least of which is to experience a different ecosystem and a different way of doing things - namely the Arch Linux ecosystem. To that end, I've selected the OpenRC init system - since I've got experience with Systemd already, I feel it's essential to gain experience with other technologies.

With my preferences selected, I fired up manjaro-architect (available on the Manjaro website, which is linked above) and began the installation. I quickly found that the installation was not a simple process - requiring several reboots to get the options just right. In particular, the partitioning tools available are somewhat limited - such that I had to boot into a live Ubuntu environment to sort them out to get a dual boot setup working correctly.

On the other side, the installer allows the configuration of so many more options, like the mount options of the partitions, the kernel to use and its associated modules, the init system that is used, and the desktop environment you want to use (I've picked XFCE). During the install process I've learnt about a bunch of different things that I had no idea about before.

After installation, I then started on the long task of configuring it to my liking. I'm still working on that, but I'm constantly amazed at the level of flexibility it offers. Nearly everything can be customised - including all the title bar graphics and the ordering and position of everything on the task bar (called a panel in XFCE).

I've found OpenRC an interesting learning experience too. It's very similar to upstart - another init system I used before Ubuntu switched to systemd. As a result, it's so much simpler to get my head around. It feels a lot more.... transparent than systemd, which is a good thing I think. I do miss a few of the features that systemd offers, however. In time, though, I'm sure that I'll find alternative ways of doing things - different projects do have different ways of thinking, after all!

The concept of the AUR (the Arch User Repository) is possibly one of my favourite things out of all the things I've encountered so far. It's a community-driven archive of packages, but instead of containing the package binaries themselves, each package contains instructions to fetch, build, and install said package.

This way requires much less maintenance, I suspect, and makes it much easier to stay up to date with things. The install process for a package from the AUR is a little complex, sure, but it's so much easier and more automated than doing it by hand. It's like taking the benefits of downloading an installer manually from a program's website like you have to on Windows, and combining it with the ease of use and automation that comes with package managers like apt (Debian-based distributions) and pacman / yaourt (Arch Linux-based distributions).

In short, Manjaro is a breath of fresh air, and very different to what I've tried before. While it's certainly not for the linux beginner (try Ubuntu or Linux Mint if you're a beginner!) - especially the installer - I think it fulfills a different purpose for me at least: a platform from which to explore the Arch Linux ecosystem in relative comfort and dive deeper into the way that all the different parts of a linux system interact with each other.

Share files from your host machine with virtual machine guests with 9p and virtual machine manager

The virtual machine manager logo, with a stylised infinity symbol behind it. (Infinity symbol source, Virtual Machine Manager logo traced from the logo on the official site automatically with Inkscape)

Recently I've been looking at Virtual Machine Manager with qemu and KVM to set up a few virtual machines for my next year at university. During this learning and planning process, I've discovered a way (source) to set up an environment such that you can share a folder on your host machine (optionally read-only) with a guest machine running inside a virtual machine, using a technology called 9p - and I wanted to share how I did it here.

To start, make sure your virtual machine is powered off, then go to Add Hardware → Filesystem and fill in the boxes:

The add new filesystem passthrough box in vmm

  • Source path - The path on the host system to the folder you want to share.
  • Target path - The path at which the guest will see it. Note that this isn't a place on the guest file system, from what I can tell - see below.
  • Export filesystem as readonly mount - Check this box to make the share read-only.

Next, click "Finish", and start your virtual machine. Next, open a terminal on the guest machine and type something similar to this:


sudo mkdir /mnt/host_files
sudo mount -t 9p -o trans=virtio,version=9p2000.L /target /mnt/host_files

...this should mount the host share at /tmp/share to the path /mnt/host_files on the guest machine.
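
If you'd like the share to be mounted automatically every time the guest boots, an entry in the guest's /etc/fstab along these lines should do it - this is a sketch based on the mount command above, so adjust the mount tag and path to match your own setup:

# Mount the 9p share from the host at boot
/target  /mnt/host_files  9p  trans=virtio,version=9p2000.L,rw  0  0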

Found this different sort of post useful? Got something to add? Post about it in the comments!

Debug your systemd services with journalctl

Some nice bulbs and bits of wood.

The chances are that if you're using linux, you'll have run into systemd. If you find yourself in the situation where you've got a systemd service that keeps dying and you don't know why (I've been there several times!), and there's nothing helpful in /var/log, then before you give up, you might want to give journalctl a try. It's systemd's way of capturing the output of a service and storing it in its logging system (or something).

When I first found out about it, I read that apparently journalctl -xe servicename would show me the logs for any given service. It turned out that it wasn't the case (it just threw a nasty error), so I went trawling through the man pages and found the correct command-line switch. If you've got a service called rocketbooster.service, and you want to see if systemd has any logs stored for it, then you can execute this command:

journalctl --unit rocketbooster.service

...or for short

journalctl -u rocketbooster.service

It should open the logs (if there are any) in less - with the oldest logs at the top, so you might need to scroll all the way down to the bottom to see anything that's relevant to your problem (shift + G will take you to the bottom of the file).
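
A few other switches I've found handy when poking around (these are standard journalctl options - see man journalctl for the full list):

# Jump straight to the end of the log instead of the beginning
journalctl -u rocketbooster.service -e
# Follow the log live as new entries arrive (rather like tail -f)
journalctl -u rocketbooster.service -f
# Only show entries from the last hour
journalctl -u rocketbooster.service --since "1 hour ago"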

I've found that systemd has a habit of rotating the logs too - and journalctl doesn't appear to know how to access the rotated logs, so it's best if you use this command as soon as possible after a failure (suggestions on how to access these rotated logs are welcome! Post down in the comments :D).

I thought I'd document it here in case it was useful to anyone - and so I don't forget myself! :P

Access your home linux box from anywhere with SSH tunnels

An abstract tunnel that doesn't hold much relevant to the blog post :P

(Header by GDJ from openclipart.org. Source page)

....and other things! Recently, I bought a Raspberry Pi 3. Now that the rest of the components have arrived, I've got a rather nice little home server that's got a 1 terabyte WD PiDrive attached to it to provide lots of lovely shared storage, which is rather nice.

However, within a few weeks I was faced with a problem. How do I access my new box to configure it from my internship when I'm on lunch? Faced with such a challenge, I did what anyone would, and took to the internet to find a solution.

It didn't take long. A while ago I heard about these things called 'SSH tunnels', which, while not designed for a high throughput, are more than adequate for a low-intensity SSH connection that runs a few kilobytes a second in either direction. After reading this excellent answer by erik on the Unix & Linux StackExchange, I had an understanding of how SSH tunnels work, and was ready to put together a solution. You should go and read that answer if you'd like to understand SSH tunnels too - it explains it much better than I ever could :P

With that knowledge in hand, I went about planning the SSH tunnel. I already have a server with a public IP address (it's hosting this website!), so I needed a reverse tunnel to allow me to access a port local to my linux box at home (called elessar - a virtual cookie for anyone who gets the reference!) from starbeamrainbowlabs.com.

Important! Ask yourself whether it's moral and ethical to set up an ssh tunnel before you think about following along with this article! If you find yourself behind a firewall or something similar, then the chances are that it's there for a good reason - and you might get into trouble if you try and circumvent it. I won't be held responsible for any loss or damages of any description caused by the reading of this post.

First job: create a limited account on starbeamrainbowlabs.com for elessar to SSH into. That's easy:

sudo useradd --system ssh-tunnel

Then, with a few quick lines in /etc/ssh/sshd_config:

Match User ssh-tunnel
    ForceCommand echo 'This account can only be used for ssh tunnelling.'

....we can prevent the ssh-tunnel user from being abused to gain shell access to the server (let me know if there are any further measures I can put in place here).

Now that I had a user account to ssh in as, I could set up a public / private keypair to authenticate with starbeamrainbowlabs.com, and cook up an SSH command for elessar that would set up the appropriate tunnel. After fiddling around a bit, I came up with this that did the job:

ssh -TN -R30582:localhost:5724 ssh-tunnel@starbeamrainbowlabs.com

Very cool. So with that command executing on elessar, I could ssh into elessar from starbeamrainbowlabs.com! In short, it sets up a tunnel that will make port 30582 on starbeamrainbowlabs.com tunnel through to port 5724 on elessar - the port on elessar that has SSH running on it, without allocating a pseudo-tty to save resources. explainshell.com can, well, explain it in more detail if you're interested.

Having an SSH command that would set up the tunnel is nice, but it's not very useful, since I have to execute it first before I can actually SSH into elessar from afar.

The solution was actually a little bit complicated. First, I wrote a simple systemd service file (systemd is what I have installed, since it's vanilla raspbian - this should be easily adaptable to other systems and setups) to start the SSH tunnel automagically on boot:

[Unit]
Description=SSH tunnel from starbeamrainbowlabs.com to local ssh server.

[Service]
Type=simple
ExecStart=/usr/bin/ssh -TN -R30582:localhost:5724 ssh-tunnel@starbeamrainbowlabs.com

[Install]
WantedBy=network-online.target

I quickly realised that there were a few flaws with this approach. Firstly, it tried to start the SSH connection before my router had connected to the internet, since my router starts faster than the box that initialises the fibre connection to my ISP. Secondly, it fails to retry when the connection dies.

The first problem can be solved relatively easily, by wrapping the ssh command in a clever bit of shell scripting:

/bin/sh -c 'until ping -c1 starbeamrainbowlabs.com >/dev/null 2>&1 && sleep 5; do :; done && /usr/bin/ssh -TN -R30582:localhost:5724 ssh-tunnel@starbeamrainbowlabs.com'

The above tries to ping starbeamrainbowlabs.com every 5 seconds until it succeeds, and only then does it attempt to open the SSH connection. This solves the first problem. To solve the second, we need to look at autossh. Autossh is a small tool that monitors an ssh connection in a variety of configurable ways and restarts the connection if it ever dies for whatever reason. You can install it with your favourite package manager:

sudo apt install autossh

Substitute apt with whatever package manager you use on your system. With it installed, we can use a command like this:

autossh -o "UserKnownHostsFile /home/ssh-tunnel/.ssh/known_hosts" -o "IdentityFile /home/ssh-tunnel/.ssh/ssh-tunnel_ed25519" -o "PubkeyAuthentication=yes" -o "PasswordAuthentication=no" -o "ServerAliveInterval 900" -TN -R30582:localhost:5724 -p 7261 ssh-tunnel@starbeamrainbowlabs.com

to automatically start our ssh tunnel, and restart it if anything goes wrong. Note all the extra settings I had to specify here. This is because even though I had many of them specified in ~/.ssh/config for the ssh-tunnel user, because of systemd's weird environment when it starts a service, I found I had to specify everything in the command line with absolute paths (ugh).

Basically, the above tells autossh where the known_hosts file is (important for automation!), that it should only attempt public / private keypair authentication and not password authentication, that it should check the server's still there every 15 minutes, and all the other things we figured out above.

Finally, I combined the solutions I came up with for both problems, which left me with this:

[Unit]
Description=SSH tunnel from starbeamrainbowlabs.com to local ssh server.

[Service]
Type=simple
ExecStart=/bin/sh -c 'until ping -c1 starbeamrainbowlabs.com >/dev/null 2>&1 && sleep 5; do :; done && /usr/bin/autossh -o "UserKnownHostsFile /home/pi/.ssh/known_hosts" -o "IdentityFile /home/pi/.ssh/ssh-tunnel_ed25519" -o "PubkeyAuthentication=yes" -o "PasswordAuthentication=no" -o "ServerAliveInterval 900" -TN -R30582:localhost:5724 -p 7261 ssh-tunnel@starbeamrainbowlabs.com'

[Install]
WantedBy=network-online.target

Here's a version that utilises the -f parameter of autossh to put the autossh into the background, which eliminates the sh parent process:

[Unit]
Description=SSH tunnel from starbeamrainbowlabs.com to local ssh server.

[Service]
Type=forking
Environment=AUTOSSH_PIDFILE=/var/run/sbrl-ssh-tunnel/ssh-tunnel.pid
PIDFile=/var/run/sbrl-ssh-tunnel/ssh-tunnel.pid
ExecStartPre=/bin/mkdir -p /var/run/sbrl-ssh-tunnel
ExecStartPre=-/bin/chown ssh-tunnel:ssh-tunnel /var/run/sbrl-ssh-tunnel
ExecStart=/bin/sh -c 'until ping -c1 starbeamrainbowlabs.com >/dev/null 2>&1 && sleep 5; do :; done && /usr/bin/autossh -f -o "UserKnownHostsFile /home/pi/.ssh/known_hosts" -o "IdentityFile /home/pi/.ssh/ssh-tunnel_ed25519" -o "PubkeyAuthentication=yes" -o "PasswordAuthentication=no" -o "ServerAliveInterval 900" -TN -R30582:localhost:5724 -p 7261 ssh-tunnel@starbeamrainbowlabs.com'

[Install]
WantedBy=network-online.target
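
For completeness, once you've saved the unit file you can install and enable it something like this (I've called the file ssh-tunnel.service here, but the name is up to you):

sudo cp ssh-tunnel.service /etc/systemd/system/ssh-tunnel.service
sudo systemctl daemon-reload
sudo systemctl enable ssh-tunnel.service
sudo systemctl start ssh-tunnel.service
# Check that the tunnel came up ok
sudo systemctl status ssh-tunnel.service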

I ended up further modifying the above to set up an additional tunnel to allow elessar to send emails via the postfix email server that's running on starbeamrainbowlabs.com. Let me know if you'd be interested in a tutorial on this!

Sources and Further Reading

Make your linux learning experience painless with tldr-pages!

If you've been learning linux for a little while, you'll probably have encountered man pages. They are the complete documentation of all the tools, commands (and kernel functions) available on the system you're currently on (read them online here!). If you have encountered them, you'll also know that they usually are somewhat... verbose.

Enter stage left: tldr-pages!

tldr-pages is an ongoing effort to create a repository of simplified man pages that document the most common usages of a command. How about this, for the tar command?

# tar

> Archiving utility.
> Often combined with a compression method, such as gzip or bzip.

- Create an archive from files:

`tar cf {{target.tar}} {{file1 file2 file3}}`

- Create a gzipped archive:

`tar czf {{target.tar.gz}} {{file1 file2 file3}}`

- Extract an archive in a target folder:

`tar xf {{source.tar}} -C {{folder}}`

- Extract a gzipped archive in the current directory:

`tar xzf {{source.tar.gz}}`

- Extract a bzipped archive in the current directory:

`tar xjf {{source.tar.bz2}}`

- Create a compressed archive, using archive suffix to determine the compression program:

`tar caf {{target.tar.xz}} {{file1 file2 file3}}`

- List the contents of a tar file:

`tar tvf {{source.tar}}`

...or this for git reset?

# git reset

> Undo commits or unstage changes, by resetting the current git HEAD to the specified state.
> If a path is passed, it works as "unstage"; if a commit hash or branch is passed, it works as "uncommit".

- Unstage everything:

`git reset`

- Unstage specific file(s):

`git reset {{path/to/file(s)}}`

- Unstage portions of a file:

`git reset -p {{path/to/file}}`

- Undo the last commit, keeping its changes (and any further uncommitted changes) in the filesystem:

`git reset HEAD~`

- Undo the last two commits, adding their changes to the index, i.e. staged for commit:

`git reset --soft HEAD~2`

- Discard any uncommitted changes, staged or not (for only unstaged changes, use `git checkout`):

`git reset --hard`

- Reset the repository to a given commit, discarding committed, staged and uncommitted changes since then:

`git reset --hard {{commit}}`

For those learning linux and the terminal, I think it's an invaluable tool. It helps you out by showing you how to perform common tasks. As you get more experienced though, it becomes useful in another way: showing you how to do those things that you don't do often enough to remember off the top of your head.

I'm probably a bit biased, since I've been contributing to the project for a while (and the nice folks over there recently promoted me to the rank of maintainer :D), so you should check it out for yourself! There's even an online client that you can use without installing anything :-) Once you're ready to install a client directly in your terminal, there's an extensive list of clients documented on the repository wiki, with one available for every environment and platform.
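
To give you an idea of how simple it is to get going, here's the Node.js client as an example (other clients from that list have their own install instructions):

# Install the client (requires node.js / npm)
npm install -g tldr
# Look up the page for tar
tldr tar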

If you encounter a command that hasn't been documented yet, then they've also made it easy to contribute a page yourself.

I think the idea is rather cool, actually - as you've probably guessed by now! Let me know what you think of it in the comments.
