
Securing a Linux Server Part 2: SSH

Wow, it's been a while since I posted something in this series! Last time, I took a look at the Uncomplicated Firewall, and how you can use it to control the traffic coming into (and going out of) your server. This time, I'm going to take a look at steps you can take to secure another vitally important part of most servers: SSH. It's used by servers and their administrators across the world to talk to one another, and if someone who isn't supposed to manages to get in, they could do all kinds of damage!

The first, and easiest, thing we can do to improve security is to prevent the root user from logging in. If you haven't done so already, you should create a new user on your server, set a good password, and give it superuser privileges. Log in with the new user account, and then edit /etc/ssh/sshd_config, finding the line that says something like

PermitRootLogin yes

....and change it to

PermitRootLogin no

Once done, restart the ssh server. Your config might be slightly different (e.g. it might be PermitRootLogin without-password) - but the principle is the same. This adds an extra barrier to getting into your server, as now attackers must not only guess your password, but your username as well (some won't even bother, and keep trying to login to the root account :P).
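
If you're not sure how to restart it, something like the following usually does the job - the service is called ssh on Debian / Ubuntu and sshd on most other distributions, so adjust to taste:

sudo systemctl restart ssh    # on systemd-based systems
sudo service ssh restart      # on older or non-systemd systems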

Next, we can move SSH to a non-standard port. Some might argue that this isn't a good security measure to take and that it doesn't actually make your server more secure, but I find that it's still a good measure to take for 2 reasons: defence in depth, and preventing excessive CPU load from all the dumb bots that try to get in on the default port. With that in mind, let's make another modification to /etc/ssh/sshd_config. Make sure you test at every step you take, as if you lock yourself out, you'll have a hard time getting back in again....

Port 22

Change 22 in the above to any other number between 1 and 65535 (ideally something above 1024, out of the range of the well-known ports). Next, make sure you've allowed the new port through your firewall! If you're using ufw, my previous post (link above) gives a helpful guide on how to do this. Once done, restart your SSH server again - and try logging in before you close your current session. That way if you make a mistake, you can fix it through your existing session.

Once you're confident that you've got it right, you can close port 22 on your firewall.
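
If you're using ufw as described in my previous post, those two changes look something like this - 2222 here is just a stand-in for whatever port you picked:

sudo ufw allow 2222/tcp       # open the new port first
sudo ufw delete allow 22/tcp  # ...and close the old one once you've tested it

(If you originally opened the port with something like ufw allow ssh, delete that rule instead.)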

So we've created a new user account with a secure password (tip: use a password manager if you have trouble remembering it :-)), disabled root login, and moved the ssh port to another port number that's out of the way. Is there anything else we can do? Turns out there is.

Passwords are not the only way we can authenticate against an SSH server. Public-private keypairs can be used too - and they're both more secure and more convenient than passwords if used correctly. You can generate your own public-private keypair like so:

ssh-keygen -t ed25519

It will ask you a few questions, such as a password to encrypt the private key on disk, and where to save it. Once done, we need to tell ssh to use the new public-private keypair. This is fairly easy to do, actually (though it took me a while to figure out how!). Simply edit ~/.ssh/config (or create it if it doesn't exist), and create (or edit) an entry for your ssh server, making it look something like this:

Host            {server_name}
    HostName        {hostname_or_ip}
    Port            {port_number}
    IdentityFile    {path/to/private/keyfile}

It's the IdentityFile line that's important. The HostName and Port lines simply make it such that you can just type ssh {server_name} (or whatever you've named your server in the Host line) and it will figure out the hostname and port number for you.
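
One thing worth mentioning: your new public key needs to end up in ~/.ssh/authorized_keys on the server before key-based logins will work. While password logins are still enabled, ssh-copy-id will handle that for you (the placeholders below follow the same convention as above):

ssh-copy-id -i ~/.ssh/id_ed25519.pub -p {port_number} {username}@{hostname_or_ip}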

With a public-private keypair now in use, there's just one step left: disable password-based logins. I'd recommend trialling it for a while to make sure you haven't messed anything up - because once you disable it, if you lose your private key, you won't be getting back in again any time soon!

Again, open /etc/ssh/sshd_config for editing. Find the line that starts with PasswordAuthentication, and comment it out with a hash symbol (#), if it isn't already. Directly below that line, add PasswordAuthentication no.
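
The relevant bit of /etc/ssh/sshd_config should end up looking something like this:

#PasswordAuthentication yes
PasswordAuthentication no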

Once done, restart ssh for a final time, and check it works. If it does, congratulations! You've successfully secured your SSH server (to the best of my knowledge, of course). Got a tip I haven't covered here? Found a mistake? Let me know in a comment below!

Rendering LaTeX documents to PDF: Attempt #2

It was all going rather well, actually - until I discovered that pandoc doesn't support regular bibliographies / references. Upon discovering this, I ended up with a bit of a problem. Thankfully, the answer lay in pdflatex - but getting to the point where I could use it without having it crash on me (which, by the way, it can't even report properly - it gives an exit code of 0 when crashing! O.o) was not a trivial journey.

This blog post is a follow-up to my first post on rendering LaTeX documents with pandoc, and is my attempt to document what I did to get it to work. To start with, I installed texlive properly. Here's how to do that on apt-based systems:

sudo apt install texlive-latex-extra --no-install-recommends

The --no-install-recommends flag is useful here to avoid ~450MiB of useless documentation (in PDF form, apparently) being dumped to your hard drive. I've also got an arch-based system (it's actually Artix Linux, which I've blogged about) that I've done this on, so here's the install command for those kinds of systems:

sudo pacman -S texlive-latexextra

Once that's installed, we can use it to render our LaTeX document to PDF. Upon discussing my issues with my Lecturer at University, I discovered that you actually have to run 3 commands in succession in order to render a single PDF. Here they are:

bibtex filename
pdflatex --output-directory=. filename.tex
pdflatex --output-directory=. filename.tex

The first one compiles the bibliography using BiBTeX. If it isn't installed already, you might need to search your distribution's repositories and install it. Next, we run the LaTeX file through pdflatex from TeXLive not once but twice - as it apparently needs to resolve the references on the first pass (why it can't do them all in one pass I have no idea :P).

It's also worth noting that the bibtex command doesn't like you to append the filename extension - it does it automatically, apparently.
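
To save typing all of that out each time, here's a minimal wrapper script I'd use (the script itself is just my own convenience, not part of TeXLive). Note the extra pdflatex pass at the start: bibtex reads the .aux file from a previous run, which won't exist yet on a clean build:

#!/usr/bin/env bash
# build.sh - render a LaTeX document with a BiBTeX bibliography
# Usage: ./build.sh report      (the .tex extension is optional)
set -e
name="${1:?Usage: $0 filename}"
name="${name%.tex}"

pdflatex --output-directory=. "${name}.tex"    # initial pass, generates the .aux file
bibtex "${name}"                               # compile the bibliography (no extension here!)
pdflatex --output-directory=. "${name}.tex"    # pull the bibliography in
pdflatex --output-directory=. "${name}.tex"    # resolve the remaining references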

That's about everything I've got on the process so far. If you've got anything else to add, please let me know in the comments below (I'm rather new to this whole LaTeX thing....)


Rendering LaTeX documents to PDF on Linux (and maybe Windows too)

I'm starting to write another report for University, and unlike other reports, this one apparently has to be in a rather specific format. To that end, I've got two choices: use the provided Word / LibreOffice template, or use a LaTeX template instead. After the trouble and frustration I had with LibreOffice for my previous report, I've naturally decided that using the LaTeX template might be a good idea.

After downloading it, I ended up doing some research and troubleshooting to get it to render properly to a PDF. Now that I've figured it out, I thought I'd share it here for anyone else who ends up experiencing difficulties or is unsure on how it's done.

The way I'm going to be using it is with a tool called Pandoc. First, install it like so:

sudo apt install pandoc texlive-fonts-recommended

Adjust as necessary for your distribution - Windows users will need to read the download instructions. The texlive-fonts-recommended package is ~66MiB(!), but it contains a bunch of fonts that are needed when you're rendering LaTeX documents, apparently.

With the dependencies installed, here's the command to convert a LaTeX document to a PDF:

pandoc -s input.tex -o output.pdf

Replace input.tex with the path to your input file, and output.pdf with the desired path to the output file. I haven't figured out how to set the font to sans-serif yet, but I'll probably make another post about it when I do.

Found this helpful? Still having issues? Let me know below! I don't have analytics on here, so that's the only way I'll know if anyone reads this :-)


Jump around a filesystem with a bit of bash

Your shell in the middle of a teleporter.

(Banner remixed from images found on openclipart)

I've seen things like jump, which allow you to bookmark places on your system so that you can return to them faster. The trouble is, I keep forgetting to use it. I open the terminal and realise that I need to be in a specific directory, and forget to bookmark it once I cd to it - or I forget that I bookmarked it and cd my way there anyway :P

To solve the problem, I thought I'd try implementing my own simplified system, under the names teleport, telepeek, and telepick. Obviously, we'll have to put these scripts in something like .bash_aliases as functions - a standalone script runs in a child process, so its cd wouldn't affect the terminal itself. Let's start with teleport:

function teleport() {
    cd $(find . -type d | grep -iP "$1" | head -n1);
}

Not bad for a first attempt! Basically, it does a find to list all the subdirectories in the current directory, filters the results with the specified regex, and changes directory to the first result returned. Here's an example of how it's used:

~ $ teleport 'pep.*mint'
~/Documents/code/some/path/pepperminty-wiki/ $ 

We can certainly improve it though. Let's start by removing that head call:

function teleport() {
    cd $(find . -type d | grep -m1 -iP "$1");
}

What about all those Permission denied messages that pop up when find wanders into places you don't have permission to look? Let's suppress those too:

function teleport() {
    cd $(find . -type d 2>/dev/null | grep -m1 -iP "$1");
}

Much better. With a teleport command in hand, it might be nice to inspect the list of directories the find + grep combo finds. For that, let's invent a telepeek variant:

function telepeek() {
    find . -type d 2>/dev/null | grep -iP "$1" | less
}

Very cool. It doesn't have line numbers though, and they're useful. Let's fix that:

function telepeek() {
    find . -type d 2>/dev/null | grep -iP "$1" | less -N
}

Better, but I'd prefer them to be highlighted so that I can tell them apart from the directory paths. For that, we've got to change our approach to the problem:

function telepeek() {
    find . -type d 2>/dev/null | grep -iP "$1" | cat -n | sed 's/^[ 0-9]*[0-9]/\o033[34m&\o033[0m/' | less -R
}

By using a clever combination of cat -n to add the line numbers and a strange sed recipe (which I found in a comment on this Stack Overflow answer) to highlight the numbers themselves, we can get the result we want.

This telepeek command has given me an idea. Why not ask for an index to jump to after going to the trouble of displaying line numbers and jump to that directory? Let's cook up a telepick command!

function telepick() {
    telepeek $1;
    read -p "jump to index: " line_number;
    cd $(find . -type d 2>/dev/null | grep -iP "$1" | sed "${line_number}q;d");
}

That wasn't too hard. By using a few different commands rather like lego bricks, we can very easily create something that does what we want with minimal effort. The read -p "jump to index: " line_number bit fetches the index that the user wants to jump to, and sed comes to the rescue again to pick out the line number we're interested in with sed "${line_number}q;d".
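
For reference, here are the finished versions of all three, ready to paste into ~/.bash_aliases. The only tweak I've made is quoting the arguments and cd targets so that paths containing spaces don't trip them up:

function teleport() {
    cd "$(find . -type d 2>/dev/null | grep -m1 -iP "$1")";
}

function telepeek() {
    find . -type d 2>/dev/null | grep -iP "$1" | cat -n | sed 's/^[ 0-9]*[0-9]/\o033[34m&\o033[0m/' | less -R
}

function telepick() {
    telepeek "$1";
    read -p "jump to index: " line_number;
    cd "$(find . -type d 2>/dev/null | grep -iP "$1" | sed "${line_number}q;d")";
}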

Getting an updated version of MonoDevelop on Ubuntu

The old and new versions of MonoDevelop overlaid on top of one another.

Thanks to the AUR (The Arch User Repository), an updated version of MonoDevelop is never very far away on Manjaro, Artix, Arch Linux and friends. Unfortunately, such delights can't be found so easily in the Ubuntu and Debian ecosystem. On a quick inspection of the official repositories, the latest version available is 5.10, while the AUR is offering 7.1.2!

In addition, the newer versions offered via this 'flatpak' thing are sandboxed, so you can't combine multiple technologies like Node.JS and the amazing npm with your C♯ applications.

Thankfully, all is not lost. A clever GitHub user by the name of cra0zy has put together a script that will install the latest version of MonoDevelop for us - without all the hassle of sandboxes or manually compiling from source. Here's a link to this installer:


This post is mainly for my own reference, but I'd love to know if it helped you out too. If it did, leave a comment below!

The Great Migration of Manjaro

The Artix Linux logo. Its relevance will become clear by the end! :P

It was just before lunch in the library, and I was checking my university emails on my travelling laptop that runs Manjaro OpenRC. While that was going on, I was applying a few updates that it had notified me about - and I started to install them with yaourt -Syua. First mistake.

During the installation, it decided to upgrade OpenRC to the version in the AUR (Arch User Repository), but I didn't think anything of it particularly - I knew that Manjaro OpenRC was being deprecated. Second mistake.

Once the updates were complete, I shut it down and went on my way - or at least I tried to - it wouldn't shut down, instead just logging out and leaving it at that. I resolved to investigate the problem when I got home. Third mistake.

By the time I came to use it again, I was greeted with an ominous message:

[Firmware Bug]: TSC_DEADLINE disabled due to Errata; please update microcode to version: 0x52 (or later)
Failed to execute /init (error -2)
Kernel panic - not syncing: No working init found. Try passing init= option to kernel. See Linux Documentation/admin-guide/init.rst
CPU: 0 PID: 1 Comm: swapper/0 Not tainted 4.13.2-1-MANJARO #1
Hardware name: Entroware Apollo/Apollo, BIOS 1.05.05 04/27/2017

Hrm. That's odd. Maybe something went wrong in the update? Linux has what are called kernel parameters that tell it how to boot. They specify things like "here's the root partition of the system", and "please let me edit files on the system after booting". To understand how this fits into the next part of the story, it's first necessary to take a quick detour and look at how, precisely, the Linux kernel goes about booting a system. This is best explained with a diagram:

The linux kernel boot process.

(Rendered with Ascidia. Textual diagram source available here)

  1. BIOS / UEFI POST - The starting point of the boot process. The BIOS / UEFI turns on all the devices, runs some basic hardware checks, and (usually) gives the user a choice of what they want to boot from.
  2. rEFInd - grub may be used instead of rEFInd, but the basic principle is the same: it asks the user how they want to boot from the hard drive. Kernel parameters are decided on here.
  3. Initialisation: The Linux kernel is executed by the bootloader, and it proceeds to initialise itself and the connected devices.
  4. Mount initial RAM disk: The Linux kernel encounters a chicken-and-egg problem rather early on: how can it start talking to the connected devices if it doesn't know how to talk to them? The initial RAM disk solves that problem: it contains a bunch of drivers and other such components that the kernel needs to initialise all the connected devices. It's like a cut-down root file system, in a sense.
  5. Load drivers: The Linux kernel loads the drivers from the initial RAM disk (aka initrd) and starts initialising all the connected devices.
  6. Mount root (read-only): The main root file system is mounted next, but only in read-only mode while the boot process finishes.
  7. Execute init: It is at this point that the very first process is executed. It usually resides at /sbin/init, but this can be changed through the init kernel parameter.
  8. Mount root (read-write): The init process (under SysVinit at least) then remounts the root filesystem such that it is writeable.
  9. Mount other partitions: The next job is the mounting of the other partitions in /etc/fstab. This is also done by SysVinit if I recall correctly.
  10. Reach runlevels: The main runlevels managed by the service manager (e.g. OpenRC) are now executed in order by the service manager.

Phew, that took more explaining than I thought! And to think it all happens in the span of about 10 seconds....! With that out of the way, let's continue with the story.

Let's try specifying the init kernel parameter - maybe the update cleared it for some random reason....? I had no idea what I was getting myself into :P
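
As an aside, kernel parameters just get appended to the kernel line in your bootloader - GRUB lets you press e at the boot menu to edit it, and rEFInd has a similar options editor. With an init override, such a line might look something like this (the kernel image name and device here are placeholders):

linux /boot/vmlinuz-x86_64 root=/dev/sdXY rw init=/bin/sh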

Unexpectedly, specifying init=/sbin/init didn't work. Neither did specifying init=/bin/sh. At this point, I suspected that there was something seriously wrong. I (correctly) guessed that it was the update I performed that morning that was to blame. After a bunch of backing and forthing, I managed to get hold of a previous copy of the openrc package that was replaced by the 0.27 version from the AUR. After doing a full backup, I tried installing it and removing the new openrc-sysvinit package that was also installed.

Before we continue further, I should probably explain how I managed to install the previous package version. Didn't I just explain that my system wasn't bootable? Well, yes. But I also had the original manjaro-architect installation media that I used to build the system in the first place. With that in hand, I could use rEFInd to boot from that (my UEFI firmware makes it a bit of a pain otherwise!), and then mount the root partition of the broken system and chroot into it. This process allows me to pretend that the system is actually booted, while piggybacking off the live installation media for the boot process itself. It works a bit like this:

lsblk # Find the root partition
mkdir /mnt/os;
mount /dev/sdZY /mnt/os # Mount the root partition
mount /dev/sdAB /mnt/os/boot/efi # Mount the EFI partition
manjaro-chroot /mnt/os bash # Enter the chroot and execute bash

Back to the story. Sadly, valiant though my effort to replace the openrc and openrc-sysvinit packages was, it did not solve the problem. Eventually, I ended up having to perform a blind migration to Artix Linux, the spiritual successor to both Manjaro OpenRC and Arch OpenRC (apparently the developers of both came together to create Artix Linux).

Eventually, I ended up with a successful migration that I performed inside the chroot, and the system was bootable again! Next time, I'll always run pacman -Syu before yaourt -Syua. I'll also set up a temporary backup solution for my system files (I've already got one in place for my personal files) while I figure out a more permanent one that backs up across the network.


Manjaro OpenRC Cheat Sheet

Amidst preparations for my third year at university, I've put together a sort of reference sheet to help me remember all the common commands needed when using Manjaro with OpenRC. It's not complete, but I'll continue to update it with various useful commands I stumble upon. You can find it below.

If you have any that you find useful, post a comment below! I'd love to see what you come up with - I might even add it to this list (crediting you of course)!

Cast List

  • sudo pacman: Main package manager
  • yaourt: pacman wrapper, also has AUR support. Swap out pacman for yaourt to include the AUR.
  • sudo rankmirrors: Finds and selects the fastest repository mirrors.
  • sudo rc-update: Enable and disable services
  • sudo service: Start, stop, and query the status of services


Package Management

Command                      Description
rankmirrors -i -m rank -d    Interactively re-find the fastest mirrors
pacman -Sy                   Synchronise local repository metadata
pacman -Syy                  Redownload all repository metadata
pacman -Syua                 Sync with mirrors and update all packages
pacman -Fs filename          Search repositories for packages that contain filename
pacman -Ss search_string     Search repositories for a package whose name or description contains search_string
pacman -S package_name       Install package_name and any dependencies required
pacman -Rs package_name      Remove package_name and all dependencies not needed by anything else
pacman -Dk                   Check that all required dependencies are installed
pacman -Q                    List all installed packages and their versions
pacman -Qe                   List all packages that were installed manually
pacman -Qd                   List all packages that were installed automatically
pacman -Sii package_name     See which packages require package_name to be installed


Service Management

Command                                  Description
rc-update                                List all services and their runlevels
rc-update add service_name default       Add service_name to the default runlevel
rc-update delete service_name default    Remove service_name from the default runlevel
service service_name start               Start service_name
service service_name stop                Stop service_name
service service_name status              Query the status of service_name
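
As a quick example of those service commands in action, here's how I'd enable and start the SSH daemon - assuming the init script is called sshd as it is on my system (check /etc/init.d/ if yours differs):

sudo rc-update add sshd default    # start sshd automatically in the default runlevel
sudo service sshd start            # start it right now
service sshd status                # check that it's running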


Run a program on your dedicated AMD graphics card on Linux

I've recently figured out how to run a program on my dedicated AMD R7 M445 graphics card in Ubuntu 17.04, and since it's taken me far too long to get around to figuring it out, I thought I'd note it down here for future reference - if it helps you too, let me know in the comments below!

It's actually really simple. First, check that your dedicated AMD graphics card shows up with lspci:

lspci
If it's anything like my setup, you'll get a pair of rows like this (though they might not be next to each other):

00:02.0 VGA compatible controller: Intel Corporation HD Graphics 620 (rev 02)
01:00.0 Display controller: Advanced Micro Devices, Inc. [AMD/ATI] Topaz XT [Radeon R7 M260/M265 / M340/M360 / M440/M445] (rev c3)

Thankfully, my dedicated AMD card is showing up (better than it did in previous versions of Ubuntu, too, which thought it was an M225!). Next, we need to check that the amdgpu kernel module is loaded with a quick lsmod:

lsmod | grep -i amd

On my laptop, I get this:

amdkfd                139264  1
amd_iommu_v2           20480  1 amdkfd
amdgpu               1564672  1
i2c_algo_bit           16384  2 amdgpu,i915
ttm                    98304  1 amdgpu
drm_kms_helper        151552  2 amdgpu,i915
drm                   352256  9 amdgpu,i915,ttm,drm_kms_helper

Yay! It's loaded. Now to do a test to see if we can run anything on it:

glxinfo | grep "OpenGL renderer"
DRI_PRIME=1 glxinfo | grep "OpenGL renderer"

The above runs glxinfo twice: Once on the integrated graphics card, and once on the dedicated graphics card. The key here is the DRI_PRIME=1 environment variable - this tells the amdgpu driver that this process should run on the dedicated graphics and not the integrated graphics card. On my machine, I get this output:

OpenGL renderer string: Mesa DRI Intel(R) HD Graphics 620 (Kabylake GT2) 
OpenGL renderer string: Gallium 0.4 on AMD ICELAND (DRM 3.9.0 / 4.10.0-33-generic, LLVM 4.0.0)

As you can see, the latter invocation of the command ran on the dedicated AMD graphics card, and the former on the integrated graphics. So simple!

Now that we've verified that it works, we can do it with any program:

DRI_PRIME=1 inkscape
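
If you find yourself doing this a lot, you could wrap it up in a tiny script somewhere in your PATH - the name here is just something I made up:

#!/usr/bin/env bash
# ~/bin/on-dgpu - run the given program on the dedicated graphics card
# Usage: on-dgpu inkscape
export DRI_PRIME=1
exec "$@"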

Did you find this helpful? Did it work (or not)? Let me know in the comments!


Deep dive: Email, Trust, DKIM, SPF, and more

Lots of parcels (Above: Lots of parcels. Hopefully you won't get this many through the door at once..... Source)

Now that I'm on holiday, I've got some time to write a few blog posts! As I've promised a few people a post on the email system, that's what I'll look at in this post. I'm going to take you on a deep dive through the email system and trust. We'll be journeying through the fields of DKIM signatures, and climb the SPF mountain. We'll also investigate why the internet needs to take this journey in the first place, and look at some of the challenges one faces when setting up their own mail server.

Hang on to your hats, ladies and gentlemen! If you get to the end, give yourself a virtual cookie :D

Before we start though, I'd like to mention that I'll be coming at this from the perspective of my own email server that I set up myself. Let me introduce to you the cast: Postfix (the SMTP MTA), Dovecot (the IMAP MDA), rspamd (the spam filter), and OpenDKIM (the thing that deals with DKIM signatures).

With that out of the way, let's begin! We'll start off by mapping out the journey a typical email undertakes.

The path a typical email takes. See the explanation below.

Let's say Bob Kerman wants to send Bill an email. Here's what happens:

  1. Bob writes the email and hits send. His email client connects to his email server, logs in, and asks the server to deliver the message for him.
  2. Bob's server takes the email, reads the recipient's address to figure out where Bill's mail server is located, connects to it, and asks it to deliver Bob's message. Bill's server takes the email and files it in Bill's inbox.
  3. Bill connects to his mail server and retrieves Bob's message.

Of course, this is simplified in several places. Bob's server will obviously need to do a few DNS lookups to find Bill's mail server and fiddle with the headers of Bob's message a bit (such as adding a Received header etc.), and Bill's server won't just accept the message for delivery without checking out the server it came from first. How does it check though? What's stopping a random third party from pretending to be Bob's server and sending an imposter?

Until relatively recently, the answer was, well, nothing really. Anyone could send an email to anyone else without having to prove that they could indeed send email in the name of a domain. Try it out for yourself by telnetting to a mail server on port 25 (unencrypted SMTP) and typing in something like this:

MAIL From: <>

Hello! This is an email to remind you.....
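
For context, a slightly fuller version of that session might look like the below - every hostname and address is a made-up placeholder, and (thankfully) most mail servers today will refuse to play along:

telnet mail.example.com 25
HELO attacker.example.net
MAIL FROM:<ceo@bigcorp.example.com>
RCPT TO:<victim@example.com>
DATA
From: "The CEO" <ceo@bigcorp.example.com>
To: <victim@example.com>
Subject: Urgent reminder

Hello! This is an email to remind you.....
.
QUIT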

Oh, my! Anyone can connect to a mail server and pretend that some completely unrelated domain is sending a message to one of its users! Mail servers that allow this are called open relays, and today they usually find themselves on several blacklists within minutes. Ploys like these are easy to foil, thankfully (by only accepting mail for your own domains), but it still leaves the problem of what to do about random people connecting to your mail server and delivering spam to your inbox that claims to be from someone they aren't supposed to be sending mail for.

In response, some mail servers demanded things like the IP that connects to send an email must reverse to the domain name that they want to send email from. Clever, but when you remember that anyone can change their own PTR records, you realise that it's just a minor annoyance to the determined spammer, and another hurdle to the legitimate person in setting up their own mail server!

Clearly, a better solution is needed. Time to introduce our first destination: SPF. SPF stands for Sender Policy Framework, and defines a mechanism by which a mail server can determine which IP addresses a domain allows mail to be sent from in its name. It's a TXT record that sits at the root of a domain. It looks something like this:

v=spf1 a mx ptr ip4: ip6:2001:41d0:e:74b::1 -all

The above is my SPF TXT record for my domain. It's quite simple, really - let's break it down.

v=spf1
This just defines the version of the SPF standard. There's only one version so far, so we include this to state that this record is an SPF version 1 record.

a mx ptr

This says that the domain that the sender claims to be from must have an a and an mx record that matches the IP address that's sending the email. It also says that the ptr record associated with the sender's IP must resolve to the domain the sender claims to be sending from, as described above (it does help with dealing with infected machines and such).

ip4: ip6:2001:41d0:e:74b::1

This bit says that the listed IPv4 and IPv6 addresses are explicitly allowed to send mail in my domain's name.

After all of the above, this bit isn't strictly necessary, but it says that all of the IP addresses found in the a records of the extra domains listed are allowed to send mail in my domain's name too.

-all
Lastly, this says that if you're not on the list, then your message should be rejected! Other variants on this include ~all (which says "put it in the spam box instead"), and +all (which says "accept it anyway", though I can't see how that's useful :P).
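
If you want to peek at a domain's SPF policy yourself, it's just a TXT lookup away - swap example.com for any domain you're curious about:

dig +short TXT example.com | grep -i spf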

As you can see, SPF allows a mail server to verify if a given client is indeed allowed to send an email in the name of any particular domain name. For a while, this worked a treat - until a new problem arose.

Many of the mail servers on the internet didn't (and some probably still don't!) support encryption when connecting to one another and delivering mail, as certificates were expensive and difficult to get hold of (nowadays we've got LetsEncrypt, who give out certificates for free!). The encryption used when mail servers connect to one another is practically identical to that used in HTTPS - so if done correctly, the identity of the remote server can be verified and the emails exchanged encrypted, if the world's certification authorities aren't corrupted, of course.

Since most emails weren't encrypted when in transit, a new problem arose: man-in-the-middle attacks, whereby an email is altered by one or more servers in the delivery chain. Thinking about it - this could still happen today even with encryption, if any one server along an email's route is compromised. To this end, another mechanism was desperately needed - one that would allow the receiving mail server to verify that an email's content / headers hadn't been surreptitiously altered since it left the origin mail server - potentially preventing awkward misunderstandings.

Enter stage left: DKIM! DKIM stands for DomainKeys Identified Mail - which, in short, means that it provides a method by which a receiving mail server can cryptographically prove that a message hasn't been altered during transit.

It works by having a public-private keypair, in which the private key can create signatures that the public key can only verify. A hash of the email's headers / content is computed and signed (encrypted) with the private key. Then the resulting signature is attached to the email in the DKIM-Signature header.

The receiving mail server does a DNS lookup to find the public key, and decrypts the hash. It then computes its own hash of the email headers / content, and compares it against the decrypted hash. If they match, then the email hasn't been fiddled with along the way!

Of course, not all the headers in the email are hashed - only a specific subset are included in the hash, since some headers (like Received and X-Spam-Result) are added and altered during transit. If you're interested in implementing DKIM yourself - DigitalOcean have a smashing tutorial on the subject, which should adapt easily to whatever system you're running yourself.
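
You can also fetch a domain's DKIM public key with a TXT lookup. The record lives under a selector chosen by the sender - the s= tag of a DKIM-Signature header tells you which one to use (default below is just a common guess):

dig +short TXT default._domainkey.example.com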

With both of those in place, Bill's mail server can now verify that the server connecting to it is actually allowed to send email on behalf of Bob's domain, and that the message content hasn't been tampered with since it left Bob's server. It can also catch an imposter in the act of trying to deliver spam in the name of a domain it has no business sending for!

There is, however, one last piece of the puzzle left to reveal. With all this in place, how do you know if your mail was actually delivered? Is it possible to roll SPF and DKIM out gradually so that you can be sure you've done it correctly? This can be a particular issue for businesses and larger email server setups.

This is where DMARC comes in. It's a standard that lets you specify an email address you'd like to receive DMARC reports at, which contain statistics as to how many messages receiving mail servers got that claimed to be from you, and what they did with them. It also lets you specify what percentage of messages should be subject to DMARC filtering, so you can roll everything out slowly. Finally, it lets you specify what should happen to messages that fail either SPF, DKIM, or both - whether they should be allowed anyway (for testing purposes), quarantined, or rejected.

DMARC policies get specified (yep, you guessed it!) in a DNS record. Like SPF, they go in as a TXT record - but on the _dmarc subdomain of your domain name. Here's an example:

v=DMARC1; p=none; rua=mailto:{your_report_address}

This is just a simple example - you can get much more complex ones than this! Let's go through it step by step.

v=DMARC1
Nothing to see here - just a version number as in SPF.

p=none
This is the policy of what should happen to messages that fail. In this example we've used none, so messages that fail will still pass right on through. You can set it to quarantine or even reject as you gain confidence in your setup.

rua=mailto:{your_report_address}
This specifies where you want DMARC reports to be sent. Each mail server that receives mail from your mail server will bundle up statistics and send them once a day to this address. The format is in XML (which won't be particularly easy to read), but there are free DMARC record parsers out there on the internet that you can use to decode the reports, like dmarcian.
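
As with SPF, you can inspect any domain's DMARC policy with a TXT lookup on the _dmarc subdomain:

dig +short TXT _dmarc.example.com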

That completes the puzzle. If you're still reading, then congratulations! Post in the comments and say hi :D We've climbed the SPF mountain and discovered how email servers validate who is allowed to send mail in the name of another domain. We've visited the DKIM signature fields and seen how the content of email can be checked to see if it's been altered during transit. Lastly, we took a stroll down DMARC lane to see how it's possible to be sure what other servers are doing with your mail, and how a large email server setup can implement DMARC, DKIM, and SPF more easily.

Of course, I'm not perfect - if there's something I've missed or got wrong, please let me know! I'll try to correct it as soon as possible.

Lastly, this is, as always, a starting point - not an ending point. An introduction if you will - it's up to you to research each technology more thoroughly - especially if you're thinking of implementing them yourself. I'll leave my sources at the bottom of this post if you'd like somewhere to start looking :-)

Sources and Further Reading

Unmounting NFS Shares on Shutdown in OpenRC Manjaro

A cool SVG of a server. (Above: A clipart image of a server. Source)

Since I've been using Manjaro with OpenRC when I'm out and about, I've been steadily fixing little issues and niggles I've been encountering one by one (such as finding the option to let you move the windows on the taskbar panel around yourself).

One of the first issues I encountered was that OpenRC would generously take the network down before my NFS (Network File System) shares had been unmounted. This resulted in lengthy delays when shutting down, as each of the components of the NFS mounting system had to be waited upon by OpenRC and finally killed after taking too long to shut down.

Initially I attempted to investigate reordering the shutdown process, but that quickly grew out of hand as I was investigating, and I discovered that it was not a particularly practical or, indeed, stable solution to my particular problem. Next, I found autofs, which looked like it would solve the problem by automatically mounting and unmounting my NFS shares as and when they are needed, but despite assistance from someone far more experienced in the Manjaro world than I (thank you!), I couldn't get it to work reliably. In addition, it started exhibiting some odd behaviour like hiding all my other mounts in my /media folder, so I went on the hunt for a better solution.

Quite by chance (all thanks to Duck Duck Go Instant Answers!) I stumbled upon NetworkManager dispatcher scripts. NetworkManager is the service / application that manages, surprisingly enough, the network connections on several major Linux distributions - including Ubuntu (which I've used before), and, crucially, Manjaro. Although the answer said that the functionality I wanted had been removed, upon looking into the matter it appeared to be an artifact of the way systemd shuts down the system, so I gave it a whirl anyway just to see if it would work.

Thankfully it did end up working! To that end, I thought I'd (re)post the solution I found here for future reference, and in case it helps anyone else :-)

Assuming you already have your shares set up and working in your /etc/fstab, you can create a file in the folder /etc/NetworkManager/dispatcher.d/pre-down.d with the contents something like this:

#!/usr/bin/env bash
logger "Unmounting NFS shares gracefully before the network goes down...";

umount /media/bob/rocket-diagrams-nas;
umount /media/sean/satellite-schematics;

logger "Unmounted NFS shares successfully.";

Once done, you'll need to make it executable with a quick sudo chmod +x, and try rebooting to test it!
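
In full, that looks something like this - the 10-unmount-nfs filename is just what I'd pick, any name will do:

sudo nano /etc/NetworkManager/dispatcher.d/pre-down.d/10-unmount-nfs
sudo chmod +x /etc/NetworkManager/dispatcher.d/pre-down.d/10-unmount-nfs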

In theory, this could be used to do other things that need to be done before the network is taken down, like making a sekret tracking request to your web server for anti-theft purposes, or uploading a backup of your laptop's /etc directory automagically in case it comes to a sticky end.

