
Semi-automated backups with duplicity and an external drive

(Above: A bunch of hard drives. The original can be found here.)

Since I've recently got myself a Raspberry Pi to act as a server, I naturally needed a way to back it up. Not seeing anything completely to my tastes, I ended up putting something together that did the job for me. For this I used an external hard drive, duplicity, sendxmpp (sudo apt install sendxmpp), and a bit of bash.

Since it's gone rather well for me so far, I thought I'd write a blog post on how I did it. It still needs some tidying up, of course - but it works in its current state, and perhaps it will help someone else put together their own system!

Step 1: Configuring the XMPP server

I use XMPP as my primary instant messaging server, so it's only natural that I'd want to integrate the system in with it to remind me when to plug in the external drive, and so that it can tell me when it's done and what happened. Since I use prosody as my XMPP server, I can execute the following on the server:

sudo prosodyctl adduser raspberrypi@bobsrockets.com

...and then enter a random password for the new account. From there, I set up a new private persistent multi-user chatroom for the messages to filter into, and set my client to always notify when a message is posted.

After that, it was a case of creating a new config file in a format that sendxmpp will understand:

raspberrypi@bobsrockets.com:5222 thesecurepassword
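
At this point it's worth a quick test to check everything's wired up - something along these lines, where the config file path and chatroom address are placeholders for your own:

echo "Test message" | sendxmpp --file path/to/config-file --tls --chatroom backups@chat.bobsrockets.com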

Step 2: Finding the id of the drive partition

With the XMPP side of things configured, next I needed a way to detect whether the drive was plugged in or not. Thankfully, all partitions have a unique id built-in that you can check for. It's easy to find, too:

sudo blkid

The above will list all available partitions and their UUIDs - the unique ids I mentioned. With that in hand, we can now check if the drive is plugged in or not with a cleverly crafted use of the readlink command:

readlink /dev/disk/by-uuid/${partition_uuid} 1>/dev/null 2>&1;
partition_found=$?
if [[ "${partition_found}" -eq "0" ]]; then
    echo "It's plugged in!";
else
    echo "It's not plugged in :-(";
fi

Simple, right? readlink has an exit code of 0 if it managed to read the symbolic link in /dev/disk/by-uuid ok, and 1 if it didn't. The symbolic links in /dev/disk/by-uuid are helpfully created automatically for us :D From here, we can take it a step further to wait until the drive is plugged in:

# Wait until the drive is available
while true
do
    readlink "${partition_uuid}";

    if [[ "$?" -eq 0 ]]; then
        break
    fi

    sleep 1;
done

Step 3: Mounting and unmounting the drive

Raspberry Pis don't mount drives automatically, so we'll have to do that ourselves. Thankfully, it's not so tough:

# Create the folder to mount the drive into
mkdir -p "${backup_drive_mount_point}";
# Mount it in read-write mode
mount "/dev/disk/by-uuid/${partition_uuid}" "${backup_drive_mount_point}" -o rw;

# Do backup thingy here

# Sync changes to disk
sync
# Unmount the drive
umount "${backup_drive_mount_point}";

Make sure you've got the ntfs-3g package installed if you want to back up to an NTFS volume (Raspberry Pis don't come with it by default!).
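
On Raspbian, installing it is a one-liner:

sudo apt install ntfs-3g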

Step 4: Backup all teh things!

There are more steps involved in getting to this point than I thought there were, but if you've made it this far, then congrats! Have a virtual cookie :D 🍪

The next part is what you probably came here for: duplicity itself. I've had an interesting time getting this to work so far, actually. It's probably easier if I show you the duplicity commands I came up with first.

# Create the archive & temporary directories
mkdir -p /mnt/data_drive/.duplicity/{archives,tmp}/{os,data_drive}
# Do a new backup
PASSPHRASE=${encryption_password} duplicity --full-if-older-than 2M --archive-dir /mnt/data_drive/.duplicity/archives/os --tempdir /mnt/data_drive/.duplicity/tmp/os --exclude /proc --exclude /sys --exclude /tmp --exclude /dev --exclude /mnt --exclude /var/cache --exclude /var/tmp --exclude /var/backups / file://${backup_drive_mount_point}/duplicity-backups/os/
PASSPHRASE=${data_drive_encryption_password} duplicity --full-if-older-than 2M --archive-dir /mnt/data_drive/.duplicity/archives/data_drive --tempdir /mnt/data_drive/.duplicity/tmp/data_drive /mnt/data_drive --exclude '**.duplicity/**' file://${backup_drive_mount_point}/duplicity-backups/data_drive/

# Remove old backups
PASSPHRASE=${encryption_password} duplicity remove-older-than 6M --force --archive-dir /mnt/data_drive/.duplicity/archives/os file://${backup_drive_mount_point}/duplicity-backups/os/
PASSPHRASE=${data_drive_encryption_password} duplicity remove-older-than 6M --force --archive-dir /mnt/data_drive/.duplicity/archives/data_drive file://${backup_drive_mount_point}/duplicity-backups/data_drive/

Path names have been altered for privacy reasons. The first duplicity command in the above was fairly straightforward - back up everything, except a few folders with cache files / temporary / weird stuff in them (like /proc).

I ended up having to specify the archive and temporary directories here to be on another disk because the Raspberry Pi I'm running this on has a rather... limited capacity on its internal microSD card, so the default location for both isn't a good idea.

The second duplicity call is a little more complicated. It backs up the data disk I have attached to my Raspberry Pi to the external drive I've got plugged in that we're backing up to. The awkward bit comes when you realise that the archive and temporary directories are located on this same data disk that we're trying to back up. To this end, I eventually found (through lots of fiddling) that you can get duplicity to exclude a folder via the --exclude '**.duplicity/**' syntax. I've no idea why it's different when you're not backing up the root of the filesystem, but it is (--exclude ./.duplicity/ didn't work, and neither did /mnt/data_drive/.duplicity/).

The final two duplicity calls just clean up and remove old backups that are older than 6 months, so that the drive doesn't fill up too much :-)
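
While I'm at it, it's worth knowing how to get your files back out again. Restoring is just duplicity in reverse - a minimal sketch, where the target directory /tmp/restore-target is made up for illustration:

PASSPHRASE=${encryption_password} duplicity restore --archive-dir /mnt/data_drive/.duplicity/archives/os file://${backup_drive_mount_point}/duplicity-backups/os/ /tmp/restore-target/

Add --file-to-restore some/relative/path if you only want a single file or folder back rather than the whole lot.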

Step 5: What? Where? Who?

We've almost got every piece of the puzzle, but there's still one left: letting us know what's going on! This is a piece of cake in comparison to the above:

function xmpp_notify {
    echo "$1" | sendxmpp --file "${xmpp_config_file}" --resource "${xmpp_resource}" --tls --chatroom "${xmpp_target_chatroom}"
}

Easy! All we have to do is point sendxmpp at the config file we created waaay back in step #1, and tell it where the chatroom is that we'd like it to post messages in. With that, we can put all the pieces of the puzzle together:

#!/usr/bin/env bash

source .backup-settings

function xmpp_notify {
    echo "$1" | sendxmpp --file "${xmpp_config_file}" --resource "${xmpp_resource}" --tls --chatroom "${xmpp_target_chatroom}"
}

xmpp_notify "Waiting for the backup disk to be plugged in.";

# Wait until the drive is available
while true
do
    readlink "${backup_drive_dev}";

    if [[ "$?" -eq 0 ]]; then
        break
    fi

    sleep 1;
done

xmpp_notify "Backup disk detected - mounting";

mkdir -p "${backup_drive_mount_point}";

mount "${backup_drive_dev}" "${backup_drive_mount_point}" -o rw

xmpp_notify "Mounting complete - performing backup";

# Create the archive & temporary directories
mkdir -p /mnt/data_drive/.duplicity/{archives,tmp}/{os,data_drive}

echo '--- Root Filesystem ---' >/tmp/backup-status.txt
# Do a new backup
PASSPHRASE=${encryption_password} duplicity --full-if-older-than 2M --archive-dir /mnt/data_drive/.duplicity/archives/os --tempdir /mnt/data_drive/.duplicity/tmp/os --exclude /proc --exclude /sys --exclude /tmp --exclude /dev --exclude /mnt --exclude /var/cache --exclude /var/tmp --exclude /var/backups / file://${backup_drive_mount_point}/duplicity-backups/os/ >>/tmp/backup-status.txt 2>&1
echo '--- Data Disk ---' >>/tmp/backup-status.txt
PASSPHRASE=${data_drive_encryption_password} duplicity --full-if-older-than 2M --archive-dir /mnt/data_drive/.duplicity/archives/data_drive --tempdir /mnt/data_drive/.duplicity/tmp/data_drive /mnt/data_drive --exclude '**.duplicity/**' file://${backup_drive_mount_point}/duplicity-backups/data_drive/ >>/tmp/backup-status.txt 2>&1

xmpp_notify "Backup complete!"
cat /tmp/backup-status.txt | sendxmpp --file "${xmpp_config_file}" --resource "${xmpp_resource}" --tls --chatroom "${xmpp_target_chatroom}"
rm /tmp/backup-status.txt

xmpp_notify "Performing cleanup."

PASSPHRASE=${encryption_password} duplicity remove-older-than 6M --force --archive-dir /mnt/data_drive/.duplicity/archives/os file://${backup_drive_mount_point}/duplicity-backups/os/
PASSPHRASE=${data_drive_encryption_password} duplicity remove-older-than 6M --force --archive-dir /mnt/data_drive/.duplicity/archives/data_drive file://${backup_drive_mount_point}/duplicity-backups/data_drive/

sync;
umount "${backup_drive_mount_point}";

xmpp_notify "Done! Backup completed. You can now remove the backup disk."

I've tweaked a few of the pieces to get them to work better together, and created a separate .backup-settings file to store all the settings in.

That completes my backup script! Found this useful? Got an improvement? Use a different strategy? Post a comment below!

The other side of the fence: A Manjaro review

(Above: One of the default Manjaro wallpapers.)

Sorry for the delay! I've had rather a lot to do recently - including set up the machine I'm using to write this blog post.

For a while now, I've been running Ubuntu on my main laptop. After making the switch from Windows 7, I haven't looked back. Recently though, a friend of mine suggested I check out Manjaro - another distribution of Linux based on Arch Linux. After setting it up on a secondary machine and playing around with it, I rather like it, actually - and I've decided to write a post about my experiences coming from Ubuntu.

Like most things, I've got multiple different reasons for playing around with Manjaro - not least of which is to experience a different ecosystem and a different way of doing things: namely, the Arch Linux ecosystem. To that end, I've selected the OpenRC init system - since I've got experience with systemd already, I feel it's essential to gain experience with other technologies.

With my preferences selected, I fired up manjaro-architect (available on the Manjaro website, which is linked above) and began the installation. I quickly found that the installation was not a simple process - it required several reboots to get the options just right. In particular, the partitioning tools available are somewhat limited - so much so that I had to boot into a live Ubuntu environment to sort the partitions out and get a dual boot setup working correctly.

On the other hand, the installer allows the configuration of so many more options, like the mount options of the partitions, the kernel to use and its associated modules, the init system that is used, and the desktop environment you want to use (I've picked XFCE). During the install process I've learnt about a bunch of different things that I had no idea about before.

After installation, I then started on the long task of configuring it to my liking. I'm still working on that, but I'm constantly amazed at the level of flexibility it offers. Nearly everything can be customised - including all the title bar graphics and the ordering and position of everything on the task bar (called a panel in XFCE).

I've found OpenRC an interesting learning experience too. It's very similar to upstart - another init system I used before Ubuntu switched to systemd. As a result, it's so much simpler to get my head around. It feels a lot more.... transparent than systemd, which is a good thing I think. I do miss a few of the features that systemd offers, however. In time, though, I'm sure that I'll find alternative ways of doing things - different projects do have different ways of thinking, after all!

The concept of the AUR (the Arch User Repository) is possibly one of my favourite things out of all the things I've encountered so far. It's a community-driven archive of packages, but instead of containing the package binaries themselves, each package contains instructions to fetch, build, and install said package.

This way requires much less maintenance I suspect, and makes it much easier to stay up to date with things. The install process for a package from the AUR is a little complex, sure, but so much easier and more automated than doing it by hand. It's like taking the benefits of downloading an installer manually from a program's website like you have to on Windows, and combining it with the ease of use and automation that comes with package managers like apt (Debian-based distributions) and pacman / yaourt (Arch Linux-based distributions).
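
To give you an idea of what the manual process looks like, here's its general shape (package-name is a placeholder, of course - helpers like yaourt automate all of this for you):

git clone https://aur.archlinux.org/package-name.git
cd package-name
# Build the package and install it - along with its dependencies - via pacman
makepkg -si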

In short, Manjaro is a breath of fresh air, and very different to what I've tried before. While it's certainly not for the linux beginner (try Ubuntu or Linux Mint if you're a beginner!) - especially the installer - I think it fulfils a different purpose for me at least: a platform from which to explore the Arch Linux ecosystem in relative comfort and dive deeper into the way that all the different parts in a linux system interact with each other.

An (unscientific) Introduction to I2C

I've recently bought an LCD display for a project. Since I don't have many pins to play with, I ended up buying an I2C-driven display to cut the signal pins down to just 2: one for data (SDA), and one for the clock (SCL).

It's taken me some time to get to grips with the idea of I2C, so I thought I'd write up what I've learnt so far here, along with some helpful hints if you run into problems yourself.

In effect, I2C is a wire protocol that allows multiple devices to talk to each other over a single pair of wires. Every I2C device has a 7-bit hardware address burned into it that it uses to address itself - much like the Internet Protocol when it comes to it, actually. Devices can send messages to one another using these addresses - though not all at the same time, obviously!

If you want to talk directly over I2C with a device, then Wire.h is the library you want to use. Normally though, devices will come with their own library that utilises Wire.h and communicates with it for you.
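
To give you a feel for the library, here's a minimal sketch that talks to a device directly with Wire.h - note that the address 0x27 and the byte I'm sending are made up for illustration:

#include <Wire.h>

const byte DEVICE_ADDRESS = 0x27; // Hypothetical - find yours with the scanner below

void setup()
{
    Wire.begin(); // Join the bus as the master

    // Send a single byte to the device
    Wire.beginTransmission(DEVICE_ADDRESS);
    Wire.write(0x01);
    Wire.endTransmission();

    // Ask the device to send a single byte back
    Wire.requestFrom(DEVICE_ADDRESS, (byte)1);
    if(Wire.available()) {
        byte response = Wire.read();
        // ...do something with response here
    }
}

void loop() { }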

As a good first test to see if I2C is working, I found an I2C scanner that scans for connected devices. Since the address space is so limited, it doesn't take long at all:

/* --------------------------------------
 * i2c_scanner
 * 
 * Version 1
 *  This program (or code that looks like it)
 *  can be found in many places.
 *  For example on the Arduino.cc forum.
 *  The original author is not known.
 * Version 2, June 2012, Using Arduino 1.0.1
 *  Adapted to be as simple as possible by Arduino.cc user Krodal
 * Version 3, Feb 26  2013
 *  V3 by louarnold
 * Version 4, March 3, 2013, Using Arduino 1.0.3
 *  by Arduino.cc user Krodal.
 *  Changes by louarnold removed.
 *  Scanning addresses changed from 0...127 to 1...119,
 *  according to the i2c scanner by Nick Gammon
 * 
 * Version 5, March 28, 2013
 *  As version 4, but address scans now to 127.
 *  A sensor seems to use address 120.
 * Version 6, November 27, 2015.
 *  Added waiting for the Leonardo serial communication.
 * This sketch tests the standard 7-bit addresses
 * Devices with higher bit address might not be seen properly.
 */

#include <Wire.h>

void setup()
{
    Wire.begin();

    Serial.begin(9600);
    while (!Serial);             // Leonardo: wait for serial monitor
    Serial.println("\nI2C Scanner");
}

void loop()
{
    byte error, address;
    int nDevices;

    Serial.println("Scanning...");

    nDevices = 0;
    for(address = 1; address < 127; address++ )
    {
        // The i2c_scanner uses the return value of
        // Wire.endTransmission to see if
        // a device acknowledged the address.
        Wire.beginTransmission(address);
        error = Wire.endTransmission();

        if (error == 0)
        {
            Serial.print("I2C device found at address 0x");
            if (address<16)
                Serial.print("0");
            Serial.print(address,HEX);
            Serial.println("  !");

            nDevices++;
        }
        else if (error==4)
        {
            Serial.print("Unknown error at address 0x");
            if (address<16)
                Serial.print("0");
            Serial.println(address,HEX);
        }
    }
    if (nDevices == 0)
        Serial.println("No I2C devices found\n");
    else
        Serial.println("done\n");

    delay(5000);           // wait 5 seconds for next scan
}

As the initial comment mentions, I can't claim ownership of this code! I got it from here.

With the code in mind, it's time to look at the circuit design.

(Above: A simple I2C circuit connecting an Arduino Uno v3 and an LCD display. Credits go to openclipart.org for the images.)

The above connects an Arduino Uno version 3 with a simple LCD display via a breadboard to allow for expansion to connect future devices. The power (the red and blue cables) link the 5V and GND pins from the Arduino to the appropriate pins on the back of the LCD (the image of an LCD I found didn't have the pins showing :P), and the I2C pins (green and yellow) connect the SDA and SCL pins on the Arduino to the LCD display.

With the circuit done, that completes the system! All that remains now is to build something cool with the components we've put together :D

The final product of the above IRL!

GalleryShare - Share a folder on your computer with a friend

The front page of GalleryShare

Just yesterday, I was browsing my repositories on both my personal git server (git.starbeamrainbowlabs.com) and GitHub, and I stumbled across a program I wrote a while ago and then completely forgot about. It lets you share a directory of files and pictures via HTTP. The picture above is from the wallpapers folder on my laptop here!

On further inspection, I discovered that it didn't require too much work to tidy it up for a release, so I spent an hour or two tidying up a few things, and here is version 0.1! My, it's been far too long since I've blogged about a release of something on here....

If you want to share things yourself, you can download the latest version over here.

In the future, I might add an optional graphical interface to make it even easier for people to use :D

It's actually quite simple. It's powered by the System.Net.HttpListener class (so Windows users will either need to install mono or give it administrative privileges, which is a real shame) since I originally wrote it before I put the GlidingSquirrel together, though it does have its own routing system of my own devising.

The pages it serves themselves are actually plain XML files, which are rendered with XSLT by the user's browser. This keeps the content that GalleryShare has to dynamically generate simple, and has the added benefit that it can be generated with C♯'s System.Xml.XmlWriter class. It's practically a browser-side templating system, which also has the added benefit of providing an XML-based API for others to consume.
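
To illustrate the idea, here's roughly what generating such a page boils down to - note that the element names are invented for the purposes of this example:

using System.Xml;

class GalleryPageSketch
{
    static void Main()
    {
        XmlWriterSettings settings = new XmlWriterSettings() { Indent = true };
        using(XmlWriter xml = XmlWriter.Create("gallery.xml", settings))
        {
            // Tell the browser to render this document with our XSLT stylesheet
            xml.WriteProcessingInstruction("xml-stylesheet", "type=\"text/xsl\" href=\"gallery.xslt\"");
            xml.WriteStartElement("gallery");
            xml.WriteElementString("image", "wallpaper-1.jpg");
            xml.WriteEndElement();
        }
    }
}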

Thumbnails are generated with C♯'s inbuilt System.Drawing image handling functions - I did initially want to use Magick.NET (C♯ bindings for the awesome ImageMagick library), as the System.Drawing classes appear to be a bit funny about the images they'll accept, but Linux support doesn't seem to have landed just yet.
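
For the curious, the thumbnailing itself boils down to something along these lines (the dimensions here are picked arbitrarily):

using System.Drawing;

class ThumbnailSketch
{
    static void Main()
    {
        using(Image original = Image.FromFile("wallpaper-1.jpg"))
        using(Bitmap thumbnail = new Bitmap(original, new Size(128, 128)))
        {
            // The Bitmap constructor scales the source image down to the new size for us
            thumbnail.Save("wallpaper-1.thumb.jpg");
        }
    }
}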

Are you interested in a more in-depth look at how GalleryShare renders thumbnails, or outputs XML? Perhaps the XSLT has caught your eye. Let me know in the comments below!

Further Reading

/r/dailyprogrammer hard challenge #322: Static HTTP 1.0 server

Recently I happened to stumble across the dailyprogrammer subreddit's latest challenge. It was for a static HTTP 1.0 server, and while I built something similar for my networking ACW, I thought I'd give this one a go to create an extendable http server that I can use in other projects. If you want to follow along, you can find the challenge here!

My language of choice, as you might have guessed, was C♯ (I know that C♯ has an HttpListener class inbuilt already, but to listen on 0.0.0.0 on Windows it requires administrative privileges).

It ended up going rather well, actually. In a little less than 24 hours after reading the post, I had myself a working solution, and I thought I'd share here how I built it. Let's start with a class diagram:

(Above: A class diagram for the GlidingSquirrel. Is this diagram better than the last one I drew?)

I'm only showing properties on here, as I'll be showing you the methods attached to each class later. It's a pretty simple design, actually - HttpServer deals with all the core HTTP and networking logic, FileHttpServer handles the file system calls (and can be swapped out for your own class), and HttpRequest, HttpResponse, HttpMethod, and HttpResponseCode all store the data parsed out from the raw request coming in, and the data we're about to send back out again.

With a general idea as to how it's put together, let's dive into how it actually works. HttpServer would probably be a good place to start:

public abstract class HttpServer
{
    public static readonly string Version = "0.1-alpha";

    public readonly IPAddress BindAddress;
    public readonly int Port;

    public string BindEndpoint { /* ... */ }

    protected TcpListener server;

    private Mime mimeLookup = new Mime();
    public Dictionary<string, string> MimeTypeOverrides = new Dictionary<string, string>() {
        [".html"] = "text/html"
    };

    public HttpServer(IPAddress inBindAddress, int inPort)
    { /* ... */ }
    public HttpServer(int inPort) : this(IPAddress.IPv6Any, inPort)
    {
    }

    public async Task Start() { /* ... */ }

    public string LookupMimeType(string filePath) { /* ... */ }

    protected async void HandleClientThreadRoot(object transferredClient) { /* ... */ }

    public async Task HandleClient(TcpClient client) { /* ... */ }

    protected abstract Task setup();

    public abstract Task HandleRequest(HttpRequest request, HttpResponse response);
}

(Full version)

It's heavily abbreviated because there's actually quite a bit of code to get through here, but you get the general idea. The Start method is the main loop that accepts the TcpClients, and calls HandleClientThreadRoot for each client it accepts. I decided to use the inbuilt ThreadPool class to do the threading for me here:

TcpClient nextClient = await server.AcceptTcpClientAsync();
ThreadPool.QueueUserWorkItem(new WaitCallback(HandleClientThreadRoot), nextClient);

C♯ handles all the thread spawning and killing for me internally this way, which is rather nice. Next, HandleClientThreadRoot sets up a net to catch any errors that are thrown by the next stage (as we're now in a new thread, which can make debugging a nightmare otherwise), and then calls the main HandleClient:

try
{
    await HandleClient(client);
}
catch(Exception error)
{
    Console.WriteLine(error);
}
finally
{
    client.Close();
}

No matter what happens, the client's connection will always get closed. HandleClient is where the magic starts to happen. It attaches a StreamReader and a StreamWriter to the client:

StreamReader source = new StreamReader(client.GetStream());
StreamWriter destination = new StreamWriter(client.GetStream()) { AutoFlush = true };

...and calls a static method on HttpRequest to read in and decode the request:

HttpRequest request = await HttpRequest.FromStream(source);
request.ClientAddress = client.Client.RemoteEndPoint as IPEndPoint;

More on that later. With the request decoded, HandleClient hands off the request to the abstract method HandleRequest - but not before setting up a secondary safety net first:

try
{
    await HandleRequest(request, response);
}
catch(Exception error)
{
    response.ResponseCode = new HttpResponseCode(503, "Server Error Occurred");
    await response.SetBody(
        $"An error ocurred whilst serving your request to '{request.Url}'. Details:\n\n" +
        $"{error.ToString()}"
    );
}

This secondary safety net means that we can send a meaningful error message back to the requesting client in the case that the abstract request handler throws an exception for some reason. In the future, I'll probably make this customisable - after all, you don't always want to let the client know exactly what crashed inside the server's internals!

The FileHttpServer class that handles the file system logic is quite simple, actually. The magic is in its implementation of the abstract HandleRequest method that the HttpServer itself exposes:

public override async Task HandleRequest(HttpRequest request, HttpResponse response)
{
    if(request.Url.Contains(".."))
    {
        response.ResponseCode = HttpResponseCode.BadRequest;
        await response.SetBody("Error the requested path contains dangerous characters.");
        return;
    }

    string filePath = getFilePathFromRequestUrl(request.Url);
    if(!File.Exists(filePath))
    {
        response.ResponseCode = HttpResponseCode.NotFound;
        await response.SetBody($"Error: The file path '{request.Url}' could not be found.\n");
        return;
    }

    FileInfo requestFileStat = null;
    try {
        requestFileStat = new FileInfo(filePath);
    }
    catch(UnauthorizedAccessException error) {
        response.ResponseCode = HttpResponseCode.Forbidden;
        await response.SetBody(
            "Unfortunately, the server was unable to access the file requested.\n" + 
            "Details:\n\n" + 
            error.ToString() + 
            "\n"
        );
        return;
    }

    response.Headers.Add("content-type", LookupMimeType(filePath));
    response.Headers.Add("content-length", requestFileStat.Length.ToString());

    if(request.Method == HttpMethod.GET)
    {
        response.Body = new StreamReader(filePath);
    }
}

With all the helper methods and properties on HttpResponse, it's much shorter than it would otherwise be! Let's go through it step by step.

if(request.Url.Contains(".."))

This first step is a quick check for anything obvious that could be used against the server to break out of the web root. There are probably other dangerous things you can do (or try to do, anyway!) to a web server to attempt to trick it into returning arbitrary files, but I can't think of any off the top of my head that aren't covered further down. If you can, let me know in the comments!

string filePath = getFilePathFromRequestUrl(request.Url);

Next, we translate the raw path received in the request into a path to a file on disk. Let's take a look inside that method:

protected string getFilePathFromRequestUrl(string requestUrl)
{
    return $"{WebRoot}{requestUrl}";
}

It's rather simplistic, I know. I can't help but feel that there's something I missed here.... Let me know if you can think of anything. (If you're interested about the dollar syntax there - it's called an interpolated string, and is new in C♯ 6! Fancy name, I know. Check it out!)
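
One extra layer of defence that could be bolted on here: canonicalise the path and check that it still lives inside the web root. A rough sketch of what that might look like inside HandleRequest, reusing the WebRoot property from above:

string resolvedPath = Path.GetFullPath(filePath);
// Reject anything that escapes the web root once dots and such are resolved
if(!resolvedPath.StartsWith(Path.GetFullPath(WebRoot)))
{
    response.ResponseCode = HttpResponseCode.BadRequest;
    await response.SetBody("Error: The requested path is invalid.\n");
    return;
}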

if(!File.Exists(filePath))
{
    response.ResponseCode = HttpResponseCode.NotFound;
    await response.SetBody($"Error: The file path '{request.Url}' could not be found.\n");
    return;
}

Another obvious check. Can't have the server crashing every time it runs into a 404! A somewhat interesting note here: File.Exists only checks to see if there's a file that exists under the specified path. To check for the existence of a directory, you have to use Directory.Exists - which would make directory listing rather easy to implement. I might actually try that later - with an option to turn it off, of course.
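
Were I to implement it, a first sketch might look something like this (plain text output for simplicity, and that option to turn it off still to come):

// Hypothetical directory listing support
if(Directory.Exists(filePath))
{
    response.Headers.Add("content-type", "text/plain");
    await response.SetBody(string.Join("\n", Directory.GetFiles(filePath)) + "\n");
    return;
}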

FileInfo requestFileStat = null;
try {
    requestFileStat = new FileInfo(filePath);
}
catch(UnauthorizedAccessException error) {
    response.ResponseCode = HttpResponseCode.Forbidden;
    await response.SetBody(
        "Unfortunately, the server was unable to access the file requested.\n" + 
        "Details:\n\n" + 
        error.ToString() + 
        "\n"
    );
    return;
}

Ok, on to something that might be a bit more unfamiliar. The FileInfo class can be used to get, unsurprisingly, information about a file. You can get all sorts of statistics about a file or directory with it, such as the last modified time, whether it's read-only from the perspective of the current user, etc. We're only interested in the size of the file though for the next few lines:

response.Headers.Add("content-type", LookupMimeType(filePath));
response.Headers.Add("content-length", requestFileStat.Length.ToString());

These headers are important, as you might expect. Browsers tend to like to know the type of content they are receiving - and especially its size.

if(request.Method == HttpMethod.GET)
{
    response.Body = new StreamReader(filePath);
}

Lastly, we send the file's contents back to the user in the response - but only if it's a GET request. This rather neatly takes care of HEAD requests - but might cause issues elsewhere. I'll probably end up changing it if it does become an issue.

Anyway, now that we've covered everything right up to sending the response back to the client, let's end our tour with a look at the request parsing system. It's a bit backwards, but it does seem to work in an odd sort of way! It all starts in HttpRequest.FromStream.

public static async Task<HttpRequest> FromStream(StreamReader source)
{
    HttpRequest request = new HttpRequest();

    // Parse the first line
    string firstLine = await source.ReadLineAsync();
    var firstLineData = ParseFirstLine(firstLine);

    request.HttpVersion = firstLineData.httpVersion;
    request.Method = firstLineData.requestMethod;
    request.Url = firstLineData.requestPath;

    // Extract the headers
    List<string> rawHeaders = new List<string>();
    string nextLine;
    while((nextLine = await source.ReadLineAsync()).Length > 0)
        rawHeaders.Add(nextLine);

    request.Headers = ParseHeaders(rawHeaders);

    // Store the source stream as the request body now that we've extracted the headers
    request.Body = source;

    return request;
}

It looks deceptively simple at first glance. To start with, I read in the first line, extract everything useful from it, and attach them to a new request object. Then, I read in all the headers I can find, parse those too, and attach them to the request object we're building.

Finally, I attach the StreamReader to the request itself, as it's now pointing at the body of the request from the user. I haven't actually tested this, as I don't actually use it anywhere just yet, but it's a nice reminder just in case I do end up needing it :-)

Now, let's take a look at the cream on the cake - the method that parses the first line of the incoming request. I'm quite pleased with this actually, as it's my first time using a brand new feature of C♯:

public static (float httpVersion, HttpMethod requestMethod, string requestPath) ParseFirstLine(string firstLine)
{
    List<string> lineParts = new List<string>(firstLine.Split(' '));

    float httpVersion = float.Parse(lineParts.Last().Split('/')[1]);
    HttpMethod httpMethod = MethodFromString(lineParts.First());

    lineParts.RemoveAt(0); lineParts.RemoveAt(lineParts.Count - 1);
    string requestUrl = lineParts.Aggregate((string one, string two) => $"{one} {two}");

    return (
        httpVersion,
        httpMethod,
        requestUrl
    );
}

Monodevelop, my C♯ IDE, appears to go absolutely nuts over this with red squiggly lines everywhere, but it still compiles just fine :D

As I was writing this, a thought popped into my head that a tuple would be perfect here. After reading somewhere a month or two ago about a new tuple syntax that's coming to C♯, I thought I'd get awesomely distracted and take a look before continuing, and what I found was really cool. In C♯ 7 (the latest and quite possibly greatest version of C♯ to come yet!), there's a new feature called value tuples, which lets you declare tuples inline like I have above. They're already fully supported by the C♯ compiler, so you can use them today! Just try to ignore your editor if it gets as confused as mine did... :P
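
Here's a tiny self-contained example of the syntax, separate from the GlidingSquirrel (all the names here are made up):

class ValueTupleDemo
{
    static (string name, int count) Describe() => ("apples", 4);

    static void Main()
    {
        var fruit = Describe();
        // Prints: 4 apples
        System.Console.WriteLine($"{fruit.count} {fruit.name}");
    }
}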

If you're interested in learning more about them, I'll leave a few links at the bottom of this post. Anyway, back to the GlidingSquirrel! Other than the new value tuples in the above, there's not much going on, actually. A few LINQ calls take care of the heavy lifting quite nicely.

And finally, here's my header parsing method.

public static Dictionary<string, string> ParseHeaders(List<string> rawHeaders)
{
    Dictionary<string, string> result = new Dictionary<string, string>();

    foreach(string header in rawHeaders)
    {
        // Split on the first colon only - header values (e.g. Host) may contain colons too
        string[] parts = header.Split(new char[] { ':' }, 2);
        KeyValuePair<string, string> nextHeader = new KeyValuePair<string, string>(
            parts[0].Trim().ToLower(),
            parts[1].Trim()
        );
        if(result.ContainsKey(nextHeader.Key))
            result[nextHeader.Key] = $"{result[nextHeader.Key]},{nextHeader.Value}";
        else
            result[nextHeader.Key] = nextHeader.Value;
    }

    return result;
}

While I have attempted to build in support for multiple definitions of the same header according to the spec, I haven't actually encountered a time when it's actually been needed. Again, this is one of those things I've built in now for later - as I do intend on updating this and adding more features later - and perhaps even work it into another secret project I might post about soon.

Lastly, I'll leave you with a link to the repository I'm storing the code for the GlidingSquirrel, and a few links for your enjoyment:

GlidingSquirrel

Sources and Further Reading

Take up a Hobby!

(Above: My piano keyboard! It's an absolutely delightful Challen.)

I can recommend taking up a hobby. In my case, it's playing the piano. It's important to do something other than writing code. You might not think so (I didn't a few years ago), but I can see the wisdom in those words now - I'm so glad that I didn't stop playing the piano (silent pianos get thrown away, after all! 1).

It might be a musical instrument. Or playing board games with friends. Or woodworking. Anything that doesn't (normally) involve a computer! It's sometimes helpful to get away from that nasty problem you're trying to figure out a solution for and come back to it later. I quite often find that the solution comes much more easily :-)

What's your hobby? Post a comment below!


  1. If you get this reference, leave a comment below :D 

Share files from your host machine with virtual machine guests with 9p and virtual machine manager

(Above: The Virtual Machine Manager logo, with a stylised infinity symbol behind it. Infinity symbol source; Virtual Machine Manager logo traced from the logo on the official site automatically with Inkscape.)

Recently I've been looking at Virtual Machine Manager with qemu and KVM to set up a few virtual machines for my next year at university. During this learning and planning process, I've discovered a way (source) to set up an environment such that you can share a folder on your host machine (optionally read-only) with a guest machine running inside a virtual machine with a technology called 9p, and I wanted to share how I did it here.

To start, make sure your virtual machine is powered off, then go to Add Hardware → Filesystem, and fill in the boxes:

The add new filesystem passthrough box in vmm

  • Source path - The path on the host system to the folder you want to share.
  • Target path - The path at which the guest will see it. Note that this isn't a place on the guest file system, from what I can tell - see below.
  • Export filesystem as readonly mount - Check this box to make the share read-only.

Next, click "Finish", and start your virtual machine. Next, open a terminal on the guest machine and type something similar to this:


sudo mkdir /mnt/host_files
sudo mount -t 9p -o trans=virtio,version=9p2000.L /target /mnt/host_files

...this should mount the host share (the folder you entered as the source path - /tmp/share in my case) at the path /mnt/host_files on the guest machine.
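
If you'd like the share to come back automatically on boot, a line along these lines in the guest's /etc/fstab ought to do the trick - I haven't tested this myself though, so treat it as a starting point:

/target /mnt/host_files 9p trans=virtio,version=9p2000.L 0 0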

Found this different sort of post useful? Got something to add? Post about it in the comments!

Debug your systemd services with journalctl

Some nice bulbs and bits of wood.

The chances are that if you're using linux, you will probably have run into systemd. If you find yourself in the situation where you've got a systemd service that keeps dying and you don't know why (I've been there before several times!), and there's nothing helpful in /var/log, then before you give up, you might want to give journalctl a try. It's systemd's way of capturing the output of a service and storing it in its logging system (or something).

When I first found out about it, I read that apparently journalctl -xe servicename would show me the logs for any given service. That turned out not to be the case (it just threw a nasty error), so I went trawling through the man pages and found the correct command-line switch. If you've got a service called rocketbooster.service, and you want to see if systemd has any logs stored for it, then you can execute this command:

journalctl --unit rocketbooster.service

...or for short

journalctl -u rocketbooster.service

It should open the logs (if there are any) in less - with the oldest logs at the top, so you might need to scroll all the way down to the bottom to see anything that's relevant to your problem (shift + G will take you to the bottom of the file).
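
A couple of other switches from the man page might come in handy too:

journalctl -u rocketbooster.service --follow                # Tail the log in real time
journalctl -u rocketbooster.service --since "1 hour ago"    # Only show the last hour's entries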

I've found that systemd has a habit of rotating the logs too - and journalctl doesn't appear to know how to access the rotated logs, so it's best if you use this command as soon as possible after a failure (suggestions on how to access these rotated logs are welcome! Post down in the comments :D).

I thought I'd document it here in case it was useful to anyone - and so I don't forget myself! :P

Access your home linux box from anywhere with SSH tunnels

(Above: An abstract tunnel that doesn't hold much relevance to the blog post :P Header by GDJ from openclipart.org. Source page)

....and other things! Recently, I bought a Raspberry Pi 3. Now that the rest of the components have arrived, I've got a rather nice little home server that's got a 1 terabyte WD PiDrive attached to it to provide lots of lovely shared storage, which is rather nice.

However, within a few weeks I was faced with a problem. How do I access my new box to configure it from my internship when I'm on lunch? Faced with such a challenge, I did what anyone would, and took to the internet to find a solution.

It didn't take long. A while ago I heard about these things called 'SSH tunnels', which, while not designed for high throughput, are more than adequate for a low-intensity SSH connection that runs a few kilobytes a second in either direction. After reading this excellent answer by erik on the Unix & Linux StackExchange, I had an understanding of how SSH tunnels work, and was ready to put together a solution. You should go and read that answer if you'd like to understand SSH tunnels too - it explains it much better than I ever could :P

With that knowledge in hand, I went about planning the SSH tunnel. I already have a server with a public IP address (it's hosting this website!), so I needed a reverse tunnel to allow me to access a port local to my linux box at home (called elessar - a virtual cookie for anyone who gets the reference!) from starbeamrainbowlabs.com.

Important! Ask yourself whether it's moral and ethical to set up an ssh tunnel before you think about following along with this article! If you find yourself behind a firewall or something similar, then the chances are that it's there for a good reason - and you might get into trouble if you try and circumvent it. I won't be held responsible for any loss or damages of any description caused by the reading of this post.

First job: create a limited account on starbeamrainbowlabs.com for elessar to SSH into. That's easy:

sudo useradd --system ssh-tunnel

Then, with a few quick lines in /etc/ssh/sshd_config:

Match User ssh-tunnel
    ForceCommand echo 'This account can only be used for ssh tunnelling.'

....we can prevent the ssh-tunnel user from being abused to gain shell access to the server (let me know if there are any further measures I can put in place here).

Now that I had a user account to ssh in as, I could set up a public / private keypair to authenticate with starbeamrainbowlabs.com, and cook up an SSH command for elessar that would set up the appropriate tunnel. After fiddling around a bit, I came up with this that did the job:

ssh -TN -R30582:localhost:5724 ssh-tunnel@starbeamrainbowlabs.com

Very cool. So with that command executing on elessar, I could ssh into elessar from starbeamrainbowlabs.com! In short, it sets up a tunnel that will make port 30582 on starbeamrainbowlabs.com tunnel through to port 5724 on elessar - the port on elessar that has SSH running on it, without allocating a pseudo-tty to save resources. explainshell.com can, well, explain it in more detail if you're interested.
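
To actually make use of it, you then SSH into the forwarded port from starbeamrainbowlabs.com itself - something like this, assuming the user account on elessar is called pi:

ssh -p 30582 pi@localhost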

Having an SSH command that would set up the tunnel is nice, but it's not very useful, since I have to execute it first before I can actually SSH into elessar from afar.

The solution was actually a little bit complicated. First, I wrote a simple systemd service file (systemd is what I have installed, since it's vanilla raspbian - this should be easily adaptable to other systems and setups) to start the SSH tunnel automagically on boot:

[Unit]
Description=SSH tunnel from starbeamrainbowlabs.com to local ssh server.

[Service]
Type=simple
ExecStart=/usr/bin/ssh -TN -R30582:localhost:5724 ssh-tunnel@starbeamrainbowlabs.com

[Install]
WantedBy=network-online.target

I quickly realised that there were a few flaws with this approach. Firstly, it tried to start the SSH connection before my router had connected to the internet, since my router starts faster than the box that initialises the fibre connection to my ISP. Secondly, it fails to retry when the connection dies.

The first problem can be solved relatively easily, by wrapping the ssh command in a clever bit of shell scripting:

/bin/sh -c 'until ping -c1 starbeamrainbowlabs.com &>/dev/null && sleep 5; do :; done && /usr/bin/ssh -TN -R30582:localhost:5724 ssh-tunnel@starbeamrainbowlabs.com'

The above tries to ping starbeamrainbowlabs.com every 5 seconds until it succeeds, and only then does it attempt to open the SSH connection. This solves the first problem. To solve the second, we need to look at autossh. Autossh is a small tool that monitors an ssh connection in a variety of configurable ways and restarts the connection if it ever dies for whatever reason. You can install it with your favourite package manager:

sudo apt install autossh

Substitute apt with whatever package manager you use on your system. With it installed, we can use a command like this:

autossh -o "UserKnownHostsFile /home/ssh-tunnel/.ssh/known_hosts" -o "IdentityFile /home/ssh-tunnel/.ssh/ssh-tunnel_ed25519" -o "PubkeyAuthentication=yes" -o "PasswordAuthentication=no" -o "ServerAliveInterval 900" -TN -R30582:localhost:5724 -p 7261 ssh-tunnel@starbeamrainbowlabs.com

to automatically start our ssh tunnel, and restart it if anything goes wrong. Note all the extra settings I had to specify here. This is because even though I had many of them specified in ~/.ssh/config for the ssh-tunnel user, because of systemd's weird environment when it starts a service, I found I had to specify everything in the command line with absolute paths (ugh).

Basically, the above tells autossh where the known_hosts file is (important for automation!), that it should only attempt public / private keypair authentication and not password authentication, that it should check the server's still there every 15 minutes, and all the other things we figured out above.
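
For reference, the equivalent ~/.ssh/config stanza would look something like this (though as I found, systemd's environment means autossh won't necessarily pick it up):

Host starbeamrainbowlabs.com
    Port 7261
    User ssh-tunnel
    IdentityFile /home/pi/.ssh/ssh-tunnel_ed25519
    PubkeyAuthentication yes
    PasswordAuthentication no
    ServerAliveInterval 900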

Finally, I combined the solutions I came up with for both problems, which left me with this:

[Unit]
Description=SSH tunnel from starbeamrainbowlabs.com to local ssh server.

[Service]
Type=simple
ExecStart=/bin/sh -c 'until ping -c1 starbeamrainbowlabs.com &>/dev/null && sleep 5; do :; done && /usr/bin/autossh -o "UserKnownHostsFile /home/pi/.ssh/known_hosts" -o "IdentityFile /home/pi/.ssh/ssh-tunnel_ed25519" -o "PubkeyAuthentication=yes" -o "PasswordAuthentication=no" -o "ServerAliveInterval 900" -TN -R30582:localhost:5724 -p 7261 ssh-tunnel@starbeamrainbowlabs.com'

[Install]
WantedBy=network-online.target

Here's a version that utilises the -f parameter of autossh to put the autossh into the background, which eliminates the sh parent process:

[Unit]
Description=SSH tunnel from starbeamrainbowlabs.com to local ssh server.

[Service]
Type=forking
Environment=AUTOSSH_PIDFILE=/var/run/sbrl-ssh-tunnel/ssh-tunnel.pid
PIDFile=/var/run/sbrl-ssh-tunnel/ssh-tunnel.pid
ExecStartPre=/bin/mkdir -p /var/run/sbrl-ssh-tunnel
ExecStartPre=-/bin/chown ssh-tunnel:ssh-tunnel /var/run/sbrl-ssh-tunnel
ExecStart=/bin/sh -c 'until ping -c1 starbeamrainbowlabs.com &>/dev/null && sleep 5; do :; done && /usr/bin/autossh -f -o "UserKnownHostsFile /home/pi/.ssh/known_hosts" -o "IdentityFile /home/pi/.ssh/ssh-tunnel_ed25519" -o "PubkeyAuthentication=yes" -o "PasswordAuthentication=no" -o "ServerAliveInterval 900" -TN -R30582:localhost:5724 -p 7261 ssh-tunnel@starbeamrainbowlabs.com'

[Install]
WantedBy=network-online.target

I ended up further modifying the above to set up an additional tunnel to allow elessar to send emails via the postfix email server that's running on starbeamrainbowlabs.com. Let me know if you'd be interested in a tutorial on this!

Sources and Further Reading

Learn your terminal (or command line)

Enter stage left: the terminal (or command line, on windows). That window with strange white text on a black background. You might not see it, but every operating system has one - humming away in the background, just waiting to be used, but epic arcane skills are needed to navigate this bizarre and perhaps dated window into your computer.... or so it seems.

When you think of your computer, you will probably think of a GUI (a.k.a. goo-ey), with windows, a cursor, and perhaps a few buttons. GUIs make it easy for newcomers to find their way around a computer by referencing things that exist in the real world (e.g. folders and files, a floppy disk on the save button, etc.), but they can be inherently slower to use - especially for long series of perhaps repetitive tasks that stay essentially the same.

A terminal (linux and friends) or a command line (windows) is another view into your computer. It's a way of controlling your computer with text - text that follows a particular set of rules, and that can be saved and repeated at will through the use of scripts. It's built on commands, each of which does one thing and one thing well. On their own they're mildly useful, but together they form a powerful framework that can perform almost any task. It's certainly different (and there's a little bit of a learning curve, to be sure), but not as hard or arcane as you might currently think.

A knowledge of the terminal or command line on your computer can be rather useful - especially so for those involved in computer science or technical support. How long would it take you to flatten a large set of deeply nested folders with a GUI? Or convert and recompress a few folders' worth of videos? Or even renew all your ssl certificates on your web server? All of these things can be automated through the use of a terminal or command line.
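
To take the first of those as an example, flattening a deeply nested set of folders boils down to a single command (the -n stops mv from overwriting anything should two files share a name):

find . -mindepth 2 -type f -exec mv -n {} . \;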

Even if you're just a casual computer user who's not into programming, it's still worth at least looking into. Perhaps it'll save you some time! Perhaps it'll save you from asking your friend where something is on their computer when you can't find it. Maybe it'll even save you if your computer suddenly decides it doesn't want to boot up properly. And you'll look cool doing it too :P (What better reason is there?)

If I've somehow managed to convince you to dive in and take up the challenge of learning, then I'll end this somewhat different post with a collection of places you can go to get started.
