
Backing up to AWS S3 with duplicity

The server that this website runs on backs up automatically to the Simple Storage Service, provided by Amazon Web Services. Such an arrangement is actually fairly cheap - only ~20p/month! I realised recently that although I've blogged about duplicity before (where I discussed using an external hard drive), I never covered how I fully automate the process here on starbeamrainbowlabs.com.

(Above: A bunch of hard drives. The original can be found here.)

It's fairly similar in structure to the way it works backing up to an external hard drive - just with a few different components here and there, as the script that drives this is actually older than the one that backs up to an external hard drive.

To start, we'll need an AWS S3 bucket. I'm not going to cover how to do this here, as the AWS interface keeps changing, and this guide will likely become outdated quickly. Instead, the AWS S3 documentation has an official guide on how to create one. Make sure it's private, as you don't want anyone getting a hold of your backups!
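Alternatively, if you've already got the AWS CLI installed and configured, you should be able to sidestep the ever-changing web interface and create the bucket from the command line. Something along these lines ought to do it (a sketch, not gospel - check the region and options against the current docs):

aws s3 mb s3://INSERT_BUCKET_NAME_HERE --region eu-west-1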

With that done, you should have both an access key and a secret. Note these down in a file called .backup-password in a new directory that will hold the backup script like this:

#!/usr/bin/env bash
PASSPHRASE="INSERT_RANDOM_PASSWORD_HERE";
AWS_ACCESS_KEY_ID="INSERT_AWS_ACCESS_KEY_HERE";
AWS_SECRET_ACCESS_KEY="INSERT_AWS_SECRET_KEY_HERE";

The PASSPHRASE here should be a long and unintelligible string of random characters, and will encrypt your backups. Note that down somewhere safe too - preferably in your password manager or somewhere else at least as secure.
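If you need a hand generating such a passphrase, either of these should do the trick - both pull random bytes from the kernel's cryptographic random number generator:

openssl rand -base64 32
head -c32 /dev/urandom | base64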

If you're on Linux, you should also set the permissions on the .backup-password file to ensure nobody gets access to it who shouldn't. Here's how I did it:

sudo chown root:root .backup-password
sudo chmod 0400 .backup-password

This ensures that only the root user is able to read the file - and nobody can write to it. With our secrets generated and safely stored, we can start writing the backup script itself. Let's start by reading in the secrets:

#!/usr/bin/env bash
source /root/.backup-password

I stored my .backup-password file in /root. Next, let's export these values. This enables the subprocesses we invoke to access these environment variables:

export PASSPHRASE;
export AWS_ACCESS_KEY_ID;
export AWS_SECRET_ACCESS_KEY;

Now it's time to do the backup itself! Here's what I do:

duplicity \
    --full-if-older-than 2M \
    --exclude /proc \
    --exclude /sys \
    --exclude /tmp \
    --exclude /dev \
    --exclude /mnt \
    --exclude /var/cache \
    --exclude /var/tmp \
    --exclude /var/backups \
    --exclude /srv/www-mail/rainloop/v \
    --s3-use-new-style --s3-european-buckets --s3-use-ia \
    / s3://s3-eu-west-1.amazonaws.com/INSERT_BUCKET_NAME_HERE

Compressed version:

duplicity --full-if-older-than 2M --exclude /proc --exclude /sys --exclude /tmp --exclude /dev --exclude /mnt --exclude /var/cache --exclude /var/tmp --exclude /var/backups --exclude /srv/www-mail/rainloop/v --s3-use-new-style --s3-european-buckets --s3-use-ia / s3://s3-eu-west-1.amazonaws.com/INSERT_BUCKET_NAME_HERE

This might look long and complicated, but it's mainly due to the large number of directories that I'm excluding from the backup. The key options here are --full-if-older-than 2M and --s3-use-ia, which specify I want a full backup to be done every 2 months and to use the infrequent access pricing tier to reduce costs.

The other important bit here is to replace INSERT_BUCKET_NAME_HERE with the name of the S3 bucket that you created.
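As an aside, a backup isn't much use if you can't restore it. I won't cover restoration in detail here, but something along these lines should pull a single file back out of the most recent backup - assuming the same environment variables are exported as above (the relative path and the /tmp target here are just illustrative):

duplicity restore --file-to-restore relative/path/to/file --s3-use-new-style --s3-european-buckets s3://s3-eu-west-1.amazonaws.com/INSERT_BUCKET_NAME_HERE /tmp/recovered-file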

Backing up is all very well, but we want to remove old backups too - in order to avoid ridiculous bills (AWS are terrible for this - there's no way that you can set a hard spending limit! O.o). That's fairly easy to do:

duplicity remove-older-than 4M \
    --force \
    --s3-use-new-style --s3-european-buckets --s3-use-ia \
    s3://s3-eu-west-1.amazonaws.com/INSERT_BUCKET_NAME_HERE

Again, don't forget to replace INSERT_BUCKET_NAME_HERE with the name of your S3 bucket. Here, I specify I want all backups older than 4 months (the 4M bit) to be deleted.

It's worth noting that duplicity may not actually be able to remove all backups older than 4 months, as it can only delete a full backup if there are no incremental backups that depend on it. To this end, you'll need to plan for potentially storing (and being charged for) an extra backup cycle's worth of data. In my case, that's an extra 2 months' worth.

That's the backup part of the script complete. If you want, you could finish up here and have a fully-working backup script. Personally, I want to know how much data is in my S3 bucket - so that I can get an idea as to how much I'll be charged when the bill comes through - and also so that I can see if anything's going wrong.

Unfortunately, this is a bit fiddly. Basically, we have to utilise the AWS command-line interface to recursively list the entire contents of our S3 bucket in summarising mode in order to get it to tell us what we want to know. Here's how to do that:

aws s3 ls s3://INSERT_BUCKET_NAME_HERE --recursive --human-readable --summarize

Don't forget to replace INSERT_BUCKET_NAME_HERE with your bucket's name. The output from this is somewhat verbose, so I ended up writing an awk script to process it and output something nicer. Said awk script looks like this:

/^\s*Total\s+Objects/ { parts[i++] = $3 }
/^\s*Total\s+Size/ { parts[i++] = $3; parts[i++] = $4; }
END {
    print(
        "AWS S3 Bucket Status:",
        parts[0], "objects, totalling " parts[1], parts[2]
    );
}

If we put all that together, it should look something like this:

aws s3 ls s3://INSERT_BUCKET_NAME_HERE --recursive --human-readable --summarize | awk '/^\s*Total\s+Objects/ { parts[i++] = $3 } /^\s*Total\s+Size/ { parts[i++] = $3; parts[i++] = $4; } END { print("AWS S3 Bucket Status:", parts[0], "objects, totalling " parts[1], parts[2]); }'

...it's a bit of a mess. Perhaps I should look at putting that awk script in a separate file :P
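For example, if the awk program above were saved to a file called bucket-status.awk (a hypothetical name - anything will do), the pipeline would slim down to this:

aws s3 ls s3://INSERT_BUCKET_NAME_HERE --recursive --human-readable --summarize | awk -f bucket-status.awk

Anyway, here's some example output: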

AWS S3 Bucket Status: 602 objects, totalling 21.0 GiB

Very nice indeed. To finish off, I'd rather like to know how long it took to do all this. Thankfully, bash has an inbuilt automatic variable that holds the number of seconds since the current process started, so it's just a case of parsing it out into something readable:

echo "Done in $(($SECONDS / 3600))h $((($SECONDS / 60) % 60))m $(($SECONDS % 60))s.";

...I forget which Stack Overflow answer it was that showed this off, but if you know, please comment below and I'll update this to add credit. This should output something like this:

Done in 0h 12m 51s.

Awesome! We've now got a script that backs up to AWS S3, deletes old backups, and tells us both how much space on S3 is being used and how long the whole process took.
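To make it run automatically, a cron job is one way to go. As a sketch, assuming the script were saved to /root/backup-to-s3.sh (a hypothetical path), an entry like this in root's crontab (sudo crontab -e) would match the once-a-week strategy noted in the script's comments:

# m h dom mon dow  command
0 2 * * 0    /root/backup-to-s3.sh >>/var/log/backup-to-s3.log 2>&1

This runs it as root at 2am every Sunday, appending the script's output to a log file.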

I'm including the entire script at the bottom of this post. I've changed it slightly to add a single variable for the bucket name - so there's only 1 place (the AWS_S3_BUCKET_NAME variable on line 9) that you need to update.

(Above: A Geopattern, tiled using the GNU Image Manipulation Program)


#!/usr/bin/env bash

# Make sure duplicity exists
test -x "$(which duplicity)" || exit 1;

# Pull in the password
. /root/.backup-password

AWS_S3_BUCKET_NAME="INSERT_BUCKET_NAME_HERE";

# Allow duplicity to access it
export PASSPHRASE;
export AWS_ACCESS_KEY_ID;
export AWS_SECRET_ACCESS_KEY;

# Actually do the backup
# Backup strategy:
# 1 x backup per week:
#   1 x full backup per 2 months
#   incremental backups in between
# S3 Bucket URI: https://${AWS_S3_BUCKET_NAME}/
echo "[ $(date '+%F %r') ] Performing backup."
duplicity --full-if-older-than 2M --exclude /proc --exclude /sys --exclude /tmp --exclude /dev --exclude /mnt --exclude /var/cache --exclude /var/tmp --exclude /var/backups --exclude /srv/www-mail/rainloop/v --s3-use-new-style --s3-european-buckets --s3-use-ia / s3://s3-eu-west-1.amazonaws.com/${AWS_S3_BUCKET_NAME}

# Remove old backups
# You have to plan for 1 extra full backup cycle when
# calculating space requirements - duplicity only
# removes a backup if it won't invalidate those further
# along the chain - the oldest backup will always be
# a full one.
echo "[ $(date '+%F %r') ] Backup complete. Removing old volumes."
duplicity remove-older-than 4M --force --encrypt-key F2A6D8B6 --s3-use-new-style --s3-european-buckets --s3-use-ia s3://s3-eu-west-1.amazonaws.com/${AWS_S3_BUCKET_NAME}
echo "[ $(date '+%F %r') ] Cleanup complete."

aws s3 ls s3://${AWS_S3_BUCKET_NAME} --recursive --human-readable --summarize | awk '/^\s*Total\s+Objects/ { parts[i++] = $3 } /^\s*Total\s+Size/ { parts[i++] = $3; parts[i++] = $4; } END { print("AWS S3 Bucket Status:", parts[0], "objects, totalling " parts[1], parts[2]); }'

echo "Done in $(($SECONDS / 3600))h $((($SECONDS / 60) % 60))m $(($SECONDS % 60))s.";

Question: How do you recover a deleted file that's been overwritten?

Answer: With the greatest of difficulty.

The blog post following this one in a few days' time is, ironically, about backing things up. However, I actually ended up losing the entire post during the upload process to my server (it replaced both the source and destination files with an empty file!). I'd already saved it to disk, but I still almost lost it anyway...

Recovery of deleted files is awkward at best. It relies on the fact that when you 'delete' something, it isn't actually erased from disk at all - the sectors it was taking up are just deallocated and re-entered into the free pool of space, which is then re-used at will.

The most important thing to remember when you've just lost something is to not touch anything. Shut down your computer, and, if you're not confident enough yourself, call someone who knows what they're doing to help you out.

The best way to recover a file is to boot into a live CD. This is a CD (or flash drive) that holds one (or more!) entire operating systems. This way, no additional writing is done to the disk containing the deleted file - which could potentially corrupt it.

After fiddling about with this (I had to update my bootable flash drive, as Ubuntu 15.10 is out of support and I couldn't download the extundelete tool, which I'll mention shortly), I found that I'd hit a dead end.

I was using the extundelete tool (sudo apt install extundelete), and it claimed that it couldn't restore the file because it had been reallocated. Here's the command I used:

sudo extundelete --restore-file /absolute/path/to/file /dev/sda7

I suspect that it was getting confused because I had a file by that name on disk that was now empty.

Anyway, after doing something else for a while, I had an idea. Since my blog posts are just text files on disk, shouldn't it be on my disk somewhere? Could I locate it at all?

As it turns out, the answer is yes. Remembering a short sentence from the post I'd just written, I started a brute-force search of my disk:

sudo dd if=/dev/sda7 | strings | grep -i "AWS S3"

This has several components to it - Explain Shell is great at providing an explanation of each bit in turn. Here's a short summary:

  • dd - This reads in the entire contents of a partition and pushes it into the following command. Find the partition name with lsblk.
  • strings - This extracts all runs of printable characters from the input stream.
  • grep - This searches (case-insensitively with -i) for a specified string in the input.

I started to get results - a whole line from the blog post that had supposedly been deleted and overwritten! This wasn't really enough though. Taking a longer snippet to reduce the noise in the output, I tried again:

sudo dd if=/dev/sda7 | strings | grep -i -C100 "To start, we'll need an AWS S3 bucket"

This time, I added -C100. This tells grep that I want to see 100 lines before and after any lines that contain the specified search string.

With this, I managed to recover enough of the blog post to quickly re-edit and upload it. It did appear to remove blank lines and the back-ticks at the end of a code block, but they are easy to replace.
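A quick tip if you try this yourself: should you want to save the output, write it to a file on a different disk to the one you're recovering from - otherwise you risk overwriting the very data you're searching for! Something like this, assuming a USB drive mounted at /mnt/usb (a hypothetical mount point):

sudo dd if=/dev/sda7 | strings | grep -i -C100 "To start, we'll need an AWS S3 bucket" | tee /mnt/usb/recovered.txt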

Note to self: Always copy first when crossing file system boundaries, and delete later. Don't move all in one go!

RhinoReminds: An XMPP reminder bot for my convenience

(Above: A picture of a Black Rhino. Source: Wikimedia Commons)

Many times when I write a program it's to solve a problem. With Pepperminty Wiki, it was that I needed a lightweight wiki engine - and MediaWiki was just too complex. With TeleConsole, it was that I wanted to debug a C♯ Program in an environment that had neither a debugger nor a console (I'm talking about you, Unity 3D).

Today, I'm releasing RhinoReminds, an XMPP bot that reminds you about things. As you might have guessed, this is the end product of a few different posts I've made on here recently:

  • Easy AI with Microsoft.Recognizers.Text
  • Write an XMPP bot in half an hour
  • Creating a system service with systemd

You can talk to it like so:

Remind me to water the greenhouse tomorrow at 4:03pm
Show all reminders
Delete reminders 2, 3, 4, and 7
Remind me in 1 hour to check the oven

...and it'll respond accordingly. It figures out which action to take based on the first word of the sentence you send it, but after that it uses AI (specifically Microsoft.Recognizers.Text, which I posted about here) to work out what you want it to do - and how you want it done.

I'm still working out a few bugs (namely reconnecting automagically after the connection to the server is lost, and ensuring all the messages it sends in reply actually make sense), but it's at the point now where it's stable enough that I can release it to everyone who'd either like to use it for themselves, or is simply curious :-)

If you'd like to run an instance of the bot for yourself, I recommend heading over to my personal git server here:

https://git.starbeamrainbowlabs.com/sbrl/RhinoReminds

The readme file should contain everything you need to know to get started. If not, let me know by contacting me, or commenting here!

Unfortunately, I'm not able to offer a public instance of this bot at the moment, due to concerns about spam. However, patches to improve the bot's resistance against spammers (perhaps a cooldown period if too many messages are sent? Or a limit of 50 active reminders per account?) are welcome.

Found this interesting? Got a cool use for it? Want help setting it up yourself? Comment below!

Write an XMPP bot in half an hour

Recently I've looked at using AI to extract key information from natural language, and creating a system service with systemd. The final piece of the puzzle is to write the bot itself - and that's what I'm posting about today.

Since I not only use XMPP for instant messaging already, but it's also an open federated standard, I'll be building my bot on top of it for maximum flexibility.

To talk over XMPP programmatically, we're going to need a library. Thankfully, I've located just such a library that appears to work well enough, called S22.XMPP. Especially nice is the comprehensive documentation, which makes development go much more smoothly.

With our library in hand, let's begin! Our first order of business is to get some scaffolding in place to parse out the environment variables we'll need to login to an XMPP account.

using System;
using System.Linq;
using System.Threading;
using System.Threading.Tasks;

using S22.Xmpp;
using S22.Xmpp.Client;
using S22.Xmpp.Im;

namespace XmppBotDemo
{
    public static class MainClass
    {
        // Needed later
        private static XmppClient client;

        // Settings
        private static Jid ourJid = null;
        private static string password = null;

        public static int Main(string[] args)
        {
            // Read in the environment variables
            // Guard against a missing XMPP_JID - new Jid(null) would throw before the check below
            string jidRaw = Environment.GetEnvironmentVariable("XMPP_JID");
            ourJid = jidRaw != null ? new Jid(jidRaw) : null;
            password = Environment.GetEnvironmentVariable("XMPP_PASSWORD");

            // Ensure they are present
            if (ourJid == null || password == null) {
                Console.Error.WriteLine("XMPP Bot Demo");
                Console.Error.WriteLine("=============");
                Console.Error.WriteLine("");
                Console.Error.WriteLine("Usage:");
                Console.Error.WriteLine("    ./XmppBotDemo.exe");
                Console.Error.WriteLine("");
                Console.Error.WriteLine("Environment Variables:");
                Console.Error.WriteLine("    XMPP_JID         Required. Specifies the JID to login with.");
                Console.Error.WriteLine("    XMPP_PASSWORD    Required. Specifies the password to login with.");
                return 1;
            }

            // TODO: Connect here           

            return 0;
        }
    }
}

Excellent! We're reading in & parsing 2 environment variables: XMPP_JID (the username), and XMPP_PASSWORD. It's worth noting that you can call these environment variables anything you like! I chose those names as they describe their contents well. It's also worth mentioning that it's important to use environment variables for secrets - passing them as command-line arguments causes them to be much more visible to other users of the system!
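For reference, here's how you might launch the bot from a terminal with those variables set - using a made-up JID and password, naturally:

XMPP_JID="bot@bobsrockets.com" XMPP_PASSWORD="sekret" mono ./XmppBotDemo.exe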

Let's connect to the XMPP server with our newly read-in credentials:

// Create the client instance
client = new XmppClient(ourJid.Domain, ourJid.Node, password);

client.Error += errorHandler;
client.SubscriptionRequest += subscriptionRequestHandler;
client.Message += messageHandler;

client.Connect();

// Wait for a connection
while (!client.Connected)
    Thread.Sleep(100);

Console.WriteLine($"[Main] Connected as {ourJid}.");

// Wait forever.
Thread.Sleep(Timeout.Infinite);

// TODO: Automatically reconnect to the server when we get disconnected.

Cool! Here, we create a new instance of the XMPPClient class, and attach 3 event handlers, which we'll look at later. We then connect to the server, and then wait until it completes - and then write a message to the console. It looks like S22.Xmpp spins up a new thread, so unfortunately we can't catch any errors it throws with a traditional try-catch statement. Instead, we'll have to ensure we're really careful that we catch any exceptions we throw accidentally - otherwise we'll get disconnected!

It does appear that XmppClient catches some errors though, which trigger the Error event - so we should attach an event handler to that.

/// <summary>
/// Handles any errors thrown by the XMPP client engine.
/// </summary>
private static void errorHandler(object sender, ErrorEventArgs eventArgs) {
    Console.Error.WriteLine($"Error: {eventArgs.Reason}");
    Console.Error.WriteLine(eventArgs.Exception);
}

Before a remote contact is able to talk to our bot, they will send us a subscription request - which we'll need to either accept or reject. This is also done via an event handler. It's the SubscriptionRequest one this time:

/// <summary>
/// Handles requests to talk to us.
/// </summary>
/// <remarks>
/// Only allow people to talk to us if they are on the same domain we are.
/// You probably don't want this for production, but for developmental purposes
/// it offers some measure of protection.
/// </remarks>
/// <param name="from">The JID of the remote user who wants to talk to us.</param>
/// <returns>Whether we're going to allow the requester to talk to us or not.</returns>
public static bool subscriptionRequestHandler(Jid from) {
    Console.WriteLine($"[Handler/SubscriptionRequest] {from} is requesting access, I'm saying {(from.Domain == ourJid.Domain?"yes":"no")}");
    return from.Domain == ourJid.Domain;
}

This simply allows anyone on our own domain to talk to us. For development purposes this will offer us some measure of protection, but for production you should probably implement a whitelisting or logging system here.

The other interesting thing we can do here is send a user a chat message to either welcome them to the server, or explain why we rejected their request. To do this, we need to write a pair of utility methods, as sending chat messages with S22.Xmpp is somewhat over-complicated:

#region Message Senders

/// <summary>
/// Sends a chat message to the specified JID.
/// </summary>
/// <param name="to">The JID to send the message to.</param>
/// <param name="message">The message to send.</param>
private static void sendChatMessage(Jid to, string message)
{
    //Console.WriteLine($"[Bot/Send/Chat] Sending {message} -> {to}");
    client.SendMessage(
        to, message,
        null, null, MessageType.Chat
    );
}
/// <summary>
/// Sends a chat message in direct reply to a given incoming message.
/// </summary>
/// <param name="originalMessage">Original message.</param>
/// <param name="reply">Reply.</param>
private static void sendChatReply(Message originalMessage, string reply)
{
    //Console.WriteLine($"[Bot/Send/Reply] Sending {reply} -> {originalMessage.From}");
    client.SendMessage(
        originalMessage.From, reply,
        null, originalMessage.Thread, MessageType.Chat
    );
}

#endregion

The difference between these 2 methods is that one sends a reply directly to a message that we've received (like a threaded reply), and the other simply sends a message directly to another contact.

Now that we've got all of our ducks in a row, we can write the bot itself! This is done via the Message event handler. For this demo, we'll write a bot that echoes any messages sent to it in reverse:

/// <summary>
/// Handles incoming messages.
/// </summary>
private static void messageHandler(object sender, MessageEventArgs eventArgs) {
    Console.WriteLine($"[Bot/Handler/Message] {eventArgs.Message.Body.Length} chars from {eventArgs.Jid}");
    char[] messageCharArray = eventArgs.Message.Body.ToCharArray();
    Array.Reverse(messageCharArray);
    sendChatReply(
        eventArgs.Message,
        new string(messageCharArray)
    );
}

Excellent! That's our bot complete. The full program is at the bottom of this post.

Of course, this is a starting point - not an ending point! A number of issues with this demo stand out. There isn't a whitelist, and putting the whole program in a single file doesn't sound like a good idea. The XMPP logic should probably be refactored out into a separate file, in order to keep the input settings parsing separate from the bot itself.

Other issues that probably need addressing include better error handling and more - but fixing them all here would rather complicate the example.

Edit: The code is also available in a git repository if you'd like to clone it down and play around with it :-)

Found this interesting? Got a cool use for it? Still confused? Comment below!

Complete Program

using System;
using System.Linq;
using System.Threading;
using System.Threading.Tasks;
using S22.Xmpp;
using S22.Xmpp.Client;
using S22.Xmpp.Im;

namespace XmppBotDemo
{
    public static class MainClass
    {
        private static XmppClient client;
        private static Jid ourJid = null;
        private static string password = null;

        public static int Main(string[] args)
        {
            // Read in the environment variables
            // Guard against a missing XMPP_JID - new Jid(null) would throw before the check below
            string jidRaw = Environment.GetEnvironmentVariable("XMPP_JID");
            ourJid = jidRaw != null ? new Jid(jidRaw) : null;
            password = Environment.GetEnvironmentVariable("XMPP_PASSWORD");

            // Ensure they are present
            if (ourJid == null || password == null) {
                Console.Error.WriteLine("XMPP Bot Demo");
                Console.Error.WriteLine("=============");
                Console.Error.WriteLine("");
                Console.Error.WriteLine("Usage:");
                Console.Error.WriteLine("    ./XmppBotDemo.exe");
                Console.Error.WriteLine("");
                Console.Error.WriteLine("Environment Variables:");
                Console.Error.WriteLine("    XMPP_JID         Required. Specifies the JID to login with.");
                Console.Error.WriteLine("    XMPP_PASSWORD    Required. Specifies the password to login with.");
                return 1;
            }

            // Create the client instance
            client = new XmppClient(ourJid.Domain, ourJid.Node, password);

            client.Error += errorHandler;
            client.SubscriptionRequest += subscriptionRequestHandler;
            client.Message += messageHandler;

            client.Connect();

            // Wait for a connection
            while (!client.Connected)
                Thread.Sleep(100);

            Console.WriteLine($"[Main] Connected as {ourJid}.");

            // Wait forever.
            Thread.Sleep(Timeout.Infinite);

            // TODO: Automatically reconnect to the server when we get disconnected.

            return 0;
        }

        #region Event Handlers

        /// <summary>
        /// Handles requests to talk to us.
        /// </summary>
        /// <remarks>
        /// Only allow people to talk to us if they are on the same domain we are.
        /// You probably don't want this for production, but for developmental purposes
        /// it offers some measure of protection.
        /// </remarks>
        /// <param name="from">The JID of the remote user who wants to talk to us.</param>
        /// <returns>Whether we're going to allow the requester to talk to us or not.</returns>
        public static bool subscriptionRequestHandler(Jid from) {
            Console.WriteLine($"[Handler/SubscriptionRequest] {from} is requesting access, I'm saying {(from.Domain == ourJid.Domain?"yes":"no")}");
            return from.Domain == ourJid.Domain;
        }

        /// <summary>
        /// Handles incoming messages.
        /// </summary>
        private static void messageHandler(object sender, MessageEventArgs eventArgs) {
            Console.WriteLine($"[Handler/Message] {eventArgs.Message.Body.Length} chars from {eventArgs.Jid}");
            char[] messageCharArray = eventArgs.Message.Body.ToCharArray();
            Array.Reverse(messageCharArray);
            sendChatReply(
                eventArgs.Message,
                new string(messageCharArray)
            );
        }

        /// <summary>
        /// Handles any errors thrown by the XMPP client engine.
        /// </summary>
        private static void errorHandler(object sender, ErrorEventArgs eventArgs) {
            Console.Error.WriteLine($"Error: {eventArgs.Reason}");
            Console.Error.WriteLine(eventArgs.Exception);
        }

        #endregion

        #region Message Senders

        /// <summary>
        /// Sends a chat message to the specified JID.
        /// </summary>
        /// <param name="to">The JID to send the message to.</param>
        /// <param name="message">The message to send.</param>
        private static void sendChatMessage(Jid to, string message)
        {
            //Console.WriteLine($"[Rhino/Send/Chat] Sending {message} -> {to}");
            client.SendMessage(
                to, message,
                null, null, MessageType.Chat
            );
        }
        /// <summary>
        /// Sends a chat message in direct reply to a given incoming message.
        /// </summary>
        /// <param name="originalMessage">Original message.</param>
        /// <param name="reply">Reply.</param>
        private static void sendChatReply(Message originalMessage, string reply)
        {
            //Console.WriteLine($"[Rhino/Send/Reply] Sending {reply} -> {originalMessage.From}");
            client.SendMessage(
                originalMessage.From, reply,
                null, originalMessage.Thread, MessageType.Chat
            );
        }

        #endregion
    }
}

Creating a system service with systemd

While I've got some grumblings with systemd over how it handles (or not) certain things, it's the most popular service manager on Linux systems today. By this, I mean it starts and stops the various services (like your SSH server, web server, cron) automatically, according to the rules laid out in special service files.

Since it's so popular and I keep having to write services (and look up how to do so every time), I thought I'd write a post here about it to save me the trouble :P

Bear in mind that systemd isn't the only service manager out there. Others include OpenRC, runit, upstart, and more! If you're using one of these (I'm looking to investigate using one of these on my next server rebuild), then this tutorial isn't for you. I will probably be releasing a tutorial down the road for OpenRC though, if I get around to having a server running an OS that uses it.

systemd stores its service files in /etc/systemd/system/, so to start we need to create a new file in there:

sudo sensible-editor /etc/systemd/system/service_name.service

With the new file open in your favourite editor, it's time to set out our service definition. This is done with an ini-like syntax. Here's an example:

[Unit]
Description=Gitea
After=syslog.target rsyslog.service network.target

[Service]
Type=simple
User=git
Group=git
WorkingDirectory=/srv/git/gitea
ExecStart=/srv/git/gitea/gitea web
Restart=always
Environment=USER=git HOME=/srv/git

[Install]
WantedBy=multi-user.target

The above is a simple service file for Gitea, which is the engine behind my personal git server. Let's go through each section one by one.

Firstly, the [Unit] section defines the metadata about the service. It's fairly self-explanatory actually - we set the description of the service here, and also the other services (space-separated) that we want our service to be started after, via the After property.

Next comes the [Service] section. This section specifies how it should start the service. We tell it that it's a simple service (in other words, it doesn't do anything fancy - other types are available, but we won't use them here), the user and group it should run under, the working directory of the process, and the command (and its arguments) to execute in order to start the process.

In addition, we also tell it to restart the service if it crashes, and set a few environment variables to refine the way Gitea behaves. Very cool!

The final section, [Install], simply specifies the systemd-equivalent of which run-level this service should start on. It's very interesting from a how-does-my-system-work perspective - I recommend reading this Stack Exchange answer and this article for more information - it's a topic for another post here on this blog :-)

To start this new service, do the following:

sudo systemctl daemon-reload
sudo systemctl enable service_name.service
sudo systemctl start service_name.service

This starts our new service and configures it to automatically start when the system first boots.
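If the service doesn't appear to be working, these 2 commands should reveal what's going on - systemd captures everything the service writes to its output and stores it in the journal:

sudo systemctl status service_name.service
sudo journalctl -u service_name.service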

With that taken care of, we've now got the basics of our very own service file down! We can take this further though. What if there's a secret key that we need to pass to a service on startup in an environment variable, but we don't want to specify it in the service file directly because the file is world-readable?

The answer here is a clever bit of shell scripting. Consider the following service file:

[Unit]
Description=Awesome XMPP Bot
After=network.target prosody.service

[Service]
Type=simple
User=bot
WorkingDirectory=/srv/bot
ExecStart=/srv/bot/start_service.sh
Restart=on-failure
# Other Restart options: always, on-abort, etc

[Install]
WantedBy=multi-user.target

In this case, we've defined a service file for an XMPP bot (public server directory). In order for it to connect to an XMPP server, it needs a JID (a username - formatted like an email address) and password. However, we don't want to specify these directly in the service file because they are secret!

Instead, we've specified that it should start a shell script that's located at /srv/bot/start_service.sh instead of the bot itself. Here's the contents of that shell script:

#!/usr/bin/env bash

source .xmpp_credentials

export XMPP_JID;
export XMPP_PASSWORD;

exec /usr/bin/mono Bot.exe

This simple shell script loads the contents of the file .xmpp_credentials, specifies that the XMPP_JID and XMPP_PASSWORD environment variables should be passed to any further (sub)processes, and asks for the current process to be terminated and replaced with an instance of Mono executing our bot's code that's stored in Bot.exe (this way we don't have an extra bash process sitting around doing nothing, since its job is done as soon as we start the bot itself).

In this way, we can store our precious private details in a file that we can lock down so that only the bot's user account can read it. Here's what that .xmpp_credentials file might look like:

#!/usr/bin/env bash
XMPP_JID="bot@bobsrockets.com";
XMPP_PASSWORD="sekret";

...and if I run ls -l .xmpp_credentials, I might see something like this:

-r-x------ 1 bot bot 104 Nov 10 21:27 .xmpp_credentials

Here the file permissions allow only the bot user to read and execute the file, but not modify it (sudo chown bot:bot .xmpp_credentials and sudo chmod 0500 .xmpp_credentials set these permissions for the curious).

This completes the tutorial on setting up services with systemd. We've seen how to create service files and make them start on boot (much easier than alternatives like running a command manually or using screen!). We've also learnt a simple way to hide credentials (though more advanced alternatives do exist).

Found this useful? Found a better way to do it? Comment below!


Easy AI with Microsoft.Recognizers.Text

I recently discovered that there's an XMPP client library (NuGet) for .NET that I overlooked a few months ago, and so I promptly investigated the building of a bot!

The actual bot itself needs some polishing before I post about it here, but in writing said bot I stumbled across a perfectly brilliant library - released by Microsoft of all companies - that can be used to automatically extract common data-types from a natural-language sentence.

While said library is the underpinnings of the Azure Bot Framework, it's actually free and open-source. To that end, I decided to experiment with it - and ended up writing this blog post.

Data types include (but are not limited to) dates and times (and ranges thereof), numbers, percentages, monetary amounts, email addresses, phone numbers, simple binary choices, and more!

While it also lands you with a terrific number of DLL dependencies in your build output folder, the result is totally worth it! How about pulling a DateTime from this:

in 5 minutes

or this:

the first Monday of January

or even this:

next Monday at half past six

Pretty cool, right? You can even pull multiple things out of the same sentence. For example, from the following:

The host 1.2.3.4 has been down 3 times over the last month - the last of which was from 5pm and lasted 30 minutes

It can extract an IP address (1.2.3.4), a number (3), and a few dates and times (last month, 5pm, 30 minutes).

I've written a test program that shows it in action. Here's a demo of it working:

(Can't see the asciicast above? View it on asciinema.org)

The source code is, of course, available on my personal Git server: Demos/TextRecogniserDemo

If you can't check out the repo, here's the basic gist. First, install the Microsoft.Recognizers.Text package(s) for the types of data that you'd like to recognise. Then, to recognise a date or time, do this:

List<ModelResult> result = DateTimeRecognizer.RecognizeDateTime(nextLine, Culture.English);

The awkward bit is unwinding the ModelResult to get at the actual data. The matched text is stored in the ModelResult.Text property, while the parsed data lives in ModelResult.Resolution - but that's a SortedDictionary<string, object>. The interesting entry inside it is value - though depending on the data type you're recognising, that can be an array too! The best way I've found to decipher the data types is to print the value of ModelResult.Resolution as a string to the console:

Console.WriteLine(result[0].Resolution.ToString());

The .NET runtime will helpfully convert this into something like this:

System.Collections.Generic.SortedDictionary`2[System.String,System.Object]

Very helpful. Then we can continue to drill down:

Console.WriteLine(result[0].Resolution["values"]);

This produces this:

System.Collections.Generic.List`1[System.Collections.Generic.Dictionary`2[System.String,System.String]]

Quite a mouthful, right? By cross-referencing this against the JSON (thanks, Newtonsoft.JSON!), we can figure out how to drill the rest of the way. I ended up writing myself a pair of little utility methods for dates and times:

// Note: this also needs 'using System.Text.RegularExpressions;' and 'using System.Linq;' at the top of the file
public static DateTime RecogniseDateTime(string source, out string rawString)
{
    List<ModelResult> aiResults = DateTimeRecognizer.RecognizeDateTime(source, Culture.English);
    if (aiResults.Count == 0)
        throw new Exception("Error: Couldn't recognise any dates or times in that source string.");

    /* Example contents of the below dictionary:
        [0]: {[timex, 2018-11-11T06:15]}
        [1]: {[type, datetime]}
        [2]: {[value, 2018-11-11 06:15:00]}
    */

    rawString = aiResults[0].Text;
    Dictionary<string, string> aiResult = unwindResult(aiResults[0]);
    string type = aiResult["type"];
    if (!(new string[] { "datetime", "date", "time", "datetimerange", "daterange", "timerange" }).Contains(type))
        throw new Exception($"Error: An invalid type of {type} was encountered ('datetime' expected).");

    string result = Regex.IsMatch(type, @"range$") ? aiResult["start"] : aiResult["value"];
    return DateTime.Parse(result);
}

private static Dictionary<string, string> unwindResult(ModelResult modelResult)
{
    return (modelResult.Resolution["values"] as List<Dictionary<string, string>>)[0];
}

Of course, it depends on your use-case as to precisely how you unwind it, but the above should be a good starting point.

Once I've polished the bot I've written a bit, I might post about it on here.

Found this interesting? Run into an issue? Got a neat use for it? Comment below!

Using libsodium to upgrade the comment key system

I've blogged about the comment key system I utilise on this blog to prevent spam before (see also). Today, I've given it another upgrade to make it harder for spammers to fake a comment key!

In the last post, I transformed the comment key with a number of reversible operations - including a simple XOR password system. This is, of course, very insecure - especially since an attacker knows (or can at least guess) the content of the encrypted key, making it trivial (I suspect) to guess the password used for 'encryption'.

The solution here, obviously, is to utilise a better encryption system. Since it's the 5th November and I'm not particularly keen on my website going poof like the fireworks tonight, let's do something about it! PHP 7.2+ comes with native libsodium support (those still using older versions of PHP can still follow along! Simply install the PECL module). libsodium bills itself as

A modern, portable, easy to use crypto library.

After my experiences investigating it, I'd certainly say that's true (although the official PHP documentation could do with, erm, existing). I used this documentation instead - which is quite ironic, because I have actually base64-encoded the password...

Anyway, after doing some digging I found the quick reference, which explains how you can go about accomplishing common tasks with libsodium. For my comment key system, I want to encrypt my timestamp with a password - so I wanted the sodium_crypto_secretbox() and its associated sodium_crypto_secretbox_open() functions.

This pair of functions, when used together, implement a secure symmetric key encryption system. In other words, they securely encrypt a lump of data with a password. They do, however, have 2 requirements that must be taken care of. Firstly, the password must be of a specific length. This is easy enough to accomplish, as PHP is kind enough to tell us how long it needs to be:

/**
 * Generates a new random comment key system password.
 * @return string   The base64-encoded password.
 */
function key_generate_password() {
    return base64_encode_safe(random_bytes(SODIUM_CRYPTO_SECRETBOX_KEYBYTES));
}

Easy-peasy! base64_encode_safe isn't a built-in function - it's a utility function I wrote that we'll need later. For consistency, I've used it here too. Here it is, along with its counterpart:

/**
 * Encodes the given data with base64, but replaces characters that could 
 * get mangled in transit with safer equivalents.
 * @param   mixed   $data   The data to encode.
 * @return  string  The data, encoded as transmit-safe base64.
 */
function base64_encode_safe($data) {
    return str_replace(["/", "+"], ["-", "_"], base64_encode($data));
}
/**
 * Decodes data encoded with base64_encode_safe().
 * @param   mixed   $base64     The data to decode.
 * @return  string  The data, decoded from transmit-safe base64.
 */
function base64_decode_safe($base64) {
    return base64_decode(str_replace(["-", "_"], ["/", "+"], $base64));
}

With that taken care of, we can look at the other requirement: a nonce. Standing for Number used ONCE, it's a sequence of random bytes that's used by the encryption algorithm. We don't need to keep it a secret, but we do need it to decrypt the data again at the other end, in addition to the password - and we do need to ensure that we generate a new one for every comment key. Thankfully, this is also easy to do, in a similar manner to generating a password:

$nonce = random_bytes(SODIUM_CRYPTO_SECRETBOX_NONCEBYTES);

With everything in place, we can look at upgrading the comment key generator itself. It looks much simpler now:

/**
 * Generates a new comment key.
 * Note that the comment key system isn't designed to be high-security - it's
 * mainly there to better support verification in complex setups and to
 * serve as another annoying hurdle for spammers.
 * @param  string $pass The password to encrypt with. Should be a base64-encoded string from key_generate_password().
 * @return string       A new comment key stamped at the current time.
 */
function key_generate($pass) {
    $pass = base64_decode_safe($pass);
    $nonce = random_bytes(SODIUM_CRYPTO_SECRETBOX_NONCEBYTES);
    $encrypt = sodium_crypto_secretbox(
        strval(time()), // The thing we want to encrypt
        $nonce, // The nonce
        $pass // The password to encrypt with
    );

    return base64_encode_safe($nonce.$encrypt);
}

I bundle the nonce with the encrypted data here to ensure that the system continues to be stateless (i.e. we don't need to store any state information about a user on the server). I also encode the encrypted string with base64, as the encrypted strings contain lots of nasty characters (it's actually a binary byte array I suspect). This produces keys like this:

BOqDvr26XY9s8PhlmIZMIp6xCOZyfsh6J05S4Jp2cY3bL8ccf_oRgRMldrmzKk6RrnA=
Tt8H81tkJEqiJt-RvIstA_vz13LS8vjLgriSAvc1n5iKwHuEKjW93IMITikdOwr5-NY=
5NPvHg-l1GgcQ9ZbThZH7ixfKGqAtSBr5ggOFbN_wFRjo3OeJSWAcFNhQulYEVkzukI=

They are a bit long, but not unmanageable. In theory, I could make them a bit shorter by avoiding converting the integer output from time() to a string, but in my testing it didn't make much difference. I suspect that there's some sort of minimum length to the output string for some (probably good) reason:


php > var_dump(sodium_crypto_secretbox(time(), random_bytes(24), random_bytes(32)));
string(26) "GW$:���ߌ@]�+1b��������d%"
php > var_dump(sodium_crypto_secretbox(strval(time()), random_bytes(24), random_bytes(32)));
string(26) ":_}0H �E°9��Kn�p��ͧ��"

Now that we're encrypting comment keys properly, it isn't much good unless we can decrypt them as well! Let's do that too. The first step is to decode the base64 and re-split the nonce from the encrypted data:

$pass = base64_decode_safe($pass);
$key_enc_raw = base64_decode_safe($key);
$nonce = mb_substr($key_enc_raw, 0, SODIUM_CRYPTO_SECRETBOX_NONCEBYTES, "8bit");
$key_enc = mb_substr($key_enc_raw, SODIUM_CRYPTO_SECRETBOX_NONCEBYTES, null, "8bit");

This is derived from the example code. With the 2 separated once more, we can decrypt the key:

$key_dec = sodium_crypto_secretbox_open($key_enc, $nonce, $pass);

Apparently, according to the example code on the website I linked to above, if the return value isn't a string then the decryption failed. We should make sure we handle that when returning:

if(!is_string($key_dec))
    return null;
return intval($key_dec);

That completes the decryption code. Here it is in full:

/**
 * Decodes the given key.
 * @param  string $key  The key to decode.
 * @param  string $pass The password to decode it with.
 * @return int|null     The time that the key was generated, or null if the provided key was invalid.
 */
function key_decode($key, $pass) {
    $pass = base64_decode_safe($pass);
    $key_enc_raw = base64_decode_safe($key);
    $nonce = mb_substr($key_enc_raw, 0, SODIUM_CRYPTO_SECRETBOX_NONCEBYTES, "8bit");
    $key_enc = mb_substr($key_enc_raw, SODIUM_CRYPTO_SECRETBOX_NONCEBYTES, null, "8bit");
    $key_dec = sodium_crypto_secretbox_open($key_enc, $nonce, $pass);
    if(!is_string($key_dec)) return null;
    return intval($key_dec);
}
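To check that the round trip works, a quick session in the PHP REPL (php -a) should spit the original timestamp back out. The output below is illustrative, but it should look something like this:

php > require("lib/comment_system/comment_key.php");
php > $pass = key_generate_password();
php > var_dump(key_decode(key_generate($pass), $pass));
int(1541459683)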

Explicitly returning null in key_decode() requires a small change to key_verify(), in order to prevent it from thinking that a key is really old if the decryption fails (null is treated as 0 in arithmetic operations, apparently). Let's update key_verify() to handle that:

/**
 * Verifies a key.
 * @param   string  $key        The key to decode and verify.
 * @param   string  $pass       The password to decode the key with.
 * @param   int     $min_age    The minimum required age for the key.
 * @param   int     $max_age    The maximum allowed age for the key.
 * @return  bool                Whether the key is valid or not.
 */
function key_verify($key, $pass, $min_age, $max_age) {
    $key_time = key_decode($key, $pass);
    if($key_time == null) return false;
    $age = time() - $key_time;
    return $age >= $min_age && $age <= $max_age;
}

A simple check is all that's needed. With that, the system is all updated! Time to generate a new password with the provided function and put it to use. You can do that directly from the PHP REPL (php -a):


php > require("lib/comment_system/comment_key.php");
php > var_dump(key_generate_password());
string(44) "S0bJpph5htdaKHZdVfo9FB6O4jOSzr3xZZ94c2Qcn44="

Obviously, this isn't the password I'm using on my blog :P

I've got a few more ideas I'd like to play around with to deter spammers even more whilst maintaining the current transparency - so you may see another post soon on this subject :-)

With everything complete, here's the complete script:

(Above: The full comment key system code. Can't see it? Check it out on GitHub Gist here.)

Found this useful? Got an improvement? Confused about something? Comment below!


Markov Chains Part 4: Test Data

With a shiny-new markov chain engine (see parts 1, 2, and 3), I found that I had a distinct lack of test data to put through it. Obviously this was no good at all, so I decided to do something about it.

Initially, I started with a list of HTML colours (direct link; 8.6KiB), but that didn't produce very good output:

MarkovGrams/bin/Debug/MarkovGrams.exe markov-w --wordlist wordlists/Colours.txt --length 16
errobiartrawbear
frelecteringupsy
lectrictomadolbo
vendellorazanigh
arvanginklectrit
dighoonbottlaven
onadeestersweese
ndiu
llighoolequorain
indeesteadesomiu

I see a few problems here. Firstly, it's treating each word as its own entity, where in fact I'd like it to generate n-grams on a line-by-line basis. Thankfully, this is easy enough with my new --no-split option:

MarkovGrams/bin/Debug/MarkovGrams.exe markov-w --wordlist wordlists/Colours.txt --no-split --length 16
med carrylight b
jungin pe red dr
ureelloufts blue
uamoky bluellemo
trinaterry aupph
utatellon reep g
radolitter brast
bian reep mardar
ght burnse greep
atimson-phloungu

Hrm, that's still rather unreadable. What if we make the n-grams longer by bumping the order?

MarkovGrams/bin/Debug/MarkovGrams.exe markov-w --wordlist wordlists/Colours.txt --length 16 --order 4
on fuchsia blue 
rsity of carmili
e blossom per sp
ngel
ulean red au lav
as green yellowe
indigri
ly gray aspe
disco blus
berry pine blach

Better, but it looks like it's starting the generation process from inside the middle of words. We can fix that with my new --start-uppercase option, which ensures that each output always starts with an n-gram that begins with a capital letter. Unfortunately, the wordlist is all lowercase:

air force blue
alice blue
alizarin crimson
almond
amaranth
amber
american rose
amethyst
android green
anti-flash white

This is an issue. The other problem is that with an order of 4 the choice-point ratio is dropping quite low - I got a low of just ~0.97 in my testing.

The choice-point ratio is a measure I came up with of the average number of different directions the engine could potentially go in at each step of the generation process. I'd like to keep this number consistently higher than 2, at least - to ensure a good variety of output.
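Expressed another way: if the engine takes n steps to generate a string, and c_i is the number of different n-grams it could have picked from at step i, then the choice-point ratio is the mean (c_1 + c_2 + ... + c_n) / n.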

Greener Pastures

Rather than try to fix that wordlist, let's go in search of something better. It looks like the CrossCode Wiki has a page that lists all the items in the entire game. That should do the trick! The only problem is extracting it, though. Let's use a bit of bash! We can use curl to download the HTML of the page, and then xidel to parse out the text from the <a> tags inside tables. Here's what I came up with:

curl https://crosscode.gamepedia.com/Items | xidel --data - --css "table a"

This is a great start, but we've got blank lines in there, and the list isn't sorted alphabetically (not required, but makes it look nice :P). Let's fix that:

curl https://crosscode.gamepedia.com/Items | xidel --data - --css "table a" | awk "NF > 0" | sort

Very cool. Tacking wc -l on the end of the pipe chain, I can see we've got ourselves a list of 527(!) items! Here's a selection of input lines:

Rough Branch
Raw Meat
Shady Monocle
Blue Grass
Tengu Mask
Crystal Plate
Humming Razor
Everlasting Amber
Tracker Chip
Lawkeeper's Fist

Let's run it through the engine. After a bit of tweaking, I came up with this:

cat wordlists/Cross-Code-Items.txt | MarkovGrams/bin/Debug/MarkovGrams.exe markov-w --start-uppercase --no-split --length 16 --order 3
Capt Keboossauci
Fajiz Keblathfin
King Steaf Sharp
Stintze Geakralt
Fruisty 'olipe F
Apper's TN
Prow Peptumn's C
Rus Recreetan Co
Veggiel Spiragma
Laver's Bolden M

That's quite interesting! With a choice-point ratio of ~5.6 at an order of 3, we've got a nice variable output. If we increase the order to 4, we get ~1.5 - ~2.3:

Edgy Hoo
Junk Petal Goggl
Red Metal Need C
Samurai Shel
Echor
Krystal Wated Li
Sweet Residu
Raw Stomper Thor
Purple Fruit Dev
Smokawa

It appears to be cutting off at the end though. Not sure what we can do about that (ideas welcome!). This looks interesting, but I'm not done yet. I'd like it to work on word-level too!

Going up a level

After making some pretty extensive changes, I managed to add support for this. Firstly, I needed to add support for word-level n-gram generation. Currently, I've done this with a new GenerationMode enum.

public enum GenerationMode
{
    CharacterLevel,
    WordLevel
}

Under the hood I've just used a few if statements. Fortunately, in the case of the weighted generator, only the bottom method needed adjusting:

/// <summary>
/// Generates a dictionary of weighted n-grams from the specified string.
/// </summary>
/// <param name="str">The string to n-gram-ise.</param>
/// <param name="order">The order of n-grams to generate.</param>
/// <returns>The weighted dictionary of ngrams.</returns>
private static void GenerateWeighted(string str, int order, GenerationMode mode, ref Dictionary<string, int> results)
{
    if (mode == GenerationMode.CharacterLevel) {
        for (int i = 0; i < str.Length - order; i++) {
            string ngram = str.Substring(i, order);
            if (!results.ContainsKey(ngram))
                results[ngram] = 0;
            results[ngram]++;
        }
    }
    else {
        string[] parts = str.Split(" ".ToCharArray());
        for (int i = 0; i < parts.Length - order; i++) {
            string ngram = string.Join(" ", parts.Skip(i).Take(order)).Trim();
            if (ngram.Trim().Length == 0) continue;
            if (!results.ContainsKey(ngram))
                results[ngram] = 0;
            results[ngram]++;
        }
    }
}

Full code available here. After that, the core generation algorithm was next. The biggest change - apart from adding a setting for the GenerationMode enum - was the main while loop. This was a case of updating the condition to count the number of words instead of the number of characters in word mode:

(Mode == GenerationMode.CharacterLevel ? result.Length : result.CountCharInstances(" ".ToCharArray()) + 1) < length

A simple ternary if statement did the trick. I ended up tweaking it a bit to optimise it - the above is the end result (full code available here). Instead of counting the words directly, I count the number of spaces and add 1. That CountCharInstances() method there is an extension method I wrote to simplify things. Here it is:

public static int CountCharInstances(this string str, char[] targets)
{
    int result = 0;
    for (int i = 0; i < str.Length; i++) {
        for (int t = 0; t < targets.Length; t++)
            if (str[i] == targets[t]) result++;
    }
    return result;
}

Recursive issues

After making these changes, I needed some (more!) test data. Inspiration struck: I could run it on recipe names! They've quite often got more than 1 word, but not too many. Searching for such a list proved to be a challenge though. My first thought was BBC Food, but their terms of service disallow scraping :-(

A couple of different websites later, I found the Recipes Wikia. Thousands of recipes, just ready and waiting! Time to get to work scraping them. My first stop was, naturally, the sitemap (though how I found it in the first place I really can't remember :P).

What I was greeted with, however, was a bit of a shock:

<?xml version="1.0" encoding="UTF-8"?>
<sitemapindex xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
<sitemap><loc>http://recipes.wikia.com/sitemap-newsitemapxml-NS_0-p1.xml</loc></sitemap>
<sitemap><loc>http://recipes.wikia.com/sitemap-newsitemapxml-NS_0-p2.xml</loc></sitemap>
<sitemap><loc>http://recipes.wikia.com/sitemap-newsitemapxml-NS_0-p3.xml</loc></sitemap>
<sitemap><loc>http://recipes.wikia.com/sitemap-newsitemapxml-NS_0-p4.xml</loc></sitemap>
<sitemap><loc>http://recipes.wikia.com/sitemap-newsitemapxml-NS_0-p5.xml</loc></sitemap>
<sitemap><loc>http://recipes.wikia.com/sitemap-newsitemapxml-NS_0-p6.xml</loc></sitemap>
<sitemap><loc>http://recipes.wikia.com/sitemap-newsitemapxml-NS_0-p7.xml</loc></sitemap>
<sitemap><loc>http://recipes.wikia.com/sitemap-newsitemapxml-NS_0-p8.xml</loc></sitemap>
<sitemap><loc>http://recipes.wikia.com/sitemap-newsitemapxml-NS_0-p9.xml</loc></sitemap>
<sitemap><loc>http://recipes.wikia.com/sitemap-newsitemapxml-NS_14-p1.xml</loc></sitemap>
<sitemap><loc>https://services.wikia.com/discussions-sitemap/sitemap/3355</loc></sitemap>
</sitemapindex>
<!-- Generation time: 26ms -->
<!-- Generation date: 2018-10-25T10:14:26Z -->

Like who has a sitemap of sitemaps, anyways?! We better do something about this: Time for some more bash! Let's start by pulling out those sitemaps.

curl http://recipes.wikia.com/sitemap-newsitemapxml-index.xml | xidel --data - --css "loc"

Easy peasy! Next up, we don't want that bottom one - as it appears to have a bunch of discussion pages and other junk in it. Let's strip it out before we even download it!

curl http://recipes.wikia.com/sitemap-newsitemapxml-index.xml | xidel --data - --css "loc" | grep -i NS_0

With the list of sitemaps (completely coconuts, I tell you) extracted from the sitemap, we need to download them all in turn and extract the page URLs therein. This is, unfortunately, where it starts to get nasty. While a simple xargs call downloads them all easily enough (| xargs -n1 -I{} curl "{}" should do the trick), this outputs them all to stdout, and makes it very difficult for us to parse them.

I'd like to avoid shuffling things around on the file system if possible, as this introduces further complexity. We're not out of options yet though, as we can pull a subshell out of our proverbial hat:

curl http://recipes.wikia.com/sitemap-newsitemapxml-index.xml | xidel --data - --css "loc" | grep -i NS_0 | xargs -n1 -I{} sh -c 'curl {} | xidel --data - --css "loc"'

Yay! Now we're getting a list of URLs to all the pages on the entire wiki:

http://recipes.wikia.com/wiki/Mexican_Black_Bean_Soup
http://recipes.wikia.com/wiki/Eggplant_and_Roasted_Garlic_Babakanoosh
http://recipes.wikia.com/wiki/Bathingan_bel_Khal_Wel_Thome
http://recipes.wikia.com/wiki/Lebanese_Tabbouleh
http://recipes.wikia.com/wiki/Lebanese_Hummus_Bi-tahini
http://recipes.wikia.com/wiki/Baba_Ghannooj
http://recipes.wikia.com/wiki/Lebanese_Falafel
http://recipes.wikia.com/wiki/Lebanese_Pita_Bread
http://recipes.wikia.com/wiki/Kebab_Koutbane
http://recipes.wikia.com/wiki/Moroccan_Yogurt_Dip

One problem though: we want recipe names, not URLs! Let's do something about that. Our next special guest to inhabit our bottomless hat is the illustrious sed. Armed with the mystical power of find-and-replace, we can make short work of these URLs:

... | sed -e 's/^.*\///g' -e 's/_/ /g'

The rest of the command is omitted for clarity. Here I've used two sed scripts: one to strip everything up to the last forward slash /, and another to replace the underscores _ with spaces. We're almost done, but there are a few annoying hoops left to jump through. Firstly, there are a bunch of unfortunate URL escape sequences lying around (I actually only discovered this when the engine started spitting out random ones :P). Also, there are far too many page names that contain the word Nutrient, oddly enough.

The latter is easy to deal with. A quick grep sorts it out:

... | grep -iv "Nutrient"

The former is awkward and annoying. As far as I can tell, there's no command I can call that will decode escape sequences. To this end, I wound up embedding some Python:

... | python -c "import urllib, sys; print urllib.unquote(sys.argv[1] if len(sys.argv) > 1 else sys.stdin.read()[0:-1])"

This makes the whole thing much more intimidating than it would otherwise be. (Note that the snippet above is Python 2; on Python 3 you'd reach for urllib.parse.unquote instead.) Lastly, I'd really like to sort the list and save it to a file. Compared to the above, this is chicken feed!

| sort >Dishes.txt

And there we have it. Bash is very much like lego bricks when you break it down. The trick is to build it up step-by-step until you've got something that does what you want it to :)

Here's the complete command:

curl http://recipes.wikia.com/sitemap-newsitemapxml-index.xml | xidel --data - --css "loc" | grep -i NS_0 | xargs -n1 -I{} sh -c 'curl {} | xidel --data - --css "loc"' | sed -e 's/^.*\///g' -e 's/_/ /g' | python -c "import urllib, sys; print urllib.unquote(sys.argv[1] if len(sys.argv) > 1 else sys.stdin.read()[0:-1])" | grep -iv "Nutrient" | sort >Dishes.txt

After all that effort, I think we deserve something for our troubles! With ~42K(!) lines in the resulting file (42,039 to be exact as of the last time I ran the monster above :P), the output (after some tweaking, of course) is pretty sweet:

cat wordlists/Dishes.txt | mono --debug MarkovGrams/bin/Debug/MarkovGrams.exe markov-w --words --start-uppercase --length 8
Lemon Lime Ginger
Seared Tomatoes and Summer Squash
Cabbage and Potato au
Endive stuffed with Lamb and Winter Vegetable
Stuffed Chicken Breasts in Yogurt Turmeric Sauce with
Blossoms on Tomato
Morning Shortcake with Whipped Cream Graham
Orange and Pineapple Honey
Mango Raspberry Bread
Tempura with a Southwestern
Rice Florentine with
Cabbage Slaw with Pecans and Parmesan
Pork Sandwiches with Tangy Sweet Barbecue
Tea with Lemongrass and Chile Rice
Butterscotch Mousse Cake with Fudge
Fish and Shrimp -
Cucumber Salad with Roast Garlic Avocado
Beans in the Slow
Apple-Cherry Vinaigrette Salad
California Avocado Chinese Chicken Noodle Soup with Arugula

...I really need to do something about that cutting off issue. Other than that, I'm pretty happy with the result! The choice-point ratio is really variable, but most of the time it's hovering around ~2.5-7.5, which is great! The output if I lower the order from 3 to 2 isn't too bad either:

Salata me Htapodi kai
Hot 'n' Cheese Sandwich with Green Fish and
Poisson au Feuilles de Milagros
Valentines Day Cookies with Tofu with Walnut Rice
Up Party Perfect Turkey Tetrazzini
Olives and Strawberry Pie with Iceberg Salad with
Mashed Sweet sauce for Your Mood with Dried
Zespri Gold Corn rice tofu, and Corn Roasted
California Avocado and Rice Casserole with Dilled Shrimp
Egyptian Tomato and Red Bell Peppers, Mango Fandango

This gives us a staggering average choice-point ratio of ~125! Success :D

One more level

After this, I wanted to push the limits of the engine to see what it's capable of. The obvious choice here is Shakespeare's Complete Works (~5.85MiB). Pushing this through the engine required some work - namely, optimising the pipeline as much as possible - as the initial run time of ~30 seconds was far too slow.

The Mono Profiler helped a lot here. With it, I discovered that string.StartsWith() is really slow. Like, ridiculously slow (though this is relative, since I'm calling it hundreds of thousands of times), as it's culture-aware. In our case, we don't need any of that, as it's not relevant anyway. The easiest solution is to write another extension method:

public static bool StartsWithFast(this string str, string target) {
    if (str.Length < target.Length) return false;
    return str.Substring(0, target.Length) == target;
}
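
If you're curious just how much of a difference it makes, here's a quick micro-benchmark sketch (an illustrative harness of my own, not part of the project - it assumes the StartsWithFast() extension method above is in scope, and absolute timings will vary wildly by machine and runtime):

using System;
using System.Diagnostics;

static class StartsWithBenchmark {
    public static void Main() {
        string haystack = "the quick brown fox", needle = "the quick";
        const int iterations = 1000000;
        bool sink = false; // stops the compiler optimising the loops away

        Stopwatch timer = Stopwatch.StartNew();
        for (int i = 0; i < iterations; i++)
            sink |= haystack.StartsWith(needle); // culture-aware
        Console.WriteLine("StartsWith:     {0}ms", timer.ElapsedMilliseconds);

        timer.Restart();
        for (int i = 0; i < iterations; i++)
            sink |= haystack.StartsWithFast(needle); // ordinal comparison
        Console.WriteLine("StartsWithFast: {0}ms (sink: {1})", timer.ElapsedMilliseconds, sink);
    }
}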

string.Substring() is much faster, so utilising it instead of the regular string.StartsWith() yields a perfectly gigantic boost! (Passing StringComparison.Ordinal to string.StartsWith() should achieve a similar speedup, if you'd rather avoid the extension method.) Next up, I noticed that I could probably parallelise the Linq query that builds the list of possible n-grams we can choose from next, so that it runs on all the CPU cores:

// `ngrams` maps each n-gram in the model to its weighting; convNextNgrams
// (declared in the full code) is a ConcurrentDictionary<string, double>
// staging the candidates for the next step - hence the thread-safe TryAdd()
Parallel.ForEach(ngrams, (KeyValuePair<string, double> ngramData) => {
    if (!ngramData.Key.StartsWithFast(nextStartsWith)) return;
    if (!convNextNgrams.TryAdd(ngramData.Key, ngramData.Value))
        throw new Exception("Error: Failed to add to staging ngram concurrent dictionary");
});

Again, this netted another huge gain. With this and a few other architectural changes, I was able to chop the time down to a mere ~4 seconds (for a standard 10 words)! In the future, I might experiment with selective unmanaged code via the unsafe keyword to see if I can do any better.

For now, it's fast enough to enjoy some random Shakespeare on-demand:

What should they since that to that tells me Nero
He doth it turn and and too, gentle
Ha! you shall not come hither; since that to provoke
ANTONY. No further, sir; a so, farewell, Sir
Bona, joins with
Go with your fingering,
From fairies and the are like an ass which is
The very life-blood of our blood, no, not with the
Alas, why is here-in which
Transform'd and weak'ned? Hath Bolingbroke

Very interesting. The choice-point ratios sit at ~20 and ~3 for orders 3 and 4 respectively, though I got as high as 188 for an order of 3 during my testing. Definitely plenty of test data here :P

Conclusion

My experiments took me to several other places - which, if I included them all here, would result in a post much, much longer than this! I scripted the download of several other wordlists in download.sh (direct link, 4.2KiB), if you're interested, with ready-downloaded copies in the wordlists folder of the repository.

I would advise reading the table in the README that gives credit to where I sourced each list, because of course I didn't put any of them together myself - I just wrote the script :P

Particularly of note is the Starbound list, which contains a list of all the blocks and items from the game Starbound. I might post about that one separately, as it ended up being a most interesting adventure.

In the future, I'd like to look at implementing a linguistic drift algorithm, to try and improve the output of the engine. The guy over at Here Dragons Abound has a great post on the subject, which you should definitely read if you're interested.

Found this interesting? Got an idea for another wordlist I can push though my engine? Confused by something? Comment below!

Disassembling .NET Assemblies with Mono

As part of the Component-Based Architectures module on my University course, I've been looking at what makes the .NET ecosystem tick, and how .NET assemblies (i.e. .NET .exe / .dll files) are put together. In the process, we looked at disassembling .NET assemblies into the text form of the Common Intermediate Language (CIL) that they contain. The instructions on how to do this were Windows-specific though - so I thought I'd post about the process on Linux and other platforms here.

Our tool of choice will be Mono - but before we get to that we'll need something to disassemble. Here's a good candidate for the role:

using System;

namespace SBRL.Demo.Disassembly {
    static class Program {
        public static void Main(string[] args) {
            int a = int.Parse(Console.ReadLine()), b = 10;
            Console.WriteLine(
                "{0} + {1} = {2}",
                a, b,
                a + b
            );
        }
    }
}

Excellent. Let's compile it:

csc Program.cs

This should create a new Program.exe file in the current directory. Before we get to disassembling it, it's worth mentioning how the compilation and execution process works in .NET. It's best explained with the aid of a diagram:

Left-to-right flowchart: Multiple source languages get compiled into Common Intermediate Language, which is then executed by an execution environment.

As depicted in the diagram above, source code in multiple languages gets compiled (not necessarily with the same compiler, of course) into Common Intermediate Language, or CIL. This CIL is then executed in an Execution Environment - which is usually a virtual machine (nope, not as in VirtualBox or KVM; it's not a separate operating system as such, but rather a layer of abstraction), which may (or may not) decide to compile the CIL down into native code through a process called JIT (Just-In-Time) compilation.

It's also worth mentioning here that the CIL code generated by the compiler is in binary form, as this takes up less space and is (much) faster for the computer to operate on. After all, CIL is designed to be efficient for a computer to understand - not people!

We can make it more readable by disassembling it into its textual equivalent. Doing so with Mono is actually quite simple:

monodis Program.exe >Program.il

Here I redirect the output to a file called Program.il for convenience, as my editor has a plugin for syntax-highlighting CIL. For those reading without access to Mono, here's what I got when disassembling the above program:

.assembly extern mscorlib
{
  .ver 4:0:0:0
  .publickeytoken = (B7 7A 5C 56 19 34 E0 89 ) // .z\V.4..
}
.assembly 'Program'
{
  .custom instance void class [mscorlib]System.Runtime.CompilerServices.CompilationRelaxationsAttribute::'.ctor'(int32) =  (01 00 08 00 00 00 00 00 ) // ........

  .custom instance void class [mscorlib]System.Runtime.CompilerServices.RuntimeCompatibilityAttribute::'.ctor'() =  (
        01 00 01 00 54 02 16 57 72 61 70 4E 6F 6E 45 78   // ....T..WrapNonEx
        63 65 70 74 69 6F 6E 54 68 72 6F 77 73 01       ) // ceptionThrows.

  .custom instance void class [mscorlib]System.Diagnostics.DebuggableAttribute::'.ctor'(valuetype [mscorlib]System.Diagnostics.DebuggableAttribute/DebuggingModes) =  (01 00 07 01 00 00 00 00 ) // ........

  .hash algorithm 0x00008004
  .ver  0:0:0:0
}
.module Program.exe // GUID = {D6162DAD-AD98-45B3-814F-C646C6DD7998}

.namespace SBRL.Demo.Disassembly
{
  .class private auto ansi beforefieldinit Program
    extends [mscorlib]System.Object
  {

    // method line 1
    .method public static hidebysig 
           default void Main (string[] args)  cil managed 
    {
        // Method begins at RVA 0x2050
    .entrypoint
    // Code size 47 (0x2f)
    .maxstack 5
    .locals init (
        int32   V_0,
        int32   V_1)
    IL_0000:  nop 
    IL_0001:  call string class [mscorlib]System.Console::ReadLine()
    IL_0006:  call int32 int32::Parse(string)
    IL_000b:  stloc.0 
    IL_000c:  ldc.i4.s 0x0a
    IL_000e:  stloc.1 
    IL_000f:  ldstr "{0} + {1} = {2}"
    IL_0014:  ldloc.0 
    IL_0015:  box [mscorlib]System.Int32
    IL_001a:  ldloc.1 
    IL_001b:  box [mscorlib]System.Int32
    IL_0020:  ldloc.0 
    IL_0021:  ldloc.1 
    IL_0022:  add 
    IL_0023:  box [mscorlib]System.Int32
    IL_0028:  call void class [mscorlib]System.Console::WriteLine(string, object, object, object)
    IL_002d:  nop 
    IL_002e:  ret 
    } // end of method Program::Main

    // method line 2
    .method public hidebysig specialname rtspecialname 
           instance default void '.ctor' ()  cil managed 
    {
        // Method begins at RVA 0x208b
    // Code size 8 (0x8)
    .maxstack 8
    IL_0000:  ldarg.0 
    IL_0001:  call instance void object::'.ctor'()
    IL_0006:  nop 
    IL_0007:  ret 
    } // end of method Program::.ctor

  } // end of class SBRL.Demo.Disassembly.Program
}

Very interesting. There are a few things of note here:

  • The metadata at the top of the CIL tells the execution environment a bunch of useful things about the assembly, such as the version number, the classes contained within (and their signatures), and a bunch of other random attributes.
  • An extra .ctor method has been generated for us automatically. It's the class' constructor, and it automagically calls the base constructor of the object class, since all classes are descended from object.
  • The ints a and b are boxed before being passed to Console.WriteLine. Exactly what this does and why is quite complicated, and best explained by this Stackoverflow answer.
  • We can deduce that CIL is a stack-based language from the add instruction, as it takes no arguments.

I'd recommend that you explore this on your own with your own test programs. Try changing things and see what happens!

  • Try making the Program class static
  • Try refactoring the int.Parse(Console.ReadLine()) into its own method. How is the value returned?

This isn't all, though. We can also recompile the CIL back into an assembly with the ilasm tool:

ilasm Program.il

This makes for some additional fun experiments:

  • See if you can find where b's value is defined, and change it
  • What happens if you alter the Console.WriteLine() format string so that it becomes invalid?
  • Can you get ilasm to reassemble an executable into a .dll library file?

Found this interesting? Discovered something cool? Comment below!

Geolocation Strategies

Surprisingly often you'll find yourself needing to know the physical location of a user - be it to calculate which time-zone they're in, when the sun sets, or even which building they're in for a guided tour. Such a question has multiple different answers and ways of approaching it, so I decided to blog about the ones I can think of for future reference.

Location-By-IP-Address

The first method that comes to mind is doing a lookup based on the user's IP address. This gives city / area / country level accuracy, which should be good enough for sunrise and sunset times, time-zone information, and so on. Several websites exist that provide this as a service - ipinfo.io comes to mind. You can even do this in the terminal:

curl https://ipinfo.io/ | jq --raw-output '.loc'

(You'll need jq installed to use this. Most Linux distributions come with it in their repositories!)
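
If you'd rather do the lookup programmatically than from the terminal, here's a minimal C# sketch of the same call (the regex-based JSON 'parsing' is a lazy shortcut to keep the example dependency-free - use a proper JSON parser in anything real):

using System;
using System.Net.Http;
using System.Text.RegularExpressions;

static class IpLocationDemo {
    public static void Main() {
        using (HttpClient client = new HttpClient()) {
            // ipinfo.io returns a small JSON object; "loc" holds "latitude,longitude"
            string json = client.GetStringAsync("https://ipinfo.io/json").Result;
            Match match = Regex.Match(json, "\"loc\": ?\"([^\"]+)\"");
            Console.WriteLine(match.Success ? match.Groups[1].Value : "(no location found)");
        }
    }
}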

If a web service doesn't suit you, then several downloadable databases exist. You have to pay for most of them to get the 'full' (and most accurate) version, but the free versions available seem to be pretty good up to country / time-zone level. Here are a few I've found:

Precision Matters

If you need something more precise (say, for a driving-tracking app or something), then GPS is usually the way to go, although phones are the only devices that commonly carry a suitable receiver. Despite the name, GPS actually refers to just a single satellite constellation, launched by the USA.

Others exist too, of course - Russia has GLONASS, China has BeiDou, and the EU now has Galileo, just to name a few. Most phones will support at least 2 from this list out-of-the-box. There's a great app available for Android if you'd like to play around and explore your phone's GPS capabilities.

Utilising a device's GPS support is usually just a case of finding the appropriate API (and, in some cases, permissions) for your environment. For the web, there's the Geolocation API (note that it's for HTTPS websites only). On Android, there's android.location apparently.

Bring in the big data

GPS can be slow at getting a fix, and drain your users' battery. It also has issues in indoor environments. Thankfully, there's an alternative: WiFi-based location mapping. Though it only works where there are WiFi hotspots about (urban areas are best), it is much faster and doesn't use nearly as much power.

Pioneered by Google with their Google Play Services Location API, it sends a list of the strongest WiFi access points (their MAC addresses & signal strengths) that the device can detect off to a remote server, which, after consulting its massive database of WiFi access points and their locations, responds with a triangulation of the user's approximate location.

Of course, you probably don't want to be giving Google your location on a regular basis just to save battery power. To this end, Mozilla (the company behind Firefox!) has built their own crowd-sourced location service. Creatively named the Mozilla Location Service, it's continually improved by users of Firefox on their phones, who submit their location along with the detected WiFi networks for analysis on a semi-regular (and automated) basis. There's even a map showing the current coverage!
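
To give you an idea of what such a lookup looks like on the wire, here's a sketch of a request against the Mozilla Location Service. Treat the endpoint, field names, and MAC addresses as assumptions to double-check against the official documentation - though key=test is MLS' public testing key:

using System;
using System.Net.Http;
using System.Text;

static class WifiLocationDemo {
    public static void Main() {
        // A list of nearby access points, as JSON (the MAC addresses are made up!)
        string request = @"{ ""wifiAccessPoints"": [
            { ""macAddress"": ""01:23:45:67:89:ab"", ""signalStrength"": -61 },
            { ""macAddress"": ""01:23:45:67:89:cd"", ""signalStrength"": -72 }
        ] }";

        using (HttpClient client = new HttpClient()) {
            HttpResponseMessage response = client.PostAsync(
                "https://location.services.mozilla.com/v1/geolocate?key=test",
                new StringContent(request, Encoding.UTF8, "application/json")
            ).Result;
            // Expected response shape:
            // { "location": { "lat": ..., "lng": ... }, "accuracy": ... }
            Console.WriteLine(response.Content.ReadAsStringAsync().Result);
        }
    }
}

Geolocate-style APIs like this one accept a cellTowers array in the same request too, which covers the mobile-network case discussed below.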

If WiFi isn't for you, then don't despair! The mobile network (1/2/3/4G) cell towers can also be used - if you've got the right kind of device (sorry, desktops). Databases of cell tower locations exist that can be used to triangulate a user's approximate position.

If you've got a limited indoor area that's already got WiFi that you need to cover, it shouldn't be too time-consuming to manually construct your own database instead.

For example, in a tour-guide Android app you could record the signal strengths in the pre-set locations that you want to take people in your tour, and use a Neural Network (libraries exist!) to decide which one the user is closest to by comparing the signal strengths & MAC addresses of the top 3 WiFi access points in the area.
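
A neural network is one way to do the comparison; to illustrate the underlying idea more simply, here's a sketch of a nearest-fingerprint matcher instead (all location names, MAC addresses, and signal strengths below are made up):

using System;
using System.Collections.Generic;
using System.Linq;

static class WifiFingerprinter {
    // Pre-recorded signal strengths (in dBm) per tour stop, keyed by AP MAC address
    static Dictionary<string, Dictionary<string, int>> fingerprints =
        new Dictionary<string, Dictionary<string, int>>() {
            ["Entrance Hall"] = new Dictionary<string, int>() {
                ["01:23:45:67:89:ab"] = -45,
                ["01:23:45:67:89:cd"] = -70
            },
            ["Main Gallery"] = new Dictionary<string, int>() {
                ["01:23:45:67:89:ab"] = -72,
                ["01:23:45:67:89:cd"] = -48
            }
        };

    // Return the recorded stop whose fingerprint has the smallest
    // sum-of-squared-differences against the currently observed readings
    public static string Locate(Dictionary<string, int> observed) {
        return fingerprints.OrderBy((location) =>
            observed.Sum((reading) => {
                int recorded;
                // Treat an AP missing from a fingerprint as very weak (-100dBm)
                if (!location.Value.TryGetValue(reading.Key, out recorded))
                    recorded = -100;
                return Math.Pow(reading.Value - recorded, 2);
            })
        ).First().Key;
    }

    public static void Main() {
        Console.WriteLine(Locate(new Dictionary<string, int>() {
            ["01:23:45:67:89:ab"] = -50,
            ["01:23:45:67:89:cd"] = -68
        })); // Output: Entrance Hall
    }
}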

Bluetooth to the rescue!

If that's still not good enough (say, if there aren't any WiFi networks around), you'll probably have to start deploying some of your own infrastructure. Bluetooth beacons are a good place to start. In short, a Bluetooth beacon is simply a Bluetooth device that advertises its presence on a regular basis, along with both its actual position and the transmitting power that it is currently using to advertise itself. By triangulating the signal from 2 or more beacons, the user's position can be determined.
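
The distance to each beacon is usually estimated from the received signal strength with a log-distance path-loss model. Here's a sketch - note that the path-loss exponent n is an assumption that depends heavily on the environment (roughly 2 in free space, higher indoors):

using System;

static class BeaconDistance {
    // Log-distance path-loss model: rssi ≈ txPower - 10 * n * log10(distance),
    // rearranged to solve for distance (in metres).
    //  - txPower: the beacon's advertised signal strength at 1m, in dBm
    //  - n: path-loss exponent; ~2 in free space, ~2.7-4 indoors
    public static double Estimate(double rssi, double txPower, double n = 2.5) {
        return Math.Pow(10, (txPower - rssi) / (10 * n));
    }

    public static void Main() {
        // Hypothetical reading: beacon advertises -59dBm @ 1m, we receive -75dBm
        Console.WriteLine("~{0:0.0}m away", Estimate(-75, -59)); // ~4.4m
    }
}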

Of course, this requires that you have a bunch of Bluetooth beacons to hand, which aren't exactly cheap! You could, of course, build your own with an Arduino, but there are still the parts to buy there.

Other worthy contenders

That just about covers all the main geolocation methods I can think of, though there are few others I've come across that sound interesting - but aren't particularly useful unless you've got some specific circumstances.

The first of these is eLoran. Similar in structure to GPS, eLoran has ground-based transmitter stations instead of space-based satellites. It's mainly used by ships at sea and aeroplanes, but I'm sure that it's used elsewhere too. Of course, you need a special receiver chip (and aerial, I should imagine) to use it.

Secondly, there's some research ongoing into utilising LoRa gateways to geolocate a device. I've blogged about LoRa before (and might do again now that I'm actually getting really close to having a pair of RFM95 chips working!) here and also here. There's a blog post (not written by me) explaining geolocation via LoRa here, but it's basically the same kind of thing as above with WiFi, GPS, eLoran, and mobile cell towers.

Conclusion

We've covered practically every geolocation method I can think of - from coarse IP Address-based solutions to much finer GPS and WiFi-based technologies. There are certainly more of them than I thought - each with its own benefits and drawbacks, making different technologies useful in different situations.

Found this interesting? Got another method I haven't thought of? Comment below!
