
shunction: Self-hosted Azure Functions!

It's not the 1st of April, but if it were, this post would be the perfect thing to release! It's about shunction, a parody project I've written. While it is a parody, hopefully it is of some use to someone :P

The other week, I discovered Azure Functions - thanks to Rob Miles. In the same moment, I thought that it would be fairly trivial to build a system by which you could self-host it - and shunction was born.

Built upon the lantern build engine, shunction lets you keep a folder of executable scripts and execute them at will. It supports 4 modes of operation:

  • adhoc - One-off runs
  • cron - Regularly scheduled execution
  • inotify - Execute when something changes on disk
  • http - Listen for HTTP requests

Of course, the system can be upgraded at a later date to support additional operating modes. The last 2 in the list start a persistent and long-running process, which will only exit if terminated.

Shunction is designed to run on Linux systems. Because it relies on the shebang line at the top of each script, any executable script (be it a shell script, a Python script, a Node.js script, or otherwise) is supported.
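To illustrate (the filename and contents here are entirely made up), a task script just needs a shebang on its first line and the executable bit set:

#!/usr/bin/env bash
# tasks/disk-space: print how full the root partition is
df -h / | tail -n 1

Mark it executable with chmod +x tasks/disk-space, and it's ready to go.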

If you've got multiple tasks you want your server to run and want to keep them all in one place, then shunction would allow you to do so. For example, you could keep the task scripts in a git repository that you clone down onto the server, and then use shunction to execute them as needed.

How to quickly run TUI programs via SSH

Hello, and welcome to another blog post! I hope everyone had a lovely and restful Easter.

Very often, I want to run a command on a remote machine via SSH and leave it in a terminal in 1 corner of my screen whilst I work in another terminal on that same machine.

Up until now, I've always SSHed into the machine in question and then run the command manually:

user@local:~$ ssh bob@bobsrockets.com
# .....
bob@bobsrockets.com:~$ sudo htop

This is fine, but it takes a moment to connect & set up the terminal on the remote end. What if there was a way to specify the command to run remotely?

Well, it turns out there is. SSH lets you specify the command to run on the remote server instead of the default shell:

ssh sean@seanssatellites.io apt search beanstalk

Sadly, this doesn't always yield the results expected. Colour disappears from the output, and sometimes things like htop (ssh bill@billsboosters.co.uk htop) and sudo (ssh edgar@edsengineering.eu sudo apt update) break altogether:

Error opening terminal: unknown.

I can't remember how I figured it out, but I discovered that the issue is that when you specify the command instead of letting the default shell initialise, ssh treats it as some sort of 'script mode', and doesn't allocate a pseudo-terminal on the remote machine.

Thankfully, there's a way to force it to allocate a pseudo-terminal. This is done with the -t flag:

ssh -t bob@bobsrockets.com sudo htop

This then enables interactive commands to work as intended, and causes colour to be displayed again :D
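If you find yourself running the same command on the same box a lot, you could wrap it in a little function in your ~/.bashrc. A minimal sketch (the function name and hostname are made up):

# Open htop on the given remote machine in the current terminal
remote-htop() {
    # -t forces pseudo-terminal allocation so the TUI works properly
    ssh -t "${1}" htop
}
# Usage: remote-htop bob@bobsrockets.com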

Found this useful? Got another great SSH tip? Comment below!

Fixing recursive uploads with lftp: The tale of the rogue symbolic link

I've been setting up continuous deployment recently for an application I'm working on, and as part of this process I'm uploading the release with sftp, using a restricted user account that is both chrooted (though I use a subfolder of the home directory to be extra-sure) and doesn't have shell access.

Since the application is written in PHP, I use composer to manage the server-side PHP library dependencies - which works very well. The problems start when I try to upload the whole thing to the server - so I thought I'd make a quick post here on how I fixed it.

In a previous build step, I generate an archive for the release, and put it in the continuous integration (CI) archive folder.

In the deployment phase, it unpacks this compressed archive and then uploads it to the production server with lftp, because I need to do some fiddling about that I can't do with regular sftp (anyone up for a tutorial on this? I'd be happy to write a few posts on this). However, I kept getting this weird error in the CI logs:

lftp: MirrorJob.cc:242: void MirrorJob::JobFinished(Job*): Assertion `transfer_count>0' failed.
./lantern-build-engine/lantern.sh: line 173:  5325 Aborted                 $command_name $@

Very strange indeed! Apparently, lftp isn't known for outputting especially useful error messages when used in an automated script like this. I tried everything. I rewrote, refactored, and completely turned the whole thing upside-down multiple times. This, as you might have guessed, took quite a while.

Commits aside, it was only when I refactored it to do the upload via the regular sftp command like this that it became apparent what the problem was:

sftp -i "${SSH_KEY_PATH}" -P "${deploy_ssh_port}" -o PasswordAuthentication=no "${deploy_ssh_user}@${deploy_ssh_host}" << SFTPCOMMANDS
mkdir ${deploy_root_dir}/www-new
put -r ${source_upload_dir}/* ${deploy_root_dir}/www-new
bye
SFTPCOMMANDS

Thankfully, sftp outputs much more helpful error messages. I saw this in the CI logs:

.....
Entering /tmp/tmp.ssR3j7vGhC-air-quality-upload//vendor/nikic/php-parser/bin
Entering /tmp/tmp.ssR3j7vGhC-air-quality-upload//vendor/bin
php-parse: not a regular file

The last line there instantly told me what I needed to know: It was failing to upload a symbolic link.

The solution here was simple: Unwind the symbolic links into hard links instead, and then I'll still get the benefit of a link on the local disk, but sftp will treat it as a regular file and upload a duplicate.

This is done like so:

find "${temp_dir}" -type l -exec bash -c 'ln -f "$(readlink -m "$0")" "$0"' {} \;

Thanks to SuperUser for the above (though I would have expected to find it on the Unix Stack Exchange).
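If you'd like to pick that one-liner apart, here's the same thing spread over a few lines with comments (functionally identical, just easier to read):

# For every symbolic link under the temporary directory...
find "${temp_dir}" -type l -exec bash -c '
    target="$(readlink -m "$0")"   # ...resolve the link to the absolute path of its target...
    ln -f "$target" "$0"           # ...and replace the symlink with a hard link to that target.
' {} \;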

If you'd like to see the full deployment script I've written, you can find it here.

There's actually quite a bit of context to how I ended up encountering this problem in the first place - which includes things like CI servers, no small amount of bash scripting, git servers, and remote deployment.

In the future, I'd like to make a few posts about the exploration I've been doing in these areas - perhaps along the lines of "how did we get here?", as I think they'd make for interesting reading.....

Delivering Linux 101

Achievement get: Deliver workshop!

At the beginning of my time here at University I never thought I'd be planning and leading the delivery of an entire workshop on the basics of Linux. Assessed coursework presentations have nothing on this!

Overall, I think it went rather well, actually. About a dozen people attended in total, and most people seemed to manage to get near the end of the tasks I had prepared:

  1. Installing Ubuntu
  2. Installing Mono
  3. Investigating Monodevelop

I think next time I want to better prepare for the gap when installing the operating system, as it took much longer than I expected. Perhaps choosing the "minimal" installation instead of the "normal" installation would help here?

Preparing some slides on things like the folder structure and layout, and re-ordering the slides about package management, would all help.

If I can't cut down on the installation time, pre-installed virtual machines would also work - but I'd like to keep the OS installation if possible to show that installing Ubuntu on their own machines is an easy process.

Moving forwards, I've already received a bunch of feedback on what future sessions could contain:

  1. Setting up remote access
    • This would be SSH, which is already installed & pre-setup on a server installation of Ubuntu
  2. Gaming
    • I'm unsure precisely what's meant by this. Is it installation of various games? Or maybe it's configuration of various platforms such as Steam? Perhaps someone could elaborate on it?
  3. Server installation & maintenance
    • Installation is largely similar to a desktop
    • I'd want to measure how long it takes to install, because much of the work with a server is the post-install tasks
    • Perhaps looking into a pre-installed server might be beneficial here, but security would be a slight concern

I think for anything more advanced, I'll probably go with a lab sheet-style setup instead, so that people can work at their own pace - especially since something like server configuration has many different steps to it.

I'd certainly want a goal to work towards for such a session. I've had some ideas already:

  • Setting up a web server
    • Installing Nginx
    • Writing and understanding configuration files
    • Possibly some FastCGI? PHP / Python? Probably not, what with everything else
  • Setting up a server to host a custom application
    • Writing systemd service files
    • Setting up log rotation

Common to both of these ideas would be:

  • Basic terminal skills
  • Uploading / downloading files
  • Basic hardening

I'm pretty sure I'll be doing another one of these sessions, although I'm unsure as to whether there's the demand for a repeat of this one.

If you've got any thoughts, let me know in the comments below!

Thanks also to @MoirkoB and everyone else who provided both time and resources to enable this to go ahead. Without them, I'm sure it wouldn't have happened.

If you'd like to view the slide deck I used, you can do so here:

Linux 101 Slide Deck

If you missed it, but would like to be notified of future sessions, then fill out this Google Form:

Linux 101 Overflow

Automatically rotating log files on Linux

I'm rather busy at the moment with University, but I thought I'd post about Linux's log rotating system, which I've discovered recently. This post is best read as a follow-up to my earlier post, creating a system service with systemd, in which I talk about how to write a systemd service file - and how to send the output of your program to syslog - which will put it in /var/log for you.

Log rotating is the practice of automatically renaming and moving log files around at regular intervals - and keeping only so many log files at once. For example, I might define the following rules:

  • Rotate the log files every week
  • Keep 10 log files in total
  • Compress log files past the 2nd one

This would yield me a set of log files like this, for instance:

dpkg.log
dpkg.log.1
dpkg.log.2.gz
dpkg.log.3.gz
dpkg.log.4.gz
dpkg.log.5.gz
dpkg.log.6.gz
dpkg.log.7.gz
dpkg.log.8.gz
dpkg.log.9.gz
dpkg.log.10.gz

When the logs are next rotated, the last one is deleted and all the rest are renamed sequentially - like 10 in the bed.

Compressing log files is good for saving space, but in order to read them again we have to fiddle about with zcat / gzip.
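For example, to read one of the compressed dpkg logs without decompressing it on disk:

# Decompress to standard output and page through it
zcat dpkg.log.3.gz | less
# Search the whole set, compressed or not, in one go
zgrep -i "upgrade" dpkg.log*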

The log rotating system on Linux (the logrotate tool) is a cron job that runs at regular intervals - it doesn't run as a system service. It's configured by a series of files in /etc/logrotate.d/ - 1 for each service that has log files that need rotating automatically. Here's an example definition file:

/var/log/rhinoreminds/rhinoreminds.log {
    rotate 12
    weekly
    missingok
    notifempty
    compress
    delaycompress
}

Basically you specify the filename first, and then a bunch of directives to tell it what to do inside { }. The above is for RhinoReminds, an XMPP reminder bot I've written, and defines the following:

  • Keep 12 log files in the rotation cycle
  • Rotate the logs every week
  • It's ok if the log file doesn't exist
  • Don't rotate the log file if it's empty
  • Compress log files on rotation if they aren't already
  • .....but delay this by 1 rotation cycle

Very cool! This should produce the following:

/var/log/rhinoreminds/rhinoreminds.log
/var/log/rhinoreminds/rhinoreminds.log.1
/var/log/rhinoreminds/rhinoreminds.log.2.gz
/var/log/rhinoreminds/rhinoreminds.log.3.gz
/var/log/rhinoreminds/rhinoreminds.log.4.gz
/var/log/rhinoreminds/rhinoreminds.log.5.gz
/var/log/rhinoreminds/rhinoreminds.log.6.gz
/var/log/rhinoreminds/rhinoreminds.log.7.gz
/var/log/rhinoreminds/rhinoreminds.log.8.gz
/var/log/rhinoreminds/rhinoreminds.log.9.gz
/var/log/rhinoreminds/rhinoreminds.log.10.gz
/var/log/rhinoreminds/rhinoreminds.log.11.gz
/var/log/rhinoreminds/rhinoreminds.log.12.gz
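If you don't fancy waiting a week to find out whether a new definition file works, logrotate can also be invoked by hand. The -d flag runs it in debug mode, printing what it would do without actually touching anything (the path below assumes the definition above was saved as /etc/logrotate.d/rhinoreminds):

# Dry run: show what would happen without actually rotating anything
sudo logrotate -d /etc/logrotate.d/rhinoreminds
# Force a rotation right now once you're happy with it
sudo logrotate -f /etc/logrotate.d/rhinoreminds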


TCP (Client) Networking in Pure Bash

Recently I re-remembered about /dev/tcp - a virtual path that bash handles itself, allowing you to connect directly to a remote TCP endpoint - without the use of nc / netcat / ncat.

While it only allows you to connect (no listening, sadly), it's still a great bash built-in that helps avoid awkward platform-specific issues.

Here's how you'd listen for a connection with netcat, sending as many random numbers as possible to the poor unsuspecting client:

netcat -l 0.0.0.0 6666 </dev/urandom

Here's how you'd traditionally connect to that via netcat:

netcat X.Y.Z.W 6666 | pv >/dev/null

The pv command there is not installed by default, but is a great tool that shows the amount of data flowing through a pipe. It's available in the repositories for most Linux distributions without any special configuration required - so sudo apt install pv should be enough on Debian-based distributions.

Now, let's look at how we'd do this with pure bash:

pv >/dev/null </dev/tcp/X.Y.Z.W/6666

Very neat! We've been able to eliminate an extra child process. The question is though: how do they compare performance-wise? Well, that depends on how we measure it. In my case, I measured a single connection, downloading data as fast as it can for 60 seconds.

Another test would be to open many connections and download lots of small files. While I haven't done that here, I theorise that the pure-bash method would win out, as it doesn't have to spawn lots of subprocesses.

In my test, I did this:

# Traditional method
timeout 60 nc X.Y.Z.W 6666 | pv >/dev/null
# Pure-Bash method
timeout 60 pv >/dev/null </dev/tcp/X.Y.Z.W/6666

The timeout command kills the process after a given number of seconds. The server I connected to was just this:

while true; do nc -l 0.0.0.0 6666 </dev/urandom; done

Running the above test, I got the following output:

$ timeout 60 pv >/dev/null </dev/tcp/172.16.230.58/6666
 652MiB 0:00:59 [11.2MiB/s] [                                      <=>         ]
$ timeout 60 nc 172.16.230.58 6666 | pv >/dev/null
 599MiB 0:01:00 [11.1MiB/s] [                                     <=>          ]

Method         Total Data Transferred
Traditional    599MiB
Pure Bash      652MiB

As it turns out, the pure bash method is apparently faster - by ~8.8%. I think this might have something to do with the lack of the additional sub-process, or some other optimisation that bash can apply when doing the TCP networking itself.
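As an aside, the /dev/tcp redirection works in both directions, so you can write to the connection as well as read from it. Here's a quick sketch of fetching a web page with nothing more than bash redirections and cat (example.com and the port are just placeholders):

# Open a bidirectional TCP connection on file descriptor 3
exec 3<>/dev/tcp/example.com/80
# Write a minimal HTTP request to it...
printf 'GET / HTTP/1.1\r\nHost: example.com\r\nConnection: close\r\n\r\n' >&3
# ...read the response back out...
cat <&3
# ...and close the descriptor when we're done
exec 3>&-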

Found this interesting? Got a cool use for it? Discovered another awesome bash built-in? Comment below!

Troubleshooting my dotnet setup

I've recently been setting up dotnet on my Artix Linux laptop for my course at University. While I'm unsure precisely what dotnet is intended to do (and how it's different to Mono), my current understanding is that it's an implementation of .NET Core intended for developing and running ASP.NET web applications (there might be more on ASP.NET in a later 'first impressions' post soon-ish).

While the distribution is somewhat esoteric (it's based on Arch Linux), I've run into a number of issues with the installation process and getting Monodevelop to detect it - and if what I've read whilst researching said issues is anything to go by, they aren't confined to a single operating system.

Since I haven't been able to find any concrete instructions on how to troubleshoot the installation for the specific issues I've been facing, I thought I'd blog about it to help others out.

Installation on Arch-based distributions is actually pretty easy. I did this:

sudo pacman -S dotnet-sdk

Easy!

Monodevelop + dotnet = headache?

After this, I tried opening Monodevelop - and found an ominous message saying something along the lines of ".NET Core SDK 2.2 is not installed". Strange. If I try dotnet in the terminal, I get something like this:

$ dotnet
Usage: dotnet [options]
Usage: dotnet [path-to-application]

.....

Turns out that it's a known bug. Sadly, there doesn't appear to be much interest in fixing it - and neither does there appear to be much information about how Monodevelop actually detects a dotnet installation.

Thankfully, I've deciphered the bug report and done all the work for you :P The bug report appears to suggest that Monodevelop expects dotnet to be installed to the directory /usr/share/dotnet. My system didn't install it there, so I went looking to find where it had installed it to. Doing this:

whereis dotnet

Yielded just /usr/bin/dotnet. My first thought was that this was a symbolic link to the binary in the actual install directory, so I tried this to see:

ls -l /usr/bin/dotnet

Sadly, it was not to be. Instead of a symbolic link, I found what appeared to be the binary itself - which could also have been a hard link. Not to be outdone, I tried a more brute-force approach to find it:

sudo find / -mount -type d -name "dotnet"

Success! This gave a list of all directories on my main / root partition that are called dotnet. From there, it was easy to pick out that it actually installed it to /opt/dotnet.

Instead of moving it from the installation directory and potentially breaking my package manager, I instead opted to create a new symbolic link:

sudo ln -s /opt/dotnet /usr/share/dotnet

This fixed the issue, allowing Monodevelop to correctly detect my installation of dotnet.

Templates

Thinking my problems were over, I went to create a new dotnet project following a tutorial. Unfortunately, I ran into a number of awkward and random errors - some of which kept changing from run to run!

I created the project with the dotnet new subcommand like this:

dotnet new --auth individual mvc

Apparently, the template projects generated by the dotnet new subcommand are horribly broken. To this end, I re-created my project through Monodevelop with the provided inbuilt templates. I was met with considerably more success here than I was with dotnet new.

HTTPS errors

The last issue I've run into is a large number of errors relating to the support for HTTPS that's built-in to the dotnet SDK.

Unfortunately, I haven't been able to resolve these. To this end, I disabled HTTPS support. Although this sounds like a bad idea, my reasoning is that in production, I would always have the application server itself run plain-old HTTP - and put it behind a reverse-proxy like Nginx that provides HTTPS, as this separates concerns. It also allows me to have just a single place that implements HTTPS support - and a single place that I have to constantly tweak and update to keep the TLS configuration secure.

To this end, there are 2 things you've got to do to disable HTTPS support. Firstly, in the file Startup.cs, find and comment out the following line:

app.UseHttpsRedirection();

In a production environment, you'll probably have your reverse-proxy configured to do this HTTP to HTTPS redirection anyway - another instance of separating concerns.

The other thing to do is to alter the endpoint and protocol that it listens on. Right click on the project name in the solution pane, click "Options", then "Run -> Configurations -> Default", then the "ASP.NET Core" tab, and remove the s in https in the "App URL" box like this:

A screenshot of the above option that needs changing.

By the looks of things, you'll have to do this 2nd step on every machine you develop on - unless you also untick the "user-specific" box (careful you don't include any passwords etc. in the environment variables in the opposite tab in that case).

You may wish to consider creating a new configuration that has HTTPS disabled if you want to avoid changing the default configuration.
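Another option, rather than editing the run configuration on every machine, is to override the listening address with the ASPNETCORE_URLS environment variable when starting the compiled application from a terminal (the path and port below are hypothetical - adjust them to suit your project):

# Listen on plain HTTP only; path and port are just examples
ASPNETCORE_URLS=http://localhost:5000 dotnet ./bin/Debug/netcoreapp2.2/YourApp.dll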

Found this useful? Got a related issue you've managed to fix? Comment below!

Setup your very own VPN in 10 minutes flat

Hey! Happy new year :-)

I've been looking to setup a personal VPN for a while, and the other week I discovered a rather brilliant project called PiVPN, which greatly simplifies the process of setting one up - and managing it thereafter.

It's been working rather well so far, so I thought I'd post about it so you can set one up for yourself too. But first though, we should look at the why. Why a VPN? What does it do?

Basically, a VPN lets you punch a great big hole in the network that you're connected to and appear as if you're actually on a network elsewhere. The extent to which this is the case varies depending on the purpose (for example, a University or business might set up a VPN that allows members to access internal resources, but doesn't route all traffic through the VPN), but the general principle is the same.

It's best explained with a diagram. Imagine you're at a Café:

Everyone on the Café's WiFi can see the internet traffic you're sending out. If any of it is unencrypted, then they can additionally see the content of said traffic - e.g. emails you send, web pages you load, etc. Even if it's encrypted, statistical analysis can reveal which websites you're visiting and more.

If you don't trust a network that you're connected to, then by utilising a VPN you can create an encrypted tunnel to another location that you do trust:

Then, all that the other users of the Café's WiFi will see is an encrypted stream of packets - all heading for the same destination. All they'll know is roughly how much traffic you're sending and receiving, but not to where.

This is the primary reason that I'd like my own VPN. I trust the network I've got setup in my own house, so it stands to reason that I'd like to setup a VPN server there, and pretend that my devices when I'm out and about are still at home.

In theory, I should be able to access the resources on my home network too when I'm using such a VPN - which is an added bonus. Other reasons do exist for using a VPN, but I won't discuss them here.

In terms of VPN server software, I've done a fair amount of research into the different options available. My main criteria are as follows:

  • Fairly easy to install
  • Easy to understand what it's doing once installed (transparency)
  • Easy to manage

The 2 main technologies I came across were OpenVPN and IPSec. Each has its own strengths & weaknesses. An IPSec VPN is, apparently, more efficient - especially since it executes on the client in kernel-space instead of user-space. It's a lighter protocol, too - leading to less overhead. It's also much more likely to be detected and blocked when travelling through strict firewalls, making me slightly unsure about it.

OpenVPN, on the other hand, executes entirely in user-space on both the client and the server - leading to a slightly greater overhead (especially with the mitigations for the recent Spectre & Meltdown hardware bugs). It does, however, use TLS (though over UDP by default). This characteristic makes it much more likely it'll slip through stricter firewalls. I'm unsure if that's a quality that I'm actually after or not.

Ultimately, it's the ease of management that points the way to my final choice. Looking into it, with both choices there's complex certificate management to be done whenever you want to add a new client to the VPN. For example, with StrongSwan (an open-source IPSec VPN program), you've got to generate a number of certificates with a chain of rather long commands - and the users themselves have passwords stored in plain text in a file!

While I've got no problem with reading and understanding such commands, I do have a problem with rememberability. If I want to add a new client, how easy is that to do? How long would I have to spend re-reading documentation to figure out how to do it?

Sure, I could write a program to manage the configuration files for me, but that would also require maintenance - and probably take much longer than I anticipate to write.

I forget where I found it, but it is for this reason that I ultimately decided to choose PiVPN. It's a set of scripts that sets up and manages an OpenVPN installation. To this end, it provides a single command - pivpn - that can be used to add, remove, and list clients and their statistics. With its concise help text, it makes it easy to figure out how to perform common tasks utilising existing terminal skills by conforming to established CLI interface norms.

If you want to install it yourself, then simply do this:

curl -L https://install.pivpn.io | bash

Of course, simply downloading and executing a random script from the Internet is never a good idea. Let's read it first:

curl -L https://install.pivpn.io | less

Once you're happy that it's not going to do anything malign to your system, proceed with the installation by executing the 1st command. It should guide you through a number of screens. Some important points I ran into:

  • The static IP address it talks about is the IP address of your server on the local network. The installation asks about the public IP address in a later step. If you've already got a static IP setup on your server (and you probably have), then you don't need to worry about this.
  • It asks you to install and enable unattended-upgrades. You should probably do this, but I ended up skipping this - as I've already got apticron setup and sending me regular emails - as I rather like to babysit the upgrade of packages on the main machines I manage. I might look into unattended-upgrades in the future if I acquire more servers than are comfortable to manage this way.
  • Make sure you fully update your system before running the installation. I use this command: sudo apt update && sudo apt-get dist-upgrade && sudo apt-get autoclean && sudo apt-get autoremove
  • Changing the port of the VPN isn't a bad idea, since PiVPN will automatically assemble .ovpn configuration files for you. I didn't end up doing this to start with, but I can always change it in the NAT rule I configured on my router later.
  • Don't forget to allow OpenVPN through your firewall! For ufw users (like me), then it's something like sudo ufw allow <port_number>/udp.
  • Don't forget to setup a NAT rule / port forwarding on your router if said server doesn't have a public IP address (if it's IPv4 it probably doesn't). If you're confused on this point, comment below and I'll blog about it. It's..... a complicated topic.
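Once the installer has finished, day-to-day management is all done through the pivpn command mentioned earlier. From memory the common operations look something like this - treat the exact subcommand names as an assumption and check pivpn's built-in help text on your version:

pivpn add       # create a new client and generate its .ovpn profile
pivpn list      # list the clients you've created
pivpn revoke    # revoke a client's certificate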

If you'd like a more in-depth guide to setting up PiVPN, then I can recommend this guide. It's a little bit dated (PiVPN now uses elliptic-curve cryptography by default), but still serves to illustrate the process pretty well.

If you're confused about some of the concepts I've presented here - leave a comment below! I'm happy to explain them in more detail. Who knows - I might end up writing another blog post on the subject....

Question: How do you recover a deleted file that's been overwritten?

Answer: With the greatest of difficulty.

The blog post following this one in a few days time is, ironically, about backing things up. However, I actually ended up losing the entire post during the upload process to my server (it replaced both the source and destination files with an empty file!). I'd already saved it to disk, but still almost lost it anyway......

Recovery of deleted files is awkward at best. It relies on the fact that when you 'delete' something, it doesn't erase it from disk at all - it just deallocates the sectors on disk that it was taking up and re-enters them into the free pool of space - which is then re-used at will.

The most important thing to remember when you've just lost something is to not touch anything. Shut down your computer, and, if you're not confident enough yourself, call someone who knows what they're doing to help you out.

The best way to recover a file is to boot into a live CD. This is a CD (or flash drive) that holds an entire operating system (or several!). This way, no additional writing is done to the disk containing the deleted file, which could otherwise corrupt it further.

After fiddling about with this (I had to update my bootable flash drive, as Ubuntu 15.10 is out of support and I couldn't download the extundelete tool, which I'll mention shortly), I found that I'd hit a dead-end.

I was using the extundelete tool (sudo apt install extundelete), and it claimed that it couldn't restore the file because it had been reallocated. Here's the command I used:

sudo extundelete --restore-file /absolute/path/to/file /dev/sda7

I suspect that it was getting confused because I had a file by that name on disk that was now empty.

Anyway, after doing something else for a while, I had an idea. Since my blog posts are just text files on disk, shouldn't it be on my disk somewhere? Could I locate it at all?

As it turns out, the answer is yes. Remembering a short sentence from the post I'd just written, I started a brute-force search of my disk:

sudo dd if=/dev/sda7 | strings | grep -i "AWS S3"

This has several components to it - Explain Shell is great at providing an explanation of each bit in turn. Here's a short summary:

  • dd - This reads in the entire contents of a partition and pushes it into the following command. Find the partition name with lsblk.
  • strings - This extracts all runs of printable characters from the input stream.
  • grep - This searches (case-insensitively with -i) for a specified string in the input

I started to get results - a whole line from the blog post that had supposedly been deleted and overwritten! This wasn't really enough though. Taking a longer snippet to reduce the noise in the output, I tried again:

sudo dd if=/dev/sda7 | strings | grep -i -C100 "To start, we'll need an AWS S3 bucket"

This time, I added -C100. This tells grep that I want to see 100 lines before and after any lines that contain the specified search string.

With this, I managed to recover enough of the blog post to quickly re-edit and upload it. It did appear to remove blank lines and the back-ticks at the end of a code block, but they are easy to replace.

Note to self: Always copy first when crossing file system boundaries, and delete later. Don't move all in one go!

Write an XMPP bot in half an hour

Recently I've looked at using AI to extract key information from natural language, and creating a system service with systemd. The final piece of the puzzle is to write the bot itself - and that's what I'm posting about today.

Since not only do I use XMPP for instant messaging already but it's an open federated standard, I'll be building my bot on top of it for maximum flexibility.

To talk over XMPP programmatically, we're going to need a library. Thankfully, I've located just such a library which appears to work well enough, called S22.XMPP. Especially nice is the comprehensive documentation that makes development go much more smoothly.

With our library in hand, let's begin! Our first order of business is to get some scaffolding in place to parse out the environment variables we'll need to login to an XMPP account.

using System;
using System.Linq;
using System.Threading;
using System.Threading.Tasks;

using S22.Xmpp;
using S22.Xmpp.Client;
using S22.Xmpp.Im;

namespace XmppBotDemo
{
    public static class MainClass
    {
        // Needed later
        private static XmppClient client;

        // Settings
        private static Jid ourJid = null;
        private static string password = null;

        public static int Main(string[] args)
        {
            // Read in the environment variables
            ourJid = new Jid(Environment.GetEnvironmentVariable("XMPP_JID"));
            password = Environment.GetEnvironmentVariable("XMPP_PASSWORD");

            // Ensure they are present
            if (ourJid == null || password == null) {
                Console.Error.WriteLine("XMPP Bot Demo");
                Console.Error.WriteLine("=============");
                Console.Error.WriteLine("");
                Console.Error.WriteLine("Usage:");
                Console.Error.WriteLine("    ./XmppBotDemo.exe");
                Console.Error.WriteLine("");
                Console.Error.WriteLine("Environment Variables:");
                Console.Error.WriteLine("    XMPP_JID         Required. Specifies the JID to login with.");
                Console.Error.WriteLine("    XMPP_PASSWORD    Required. Specifies the password to login with.");
                return 1;
            }

            // TODO: Connect here           

            return 0;
        }
    }
}

Excellent! We're reading in & parsing 2 environment variables: XMPP_JID (the username), and XMPP_PASSWORD. It's worth noting that you can call these environment variables anything you like! I chose those names as they describe their contents well. It's also worth mentioning that it's important to use environment variables for secrets: passing them as command-line arguments causes them to be much more visible to other users of the system!
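When it comes to actually running the bot, the environment variables can be set inline for just that one command. A quick sketch, assuming the Mono runtime and a made-up JID and password:

XMPP_JID="bot@bobsrockets.com" XMPP_PASSWORD="replace-me" \
    mono ./XmppBotDemo.exe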

Let's connect to the XMPP server with our newly read-in credentials:

// Create the client instance
client = new XmppClient(ourJid.Domain, ourJid.Node, password);

client.Error += errorHandler;
client.SubscriptionRequest += subscriptionRequestHandler;
client.Message += messageHandler;

client.Connect();

// Wait for a connection
while (!client.Connected)
    Thread.Sleep(100);

Console.WriteLine($"[Main] Connected as {ourJid}.");

// Wait forever.
Thread.Sleep(Timeout.Infinite);

// TODO: Automatically reconnect to the server when we get disconnected.

Cool! Here, we create a new instance of the XMPPClient class, and attach 3 event handlers, which we'll look at later. We then connect to the server, wait until the connection completes, and then write a message to the console. It looks like S22.Xmpp spins up a new thread, so unfortunately we can't catch any errors it throws with a traditional try-catch statement. Instead, we'll have to be really careful to catch any exceptions we throw accidentally - otherwise we'll get disconnected!

It does appear that XmppClient catches some errors though, which trigger the Error event - so we should attach an event handler to that.

/// <summary>
/// Handles any errors thrown by the XMPP client engine.
/// </summary>
private static void errorHandler(object sender, ErrorEventArgs eventArgs) {
    Console.Error.WriteLine($"Error: {eventArgs.Reason}");
    Console.Error.WriteLine(eventArgs.Exception);
}

Before a remote contact is able to talk to our bot, they will send us a subscription request - which we'll need to either accept or reject. This is also done via an event handler. It's the SubscriptionRequest one this time:

/// <summary>
/// Handles requests to talk to us.
/// </summary>
/// <remarks>
/// Only allow people to talk to us if they are on the same domain we are.
/// You probably don't want this for production, but for developmental purposes
/// it offers some measure of protection.
/// </remarks>
/// <param name="from">The JID of the remote user who wants to talk to us.</param>
/// <returns>Whether we're going to allow the requester to talk to us or not.</returns>
public static bool subscriptionRequestHandler(Jid from) {
    Console.WriteLine($"[Handler/SubscriptionRequest] {from} is requesting access, I'm saying {(from.Domain == ourJid.Domain?"yes":"no")}");
    return from.Domain == ourJid.Domain;
}

This simply allows anyone on our own domain to talk to us. For development purposes this will offer us some measure of protection, but for production you should probably implement a whitelisting or logging system here.

The other interesting thing we can do here is send a user a chat message to either welcome them to the server, or explain why we rejected their request. To do this, we need to write a pair of utility methods, as sending chat messages with S22.Xmpp is somewhat over-complicated:

#region Message Senders

/// <summary>
/// Sends a chat message to the specified JID.
/// </summary>
/// <param name="to">The JID to send the message to.</param>
/// <param name="message">The message to send.</param>
private static void sendChatMessage(Jid to, string message)
{
    //Console.WriteLine($"[Bot/Send/Chat] Sending {message} -> {to}");
    client.SendMessage(
        to, message,
        null, null, MessageType.Chat
    );
}
/// <summary>
/// Sends a chat message in direct reply to a given incoming message.
/// </summary>
/// <param name="originalMessage">Original message.</param>
/// <param name="reply">Reply.</param>
private static void sendChatReply(Message originalMessage, string reply)
{
    //Console.WriteLine($"[Bot/Send/Reply] Sending {reply} -> {originalMessage.From}");
    client.SendMessage(
        originalMessage.From, reply,
        null, originalMessage.Thread, MessageType.Chat
    );
}

#endregion

The difference between these 2 methods is that one sends a reply directly to a message that we've received (like a threaded reply), and the other simply sends a message directly to another contact.

Now that we've got all of our ducks in a row, we can write the bot itself! This is done via the Message event handler. For this demo, we'll write a bot that echoes any messages sent to it back in reverse:

/// <summary>
/// Handles incoming messages.
/// </summary>
private static void messageHandler(object sender, MessageEventArgs eventArgs) {
    Console.WriteLine($"[Bot/Handler/Message] {eventArgs.Message.Body.Length} chars from {eventArgs.Jid}");
    char[] messageCharArray = eventArgs.Message.Body.ToCharArray();
    Array.Reverse(messageCharArray);
    sendChatReply(
        eventArgs.Message,
        new string(messageCharArray)
    );
}

Excellent! That's our bot complete. The full program is at the bottom of this post.

Of course, this is a starting point - not an ending point! A number of issues with this demo stand out. There isn't a whitelist, and putting the whole program in a single file doesn't sound like a good idea. The XMPP logic should probably be refactored out into a separate file, in order to keep the input settings parsing separate from the bot itself.

Other issues that probably need addressing include better error handling and more - but fixing them all here would complicate the example rather.

Edit: The code is also available in a git repository if you'd like to clone it down and play around with it :-)

Found this interesting? Got a cool use for it? Still confused? Comment below!

Complete Program

using System;
using System.Linq;
using System.Threading;
using System.Threading.Tasks;
using S22.Xmpp;
using S22.Xmpp.Client;
using S22.Xmpp.Im;

namespace XmppBotDemo
{
    public static class MainClass
    {
        private static XmppClient client;
        private static Jid ourJid = null;
        private static string password = null;

        public static int Main(string[] args)
        {
            // Read in the environment variables
            ourJid = new Jid(Environment.GetEnvironmentVariable("XMPP_JID"));
            password = Environment.GetEnvironmentVariable("XMPP_PASSWORD");

            // Ensure they are present
            if (ourJid == null || password == null) {
                Console.Error.WriteLine("XMPP Bot Demo");
                Console.Error.WriteLine("=============");
                Console.Error.WriteLine("");
                Console.Error.WriteLine("Usage:");
                Console.Error.WriteLine("    ./XmppBotDemo.exe");
                Console.Error.WriteLine("");
                Console.Error.WriteLine("Environment Variables:");
                Console.Error.WriteLine("    XMPP_JID         Required. Specifies the JID to login with.");
                Console.Error.WriteLine("    XMPP_PASSWORD    Required. Specifies the password to login with.");
                return 1;
            }

            // Create the client instance
            client = new XmppClient(ourJid.Domain, ourJid.Node, password);

            client.Error += errorHandler;
            client.SubscriptionRequest += subscriptionRequestHandler;
            client.Message += messageHandler;

            client.Connect();

            // Wait for a connection
            while (!client.Connected)
                Thread.Sleep(100);

            Console.WriteLine($"[Main] Connected as {ourJid}.");

            // Wait forever.
            Thread.Sleep(Timeout.Infinite);

            // TODO: Automatically reconnect to the server when we get disconnected.

            return 0;
        }

        #region Event Handlers

        /// <summary>
        /// Handles requests to talk to us.
        /// </summary>
        /// <remarks>
        /// Only allow people to talk to us if they are on the same domain we are.
        /// You probably don't want this for production, but for developmental purposes
        /// it offers some measure of protection.
        /// </remarks>
        /// <param name="from">The JID of the remote user who wants to talk to us.</param>
        /// <returns>Whether we're going to allow the requester to talk to us or not.</returns>
        public static bool subscriptionRequestHandler(Jid from) {
            Console.WriteLine($"[Handler/SubscriptionRequest] {from} is requesting access, I'm saying {(from.Domain == ourJid.Domain?"yes":"no")}");
            return from.Domain == ourJid.Domain;
        }

        /// <summary>
        /// Handles incoming messages.
        /// </summary>
        private static void messageHandler(object sender, MessageEventArgs eventArgs) {
            Console.WriteLine($"[Handler/Message] {eventArgs.Message.Body.Length} chars from {eventArgs.Jid}");
            char[] messageCharArray = eventArgs.Message.Body.ToCharArray();
            Array.Reverse(messageCharArray);
            sendChatReply(
                eventArgs.Message,
                new string(messageCharArray)
            );
        }

        /// <summary>
        /// Handles any errors thrown by the XMPP client engine.
        /// </summary>
        private static void errorHandler(object sender, ErrorEventArgs eventArgs) {
            Console.Error.WriteLine($"Error: {eventArgs.Reason}");
            Console.Error.WriteLine(eventArgs.Exception);
        }

        #endregion

        #region Message Senders

        /// <summary>
        /// Sends a chat message to the specified JID.
        /// </summary>
        /// <param name="to">The JID to send the message to.</param>
        /// <param name="message">The message to send.</param>
        private static void sendChatMessage(Jid to, string message)
        {
            //Console.WriteLine($"[Rhino/Send/Chat] Sending {message} -> {to}");
            client.SendMessage(
                to, message,
                null, null, MessageType.Chat
            );
        }
        /// <summary>
        /// Sends a chat message in direct reply to a given incoming message.
        /// </summary>
        /// <param name="originalMessage">Original message.</param>
        /// <param name="reply">Reply.</param>
        private static void sendChatReply(Message originalMessage, string reply)
        {
            //Console.WriteLine($"[Rhino/Send/Reply] Sending {reply} -> {originalMessage.From}");
            client.SendMessage(
                originalMessage.From, reply,
                null, originalMessage.Thread, MessageType.Chat
            );
        }

        #endregion
    }
}