
Setting up a Mosquitto MQTT server

I recently found myself setting up a mosquitto instance (yep, for this) due to a migration we're in the middle of, and it got quite interesting, so I thought I'd post about it here. This post is also partly documentation of what I did and why, just in case future people come across it and wonder how it's set up, though I have tried to make it fairly self-documenting.

At first, I started by doing sudo apt install mosquitto and seeing if it would work. I can't remember if it did or not, but it certainly didn't after I played around with the configuration files. At that point, I decided that enough was enough and turned the entire configuration upside-down. First up, I needed to disable the existing SysV init-based service that ships with the mosquitto package:

sudo systemctl stop mosquitto # Just in case
sudo systemctl disable mosquitto

Next, I wrote a new systemd service file (saved to /etc/systemd/system/mosquitto-broker.service, so that it doesn't conflict with the unit provided by the package):

[Unit]
Description=Mosquitto MQTT Broker
After=syslog.target rsyslog.target network.target

[Service]
Type=simple
PIDFile=/var/run/mosquitto/mosquitto.pid
User=mosquitto

PermissionsStartOnly=true
ExecStartPre=-/bin/mkdir /run/mosquitto
ExecStartPre=/bin/chown -R mosquitto:mosquitto /run/mosquitto

ExecStart=/usr/sbin/mosquitto --config-file /etc/mosquitto/mosquitto.conf
ExecReload=/bin/kill -s HUP $MAINPID

StandardOutput=syslog
StandardError=syslog
SyslogIdentifier=mosquitto


[Install]
WantedBy=multi-user.target

This is broadly similar to the service file I developed in my earlier tutorial post, but it's slightly more complicated.

For one, I use PermissionsStartOnly=true and a series of ExecStartPre directives to allow mosquitto to create a PID file in a directory in /run. /run is a special directory on Linux for PID files and other such things, but normally only root can modify it. mosquitto will be running under the mosquitto user (surprise surprise), so we need to create a subdirectory for it and chown it so that it has write permissions.

A PID file is just a regular file on disk that contains the PID (Process IDentifier) number of the primary process of a system service. System service managers such as systemd and OpenRC use this number to manage the health of the service while it's running and send it various signals (such as to ask it to reload its configuration file).
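For example, asking mosquitto to reload its configuration by hand looks like this - which is essentially what the ExecReload directive above boils down to:

kill -s HUP "$(cat /run/mosquitto/mosquitto.pid)"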

With this in place, I then added an rsyslog definition at /etc/rsyslog.d/mosquitto.conf to tell it where to put the log files:

if $programname == 'mosquitto' then /var/log/mosquitto/mosquitto.log
if $programname == 'mosquitto' then stop

The $programname here needs to match the SyslogIdentifier directive set in the systemd service file above.

Thinking about it, I should probably check that a log rotation definition file is also in place.
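If one isn't, a minimal logrotate definition at /etc/logrotate.d/mosquitto might look something like this (the rotation schedule below is a guess on my part, not what's actually on the server):

/var/log/mosquitto/mosquitto.log {
    weekly
    rotate 4
    compress
    missingok
    notifempty
    postrotate
        # Ask rsyslog to reopen its log files after rotation
        invoke-rc.d rsyslog rotate > /dev/null 2>&1 || true
    endscript
}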

Just in case, I then chowned the pre-existing log files to ensure that rsyslog could read from & write to them:

sudo chown -R syslog: /var/log/mosquitto

Then, I filled out /etc/mosquitto/mosquitto.conf with a few extra directives and restarted the service. Here's the full configuration file:

# Place your local configuration in /etc/mosquitto/conf.d/
#
# A full description of the configuration file is at
# /usr/share/doc/mosquitto/examples/mosquitto.conf.example

# NOTE: We can't use tab characters here, as mosquitto doesn't like it.

pid_file /run/mosquitto/mosquitto.pid

# Persistence configuration
persistence true
persistence_location /var/lib/mosquitto/


# Not a file today, thanks
# Log files will actually end up at /var/log/mosquitto/mosquitto.log, but will go via syslog
# See /etc/rsyslog.d/mosquitto.conf
#log_dest file /var/log/mosquitto/mosquitto.log
log_dest syslog


include_dir /etc/mosquitto/conf.d


# Documentation: https://mosquitto.org/man/mosquitto-conf-5.html

# Require a username / password to connect
allow_anonymous false
# ....which are stored in the following file
password_file /etc/mosquitto/mosquitto_users

# Make a log entry when a client connects & disconnects, to aid debugging
connection_messages true

# TLS configuration
# Disabled at the moment, since we don't yet have a letsencrypt cert
# NOTE: I don't think that the sensors currently connect over TLS. We should probably fix this.
# TODO: Point these at letsencrypt
#cafile /etc/mosquitto/certs/ca.crt
#certfile /etc/mosquitto/certs/hostname.localdomain.crt
#keyfile /etc/mosquitto/certs/hostname.localdomain.key

As you can tell, I've still got some work to do here - namely the TLS setup. It's a bit of a chicken-and-egg problem, because I need the domain name to be pointing at the MQTT server in order to get a Let's Encrypt TLS certificate, but that'll break all the sensors using the current one..... I'm sure I'll figure it out.
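For reference, once the certificate exists I'd expect the TLS section to end up looking something like this - though note that the listener and certificate paths below are assumptions based on Let's Encrypt's defaults, not tested configuration:

listener 8883
certfile /etc/letsencrypt/live/hostname.localdomain/fullchain.pem
keyfile /etc/letsencrypt/live/hostname.localdomain/privkey.pem

The other gotcha there is that mosquitto runs as the mosquitto user, so it'll need read access to the private key - which Let's Encrypt makes readable only by root by default.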

But wait! We forgot the user accounts. Before I started the new service, I added some user accounts for client applications to connect with:

sudo mosquitto_passwd /etc/mosquitto/mosquitto_users username1
sudo mosquitto_passwd /etc/mosquitto/mosquitto_users username2

The mosquitto_passwd program prompts for a password - that way you don't end up with the passwords in your ~/.bash_history file.

With all that taken care of, I started the systemd service:

sudo systemctl daemon-reload
sudo systemctl start mosquitto-broker.service

Of course, I ended up doing a considerable amount of debugging in between all this - I've edited it down to make it more readable and fit better in a blog post :P

Lastly, because I'm paranoid, I double-checked that it was running with htop and netstat (-peanut is a handy mnemonic for -p -e -a -n -u -t: show the owning program, extended info, all sockets, numeric addresses, and both UDP & TCP):


sudo netstat -peanut | grep -i mosquitto
tcp        0      0 0.0.0.0:1883            0.0.0.0:*               LISTEN      112        2676558    5246/mosquitto      
tcp        0      0 x.y.z.w:1883           x.y.z.w:54657       ESTABLISHED 112        2870033    1234/mosquitto      
tcp        0      0 x.y.z.w:1883           x.y.z.w:39365       ESTABLISHED 112        2987984    1234/mosquitto      
tcp        0      0 x.y.z.w:1883           x.y.z.w:58428       ESTABLISHED 112        2999427    1234/mosquitto      
tcp6       0      0 :::1883                 :::*                    LISTEN      112        2676559    1234/mosquitto      

...no idea why it wants to connect to itself, but hey! Whatever floats its boat.
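If you want to go a step further, the mosquitto-clients package can confirm that authentication works end-to-end. Something like this should do the trick (the topic name is just an example):

# In one terminal: subscribe to a test topic
mosquitto_sub -h localhost -t test/topic -u username1 -P your_password
# In another: publish to it
mosquitto_pub -h localhost -t test/topic -u username1 -P your_password -m "Hello, world!"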

Ensure your SSH server is secure with SSH Check

We've got ssllabs.com for testing HTTPS servers to ensure they are set up securely, and personally I've been using it for years now (psst, starbeamrainbowlabs.com gets an A+!).

SSH servers are a very different story, however. While I've blogged about them before, I mainly focused on preventing unauthorised access to a server by methods such as password cracking attacks.

Now that I'm coming to the end of my MSc in Security and Distributed Computing, however, I've realised there's a crucial element missing here: the security of the connection itself. HTTPS isn't the only protocol with a complicated array of supported cipher suites that needs configuring correctly.

The solution here is to check the SSH server in the same way that we do an HTTPS web server. For that we need a tool to probe the server for us and tell us what's good and what's not about our configuration - which is where SSH Check comes in.

I discovered it recently, and it works by starting to connect to an SSH server to gauge its configuration - after which it quickly disconnects, before the remote server asks it for credentials to login.

A screenshot of a test of the example ssh server

Because SSH allows for every stage of the encryption process to be configured individually, SSH Check tests 4 main areas:

  • The key exchange algorithm (the algorithm used to exchange the secret key for symmetric encryption going forwards)
  • The algorithms used in the server's host SSH keys (the key whose fingerprint is shown to you when you first connect, asking you if you want to continue)
  • The encryption algorithm (the symmetrical encryption algorithm used after key exchange)
  • The MAC algorithm (the Message Authentication Code algorithm - used to ensure integrity of messages)

It displays whether each algorithm is considered safe or not, and flags up the ones that are widely considered to be either deprecated or to contain backdoors. In addition, it also displays the technical names of each one so that you can easily reconfigure your SSH server to disable unsafe algorithms, which is nice (good luck deciphering the SSL Labs encryption algorithms list and matching it up to the list already configured in your web server......).
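For example, reconfiguring OpenSSH to only accept specific algorithms is done with directives like these in /etc/ssh/sshd_config - note that the algorithm lists below are purely illustrative, so let the tool's output guide what you actually put there:

KexAlgorithms curve25519-sha256,curve25519-sha256@libssh.org,diffie-hellman-group16-sha512
Ciphers chacha20-poly1305@openssh.com,aes256-gcm@openssh.com,aes256-ctr
MACs hmac-sha2-512-etm@openssh.com,hmac-sha2-256-etm@openssh.com

Don't forget to restart the SSH daemon afterwards - and keep an existing session open while you test, just in case you lock yourself out.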

It presents a bunch of other interesting information too. It identified a number of potential issues with the way that I had SSH set up for starbeamrainbowlabs.com, along with some suggested improvements - which I've now fixed.

If you have a server that you access via SSH, I recommend checking it with SSH Check - especially if you expose SSH publicly over the Internet.

Found this interesting? Got another testing tool you'd like to share? Comment below!

Own your code, Part 2: The curious case of the unreliable webhook

In the last post, I talked about how to set up your own Git server with Gitea. In this one, I'm going to take a bit of a different tack - and talk about one of the really annoying problems I ran into when setting up my continuous integration server, Laminar CI.

Since I wanted to run the continuous integration server on a different machine to the Gitea server itself, I needed a way for the Gitea server to talk to the CI server. The natural choice here is, of course, a Webhook-based system.

After installing and configuring Webhook on the CI server, I set to work writing a webhook receiver shell script (more on this in a future post!). Unfortunately, it turned out that Gitea didn't like sending to my CI server very much:

A ton of failed attempts at sending a webhook to the CI server

Whether it succeeded or not was random. If I hit the "Test Delivery" button enough times, it would eventually go through. My first thought was to bring up the Gitea server logs to see if it would give any additional information. It claimed that there was an i/o timeout communicating with the CI server:

Delivery: Post https://ci.bobsrockets.com/hooks/laminar-config-check: read tcp 5.196.73.75:54504->x.y.z.w:443: i/o timeout

Interesting, but not particularly helpful. If that's the case, then I should be able to get the same error with curl on the Gitea server, right?

curl https://ci.bobsrockets.com/hooks/testhook

.....wrong. It worked flawlessly. Every time.

Not to be beaten by such an annoying issue, I moved on to my next suspicion. Since my CI server is unfortunately behind NAT, I checked the NAT rules on the router in front of it to ensure that it was being exposed correctly.

Unfortunately, I couldn't find anything wrong here either! By this point, it was starting to get really rather odd. As a sanity check, I decided to check the server logs on the CI server, since I'm running Webhook behind Nginx (as a reverse-proxy):

5.196.73.75 - - [04/Dec/2018:20:48:05 +0000] "POST /hooks/laminar-config-check HTTP/1.1" 408 0 "-" "GiteaServer"

Now that's weird. Nginx has recorded an HTTP 408 error. Looking it up reveals that it's a Request Timeout error, which has the following definition:

The server did not receive a complete request message within the time that it was prepared to wait.

Wait, what? Sounds to me like there's an argument going on between the 2 servers here - with each claiming that the other didn't send a complete request or response.

At this point, I blamed this on a faulty HTTP implementation in Gitea, and opened an issue.

As a workaround, I ended up configuring Laminar to use a Unix socket on disk (as opposed to an abstract socket), forwarding it over SSH, and using a git hook to interact with it instead (more on how I managed this in a future post. There's a ton of shell scripting that I need to talk about first).
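I'll save the details of that setup for the future post, but the core trick is that OpenSSH can forward Unix sockets just like TCP ports (since OpenSSH 6.7). A rough sketch of the idea, with made-up socket paths for illustration:

ssh -N -L /tmp/laminar.sock:/var/run/laminar.sock user@ci.bobsrockets.com

The git hook can then talk to the local socket as if it were the one on the remote machine.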

This isn't the end of this tale though! A month or two after I opened the issue, I wound up in a situation whereby I wanted to connect a GitHub repository to my CI server. Since I don't have shell access on github.com, I had to use the webhook.

When I did though, I got a nasty shock: The webhook deliveries exhibited the exact same random failures as I saw with the Gitea webhook. If I'd verified the Webhook server and cleared Gitea's HTTP implementation's name, then what else could be causing the problem?

At this point, I can only begin to speculate what the issue is. Personally, I suspect that it's a bug in the port-forwarding logic of my router, whereby it drops the first packet from a new IP address while it sets up a new NAT session to forward the packets to the CI server or something - so subsequent requests will go through fine, so long as they are sent within the NAT session timeout and from the same IP. If you've got a better idea, please comment below!

Of course, I really wanted to get the GitHub repository connected to my CI server, and if the only way I could do this was with a webhook, it was time for some request-wrangling.

My solution: A PHP proxy script running on the same server as the Gitea server (since it has a PHP-enabled web server set up already). If said script eats the request and emits a 202 Accepted immediately, then it can continue trying to get a hold of the webhook on the CI server 'till the cows come home - and GitHub will never know! Genius.

PHP-FPM (the FastCGI Process Manager; great alongside Nginx) makes this possible with the fastcgi_finish_request() function, which both flushes the buffer and ends the request to the client, but doesn't kill the PHP script - allowing further processing to take place without the client having to wait.

Extreme caution must be taken with this approach however, as it can easily lead to a situation where all the PHP-FPM processes are busy waiting on replies from the CI server, leaving no room for other requests to be fulfilled and causing a big messy pile-up in the queue behind them.

Warnings aside, here's what I came up with:

<?php

$settings = [
    "target_url" => "https://ci.bobsrockets.com/hooks/laminar-git-repo",
    "response_message" => "Processing laminar job proxy request.",
    "retries" => 3,
    "attempt_timeout" => 2 // in seconds, for a single attempt
];

$headers = "host: ci.starbeamrainbowlabs.com\r\n";
foreach(getallheaders() as $key => $value) {
    if(strtolower($key) == "host") continue;
    $headers .= "$key: $value\r\n";
}
$headers .= "\r\n";

$request_content = file_get_contents("php://input");

// --------------------------------------------

http_response_code(202);
header("content-type: text/plain");
header("content-length: " . strlen($settings["response_message"]));
echo($settings["response_message"]);

fastcgi_finish_request();

// --------------------------------------------

function log_message($msg) {
    file_put_contents("ci-requests.log", $msg, FILE_APPEND);
}

for($i = 0; $i < $settings["retries"]; $i++) {
    $start_time = microtime(true);

    $context = stream_context_create([
        "http" => [
            "header" => $headers,
            "method" => "POST",
            "content" => $request_content,
            "timeout" => $settings["attempt_timeout"]
        ]
    ]);

    $result = file_get_contents($settings["target_url"], false, $context);

    // file_get_contents() returns false if the request failed
    if($result !== false) {
        log_message("[" . date("r") . "] Queued laminar job in " . (microtime(true) - $start_time)*1000 . "ms");
        break;
    }

    log_message("[" . date("r") . "] Failed to queue laminar job after " . (microtime(true) - $start_time)*1000 . "ms.");
}

I've named it autowrangler.php. A few things of note here:

  • php://input is a special virtual file that's mapped internally by PHP to the client's request. By eating it with file_get_contents(), we can get the entire request body that the client has sent to us, so that we can forward it on to the CI server.
  • getallheaders() lets us get a hold of all the headers sent to us by the client for later forwarding
  • I use log_message() to keep a log of the successes and failures in a log file. So far I've got a ~32% failure rate, but never more than 1 failure in a row - giving some credit to my earlier theory I talked about above.

This ends the tale of the recalcitrant and unreliable webhook. Hopefully you've found this an interesting read. In future posts, I want to look at how I configured Webhook, the inner workings of the git hook I mentioned above, and the collection of shell scripts I've cooked up that make my CI server tick in a way that makes it easy to add new projects quickly.

Found this interesting? Run into this issue yourself? Found a better workaround? Comment below!

Powahroot: Client and Server-side routing in Javascript

The powahroot logo, which is a 16x16 pixel-art image and looks like a purple-red carrot with bright orange stripes and yellow light lines coming out of the sides

If I want to really understand something, I usually end up implementing it myself. This is the case with my latest library, powahroot - partly for that reason, and partly because I'm picky and didn't really like the way any of the alternatives functioned.

Originally I wrote it for this project (although it's actually for a little satellite project that isn't open-source unfortunately - maybe at some point in the future!) - but I liked it so much that I decided that I had to turn it into a full library that I could share here.

In short, a routing framework helps you get requests handled in the right places in your application. I've actually blogged about this before, so I'd recommend you go and read that post first before continuing with this one.

For all the similarities between the server side (as mentioned in my earlier post) and the client side, the 2 environments are different enough that they warrant having 2 distinctly separate routers. In powahroot, I provide both a ServerRouter and a ClientRouter.

The ServerRouter is designed to handle Node.js HTTP request and response objects. It provides shortcut methods .get(), .post(), and others to quickly create routes for different request types - and also supports middleware to enable logical separation of authentication, request processing, and response generation.

The ClientRouter, on the other hand, is essentially a stripped-down version of the ServerRouter that's tailored to functioning in a browser environment. It doesn't support middleware (yet?), but it does support the pushstate that's part of the History API.

I've also published it on npm, so you can install it like this:

npm install --save powahroot

Then you can use it like this:

// On the server
import ServerRouter from 'powahroot/Server.mjs';

// ....

const router = new ServerRouter();
router.on_all(async (context, next) => { console.debug(context.url); await next()})
router.get("/files/::filepath", (context, _next) => context.send.plain(200, `You requested ${context.params.filepath}`));
// .....
// On the client
import ClientRouter from 'powahroot/Client.mjs';

// ....

const router = new ClientRouter({
    // Options object. Default settings:
    verbose: false, // Whether to be verbose in console.log() messages
    listen_pushstate: true, // Whether to react to browser pushstate events (excluding those generated by powahroot itself, because that would cause an infinite loop :P)
});

As you can see, powahroot uses ES6 Modules, which makes it easy to split up your code into separate independently-operating sections.

In addition, I've also generated some documentation with the documentation tool on npm. It details the API available to you, and should serve as a good reference when using the library.

You can find that here: https://starbeamrainbowlabs.com/code/powahroot/docs/

It's automatically updated via continuous integration and continuous deployment, which I really do need to get around to blogging about (I've spent a significant amount of time setting up the base system upon which powahroot's CI and CD works. In short I use Laminar CI and a GitHub Webhook, but there's a lot of complicated details).

Found this interesting? Used it in your own project? Got an idea to improve powahroot? Comment below!

How to quickly run TUI programs via SSH

Hello, and welcome to another blog post! I hope everyone had a lovely and restful Easter.

Very often, I want to run a command on a remote machine via SSH and leave it in a terminal in 1 corner of my screen whilst I work in another terminal on that same machine.

Up until now, I've always SSHed into the machine in question and then run the command manually:

user@local:~$ ssh bob@bobsrockets.com
# .....
bob@bobsrockets.com:~$ sudo htop

This is fine, but it takes a moment to connect & set up the terminal on the remote end. What if there was a way to specify the command to run remotely?

Well, it turns out there is. SSH lets you specify the command to run on the remote server instead of the default shell:

ssh sean@seanssatellites.io apt search beanstalk

Sadly, this doesn't always yield the results expected. Colour disappears from the output, and sometimes things like htop (ssh bill@billsboosters.co.uk htop) and sudo (ssh edgar@edsengineering.eu sudo apt update) break altogether:

Error opening terminal: unknown.

I can't remember how I figured it out, but I discovered that the issue is that when you specify the command instead of letting the default shell initialise, it treats it as some sort of 'script-mode', and doesn't allocate a pseudo-terminal on the remote machine.

Thankfully, there's a way to force it to allocate a pseudo-terminal. This is done with the -t flag:

ssh -t bob@bobsrockets.com sudo htop

This then enables interactive commands to work as intended, and causes colour to be displayed again :D
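If you find yourself running the same remote command a lot, you can also bake this into your ~/.ssh/config - something like the following, where the host alias and command are just examples (note that RemoteCommand requires OpenSSH 7.6+):

Host bobs-htop
    HostName bobsrockets.com
    User bob
    RequestTTY force
    RemoteCommand htop

With that in place, ssh bobs-htop does the whole thing in one go.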

Found this useful? Got another great SSH tip? Comment below!

Fixing recursive uploads with lftp: The tale of the rogue symbolic link

I've been setting up continuous deployment recently for an application I'm working on, and as part of this process I'm uploading the release with sftp, using a restricted user account that is both chrooted (though I use a subfolder of the home directory to be extra-sure) and doesn't have shell access.

Since the application is written in PHP, I use composer to manage the server-side PHP library dependencies - which works very well. The problems start when I try to upload the whole thing to the server - so I thought I'd make a quick post here on how I fixed it.

In a previous build step, I generate an archive for the release, and put it in the continuous integration (CI) archive folder.

In the deployment phase, it unpacks this compressed archive and then uploads it to the production server with lftp, because I need to do some fiddling about that I can't do with regular sftp (anyone up for a tutorial? I'd be happy to write a few posts on it). However, I kept getting this weird error in the CI logs:

lftp: MirrorJob.cc:242: void MirrorJob::JobFinished(Job*): Assertion `transfer_count>0' failed.
./lantern-build-engine/lantern.sh: line 173:  5325 Aborted                 $command_name $@

Very strange indeed! Apparently, lftp isn't known for outputting especially useful error messages when used in an automated script like this. I tried everything. I rewrote, refactored, and completely turned the whole thing upside-down multiple times. This, as you might have guessed, took quite a while.

Commits aside, it was only when I refactored it to do the upload via the regular sftp command like this that it became apparent what the problem was:

sftp -i "${SSH_KEY_PATH}" -P "${deploy_ssh_port}" -o PasswordAuthentication=no "${deploy_ssh_user}@${deploy_ssh_host}" << SFTPCOMMANDS
mkdir ${deploy_root_dir}/www-new
put -r ${source_upload_dir}/* ${deploy_root_dir}/www-new
bye
SFTPCOMMANDS

Thankfully, sftp outputs much more helpful error messages. I saw this in the CI logs:

.....
Entering /tmp/tmp.ssR3j7vGhC-air-quality-upload//vendor/nikic/php-parser/bin
Entering /tmp/tmp.ssR3j7vGhC-air-quality-upload//vendor/bin
php-parse: not a regular file

The last line there instantly told me what I needed to know: It was failing to upload a symbolic link.

The solution here was simple: Unwind the symbolic links into hard links instead, and then I'll still get the benefit of a link on the local disk, but sftp will treat it as a regular file and upload a duplicate.

This is done like so:

find "${temp_dir}" -type l -exec bash -c 'ln -f "$(readlink -m "$0")" "$0"' {} \;

Thanks to SuperUser for the above (though I would have expected to find it on the Unix Stack Exchange).

If you'd like to see the full deployment script I've written, you can find it here.

There's actually quite a bit of context to how I ended up encountering this problem in the first place - which includes things like CI servers, no small amount of bash scripting, git servers, and remote deployment.

In the future, I'd like to make a few posts about the exploration I've been doing in these areas - perhaps along the lines of "how did we get here?", as I think they'd make for interesting reading.....

TCP (Client) Networking in Pure Bash

Recently I re-remembered about /dev/tcp - a virtual filesystem baked into bash that allows you to connect directly to a remote TCP endpoint - without the use of nc / netcat / ncat.

While it only allows you to connect (no listening, sadly), it's still a great bash built-in that helps avoid awkward platform-specific issues.
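As a quick taster, here's the classic trick of making a raw HTTP request with nothing but bash built-ins (example.com is, of course, a placeholder):

# Open a bidirectional TCP connection on file descriptor 3
exec 3<>/dev/tcp/example.com/80
# Send a minimal HTTP request...
printf 'GET / HTTP/1.1\r\nHost: example.com\r\nConnection: close\r\n\r\n' >&3
# ...and read the response back
cat <&3
# Close the file descriptor again
exec 3<&-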

Here's how you'd listen for a connection with netcat, sending as many random numbers as possible to the poor unsuspecting client:

netcat -l 0.0.0.0 6666 </dev/urandom

Here's how you'd traditionally connect to that via netcat:

netcat X.Y.Z.W 6666 | pv >/dev/null

The pv command there is not installed by default, but is a great tool that shows the amount of data flowing through a pipe. It's available in the repositories for most Linux distributions without any special configuration required - so sudo apt install pv should be enough on Debian-based distributions.

Now, let's look at how we'd do this with pure bash:

pv >/dev/null </dev/tcp/X.Y.Z.W/6666

Very neat! We've been able to eliminate an extra child process. The question is though: how do they compare performance-wise? Well, that depends on how we measure it. In my case, I measured a single connection, downloading data as fast as it can for 60 seconds.

Another test would be to open many connections and download lots of small files. While I haven't done that here, I theorise that the pure-bash method would win out, as it doesn't have to spawn lots of subprocesses.

In my test, I did this:

# Traditional method
timeout 60 nc X.Y.Z.W 6666 | pv >/dev/null
# Pure-Bash method
timeout 60 pv >/dev/null </dev/tcp/X.Y.Z.W/6666

The timeout command kills the process after a given number of seconds. The server I connected to was just this:

while true; do nc -l 0.0.0.0 6666 </dev/urandom; done

Running the above test, I got the following output:

$ timeout 60 pv >/dev/null </dev/tcp/172.16.230.58/6666
 652MiB 0:00:59 [11.2MiB/s] [                                      <=>         ]
$ timeout 60 nc 172.16.230.58 6666 | pv >/dev/null
 599MiB 0:01:00 [11.1MiB/s] [                                     <=>          ]
Method        Total Data Transferred
Traditional   599MiB
Pure Bash     652MiB

As it turns out, the pure bash method is apparently faster - by ~8.8%. I think this might have something to do with the lack of the additional sub-process, or some other optimisation that bash can apply when doing the TCP networking itself.

Found this interesting? Got a cool use for it? Discovered another awesome bash built-in? Comment below!

Bridging the gap between XMPP and shell scripts

In a previous post, I set up a semi-automated backup system for my Raspberry Pi using duplicity, sendxmpp, and an external drive. It's been working fabulously for a while now, but unfortunately the other week sendxmpp suddenly stopped working with no obvious explanation. Given the long list of arguments I had to pass it:

sendxmpp --file "${xmpp_config_file}" --resource "${xmpp_resource}" --tls --chatroom "${xmpp_target_chatroom}" ...........

....and the fact that I've had to tweak said arguments on a number of occasions, I thought it was time to switch it out for something better suited to the task at hand.

Unfortunately, finding such a tool proved to be a challenge. I even asked on Reddit - but nobody had anything that fit the bill (xmpp-bridge wouldn't compile correctly - and didn't support multi-user chatrooms anyway, and xmpppy was broken too).

If you're unsure as to what XMPP is, I'd recommend checking out either this or this tutorial. They both give a great introduction to what it is, what it does, and how it works - and the rest of this post will make much more sense if you read one first :-)

To this end, I finally gave in and wrote my own tool, which I've called xmppbridge. It's a global Node.JS script that uses the simple-xmpp package to forward the standard input to a given JID over XMPP - which can optionally be a group chat.

In this post, I'm going to look at how I put it together, some of the issues I ran into along the way, and how I solved them. If you're interested in how to install and use it, then the package page on npm will tell you everything you need to know:

xmppbridge on npm

Architectural Overview

The script consists of 3 files:

  • index.sh - Calls the main script with ES6 modules enabled
  • index.mjs - Parses the command-line arguments and environment variables out, and provides a nice CLI
  • XmppBridge.mjs - The bit that actually captures input from stdin and sends it via XMPP

Let's look at each of these in turn - starting with the command-line interface.

CLI Parsing

The CLI itself is relatively simple - and follows a paradigm I've used extensively in C♯ (although somewhat modified of course to get it to work in Node.JS, and without fancy ANSI colouring etc.).

#!/usr/bin/env node
"use strict";

import XmppBridge from './XmppBridge.mjs';

const settings = {
    jid: process.env.XMPP_JID,
    destination_jid: null,
    is_destination_groupchat: false,
    password: process.env.XMPP_PASSWORD
};

let extras = [];
// process.argv[0] is the node binary, and [1] is the script itself,
// so the arguments proper start at index 2
for(let i = 2; i < process.argv.length; i++) {
    if(!process.argv[i].startsWith("-")) {
        extras.push(process.argv[i]);
        continue;
    }

    switch(process.argv[i]) {
        case "-h":
        case "--help":
            // ........
            break;

        // ........

        default:
            console.error(`Error: Unknown argument '${process.argv[i]}'.`);
            process.exit(2);
            break;
    }
}

We start with a shebang, telling Linux-based systems to execute the script with Node.JS. Following that, we import the XmppBridge class that's located in XmppBridge.mjs (we'll come back to this later). Then, we define an object to hold our settings - and pull in the environment variables, along with defining some defaults for the other parameters.

With that setup, we can then parse the command-line arguments themselves - using the exact same paradigm I've used time and time again in C♯.

Once the command-line arguments are parsed, we validate the final settings to ensure that the user hasn't left any required parameters undefined:

for(let environment_variable of ["XMPP_JID", "XMPP_PASSWORD"]) {
    if(typeof process.env[environment_variable] == "undefined") {
        console.error(`Error: The environment variable ${environment_variable} wasn't found.`);
        process.exit(1);
    }
}

if(typeof settings.destination_jid != "string") {
    console.error("Error: No destination jid specified.");
    process.exit(5);
}

That's basically all that index.mjs does. All that's really left is passing the parameters to an instance of XmppBridge:

const bridge = new XmppBridge(
    settings.destination_jid,
    settings.is_destination_groupchat
);
bridge.start(settings.jid, settings.password);

Shebang Trouble

Because I've used ES6 modules here, currently Node must be informed of this via the --experimental-modules CLI argument like this:

node --experimental-modules ./index.mjs

If we're going to make this a global command-line tool via the bin directive in package.json, then we're going to have to ensure that this flag gets passed to Node and not our program. While we could alter the shebang, that comes with the awkward problem that not all systems (in fact relatively few) support using both env and passing arguments. For example, this:

#!/usr/bin/env node --experimental-modules

Wouldn't work, because env doesn't recognise that --experimental-modules is actually a command-line argument and not part of the binary name that it should search for. I did see some Linux systems support env -S to enable this functionality, but it's hardly portable and doesn't even appear to work all the time anyway - so we'll have to look for another solution.

Another way we could do it is by dropping the env entirely. We could do this:

#!/usr/local/bin/node --experimental-modules

...which would work fine on my system, but probably not on anyone else's if they haven't installed Node to the same place. Sadly, we'll have to throw this option out the window too. We've still got some tricks up our sleeve though - namely writing a bash wrapper script that will call node telling it to execute index.mjs with the correct arguments. After a little bit of fiddling, I came up with this:

#!/usr/bin/env bash
install_dir="$(dirname "$(readlink -f $0)")";
exec node --experimental-modules "${install_dir}/index.mjs" $@

2 things are at play here. Firstly, we have to deduce where the currently executing script actually lies - as npm uses a symbolic link to allow a global command-line tool to be 'found'. Said symbolic link gets put in /usr/local/bin/ (which is, by default, in the user's PATH), and links to where the script is actually installed to.

To figure out the directory that we've been installed to (and hence the location of index.mjs), we need to dereference the symbolic link and strip the index.sh filename away. This can be done with a combination of readlink -f (dereferences the symbolic link), dirname (gets the parent directory of a given file path), and $0 (holds the path to the currently executing script in most circumstances) - which, in the case of the above, gets put into the install_dir variable.

The other issue is passing all the existing command-line arguments to index.mjs unchanged. We do this with a combination of $@ (which refers to all the arguments passed to this script except the script name itself) and exec (which replaces the currently executing process with a new one - in this case it replaces the bash shell with node).

This approach lets us customise the CLI arguments, while still providing global access to our script. Here's an extract from xmppbridge's package.json showing how I specify that I want index.sh to be a global script:

{
    .....

    "bin": {
        "xmppbridge": "./index.sh"
    },

    .....
}

Bridging the Gap

Now that we've got Node calling our script correctly and the arguments parsed out, we can actually bridge the gap. This is as simple as some glue code between simple-xmpp and readline. simple-xmpp is an npm package that makes programmatic XMPP interaction fairly trivial (though I did have to look at examples in the GitHub repository to figure out how to send a message to a multi-user chatroom).

readline is a Node built-in that allows us to read the standard input line-by-line. It does other things too (and is great for interactive scripts amongst other things), but that's a tale for another time.

The first task is to create a new class for this to live in:

"use strict";

import readline from 'readline';

import xmpp from 'simple-xmpp';

class XmppBridge {

    /**
     * Creates a new XmppBridge instance.
     * @param   {string}    in_destination_jid  The JID to send stdin to.
     * @param   {Boolean}   in_is_groupchat     Whether the destination JID is a group chat or not.
     */
    constructor(in_destination_jid, in_is_groupchat) {
        // ....
    }
}

export default XmppBridge;

Very cool! That was easy. Next, we need to store those arguments and connect to the XMPP server in the constructor:

this.destination_jid = in_destination_jid;
this.is_destination_groupchat = in_is_groupchat;

this.client = xmpp;
this.client.on("online", this.on_connect.bind(this));
this.client.on("error", this.on_error.bind(this));
this.client.on("chat", ((_from, _message) => {
    // noop
}).bind(this));

I ended up having to define a chat event handler - even though it's pointless, as I ran into a nasty crash if I didn't do so (I suspect that this use-case wasn't considered by the original package developer).

The next area of interest is that online event handler. Note that I've bound the method to the current this context - this is important, as it wouldn't be able to access the class instance's properties otherwise. Let's take a look at the code for that handler:

console.log(`[XmppBridge] Connected as ${data.jid}.`);
if(this.is_destination_groupchat) {
    this.client.join(`${this.destination_jid}/bot_${data.jid.user}`);
}
this.stdin = readline.createInterface({
    input: process.stdin,
    output: process.stdout,
    terminal: false
});
this.stdin.on("line", this.on_line_handler.bind(this));
this.stdin.on("close", this.on_stdin_close_handler.bind(this));

This is the point at which we open the standard input and start listening for things to send. We don't do it earlier, as we don't want to end up in a situation where we try sending something before we're connected!

If we're supposed to be sending to a multi-user chatroom, this is also the point at which it joins said room. This is required as you can't send a message to a room that you haven't joined.

The resource (the bit after the forward slash /), for a group chat, specifies the nickname that you want to give to yourself when joining. Here, I automatically set this to the user part of the JID that we used to login prefixed with bot_.

The connection itself is established in the start method:

start(jid, password) {
    this.client.connect({
        jid,
        password
    });
}

And every time we receive a line of input, we execute the send() method:

on_line_handler(line_text) {
    this.send(line_text);
}

I used a full method here, as initially I had some issues and wanted to debug which methods were being called. That send method looks like this:

send(message) {
    this.client.send(
        this.destination_jid,
        message,
        this.is_destination_groupchat
    );
}

The last event handler worth mentioning is the close event handler on the readline interface:

on_stdin_close_handler() {
    this.client.disconnect();
}

This just disconnects from the XMPP server so that Node can exit cleanly.

That basically completes the script. In total, the entire XmppBridge.mjs class file is 72 lines. Not bad going!

You can install this tool for yourself with sudo npm install -g xmppbridge. I've documented how it use it in the README, so I'd recommend heading over there if you're interested in trying it out.

Found this interesting? Got a cool use for XMPP? Comment below!


Setup your very own VPN in 10 minutes flat

Hey! Happy new year :-)

I've been looking to setup a personal VPN for a while, and the other week I discovered a rather brilliant project called PiVPN, which greatly simplifies the process of setting one up - and managing it thereafter.

It's been working rather well so far, so I thought I'd post about it so you can set one up for yourself too. But first though, we should look at the why. Why a VPN? What does it do?

Basically, a VPN lets you punch a great big hole in the network that you're connected to and appear as if you're actually on a network elsewhere. The extent to which this is the case varies depending on the purpose (for example, a University or business might set up a VPN that allows members to access internal resources, but doesn't route all traffic through the VPN), but the general principle is the same.

It's best explained with a diagram. Imagine you're at a Café:

Everyone on the Café's WiFi can see the internet traffic you're sending out. If any of it is unencrypted, then they can additionally see the content of said traffic - e.g. emails you send, web pages you load, etc. Even if it's encrypted, statistical analysis can reveal which websites you're visiting and more.

If you don't trust a network that you're connected to, then by utilising a VPN you can create an encrypted tunnel to another location that you do trust:

Then, all that the other users of the Café's WiFi will see is an encrypted stream of packets - all heading for the same destination. All they'll know is roughly how much traffic you're sending and receiving, but not to where.

This is the primary reason that I'd like my own VPN. I trust the network I've got setup in my own house, so it stands to reason that I'd like to setup a VPN server there, and pretend that my devices when I'm out and about are still at home.

In theory, I should be able to access the resources on my home network too when I'm using such a VPN - which is an added bonus. Other reasons do exist for using a VPN, but I won't discuss them here.

In terms of VPN server software, I've done a fair amount of research into the different options available. My main criteria are as follows:

  • Fairly easy to install
  • Easy to understand what it's doing once installed (transparency)
  • Easy to manage

The 2 main technologies I came across were OpenVPN and IPSec. Each has its own strengths & weaknesses. An IPSec VPN is, apparently, more efficient - especially since it executes on the client in kernel-space instead of user-space. It's a lighter protocol, too - leading to less overhead. It's also much more likely to be detected and blocked when travelling through strict firewalls, making me slightly unsure about it.

OpenVPN, on the other hand, executes entirely in user-space on both the client and the server - leading to a slightly greater overhead (especially with the mitigations for the recent Spectre & Meltdown hardware bugs). It does, however, use TLS (though over UDP by default). This characteristic makes it much more likely it'll slip through stricter firewalls. I'm unsure if that's a quality that I'm actually after or not.

Ultimately, it's the ease of management that points the way to my final choice. Looking into it, with both choices there's complex certificate management to be done whenever you want to add a new client to the VPN. For example, with StrongSwan (an open-source IPSec VPN program), you've got to generate a number of certificates with a chain of rather long commands - and the users themselves have passwords stored in plain text in a file!

While I've got no problem with reading and understanding such commands, I do have a problem with rememberability. If I want to add a new client, how easy is that to do? How long would I have to spend re-reading documentation to figure out how to do it?

Sure, I could write a program to manage the configuration files for me, but that would also require maintenance - and probably take much longer than I anticipate to write.

I forget where I found it, but it is for this reason that I ultimately decided to choose PiVPN. It's a set of scripts that sets up and manages one's OpenVPN installation. To this end, it provides a single command - pivpn - that can be used to add, remove, and list clients and their statistics. With a concise help text, it makes it easy to figure out how to perform common tasks utilising existing terminal skills, as it conforms to established CLI interface norms.
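For instance, once it's installed, day-to-day management looks something like this (these subcommand names are from memory - check pivpn -h for the authoritative list):

pivpn add       # Create a new client & generate its .ovpn file
pivpn list      # List the clients that currently exist
pivpn revoke    # Revoke an existing client's certificate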

If you want to install it yourself, then simply do this:

curl -L https://install.pivpn.io | bash

Of course, simply downloading and executing a random script from the Internet is never a good idea. Let's read it first:

curl -L https://install.pivpn.io | less

Once you're happy that it's not going to do anything malign to your system, proceed with the installation by executing the 1st command. It should guide you through a number of screens. Some important points I ran into:

  • The static IP address it talks about is the IP address of your server on the local network. The installation asks about the public IP address in a later step. If you've already got a static IP setup on your server (and you probably have), then you don't need to worry about this.
  • It asks you to install and enable unattended-upgrades. You should probably do this, but I ended up skipping this - as I've already got apticron setup and sending me regular emails - as I rather like to babysit the upgrade of packages on the main machines I manage. I might look into unattended-upgrades in the future if I acquire more servers than are comfortable to manage this way.
  • Make sure you fully update your system before running the installation. I use this command: sudo apt update && sudo apt-get dist-upgrade && sudo apt-get autoclean && sudo apt-get autoremove
  • Changing the port of the VPN isn't a bad idea, since PiVPN will automatically assemble .ovpn configuration files for you. I didn't end up doing this to start with, but I can always change it in the NAT rule I configured on my router later.
  • Don't forget to allow OpenVPN through your firewall! For ufw users (like me), then it's something like sudo ufw allow <port_number>/udp.
  • Don't forget to setup a NAT rule / port forwarding on your router if said server doesn't have a public IP address (if it's IPv4 it probably doesn't). If you're confused on this point, comment below and I'll blog about it. It's..... a complicated topic.

If you'd like a more in-depth guide to setting up PiVPN, then I can recommend this guide. It's a little bit dated (PiVPN now uses elliptic-curve cryptography by default), but still serves to illustrate the process pretty well.

If you're confused about some of the concepts I've presented here - leave a comment below! I'm happy to explain them in more detail. Who knows - I might end up writing another blog post on the subject....

Write an XMPP bot in half an hour

Recently I've looked at using AI to extract key information from natural language, and creating a system service with systemd. The final piece of the puzzle is to write the bot itself - and that's what I'm posting about today.

Since I not only use XMPP for instant messaging already, but it's also an open federated standard, I'll be building my bot on top of it for maximum flexibility.

To talk over XMPP programmatically, we're going to need a library. Thankfully, I've located just such a library, called S22.Xmpp, which appears to work well enough. Especially nice is the comprehensive documentation, which makes development go much more smoothly.

With our library in hand, let's begin! Our first order of business is to get some scaffolding in place to parse out the environment variables we'll need to login to an XMPP account.

using System;
using System.Linq;
using System.Threading;
using System.Threading.Tasks;

using S22.Xmpp;
using S22.Xmpp.Client;
using S22.Xmpp.Im;

namespace XmppBotDemo
{
    public static class MainClass
    {
        // Needed later
        private static XmppClient client;

        // Settings
        private static Jid ourJid = null;
        private static string password = null;

        public static int Main(string[] args)
        {
            // Read in the environment variables
            ourJid = new Jid(Environment.GetEnvironmentVariable("XMPP_JID"));
            password = Environment.GetEnvironmentVariable("XMPP_PASSWORD");

            // Ensure they are present
            if (ourJid == null || password == null) {
                Console.Error.WriteLine("XMPP Bot Demo");
                Console.Error.WriteLine("=============");
                Console.Error.WriteLine("");
                Console.Error.WriteLine("Usage:");
                Console.Error.WriteLine("    ./XmppBotDemo.exe");
                Console.Error.WriteLine("");
                Console.Error.WriteLine("Environment Variables:");
                Console.Error.WriteLine("    XMPP_JID         Required. Specifies the JID to login with.");
                Console.Error.WriteLine("    XMPP_PASSWORD    Required. Specifies the password to login with.");
                return 1;
            }

            // TODO: Connect here           

            return 0;
        }
    }
}

Excellent! We're reading in & parsing 2 environment variables: XMPP_JID (the username), and XMPP_PASSWORD. It's worth noting that you can call these environment variables anything you like! I chose those names as they describe their contents well. It's also worth mentioning that it's important to use environment variables for secrets: passing them as command-line arguments causes them to be much more visible to other users of the system!

Let's connect to the XMPP server with our newly read-in credentials:

// Create the client instance
client = new XmppClient(ourJid.Domain, ourJid.Node, password);

client.Error += errorHandler;
client.SubscriptionRequest += subscriptionRequestHandler;
client.Message += messageHandler;

client.Connect();

// Wait for a connection
while (!client.Connected)
    Thread.Sleep(100);

Console.WriteLine($"[Main] Connected as {ourJid}.");

// Wait forever.
Thread.Sleep(Timeout.Infinite);

// TODO: Automatically reconnect to the server when we get disconnected.

Cool! Here, we create a new instance of the XMPPClient class, and attach 3 event handlers, which we'll look at later. We then connect to the server, and then wait until it completes - and then write a message to the console. It looks like S22.Xmpp spins up a new thread, so unfortunately we can't catch any errors it throws with a traditional try-catch statement. Instead, we'll have to ensure we're really careful that we catch any exceptions we throw accidentally - otherwise we'll get disconnected!

It does appear that XmppClient catches some errors though, which trigger the Error event - so we should attach an event handler to that.

/// <summary>
/// Handles any errors thrown by the XMPP client engine.
/// </summary>
private static void errorHandler(object sender, ErrorEventArgs eventArgs) {
    Console.Error.WriteLine($"Error: {eventArgs.Reason}");
    Console.Error.WriteLine(eventArgs.Exception);
}

Before a remote contact is able to talk to our bot, they will send us a subscription request - which we'll need to either accept or reject. This is also done via an event handler. It's the SubscriptionRequest one this time:

/// <summary>
/// Handles requests to talk to us.
/// </summary>
/// <remarks>
/// Only allow people to talk to us if they are on the same domain we are.
/// You probably don't want this for production, but for developmental purposes
/// it offers some measure of protection.
/// </remarks>
/// <param name="from">The JID of the remote user who wants to talk to us.</param>
/// <returns>Whether we're going to allow the requester to talk to us or not.</returns>
public static bool subscriptionRequestHandler(Jid from) {
    Console.WriteLine($"[Handler/SubscriptionRequest] {from} is requesting access, I'm saying {(from.Domain == ourJid.Domain?"yes":"no")}");
    return from.Domain == ourJid.Domain;
}

This simply allows anyone on our own domain to talk to us. For development purposes this will offer us some measure of protection, but for production you should probably implement a whitelisting or logging system here.

The other interesting thing we can do here is send a user a chat message to either welcome them to the server, or explain why we rejected their request. To do this, we need to write a pair of utility methods, as sending chat messages with S22.Xmpp is somewhat over-complicated:

#region Message Senders

/// <summary>
/// Sends a chat message to the specified JID.
/// </summary>
/// <param name="to">The JID to send the message to.</param>
/// <param name="message">The messaage to send.</param>
private static void sendChatMessage(Jid to, string message)
{
    //Console.WriteLine($"[Bot/Send/Chat] Sending {message} -> {to}");
    client.SendMessage(
        to, message,
        null, null, MessageType.Chat
    );
}
/// <summary>
/// Sends a chat message in direct reply to a given incoming message.
/// </summary>
/// <param name="originalMessage">Original message.</param>
/// <param name="reply">Reply.</param>
private static void sendChatReply(Message originalMessage, string reply)
{
    //Console.WriteLine($"[Bot/Send/Reply] Sending {reply} -> {originalMessage.From}");
    client.SendMessage(
        originalMessage.From, reply,
        null, originalMessage.Thread, MessageType.Chat
    );
}

#endregion

The difference between these 2 methods is that one sends a reply directly to a message that we've received (like a threaded reply), and the other simply sends a message directly to another contact.

Now that we've got all of our ducks in a row, we can write the bot itself! This is done via the Message event handler. For this demo, we'll write a bot that echoes any messages sent to it in reverse:

/// <summary>
/// Handles incoming messages.
/// </summary>
private static void messageHandler(object sender, MessageEventArgs eventArgs) {
    Console.WriteLine($"[Bot/Handler/Message] {eventArgs.Message.Body.Length} chars from {eventArgs.Jid}");
    char[] messageCharArray = eventArgs.Message.Body.ToCharArray();
    Array.Reverse(messageCharArray);
    sendChatReply(
        eventArgs.Message,
        new string(messageCharArray)
    );
}

Excellent! That's our bot complete. The full program is at the bottom of this post.

Of course, this is a starting point - not an ending point! A number of issues with this demo stand out. There isn't a whitelist, and putting the whole program in a single file doesn't sound like a good idea. The XMPP logic should probably be refactored out into a separate file, in order to keep the input settings parsing separate from the bot itself.

Other issues that probably need addressing include better error handling and more - but fixing them all here would complicate the example rather.

Edit: The code is also available in a git repository if you'd like to clone it down and play around with it :-)

Found this interesting? Got a cool use for it? Still confused? Comment below!

Complete Program

using System;
using System.Linq;
using System.Threading;
using System.Threading.Tasks;
using S22.Xmpp;
using S22.Xmpp.Client;
using S22.Xmpp.Im;

namespace XmppBotDemo
{
    public static class MainClass
    {
        private static XmppClient client;
        private static Jid ourJid = null;
        private static string password = null;

        public static int Main(string[] args)
        {
            // Read in the environment variables
            ourJid = new Jid(Environment.GetEnvironmentVariable("XMPP_JID"));
            password = Environment.GetEnvironmentVariable("XMPP_PASSWORD");

            // Ensure they are present
            if (ourJid == null || password == null) {
                Console.Error.WriteLine("XMPP Bot Demo");
                Console.Error.WriteLine("=============");
                Console.Error.WriteLine("");
                Console.Error.WriteLine("Usage:");
                Console.Error.WriteLine("    ./XmppBotDemo.exe");
                Console.Error.WriteLine("");
                Console.Error.WriteLine("Environment Variables:");
                Console.Error.WriteLine("    XMPP_JID         Required. Specifies the JID to login with.");
                Console.Error.WriteLine("    XMPP_PASSWORD    Required. Specifies the password to login with.");
                return 1;
            }

            // Create the client instance
            client = new XmppClient(ourJid.Domain, ourJid.Node, password);

            client.Error += errorHandler;
            client.SubscriptionRequest += subscriptionRequestHandler;
            client.Message += messageHandler;

            client.Connect();

            // Wait for a connection
            while (!client.Connected)
                Thread.Sleep(100);

            Console.WriteLine($"[Main] Connected as {ourJid}.");

            // Wait forever.
            Thread.Sleep(Timeout.Infinite);

            // TODO: Automatically reconnect to the server when we get disconnected.

            return 0;
        }

        #region Event Handlers

        /// <summary>
        /// Handles requests to talk to us.
        /// </summary>
        /// <remarks>
        /// Only allow people to talk to us if they are on the same domain we are.
        /// You probably don't want this for production, but for developmental purposes
        /// it offers some measure of protection.
        /// </remarks>
        /// <param name="from">The JID of the remote user who wants to talk to us.</param>
        /// <returns>Whether we're going to allow the requester to talk to us or not.</returns>
        public static bool subscriptionRequestHandler(Jid from) {
            Console.WriteLine($"[Handler/SubscriptionRequest] {from} is requesting access, I'm saying {(from.Domain == ourJid.Domain?"yes":"no")}");
            return from.Domain == ourJid.Domain;
        }

        /// <summary>
        /// Handles incoming messages.
        /// </summary>
        private static void messageHandler(object sender, MessageEventArgs eventArgs) {
            Console.WriteLine($"[Handler/Message] {eventArgs.Message.Body.Length} chars from {eventArgs.Jid}");
            char[] messageCharArray = eventArgs.Message.Body.ToCharArray();
            Array.Reverse(messageCharArray);
            sendChatReply(
                eventArgs.Message,
                new string(messageCharArray)
            );
        }

        /// <summary>
        /// Handles any errors thrown by the XMPP client engine.
        /// </summary>
        private static void errorHandler(object sender, ErrorEventArgs eventArgs) {
            Console.Error.WriteLine($"Error: {eventArgs.Reason}");
            Console.Error.WriteLine(eventArgs.Exception);
        }

        #endregion

        #region Message Senders

        /// <summary>
        /// Sends a chat message to the specified JID.
        /// </summary>
        /// <param name="to">The JID to send the message to.</param>
        /// <param name="message">The messaage to send.</param>
        private static void sendChatMessage(Jid to, string message)
        {
            //Console.WriteLine($"[Rhino/Send/Chat] Sending {message} -> {to}");
            client.SendMessage(
                to, message,
                null, null, MessageType.Chat
            );
        }
        /// <summary>
        /// Sends a chat message in direct reply to a given incoming message.
        /// </summary>
        /// <param name="originalMessage">Original message.</param>
        /// <param name="reply">Reply.</param>
        private static void sendChatReply(Message originalMessage, string reply)
        {
            //Console.WriteLine($"[Rhino/Send/Reply] Sending {reply} -> {originalMessage.From}");
            client.SendMessage(
                originalMessage.From, reply,
                null, originalMessage.Thread, MessageType.Chat
            );
        }

        #endregion
    }
}