
Debug your systemd services with journalctl

Some nice bulbs and bits of wood.

The chances are that if you're using linux, you will probably have run into systemd. If you find yourself in the situation where you've got a systemd service that keeps dying and you don't know why (I've been there several times!), and there's nothing helpful in /var/log, before you give up, you might want to give journalctl a try. It's systemd's way of capturing the output of a service and storing it in its own logging system, the journal.

When I first found out about it, I read that apparently journalctl -xe servicename would show me the logs for any given service. It turned out that it wasn't the case (it just threw a nasty error), so I went trawling through the man pages and found the correct command-line switch. If you've got a service called rocketbooster.service, and you want to see if systemd has any logs stored for it, then you can execute this command:

journalctl --unit rocketbooster.service

...or for short

journalctl -u rocketbooster.service

It should open the logs (if there are any) in less - with the oldest logs at the top, so you might need to scroll all the way down to the bottom to see anything that's relevant to your problem (shift + G will take you to the bottom of the file).

I've found that systemd has a habit of rotating the logs too - and journalctl doesn't appear to know how to access the rotated logs, so it's best to use this command as soon as possible after a failure (suggestions on how to access these rotated logs are welcome! Post down in the comments :D).
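
If scrolling through less gets tedious, journalctl has a few extra switches that make narrowing things down easier. A few examples, reusing the same hypothetical rocketbooster.service from above:

journalctl -u rocketbooster.service -e                    # jump straight to the end of the log in the pager
journalctl -u rocketbooster.service -n 100 --no-pager     # just print the last 100 lines and exit
journalctl -u rocketbooster.service -f                    # follow the log live, like tail -f
journalctl -u rocketbooster.service --since "1 hour ago"  # only show recent entries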

I thought I'd document it here in case it was useful to anyone - and so I don't forget myself! :P

Access your home linux box from anywhere with SSH tunnels

An abstract tunnel that doesn't hold much relevant to the blog post :P

(Header by GDJ)

....and other things! Recently, I bought a Raspberry Pi 3. Now that the rest of the components have arrived, I've got a rather nice little home server with a 1 terabyte WD PiDrive attached to it to provide lots of lovely shared storage.

However, within a few weeks I was faced with a problem. How do I access my new box to configure it from my internship when I'm on lunch? Faced with such a challenge, I did what anyone would, and took to the internet to find a solution.

It didn't take long. A while ago I heard about these things called 'SSH tunnels', which, while not designed for high throughput, are more than adequate for a low-intensity SSH connection that only moves a few kilobytes a second in either direction. After reading this excellent answer by erik on the Unix & Linux StackExchange, I had an understanding of how SSH tunnels work, and was ready to put together a solution. You should go and read that answer if you'd like to understand SSH tunnels too - it explains them much better than I ever could :P

With that knowledge in hand, I went about planning the SSH tunnel. I already have a server with a public IP address (it's hosting this website!), so I needed a reverse tunnel to allow me to access a port local to my linux box at home (called elessar - a virtual cookie for anyone who gets the reference!) from that remote server.

Important! Ask yourself whether it's moral and ethical to set up an ssh tunnel before you think about following along with this article! If you find yourself behind a firewall or something similar, then the chances are that it's there for a good reason - and you might get into trouble if you try and circumvent it. I won't be held responsible for any loss or damages of any description caused by the reading of this post.

First job: create a limited account on the remote server for elessar to SSH into. That's easy:

sudo useradd --system ssh-tunnel

Then, with a few quick lines in /etc/ssh/sshd_config:

Match User ssh-tunnel
    ForceCommand echo 'This account can only be used for ssh tunnelling.'

....we can prevent the ssh-tunnel user from being abused to gain shell access to the server (let me know if there are any further measures I can put in place here).
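
If you want to lock the account down even further, a few more directives can live in the same Match block. This is just a sketch - double-check that your version of OpenSSH supports each of these options before relying on them:

Match User ssh-tunnel
    ForceCommand echo 'This account can only be used for ssh tunnelling.'
    PermitTTY no
    X11Forwarding no
    AllowAgentForwarding no
    AllowTcpForwarding remote

The idea is to switch off everything except the remote port forwarding that the tunnel actually needs.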

Now that I had a user account to ssh in as, I could set up a public / private keypair to authenticate with, and cook up an SSH command for elessar that would set up the appropriate tunnel. After fiddling around a bit, I came up with this that did the job:

ssh -TN -R30582:localhost:5724

Very cool. With that command executing on elessar, I could then ssh back into elessar from the remote server! In short, it sets up a tunnel that forwards port 30582 on the remote server through to port 5724 on elessar - the port on elessar that has SSH running on it - without allocating a pseudo-tty, to save resources. The ssh man page explains each of the switches in more detail if you're interested.
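
Once the tunnel is up, actually getting back into elessar is just a case of ssh-ing to the forwarded port on the remote server itself. A quick sketch - the username here is a placeholder for whatever account you use on your home box:

# From a shell on the remote server:
ssh -p 30582 pi@localhost

Newer versions of OpenSSH can also do this in a single hop from anywhere with the -J (ProxyJump) option, jumping via the remote server first.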

Having an SSH command that would set up the tunnel is nice, but it's not very useful, since I have to execute it first before I can actually SSH into elessar from afar.

The solution was actually a little bit complicated. First, I wrote a simple systemd service file (systemd is what I have installed, since it's vanilla raspbian - this should be easily adaptable to other systems and setups) to start the SSH tunnel automagically on boot:

[Unit]
Description=SSH tunnel from the remote server to the local ssh server.

[Service]
ExecStart=/usr/bin/ssh -TN -R30582:localhost:5724


I quickly realised that there were a few flaws with this approach. Firstly, it tried to start the SSH connection before my router had connected to the internet, since my router starts faster than the box that initialises the fibre connection to my ISP. Secondly, it failed to retry when the connection died.

The first problem can be solved relatively easily, by wrapping the ssh command in a clever bit of shell scripting:

/bin/sh -c 'until ping -c1 &>/dev/null && sleep 5; do :; done && /usr/bin/ssh -TN -R30582:localhost:5724

The above pings the remote server repeatedly until it succeeds (then waits 5 more seconds for good measure), and only then attempts to open the SSH connection. This solves the first problem. To solve the second, we need to look at autossh. Autossh is a small tool that monitors an ssh connection in a variety of configurable ways, and restarts the connection if it ever dies for whatever reason. You can install it with your favourite package manager:

sudo apt install autossh

Substitute apt with whatever package manager you use on your system. With it installed, we can use a command like this:

autossh -o "UserKnownHostsFile /home/ssh-tunnel/.ssh/known_hosts" -o "IdentityFile /home/ssh-tunnel/.ssh/ssh-tunnel_ed25519" -o "PubkeyAuthentication=yes" -o "PasswordAuthentication=no" -o "ServerAliveInterval 900" -TN -R30582:localhost:5724 -p 7261

to automatically start our ssh tunnel, and restart it if anything goes wrong. Note all the extra settings I had to specify here. Even though many of them were already in ~/.ssh/config for the ssh-tunnel user, systemd starts services with a rather minimal environment, so I found I had to specify everything on the command line with absolute paths (ugh).

Basically, the above tells autossh where the known_hosts file is (important for automation!), that it should only attempt public / private keypair authentication and not password authentication, that it should check the server's still there every 15 minutes, and all the other things we figured out above.
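
One extra autossh detail worth knowing about when launching it from a service at boot: by default, autossh gives up entirely if the very first connection attempt fails. Setting the AUTOSSH_GATETIME environment variable to 0 tells it to keep retrying instead, which you can do with an extra line in the [Service] section of the unit files below:

Environment=AUTOSSH_GATETIME=0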

Finally, I combined the solutions I came up with for both problems, which left me with this:

[Unit]
Description=SSH tunnel from the remote server to the local ssh server.

[Service]
ExecStart=/bin/sh -c 'until ping -c1 &>/dev/null && sleep 5; do :; done && /usr/bin/autossh -o "UserKnownHostsFile /home/pi/.ssh/known_hosts" -o "IdentityFile /home/pi/.ssh/ssh-tunnel_ed25519" -o "PubkeyAuthentication=yes" -o "PasswordAuthentication=no" -o "ServerAliveInterval 900" -TN -R30582:localhost:5724 -p 7261'


Here's a version that utilises the -f parameter of autossh to put the autossh into the background, which eliminates the sh parent process:

[Unit]
Description=SSH tunnel from the remote server to the local ssh server.

[Service]
ExecStartPre=/bin/mkdir -p /var/run/sbrl-ssh-tunnel
ExecStartPre=-/bin/chown ssh-tunnel:ssh-tunnel /var/run/sbrl-ssh-tunnel
ExecStart=/bin/sh -c 'until ping -c1 &>/dev/null && sleep 5; do :; done && /usr/bin/autossh -f -o "UserKnownHostsFile /home/pi/.ssh/known_hosts" -o "IdentityFile /home/pi/.ssh/ssh-tunnel_ed25519" -o "PubkeyAuthentication=yes" -o "PasswordAuthentication=no" -o "ServerAliveInterval 900" -TN -R30582:localhost:5724 -p 7261'
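
Whichever variant you go for, getting systemd to pick it up is just a case of dropping the unit file into /etc/systemd/system/ and enabling it. The filename here is just an example - and note that for enable to work, the unit file also needs an [Install] section containing WantedBy=multi-user.target:

sudo systemctl daemon-reload
sudo systemctl enable ssh-tunnel.service
sudo systemctl start ssh-tunnel.service
sudo systemctl status ssh-tunnel.service    # check that the tunnel came up ok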


I ended up further modifying the above to set up an additional tunnel to allow elessar to send emails via the postfix email server that's running on the remote machine. Let me know if you'd be interested in a tutorial on this!

Sources and Further Reading

Make your linux learning experience painless with tldr-pages!

If you've been learning linux for a little while, you'll probably have encountered man pages. They are the complete documentation of all the tools, commands (and kernel functions) available on the system you're currently on (read them online here!). If you have encountered them, you'll also know that they usually are somewhat... verbose.

Enter stage left: tldr-pages!

tldr-pages is an ongoing effort to create a repository of simplified man pages that document the most common usages of a command. How about this, for the tar command?

# tar

> Archiving utility.
> Often combined with a compression method, such as gzip or bzip.

- Create an archive from files:

`tar cf {{target.tar}} {{file1 file2 file3}}`

- Create a gzipped archive:

`tar czf {{target.tar.gz}} {{file1 file2 file3}}`

- Extract an archive in a target folder:

`tar xf {{source.tar}} -C {{folder}}`

- Extract a gzipped archive in the current directory:

`tar xzf {{source.tar.gz}}`

- Extract a bzipped archive in the current directory:

`tar xjf {{source.tar.bz2}}`

- Create a compressed archive, using archive suffix to determine the compression program:

`tar caf {{target.tar.xz}} {{file1 file2 file3}}`

- List the contents of a tar file:

`tar tvf {{source.tar}}`

...or this for git reset?

# git reset

> Undo commits or unstage changes, by resetting the current git HEAD to the specified state.
> If a path is passed, it works as "unstage"; if a commit hash or branch is passed, it works as "uncommit".

- Unstage everything:

`git reset`

- Unstage specific file(s):

`git reset {{path/to/file(s)}}`

- Unstage portions of a file:

`git reset -p {{path/to/file}}`

- Undo the last commit, keeping its changes (and any further uncommitted changes) in the filesystem:

`git reset HEAD~`

- Undo the last two commits, adding their changes to the index, i.e. staged for commit:

`git reset --soft HEAD~2`

- Discard any uncommitted changes, staged or not (for only unstaged changes, use `git checkout`):

`git reset --hard`

- Reset the repository to a given commit, discarding committed, staged and uncommitted changes since then:

`git reset --hard {{commit}}`

For those learning linux and the terminal, I think it's an invaluable tool. It helps you out by showing you how to perform common tasks. As you get more experienced though, it becomes useful in another way: showing you how to do those things that you don't do often enough to remember off the top of your head.

I'm probably a bit biased, since I've been contributing to the project for a while (and the nice folks over there recently promoted me to the rank of maintainer :D), so you should check it out for yourself! There's even an online client that you can use without installing anything :-) Once you're ready to install a client directly in your terminal, there's an extensive list of clients documented on the repository wiki, with one available for every environment and platform.
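
As an example, the Node.js client can be installed through npm - though the other clients on that list are usually just as easy to get hold of:

sudo npm install -g tldr
tldr tar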

If you encounter a command that hasn't been documented yet, then they've also made it easy to contribute a page yourself.

I think the idea is rather cool, actually - as you've probably guessed by now! Let me know what you think of it in the comments.

How to set up a WebDav share with Nginx

I've just been setting up a WebDav share on a Raspberry Pi 3 for my local network (long story), and since it was a bit of a pain to set up (I had to combine a bunch of different tutorials out there to make mine work), I thought I'd share how I did it here.

I'll assume you have a Raspberry Pi all set up and up-to-date in headless mode, with ufw as your firewall (if you need help with this, post in the comments below or check out the Raspberry Pi Stack Exchange). To start with, we need to install the nginx-full package:

sudo apt update
sudo apt install nginx-full

Note that we need the nginx-full package here, because the nginx-extras or just simply nginx packages don't include the required additional webdav support modules. Next, we need to configure Nginx. Nginx's configuration files live at /etc/nginx/nginx.conf, and in /etc/nginx/conf.d. I did something like this for my nginx.conf:

user www-data;
worker_processes 4;
pid /run/;

events {
    worker_connections 768;
    # multi_accept on;
}

http {

    # Basic Settings

    sendfile on;
    tcp_nopush on;
    tcp_nodelay on;
    keepalive_timeout 65;
    types_hash_max_size 2048;
    # server_tokens off;

    # server_names_hash_bucket_size 64;
    # server_name_in_redirect off;

    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    # SSL Settings

    ssl_protocols TLSv1 TLSv1.1 TLSv1.2; # Dropping SSLv3, ref: POODLE
    ssl_prefer_server_ciphers on;

    # Logging Settings

    access_log /var/log/nginx/access.log;
    error_log /var/log/nginx/error.log;

    # Gzip Settings

    gzip on;

    gzip_vary on;
    gzip_proxied any;
    gzip_comp_level 6;
    gzip_buffers 16 8k;
    gzip_http_version 1.1;
    gzip_types text/plain text/css application/json application/javascript text/xml application/xml application/xml+rss text/javascript;

    # Virtual Host Configs

    include /etc/nginx/conf.d/*.conf;
    include /etc/nginx/sites-enabled/*;
}

Not many changes here. Then, I created a file called 0-webdav.conf in the conf.d directory, and this is what I put in it:

server {
    listen 80;
    listen [::]:80;


    auth_basic              realm_name;
    auth_basic_user_file    /etc/nginx/.passwords.list;

    dav_methods     PUT DELETE MKCOL COPY MOVE;
    dav_ext_methods PROPFIND OPTIONS;
    dav_access      user:rw group:rw all:r;

    client_body_temp_path   /tmp/nginx/client-bodies;
    client_max_body_size    0;
    create_full_put_path    on;

    root /mnt/hydroplans;
}

Now this is where the magic happens. The dav_access directive tells nginx to allow everyone to read, but only logged in users to write to the share. This isn't actually particularly relevant, because of the auth_basic and auth_basic_user_file directives, which tell nginx to require people to login to the share before they are allowed to access it.

It's also important to note that I found that Windows (10, at least), didn't like the basic authentication - even though Ubuntu's Nautilus accepted it just fine - so I had to comment that bit out :-(

If you do still want authentication (hey, maybe you'll have better luck than I did :P), then you'll need to set up the passwords file. Here's how you create it:

echo -n 'helen:' | sudo tee /etc/nginx/.passwords.list
openssl passwd -apr1 | sudo tee -a /etc/nginx/.passwords.list 

The above creates a user called helen, and asks you to type a password. If you're adding another user to the file, simply change the first tee to be tee -a to avoid overwriting the first one.

With that all configured, it's time to test the configuration file, and, if we're lucky, restart nginx!

sudo nginx -t
sudo systemctl restart nginx
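
Since the Pi in this example is sitting behind ufw, don't forget to let HTTP traffic through as well, or nothing will be able to reach the share:

sudo ufw allow 80/tcp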

That's all you should need to do to set up a simple WebDav share. Remember that this is a starting point, and not an ending point - there are a few big holes in the above that you'll need to address, depending on your use case (for example, I haven't included the setup of https / encryption - try letsencrypt for that).

Here are the connection details for the above for a few different clients:

  • Ubuntu / Nautilus: (Go to "Other Locations" and paste this into the "Connect to Server" box) dav://
  • Windows: (Go to "Map Network Drive" and paste this in)
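
You can also poke the share from any machine with curl to check that it's working - a quick sketch, where the address, username, and filename are all placeholders for your own:

curl -u helen -T hello.txt http://192.168.0.50/hello.txt    # upload a file via a WebDav PUT request
curl -u helen http://192.168.0.50/hello.txt                 # ...and fetch it back again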

Did this work for you? Have any problems? Got instructions for a WebDav client not listed here? Let me know in the comments!

Running Prolog on Linux

The learning prolog banner.

Hello! I hope you had a nice restful Easter. I've been a bit busy these last 6 months, but I've got a holiday at the moment, and I've just received a lovely email about my learning prolog series. It's reminded me how much I actually rather like (swi) prolog (I'm a bit nuts, I know :P), and I've decided that it would be splendid if I could get back into it a bit.

I'm not sure whether it'll go anywhere, but I'm going to look into web crawling and parsing HTML. Anyway, since I'm re-discovering prolog, I had a teensy bit of trouble getting prolog to run one of my old scripts just now, so I thought I'd blog about it to save me the trouble next time :D

Running prolog scripts on linux isn't actually that hard. First, you need to install SWI-Prolog:

sudo add-apt-repository ppa:swi-prolog/stable
sudo apt install swi-prolog-nox

Then, you run a prolog script like this:

swipl -f ./
Welcome to SWI-Prolog (threaded, 64 bits, version 7.4.1)
SWI-Prolog comes with ABSOLUTELY NO WARRANTY. This is free software.
Please run ?- license. for legal details.

For online help and background, visit
For built-in help, use ?- help(Topic). or ?- apropos(Word).


...then you can interact with your Prolog knowledge bases as normal. When you want to exit, the easiest way I've found is to press CTRL + D.

If, however, you're lazy and don't want to type prolog -f out every time you want to run your script, you can prefix your file with the following shebang:

#!/usr/bin/prolog -f 

Make sure it's the very first line in the file - Prolog appears to ignore it and carry on regardless. Then you can run your script directly (making sure you've chmod +x'd it first):
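
For example, assuming your knowledge base is saved as hello.pl (the filename here is made up - substitute your own):

chmod +x ./hello.pl
./hello.pl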


zram: A seriously cool way to do more with your ram

Got memory issues? Keep running out of ram? Want to store a ridiculously large file in ram because your disk (or network) is slow? If you're running linux, then look no further than zram - a hidden gem available since 3.14.

It's a little bit underdocumented, but thankfully easy to figure out, so I'm gathering everything I can find about it into one place.

zram is a kernel module for linux version 3.14 and above that allows you to use a portion of your ram to create virtual block devices (like /dev/sda1 or /dev/sdb3 for example) that actually compress data you put on them and store it in ram. It's installed by default (at least in ubuntu 16.10!) - just not enabled - so you should already have access to it if you're running a fairly recent kernel.

To start using it, you first have to enable it and load it into your running kernel. Despite how dangerous that might sound, it's actually quite safe (I think :P). Obviously, you'll need sudo (or 'Administrator' for Windows fans) privileges to do this - and if you want it to persist across reboots, you'll have to edit some file like /etc/rc.local or similar (though writing an init.d / upstart / systemd script or service file would probably be a better way to do it).

Here's the command to load the required kernel module:

sudo modprobe zram num_devices=2

In the above, I load the zram kernel module, and tell it to create 2 virtual devices - /dev/zram0 and /dev/zram1 (why it's called modprobe when it actually loads the module I have no idea). You can have as many devices as you want here - they don't do anything until they are initialised.
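
If you'd rather not fiddle with /etc/rc.local to get the module loaded at boot, another approach on systemd-based distributions is to drop a couple of small config files in place - a sketch (the filenames are up to you):

echo "zram" | sudo tee /etc/modules-load.d/zram.conf
echo "options zram num_devices=2" | sudo tee /etc/modprobe.d/zram.conf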

Next, as you may have guessed, you need to initialise one of the new virtual devices in order to use it. That's fairly easy too:

sudo zramctl --find --streams 2 --size 2GB --algorithm lz4

I've specified a few more options here, so I'll go through them in turn.

  • --find - This tells it to use the next available and uninitialised device. It'll echo the path to the device it ended up using to your terminal once it's done initialising.
  • --streams 2 - Sets the number of compression streams the device has. I'd recommend setting this to something around the number of cpu cores you have.
  • --size 2GB - Here I tell it the size of the virtual disk I want it to create. At most the device will use this amount of memory. I've specified 2GB here, but you should allocate a different amount based on your available ram - KB and MB are supported as suffixes here too.
  • --algorithm lz4 - This is the compression algorithm I told it to use. Currently it only supports 'lzo' and 'lz4' - I'm not sure which one is better. If you do, let me know in the comments :-)
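
Once a device has been initialised, you can check up on it at any time by running zramctl with no arguments - it prints a table of the configured devices along with their sizes and how much compressed data they're currently holding:

sudo zramctl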

Now that the zram device is all initialised, we can start doing things with it! Need a place to put a few files? Try this command:

sudo mke2fs -t ext4 -O ^has_journal -L "zram device" /dev/zram0

In the above I format /dev/zram0 as an ext4 partition. Note that I also disable journaling - journaling is a system that ensures integrity in the event of an unplanned shutdown or loss of power - great for persistent drives like the one holding your operating system, but not so useful for one that'll get deleted automatically on shutdown anyway :-)
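
With the filesystem created, all that's left is to mount it somewhere before you can start putting files on it - something like this (the mount point is just an example):

sudo mkdir -p /mnt/zram
sudo mount /dev/zram0 /mnt/zram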

Perhaps you need a bit of extra speedy swap space because you don't have quite enough ram? That's easy too:

sudo mkswap --label 'zram swap parition' /dev/zram0
sudo swapon --priority 10 /dev/zram0

In the above I format /dev/zram0 as a swap partition, and then turn it on with a priority of 10. You can check which swap devices are currently active with swapon --show (or by looking at /proc/swaps). You can learn more about swap partitions here.

That concludes this post. Got any cool uses for this? Found something I've missed? Write a comment below!

Sources and further reading

How to update your linux kernel version on a KimSufi server

(Or why PHP throws random errors in the latest update)

Hello again!

Since I had a bit of a time trying to find some clear information on the subject, I'm writing this blog post so that it might help others. Basically, last night I updated the packages on my server (the one that runs this website!). There was a PHP update, but I didn't think much of it.

This morning, I tried to access my ownCloud instance, only to discover that it was throwing random errors and refusing to load. It was spewing errors like this one:

PHP message: PHP Fatal error:  Uncaught Exception: Could not gather sufficient random data in /srv/owncloud/lib/private/Security/SecureRandom.php:80
Stack trace:
#0 /srv/owncloud/lib/private/Security/SecureRandom.php(80): random_int(0, 63)
#1 /srv/owncloud/lib/private/AppFramework/Http/Request.php(484): OC\Security\SecureRandom->generate(20)
#2 /srv/owncloud/lib/private/Log/Owncloud.php(90): OC\AppFramework\Http\Request->getId()
#3 [internal function]: OC\Log\Owncloud::write('PHP', 'Uncaught Except...', 3)
#4 /srv/owncloud/lib/private/Log.php(298): call_user_func(Array, 'PHP', 'Uncaught Except...', 3)
#5 /srv/owncloud/lib/private/Log.php(156): OC\Log->log(3, 'Uncaught Except...', Array)
#6 /srv/owncloud/lib/private/Log/ErrorHandler.php(67): OC\Log->critical('Uncaught Except...', Array)
#7 [internal function]: OC\Log\ErrorHandler::onShutdown()
#8 {main}
  thrown in /srv/owncloud/lib/private/Security/SecureRandom.php on line 80" while reading response header from upstream, client: x.y.z.w, server: ownc

That's odd. After googling around a bit, I found this page on the Arch Linux bug tracker. I'm not using arch (Ubuntu 16.04.2 LTS actually), but it turned out that this comment shed some much-needed light on the problem.

Basically, PHP has changed the way it asks the Linux kernel for random bytes. It now uses the getrandom() kernel function instead of reading /dev/urandom as it did before. The trouble is that getrandom() was only introduced in linux 3.17, and I was running OVH's custom 3.14.32-xxxx-grs-ipv6-64 kernel.

Thankfully, after a bit more digging, I found this article. It suggests installing the kernel you want and moving one of the grub config file generators to another directory, but I found that simply changing the permissions did the trick.

Basically, I did the following:

apt update
apt full-upgrade
apt install linux-image-generic
chmod -x /etc/grub.d/06_OVHkernel
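
Then, to round things off, the grub configuration needs regenerating and the machine rebooting - these are the update-grub and reboot steps explained below:

update-grub
reboot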

The above first updates everything on the system, then installs the linux-image-generic package. linux-image-generic is the pseudo-package that always depends on the latest stable kernel image available.

Next, I remove execute privileges on the file /etc/grub.d/06_OVHkernel. This is the file that gives the default OVH kernel priority over any other installed kernels, so it's important to exclude it from the grub configuration process.

Lastly, I update my grub configuration with update-grub and then reboot. You need to make sure that you update your grub configuration file, since if you don't it'll still use the old OVH kernel!

With that all done, I'm now running 4.4.0-62-generic according to uname -a. If you follow these steps yourself, make sure you have a backup! While I am happy to try and help you out in the comments below, I'm not responsible for any consequences that may arise as a result of following this guide :-)

Fancy message of the day over SSH

Since my time to sit down for a good chunk of time and write some code has been extremely limited as of late, I've been playing around with a few smaller projects. One of those is a fancy message of the day when you log into a remote machine (in my case the server this website is hosted on!), and I thought I'd share it here.

My take on a fancy SSH message of the day.

The default message shown at the top when you login via ssh is actually generated by something called update-motd, and is generated from a set of scripts in /etc/update-motd.d. By customising these scripts, we can do almost anything we like!

To start off with, I disabled the execution of all the scripts in the directory (sudo chmod -x /etc/update-motd.d/*), and created a subfolder to store the script in that actually generated the system information (sudo mkdir /etc/update-motd.d/parts). Here's the script I wrote to generate the system information:

#!/usr/bin/env bash

. /etc/lsb-release

LOAD=$(cat /proc/loadavg | cut -d' ' -f 2);

CPU_COUNT=$(cat /proc/cpuinfo | grep -i "core id" | uniq | wc -l);
THREAD_COUNT=$(cat /proc/cpuinfo | grep -i "core id" | wc -l);

APT_UPDATE_DETAILS="$(/usr/lib/update-notifier/apt-check --human-readable | fold -w 40 -s)"

IPV4_ADDRESS=$(dig +short A
IPV6_ADDRESS=$(dig +short AAAA @2620:0:ccc::2);

LAST_LOGIN=$(last -1 | head -n 1 | awk '{ print $1,"at",$4,$5,$6,$7,"from",$3 }');


echo Welcome to $(hostname)
echo "  running ${DISTRIB_DESCRIPTION}"
echo Kernel: $(uname -r)
echo Uptime: $(uptime --pretty | sed -e 's/up //')
echo Load: ${LOAD}
#echo Last login: ${LAST_LOGIN}

exit 0

Basically, I collect a bunch of information from random places on my system (several of which were taken from the existing scripts in /etc/update-motd.d/) and re-output them in a different format.
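
Don't forget to make the new script executable, and to test it on its own before wiring it into the motd - something like this, assuming you saved it as /etc/update-motd.d/parts/sysinfo as I did:

sudo chmod +x /etc/update-motd.d/parts/sysinfo
/etc/update-motd.d/parts/sysinfo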

Then, I converted an image of my favicon logo with the brilliant catimg by posva to a set of unicode characters and sent that to a file (catimg -w 35 image.png >/etc/update-motd.d/sbrl-logo.txt) - you could alternatively use some ascii art from the internet (e.g. this site). Once done, I put the two together with the following script directly in my /etc/update-motd.d/ folder:

#!/usr/bin/env bash

### Settings ###

TMP_FILENAME="$(mktemp)"; # Temporary file to hold the generated system information

/etc/update-motd.d/parts/sysinfo >$TMP_FILENAME

### Output ###
pr -mtJ /etc/update-motd.d/sbrl-logo.txt $TMP_FILENAME


### Cleanup ###
rm "$TMP_FILENAME"

Finally, I manually cleared and regenerated the message of the day with sudo update-motd, giving the result you see at the top of this blog post. I also made sure to re-enable the execution of the other scripts I didn't use in my fancy motd, so as to not miss out on their notifications.

If you're interested, I've generated an archive of my final /etc/update-motd.d folder (minus my logo in text format), which you can find here: 20170203-Fancy-Motd.7z.

Can you do better? Got a cool enhancement of your own? Post about it below!

Developing and Running C# Programs on Linux

Recently I was asked about running C# on Linux, and I remembered that I haven't actually written a blog post on it! This is that blog post I never wrote: A beginner's guide on how to develop, compile, and run C# programs on Linux. Here I assume a debian-based system (specifically Ubuntu 16.04), but it can be just as easily adapted to work with other flavours.

To start off, you'll need to add the mono repository to your system's apt repository list. Taken from the official mono website, here are the commands to do that:

sudo apt-key adv --keyserver hkp:// --recv-keys 3FA7E0328081BFF6A14DA29AA6A19B38D3D831EF
echo "deb wheezy main" | sudo tee /etc/apt/sources.list.d/mono-xamarin.list
sudo apt update

The more experienced may notice that although I'm installing on an Ubuntu 16.04 system, I'm still specifying wheezy when installing mono. This shouldn't make too much of a difference - at the time of writing, the mono repository only provides packages for wheezy.

Next, it's time to actually install mono itself. This is easy:

sudo apt install mono-mcs

The above ought to install the mono runtime and compiler. If you experience issues down the line, simply install the mono-complete package instead. With this installed, you should now be able to launch compiled programs written in C♯ by double-clicking them in your file manager - without any modifications. C♯ programs compiled on Windows can be run on Linux, and vice versa (this is because C♯ actually compiles into something called the Common Intermediate Language, or CIL).

If you're on a server, then you can run your programs by prefixing the executable name with mono, like this:

mono program-of-awesomeness.exe

If your program crashes, however, the output is not very helpful at all. Thankfully though there's a way to remedy that. First, make sure your program is compiled in debug mode, and then add the --debug flag when running your program:

mono --debug another-awesome-program.exe

This is all very well, but what about compiling? That's relatively easy as well. mcs is the linux equivalent of csc on Windows, and behaves almost identically, with some minor syntactical changes (read up about it by typing mcs --help or man mcs). For Visual Studio solutions, there's xbuild. Xbuild is a new-ish build tool for Linux that is capable of compiling almost any Visual Studio solution file without any modifications (though there are some undocumented difficulties that you might run into as an advanced user).
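
For a quick single-file sketch of that workflow (the filename here is made up - substitute your own source file):

mcs -debug Program.cs       # compiles to Program.exe, with debugging symbols alongside
mono --debug Program.exe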

To use it, first install the mono-xbuild package (sudo apt install mono-xbuild), and then, in a terminal, cd into the directory that contains the solution you want to compile, and then type xbuild and hit enter. That's it!

All this work in the terminal is cool for running C♯ programs on GUI-less boxes and servers, but it's no way to develop a larger application. Thankfully, there's a solution to that too! Monodevelop is the best C♯ IDE out there at the moment - it's like Visual Studio for Linux. It's easy to install, too - simply install the monodevelop package (sudo apt install monodevelop).


(Above: Monodevelop running on my Linux laptop. The project open here is my sprite packing tool.)

The package in the official repositories should be good enough for general use (though it's probably out of date). For the latest version with all the latest features though, you'll have to compile it from source.

Sadly this is not a trivial process. To do it you need to be comfortable with the terminal and know your way reasonably well around a Linux system. If you still want to go ahead anyway, start by downloading the latest release and following the instructions. You'll probably find it keeps complaining about things not existing - usually a quick apt search {thing} reveals which package you need to install in order to get it to work. If you have trouble, post a comment below and I'll try to help you out.

Even without compiling monodevelop from source, it's still a pretty good IDE. It lets you create Visual-Studio-compatible solution files, and compile your code on the fly at the touch of a button.

That just about covers the basics of running C♯ on Linux. If there's anything I've missed or you'd like to ask about, post a comment below!

Importing your friends' public keys automatically with gpg

Today's post is just a quick one, as I've had a rather busy week that I'm now recovering from :)

Several times over the past few weeks I've been finding myself googling around to try and find a way to download someone's GPG / PGP public key and import it into GPG automatically, so that I can verify their GPG signatures. I'm posting the command here so that I don't have to keep looking for it :D

Here's the command to search for someone's public key:

gpg --keyserver --search

The above asks the given key server for GPG keys associated with the email address you specify. You don't actually have to provide the keyserver - if you leave it out, GPG will fall back to searching its default one.
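
As a made-up example (substitute the keyserver and email address for the ones you actually want), a search-and-import session looks something like this - gpg lists the matching keys, and importing one is just a matter of typing its number at the prompt:

gpg --keyserver keyserver.ubuntu.com --search-keys friend@example.com
gpg --keyserver keyserver.ubuntu.com --recv-keys <key-id>    # if you already know the key ID or fingerprint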

Sources and further reading
