
Deep dive: Email, Trust, DKIM, SPF, and more

Lots of parcels (Above: Lots of parcels. Hopefully you won't get this many through the door at once..... Source)

Now that I'm on holiday, I've got some time to write a few blog posts! As I've promised a few people a post on the email system, that's what I'll look at in this post. I'm going to take you on a deep dive through the email system and trust. We'll be journeying through the fields of DKIM signatures, and climb the SPF mountain. We'll also investigate why the internet needs to take this journey in the first place, and look at some of the challenges one faces when setting up their own mail server.

Hang on to your hats, ladies and gentlemen! If you get to the end, give yourself a virtual cookie :D

Before we start though, I'd like to mention that I'll be coming at this from the perspective of my own email server that I set up myself. Let me introduce to you the cast: Postfix (the SMTP MTA), Dovecot (the IMAP MDA), rspamd (the spam filter), and OpenDKIM (the thing that deals with DKIM signatures).

With that out of the way, let's begin! We'll start off by mapping out the journey a typical email undertakes.

The path a typical email takes. See the explanation below.

Let's say Bob Kerman wants to send Bill an email. Here's what happens:

  1. Bob writes the email and hits send. His email client connects to his email server, logs in, and asks the server to deliver a message for him.
  2. The server takes the email and reads the To header (in this case it's bill@billsboosters.com), figures out where Bill's mail server is located, connects to it, and asks it to deliver Bob's message to Bill. mail.billsboosters.com takes the email and files it in Bill's inbox.
  3. Bill connects to his mail server and retrieves Bob's message.

Of course, this is simplified in several places. mail.bobsrockets.com will obviously need to do a few DNS lookups to find billsboosters.com's mail server and fiddle with the headers of Bob's message a bit (such as adding a Received header etc.), and mail.billsboosters.com won't just accept the message for delivery without checking out the server it came from first. How does it check though? What's preventing seanssatellites.net from pretending to be bobsrockets.com and sending an imposter message?

Until relatively recently, the answer was, well, nothing really. Anyone could send an email to anyone else without having to prove that they could indeed send email in the name of a domain. Try it out for yourself by telnetting to a mail server on port 25 (unencrypted SMTP) and typing in something like this:

HELO mail.bobsrockets.com
MAIL From: <frank@franksfuel.io>
RCPT TO: <bill@billsboosters.com>
DATA
From: sean@seanssatellites.net
To: bill@billsboosters.com

Hello! This is an email to remind you.....
.
QUIT

Oh, my! Frank at franksfuel.io can connect to any mail server and pretend that sean@seanssatellites.net is sending a message to bill@billsboosters.com! Mail servers that allow this are called open relays, and today they usually find themselves on several blacklists within minutes. Ploys like these are easy to foil, thankfully (by only accepting mail for your own domains), but it still leaves the problem of what to do about random people connecting to your mail server delivering spam to your inbox that claims to be from someone they aren't supposed to be sending mail for.

In response, some mail servers demanded that the IP address connecting to send an email must reverse-resolve to the domain name that it wants to send email from. Clever, but when you remember that anyone can change their own PTR records, you realise that it's just a minor annoyance to the determined spammer, and another hurdle for the legitimate person setting up their own mail server!

Clearly, a better solution is needed. Time to introduce our first destination: SPF. SPF stands for Sender Policy Framework, and defines a mechanism by which a mail server can determine which IP addresses a domain allows mail to be sent from in its name. It's a TXT record that sits at the root of a domain. It looks something like this:

v=spf1 a mx ptr ip4:5.196.73.75 ip6:2001:41d0:e:74b::1 a:starbeamrainbowlabs.com a:mail.starbeamrainbowlabs.com -all

The above is my SPF TXT record for starbeamrainbowlabs.com. It's quite simple, really - let's break it down.

v=spf1

This just defines the version of the SPF standard. There's only one version so far, so we include this to state that this record is an SPF version 1 record.

a mx ptr

This says that mail is allowed from any IP address that matches the domain's a record or one of its mx records. The ptr mechanism additionally allows mail if the ptr record associated with the sender's IP resolves back to the domain the sender claims to be sending from, as described above (it does help with dealing with infected machines and such).

ip4:5.196.73.75 ip6:2001:41d0:e:74b::1

This bit says that the IP addresses 5.196.73.75 and 2001:41d0:e:74b::1 are explicitly allowed to send mail in the name of starbeamrainbowlabs.com.

a:starbeamrainbowlabs.com a:mail.starbeamrainbowlabs.com

After all of the above, this bit isn't strictly necessary, but it says that all the IP addresses found in the a records for starbeamrainbowlabs.com and mail.starbeamrainbowlabs.com are allowed to send mail in the name of starbeamrainbowlabs.com.

-all

Lastly, this says that if you're not on the list, then your message should be rejected! Other variants on this include ~all (which says "put it in the spam box instead"), and +all (which says "accept it anyway", though I can't see how that's useful :P).
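If you want to see the SPF record a domain is publishing for yourself, dig makes it easy - it's just a regular TXT lookup (starbeamrainbowlabs.com here is simply the example from above; any domain will do):

dig +short TXT starbeamrainbowlabs.com
# The entry beginning "v=spf1" in the output is the SPF record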

As you can see, SPF allows a mail server to verify if a given client is indeed allowed to send an email in the name of any particular domain name. For a while, this worked a treat - until a new problem arose.

Many of the mail servers on the internet didn't (and some probably still don't!) support encryption when connecting to and delivering mail, as certificates used to be expensive and difficult to get hold of (nowadays we've got LetsEncrypt, which gives out certificates for free!). The encryption used when mail servers connect to one another is practically identical to that used in HTTPS - so, if done correctly, the identity of the remote server can be verified and the emails exchanged encrypted - provided the world's certification authorities aren't corrupted, of course.
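As an aside, if you want to check whether a particular mail server supports encryption, openssl speaks just enough SMTP to ask it to upgrade the connection (mail.bobsrockets.com is, of course, just our fictional example):

openssl s_client -starttls smtp -connect mail.bobsrockets.com:25
# If the server supports STARTTLS, you'll get its certificate chain and an encrypted session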

Since most emails weren't encrypted when in transit, a new problem arose: man-in-the-middle attacks, whereby an email is altered by one or more servers in the delivery chain. Thinking about it - this could still happen today even with encryption, if any one server along an email's route is compromised. To this end, another mechanism was desperately needed - one that would allow the receiving mail server to verify that an email's content / headers hadn't been surreptitiously altered since it left the origin mail server - potentially preventing awkward misunderstandings.

Enter stage left: DKIM! DKIM stands for DomainKeys Identified Mail - which, in short, provides a method by which a receiving mail server can cryptographically prove that a message hasn't been altered during transit.

It works by having a public-private keypair: the private key (known only to the sending mail server) is used to sign things, and the public key (published in DNS) can verify those signatures. A hash of the email's headers / content is computed and signed with the private key. The resulting signature is then attached to the email in the DKIM-Signature header.

The receiving mail server does a DNS lookup to find the public key, and uses it to check the signature. It then computes its own hash of the email headers / content, and compares it against the one in the signature. If it matches, then the email hasn't been fiddled with along the way!

Of course, not all the headers in the email are hashed - only a specific subset are included in the hash, since some headers (like Received and X-Spam-Result) are added and altered during transit. If you're interested in implementing DKIM yourself - DigitalOcean have a smashing tutorial on the subject, which should adapt easily to whatever system you're running yourself.
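The public key itself lives in DNS too, in a TXT record at <selector>._domainkey.<domain> - the selector being a name chosen by the signing server and quoted in the s= tag of the DKIM-Signature header. Once you know the selector, you can fetch the key with dig (the selector default below is purely illustrative):

dig +short TXT default._domainkey.starbeamrainbowlabs.com
# Look for the p= tag in the output - that's the base64-encoded public key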

With both of those in place, billsboosters.com's mail server can now verify that mail.bobsrockets.com is allowed to send the email on behalf of bobsrockets.com, and that the message content hasn't been tampered with since it left mail.bobsrockets.com. mail.billsboosters.com can also catch franksfuel.io in the act of trying to deliver spam from seanssatellites.net!

There is, however, one last piece of the puzzle left to reveal. With all this in place, how do you know if your mail was actually delivered? Is it possible to roll SPF and DKIM out gradually so that you can be sure you've done it correctly? This can be a particular issue for businesses and larger email server setups.

This is where DMARC comes in. It's a standard that lets you specify an email address you'd like to receive DMARC reports at, which contain statistics as to how many messages receiving mail servers got that claimed to be from you, and what they did with them. It also lets you specify what percentage of messages should be subject to DMARC filtering, so you can roll everything out slowly. Finally, it lets you specify what should happen to messages that fail either SPF, DKIM, or both - whether they should be allowed anyway (for testing purposes), quarantined, or rejected.

DMARC policies get specified (yep, you guessed it!) in a DNS record. Unlike SPF though, they go in a TXT record at _dmarc.megsmicroprocessors.org, substituting megsmicroprocessors.org for your domain name. Here's an example:

v=DMARC1; p=none; rua=mailto:dmarc@megsmicroprocessors.org

This is just a simple example - you can get much more complex ones than this! Let's go through it step by step.

v=DMARC1;

Nothing to see here - just a version number as in SPF.

p=none;

This is the policy of what should happen to messages that fail. In this example we've used none, so messages that fail will still pass right on through. You can set it to quarantine or even reject as you gain confidence in your setup.

rua=mailto:dmarc@megsmicroprocessors.org

This specifies where you want DMARC reports to be sent. Each mail server that receives mail from your mail server will bundle up statistics and send them once a day to this address. The reports are in XML (which isn't particularly easy to read by hand), but there are free DMARC report parsers out there on the internet that you can use to decode them, like dmarcian.
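As with SPF, you can inspect the DMARC policy that any domain publishes with a quick dig query:

dig +short TXT _dmarc.megsmicroprocessors.org
# Returns the policy record, if one has been published - e.g. the example above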

That completes the puzzle. If you're still reading, then congratulations! Post in the comments and say hi :D We've climbed the SPF mountain and discovered how email servers validate who is allowed to send mail in the name of another domain. We've visited the DKIM signature fields and seen how the content of email can be checked to see if it's been altered during transit. Lastly, we took a stroll down DMARC lane to see how it's possible to be sure what other servers are doing with your mail, and how a large email server setup can implement DMARC, DKIM, and SPF more easily.

Of course, I'm not perfect - if there's something I've missed or got wrong, please let me know! I'll try to correct it as soon as possible.

Lastly, this is, as always, a starting point - not an ending point. An introduction if you will - it's up to you to research each technology more thoroughly - especially if you're thinking of implementing them yourself. I'll leave my sources at the bottom of this post if you'd like somewhere to start looking :-)

Sources and Further Reading

Semi-automated backups with duplicity and an external drive

A bunch of hard drives. (Above: A bunch of hard drives. The original can be found here.)

Since I've recently got myself a raspberry pi to act as a server, I naturally needed a way to back it up. Not seeing anything completely to my tastes, I ended up putting something together that did the job for me. For this I used an external hard drive, duplicity, sendxmpp (sudo apt install sendxmpp), and a bit of bash.

Since it's gone rather well for me so far, I thought I'd write a blog post on how I did it. It still needs some tidying up, of course - but it works in its current state, and perhaps it will help someone else put together their own system!

Step 1: Configuring the XMPP server

I use XMPP as my primary instant messaging server, so it's only natural that I'd want to integrate the system in with it to remind me when to plug in the external drive, and so that it can tell me when it's done and what happened. Since I use prosody as my XMPP server, I can execute the following on the server:

sudo prosodyctl adduser rasperrypi@bobsrockets.com

...and then enter a random password for the new account. From there, I set up a new private persistent multi-user chatroom for the messages to filter into, and set my client to always notify when a message is posted.

After that, it was a case of creating a new config file in a format that sendxmpp will understand:

rasperrypi@bobsrockets.com:5222 thesecurepassword
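Before going any further, it's worth checking that sendxmpp can actually log in and post to the chatroom with a quick test message (the config file path and chatroom address below are just examples - substitute your own):

echo "Test message from the backup system!" | sendxmpp --file ~/.sendxmpprc --tls --chatroom backups@chat.bobsrockets.com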

Step 2: Finding the id of the drive partition

With the XMPP side of things configured, next I needed a way to detect if the drive was plugged in or not. Thankfully all partitions have a unique id built-in, which you can use to see if the drive is present or not. It's easy to find, too:

sudo blkid

The above will list all available partitions and their UUID - the unique id I mentioned. With that in hand, we can now check if it's plugged in or not with a cleverly crafted use of the readlink command:

readlink /dev/disk/by-uuid/${partition_uuid} 1>/dev/null 2>&1;
partition_found=$?
if [[ "${partition_found}" -eq "0" ]]; then
    echo "It's plugged in!";
else
    echo "It's not plugged in :-(";
fi

Simple, right? readlink has an exit code of 0 if it managed to read the symbolic link in /dev/disk/by-uuid ok, and 1 if it didn't. The symbolic links in /dev/disk/by-uuid are helpfully created automatically for us :D From here, we can take it a step further to wait until the drive is plugged in:

# Wait until the drive is available
while true
do
    readlink "${partition_uuid}";

    if [[ "$?" -eq 0 ]]; then
        break
    fi

    sleep 1;
done

Step 3: Mounting and unmounting the drive

Raspberry Pis don't mount drives automatically, so we'll have to do that ourselves. Thankfully, it's not so tough:

# Create the folder to mount the drive into
mkdir -p ${backup_drive_mount_point};
# Mount it in read-write mode
mount "/dev/disk/by-uuid/${partition_uuid}" "${backup_drive_mount_point}" -o rw;

# Do backup thingy here

# Sync changes to disk
sync
# Unmount the drive
umount "${backup_drive_mount_point}";

Make sure you've got the ntfs-3g package installed if you want to back up to an NTFS volume (Raspberry Pis don't come with it by default!).

Step 4: Backup all teh things!

There are more steps involved in getting to this point than I thought there were, but if you've made it this far, then congrats! Have a virtual cookie :D 🍪

The next part is what you probably came here for: duplicity itself. I've had an interesting time getting this to work so far, actually. It's probably easier if I show you the duplicity commands I came up with first.

# Create the archive & temporary directories
mkdir -p /mnt/data_drive/.duplicity/{archives,tmp}/{os,data_drive}
# Do a new backup
PASSPHRASE=${encryption_password} duplicity --full-if-older-than 2M --archive-dir /mnt/data_drive/.duplicity/archives/os --tempdir /mnt/data_drive/.duplicity/tmp/os --exclude /proc --exclude /sys --exclude /tmp --exclude /dev --exclude /mnt --exclude /var/cache --exclude /var/tmp --exclude /var/backups / file://${backup_drive_mount_point}/duplicity-backups/os/
PASSPHRASE=${data_drive_encryption_password} duplicity --full-if-older-than 2M --archive-dir /mnt/data_drive/.duplicity/archives/data_drive --tempdir /mnt/data_drive/.duplicity/tmp/data_drive /mnt/data_drive --exclude '**.duplicity/**' file://${backup_drive_mount_point}/duplicity-backups/data_drive/

# Remove old backups
PASSPHRASE=${encryption_password} duplicity remove-older-than 6M --force --archive-dir /mnt/data_drive/.duplicity/archives/os file:///${backup_drive_mount_point}/duplicity-backups/os/
PASSPHRASE=${data_drive_encryption_password} duplicity remove-older-than 6M --force --archive-dir /mnt/data_drive/.duplicity/archives/data_drive file:///${backup_drive_mount_point}/duplicity-backups/data_drive/

Path names have been altered for privacy reasons. The first duplicity command in the above was fairly straightforward - back up everything, except a few folders with cache files / temporary / weird stuff in them (like /proc).

I ended up having to specify the archive and temporary directories here to be on another disk because the Raspberry Pi I'm running this on has a rather... limited capacity on its internal micro SD card, so the default location for both isn't a good idea.

The second duplicity call is a little more complicated. It backs up the data disk I have attached to my Raspberry Pi to the external drive I've got plugged in that we're backing up to. The awkward bit comes when you realise that the archive and temporary directories are located on this same data disk that we're trying to back up. To this end, I eventually found (through lots of fiddling) that you can tell duplicity to exclude a folder via the --exclude '**.duplicity/**' syntax. I've no idea why it's different when you're not backing up the root of the filesystem, but it is (--exclude ./.duplicity/ didn't work, and neither did /mnt/data_drive/.duplicity/).

The final two duplicity calls just clean up and remove old backups that are older than 6 months, so that the drive doesn't fill up too much :-)
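Of course, a backup you can't restore from isn't much of a backup at all, so it's worth knowing what the reverse operation looks like. Something along these lines should do the trick - the paths mirror the ones above, and the restore target directory is just an example:

# Restore the latest backup of the data disk into a scratch directory
PASSPHRASE=${data_drive_encryption_password} duplicity restore --archive-dir /mnt/data_drive/.duplicity/archives/data_drive file://${backup_drive_mount_point}/duplicity-backups/data_drive/ /tmp/duplicity-restore-test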

Step 5: What? Where? Who?

We've almost got every piece of the puzzle, but there's still one left: letting us know what's going on! This is a piece of cake in comparison to the above:

function xmpp_notify {
        echo "$1" | sendxmpp --file "${xmpp_config_file}" --resource "${xmpp_resource}" --tls --chatroom "${xmpp_target_chatroom}"
}

Easy! All we have to do is point sendxmpp at our config file we created waaay in step #1, and tell it where the chatroom is that we'd like it to post messages in. With that, we can put all the pieces of the puzzle together:

#!/usr/bin/env bash

source .backup-settings

function xmpp_notify {
    echo "$1" | sendxmpp --file "${xmpp_config_file}" --resource "${xmpp_resource}" --tls --chatroom "${xmpp_target_chatroom}"
}

xmpp_notify "Waiting for the backup disk to be plugged in.";

# Wait until the drive is available
while true
do
    readlink "${backup_drive_dev}";

    if [[ "$?" -eq 0 ]]; then
        break
    fi

    sleep 1;
done

xmpp_notify "Backup disk detected - mounting";

mkdir -p ${backup_drive_mount_point};

mount "${backup_drive_dev}" "${backup_drive_mount_point}" -o rw

xmpp_notify "Mounting complete - performing backup";

# Create the archive & temporary directories
mkdir -p /mnt/data_drive/.duplicity/{archives,tmp}/{os,data_drive}

echo '--- Root Filesystem ---' >/tmp/backup-status.txt
# Create the archive & temporary directories
mkdir -p /mnt/data_drive/.duplicity/{archives,tmp}/{os,data_drive}
# Do a new backup
PASSPHRASE=${encryption_password} duplicity --full-if-older-than 2M --archive-dir /mnt/data_drive/.duplicity/archives/os --tempdir /mnt/data_drive/.duplicity/tmp/os --exclude /proc --exclude /sys --exclude /tmp --exclude /dev --exclude /mnt --exclude /var/cache --exclude /var/tmp --exclude /var/backups / file://${backup_drive_mount_point}/duplicity-backups/os/ >>/tmp/backup-status.txt 2>&1
echo '--- Data Disk ---' >>/tmp/backup-status.txt
PASSPHRASE=${data_drive_encryption_password} duplicity --full-if-older-than 2M --archive-dir /mnt/data_drive/.duplicity/archives/data_drive --tempdir /mnt/data_drive/.duplicity/tmp/data_drive /mnt/data_drive --exclude '**.duplicity/**' file://${backup_drive_mount_point}/duplicity-backups/data_drive/ >>/tmp/backup-status.txt 2>&1

xmpp_notify "Backup complete!"
cat /tmp/backup-status.txt | sendxmpp --file "${xmpp_config_file}" --resource "${xmpp_resource}" --tls --chatroom "${xmpp_target_chatroom}"
rm /tmp/backup-status.txt

xmpp_notify "Performing cleanup."

PASSPHRASE=${encryption_password} duplicity remove-older-than 6M --force --archive-dir /mnt/data_drive/.duplicity/archives/os file:///${backup_drive_mount_point}/duplicity-backups/os/
PASSPHRASE=${data_drive_encryption_password} duplicity remove-older-than 6M --force --archive-dir /mnt/data_drive/.duplicity/archives/data_drive file:///${backup_drive_mount_point}/duplicity-backups/data_drive/

sync;
umount "${backup_drive_mount_point}";

xmpp_notify "Done! Backup completed. You can now remove the backup disk."

I've tweaked a few of the pieces to get them to work better together, and created a separate .backup-settings file to store all the settings in.

That completes my backup script! Found this useful? Got an improvement? Use a different strategy? Post a comment below!

An (unscientific) Introduction to I2C

I've recently bought an LCD display for a project. Since I don't have many pins to play with, I ended up buying an I2C-driven display to cut the data pins down to just 2: one for the data line (SDA), and one for the clock (SCL).

It's taken me some time to get to grips with the idea of I2C, so I thought I'd write up what I've learnt so far here, along with some helpful hints if you run into problems yourself.

In effect, I2C is a wire protocol that allows multiple devices to talk to each other over a single pair of wires. Every I2C device has a 7 bit hardware address burned into it that it uses to address itself - much like IP addresses on the internet, actually. Devices can send messages to one another using these addresses - though not all at the same time, obviously!

If you want to talk directly over I2C with a device, then Wire.h is the library you want to use. Normally though, devices will come with their own library that utilises Wire.h and communicates with it for you.
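In case you're curious what talking to a device directly with Wire.h looks like, here's a minimal sketch - the address 0x27 is purely illustrative (it happens to be a common default for LCD backpacks), and the bytes sent mean nothing in particular, so substitute whatever your device's datasheet asks for:

#include <Wire.h>

const byte DEVICE_ADDRESS = 0x27; // Illustrative - use the address the scanner below finds

void setup()
{
    Serial.begin(9600);
    Wire.begin(); // Join the I2C bus as the master

    Wire.beginTransmission(DEVICE_ADDRESS); // Start talking to the device
    Wire.write(0x00); // What these bytes mean depends entirely on the device
    Wire.write(0x42);
    byte error = Wire.endTransmission(); // Actually send them; 0 means the device acknowledged

    Serial.println(error == 0 ? "Device acknowledged :D" : "No response :-(");
}

void loop() { }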

As a good first test to see if I2C is working, I found an I2C scanner that scans for connected devices. Since the address space is so limited, it doesn't take long at all:

/* --------------------------------------
 * i2c_scanner
 * 
 * Version 1
 *  This program (or code that looks like it)
 *  can be found in many places.
 *  For example on the Arduino.cc forum.
 *  The original author is not know.
 * Version 2, Juni 2012, Using Arduino 1.0.1
 *  Adapted to be as simple as possible by Arduino.cc user Krodal
 * Version 3, Feb 26  2013
 *  V3 by louarnold
 * Version 4, March 3, 2013, Using Arduino 1.0.3
 *  by Arduino.cc user Krodal.
 *  Changes by louarnold removed.
 *  Scanning addresses changed from 0...127 to 1...119,
 *  according to the i2c scanner by Nick Gammon
 * 
 * Version 5, March 28, 2013
 *  As version 4, but address scans now to 127.
 *  A sensor seems to use address 120.
 * Version 6, November 27, 2015.
 *  Added waiting for the Leonardo serial communication.
 * This sketch tests the standard 7-bit addresses
 * Devices with higher bit address might not be seen properly.
 */

#include <Wire.h>

void setup()
{
    Wire.begin();

    Serial.begin(9600);
    while (!Serial);             // Leonardo: wait for serial monitor
    Serial.println("\nI2C Scanner");
}

void loop()
{
    byte error, address;
    int nDevices;

    Serial.println("Scanning...");

    nDevices = 0;
    for(address = 1; address < 127; address++ )
    {
        // The i2c_scanner uses the return value of
        // the Write.endTransmission to see if
        // a device did acknowledge to the address.
        Wire.beginTransmission(address);
        error = Wire.endTransmission();

        if (error == 0)
        {
            Serial.print("I2C device found at address 0x");
            if (address<16)
                Serial.print("0");
            Serial.print(address,HEX);
            Serial.println("  !");

            nDevices++;
        }
        else if (error==4)
        {
            Serial.print("Unknown error at address 0x");
            if (address<16)
                Serial.print("0");
            Serial.println(address,HEX);
        }
    }
    if (nDevices == 0)
    Serial.println("No I2C devices found\n");
    else
    Serial.println("done\n");

    delay(5000);           // wait 5 seconds for next scan
}

As the initial comment mentions, I can't claim ownership of this code! I got it from here.

With the code in mind, it's time to look at the circuit design.

A simple I2C circuit connecting an Arduino Uno v3 and an LCD display.

(Above: A simple I2C circuit. Credits go to openclipart.org for the images.)

The above connects an Arduino Uno version 3 with a simple LCD display via a breadboard to allow for expansion to connect future devices. The power (the red and blue cables) link the 5V and GND pins from the Arduino to the appropriate pins on the back of the LCD (the image of an LCD I found didn't have the pins showing :P), and the I2C pins (green and yellow) connect the SDA and SCL pins on the Arduino to the LCD display.

With the circuit done, that completes the system! All that remains now is to build something cool with the components we've put together :D

The final product of the above IRL!

Access your home linux box from anywhere with SSH tunnels

An abstract tunnel that doesn't hold much relevant to the blog post :P

(Header by GDJ from openclipart.org. Source page)

....and other things! Recently, I bought a Raspberry Pi 3. Now that the rest of the components have arrived, I've got a rather nice little home server that's got a 1 terabyte WD PiDrive attached to it to provide lots of lovely shared storage, which is rather nice.

However, within a few weeks I was faced with a problem. How do I access my new box to configure it from my internship when I'm on lunch? Faced with such a challenge, I did what anyone would, and took to the internet to find a solution.

It didn't take long. A while ago I heard about these things called 'SSH tunnels', which, while not designed for high throughput, are more than adequate for a low-intensity SSH connection that runs a few kilobytes a second in either direction. After reading this excellent answer by erik on the Unix & Linux StackExchange, I had an understanding of how SSH tunnels work, and was ready to put together a solution. You should go and read that answer if you'd like to understand SSH tunnels too - it explains it much better than I ever could :P

With that knowledge in hand, I went about planning the SSH tunnel. I already have a server with a public IP address (it's hosting this website!), so I needed a reverse tunnel to allow me to access a port local to my linux box at home (called elessar - a virtual cookie for anyone who gets the reference!) from starbeamrainbowlabs.com.

Important! Ask yourself whether it's moral and ethical to set up an ssh tunnel before you think about following along with this article! If you find yourself behind a firewall or something similar, then the chances are that it's there for a good reason - and you might get into trouble if you try and circumvent it. I won't be held responsible for any loss or damages of any description caused by the reading of this post.

First job: create a limited account on starbeamrainbowlabs.com for elessar to SSH into. That's easy:

sudo useradd --system ssh-tunnel

Then, with a few quick lines in /etc/ssh/sshd_config:

Match User ssh-tunnel
    ForceCommand echo 'This account can only be used for ssh tunnelling.'

....we can prevent the ssh-tunnel user from being abused to gain shell access to the server (let me know if there are any further measures I can put in place here).

Now that I had a user account to ssh in as, I could set up a public / private keypair to authenticate with starbeamrainbowlabs.com, and cook up an SSH command for elessar that would set up the appropriate tunnel. After fiddling around a bit, I came up with this that did the job:

ssh -TN -R30582:localhost:5724 ssh-tunnel@starbeamrainbowlabs.com

Very cool. So with that command executing on elessar, I could ssh into elessar from starbeamrainbowlabs.com! In short, it sets up a tunnel that will make port 30582 on starbeamrainbowlabs.com tunnel through to port 5724 on elessar - the port on elessar that has SSH running on it, without allocating a pseudo-tty to save resources. explainshell.com can, well, explain it in more detail if you're interested.
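Using the tunnel from the other end is then just a case of ssh-ing to the forwarded port on starbeamrainbowlabs.com itself, since the tunnel only listens on the loopback interface by default (substitute pi for whatever user you normally log into elessar as):

# Run this on starbeamrainbowlabs.com
ssh -p 30582 pi@localhost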

Having an SSH command that would set up the tunnel is nice, but it's not very useful, since I have to execute it first before I can actually SSH into elessar from afar.

The solution was actually a little bit complicated. First, I wrote a simple systemd service file (systemd is what I have installed, since it's vanilla raspbian - this should be easily adaptable to other systems and setups) to start the SSH tunnel automagically on boot:

[Unit]
Description=SSH tunnel from starbeamrainbowlabs.com to local ssh server.

[Service]
Type=simple
ExecStart=/usr/bin/ssh -TN -R30582:localhost:5724 ssh-tunnel@starbeamrainbowlabs.com

[Install]
WantedBy=network-online.target

I quickly realised that there were a few flaws with this approach. Firstly, it tried to start the SSH connection before my router had connected to the internet, since my router starts faster than the box that initialises the fibre connection to my ISP. Secondly, it fails to retry when the connection dies.

The first problem can be solved relatively easily, by wrapping the ssh command in a clever bit of shell scripting:

/bin/sh -c 'until ping -c1 starbeamrainbowlabs.com >/dev/null 2>&1 && sleep 5; do :; done && /usr/bin/ssh -TN -R30582:localhost:5724 ssh-tunnel@starbeamrainbowlabs.com'

The above tries to ping starbeamrainbowlabs.com until it succeeds (pausing 5 seconds after the first successful ping), and only then does it attempt to open the SSH connection. This solves the first problem. To solve the second, we need to look at autossh. Autossh is a small tool that monitors an ssh connection in a variety of configurable ways and restarts the connection if it ever dies for whatever reason. You can install it with your favourite package manager:

sudo apt install autossh

Substitute apt with whatever package manager you use on your system. With it installed, we can use a command like this:

autossh -o "UserKnownHostsFile /home/ssh-tunnel/.ssh/known_hosts" -o "IdentityFile /home/ssh-tunnel/.ssh/ssh-tunnel_ed25519" -o "PubkeyAuthentication=yes" -o "PasswordAuthentication=no" -o "ServerAliveInterval 900" -TN -R30582:localhost:5724 -p 7261 ssh-tunnel@starbeamrainbowlabs.com

to automatically start our ssh tunnel, and restart it if anything goes wrong. Note all the extra settings I had to specify here. This is because even though I had many of them specified in ~/.ssh/config for the ssh-tunnel user, because of systemd's weird environment when it starts a service, I found I had to specify everything in the command line with absolute paths (ugh).

Basically, the above tells autossh where the known_hosts file is (important for automation!), that it should only attempt public / private keypair authentication and not password authentication, that it should check the server's still there every 15 minutes, and all the other things we figured out above.

Finally, I combined the solutions I came up with for both problems, which left me with this:

[Unit]
Description=SSH tunnel from starbeamrainbowlabs.com to local ssh server.

[Service]
Type=simple
ExecStart=/bin/sh -c 'until ping -c1 starbeamrainbowlabs.com >/dev/null 2>&1 && sleep 5; do :; done && /usr/bin/autossh -o "UserKnownHostsFile /home/pi/.ssh/known_hosts" -o "IdentityFile /home/pi/.ssh/ssh-tunnel_ed25519" -o "PubkeyAuthentication=yes" -o "PasswordAuthentication=no" -o "ServerAliveInterval 900" -TN -R30582:localhost:5724 -p 7261 ssh-tunnel@starbeamrainbowlabs.com'

[Install]
WantedBy=network-online.target

Here's a version that utilises the -f parameter of autossh to put the autossh into the background, which eliminates the sh parent process:

[Unit]
Description=SSH tunnel from starbeamrainbowlabs.com to local ssh server.

[Service]
Type=forking
Environment=AUTOSSH_PIDFILE=/var/run/sbrl-ssh-tunnel/ssh-tunnel.pid
PIDFile=/var/run/sbrl-ssh-tunnel/ssh-tunnel.pid
ExecStartPre=/bin/mkdir -p /var/run/sbrl-ssh-tunnel
ExecStartPre=-/bin/chown ssh-tunnel:ssh-tunnel /var/run/sbrl-ssh-tunnel
ExecStart=/bin/sh -c 'until ping -c1 starbeamrainbowlabs.com >/dev/null 2>&1 && sleep 5; do :; done && /usr/bin/autossh -f -o "UserKnownHostsFile /home/pi/.ssh/known_hosts" -o "IdentityFile /home/pi/.ssh/ssh-tunnel_ed25519" -o "PubkeyAuthentication=yes" -o "PasswordAuthentication=no" -o "ServerAliveInterval 900" -TN -R30582:localhost:5724 -p 7261 ssh-tunnel@starbeamrainbowlabs.com'

[Install]
WantedBy=network-online.target

I ended up further modifying the above to set up an additional tunnel to allow elessar to send emails via the postfix email server that's running on starbeamrainbowlabs.com. Let me know if you'd be interested in a tutorial on this!

Sources and Further Reading

How to set up a WebDav share with Nginx

I've just been setting up a WebDav share on a raspberry pi 3 for my local network (long story), and since it was a bit of a pain to set up (and I had to combine a bunch of different tutorials out there to make mine work), I thought I'd share how I did it here.

I'll assume you have a raspberry pi all set up and up-to-date in headless mode, with ufw as your firewall (if you need help with this, post in the comments below or check out the Raspberry Pi Stack Exchange). To start with, we need to install the nginx-full package:

sudo apt update
sudo apt install nginx-full

Note that we need the nginx-full package here, because the nginx-extras or just simply nginx packages don't include the required additional webdav support modules. Next, we need to configure Nginx. Nginx's configuration files live at /etc/nginx/nginx.conf, and in /etc/nginx/conf.d. I did something like this for my nginx.conf:

user www-data;
worker_processes 4;
pid /run/nginx.pid;

events {
    worker_connections 768;
    # multi_accept on;
}

http {

    ##
    # Basic Settings
    ##

    sendfile on;
    tcp_nopush on;
    tcp_nodelay on;
    keepalive_timeout 65;
    types_hash_max_size 2048;
    # server_tokens off;

    # server_names_hash_bucket_size 64;
    # server_name_in_redirect off;

    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    ##
    # SSL Settings
    ##

    ssl_protocols TLSv1 TLSv1.1 TLSv1.2; # Dropping SSLv3, ref: POODLE
    ssl_prefer_server_ciphers on;

    ##
    # Logging Settings
    ##

    access_log /var/log/nginx/access.log;
    error_log /var/log/nginx/error.log;

    ##
    # Gzip Settings
    ##

    gzip on;

    gzip_vary on;
    gzip_proxied any;
    gzip_comp_level 6;
    gzip_buffers 16 8k;
    gzip_http_version 1.1;
    gzip_types text/plain text/css application/json application/javascript text/xml application/xml application/xml+rss text/javascript;

    ##
    # Virtual Host Configs
    ##

    include /etc/nginx/conf.d/*.conf;
    include /etc/nginx/sites-enabled/*;
}

Not many changes here. Then, I created a file called 0-webdav.conf in the conf.d directory, and this is what I put in it:

server {
    listen 80;
    listen [::]:80;

    server_name plans.helenshydrogen.be;

    auth_basic              realm_name;
    auth_basic_user_file    /etc/nginx/.passwords.list;

    dav_methods     PUT DELETE MKCOL COPY MOVE;
    dav_ext_methods PROPFIND OPTIONS;
    dav_access      user:rw group:rw all:r;

    client_body_temp_path   /tmp/nginx/client-bodies;
    client_max_body_size    0;
    create_full_put_path    on;

    root /mnt/hydroplans;
}

Now this is where the magic happens. The dav_access directive tells nginx to allow everyone to read, but only logged in users to write to the share. This isn't actually particularly relevant, because of the auth_basic and auth_basic_user_file directives, which tell nginx to require people to login to the share before they are allowed to access it.

It's also important to note that I found that Windows (10, at least), didn't like the basic authentication - even though Ubuntu's Nautilus accepted it just fine - so I had to comment that bit out :-(

If you do still want authentication (hey! Maybe you'll have better luck than I :P), then you'll need to set up the passwords file. Here's how you create it:


echo -n 'helen:' | sudo tee /etc/nginx/.passwords.list
openssl passwd -apr1 | sudo tee -a /etc/nginx/.passwords.list 
Password:

The above creates a user called helen, and asks you to type a password. If you're adding another user to the file, simply change the first tee to be tee -a to avoid overwriting the first one.

With that all configured, it's time to test the configuration file, and, if we're lucky, restart nginx!


sudo nginx -t
sudo systemctl restart nginx
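Before pointing a full WebDav client at it, it's worth sanity-checking the share with curl - the filename below is made up, and you can drop the -u bit if you've had to comment out authentication like I did:

# Upload a test file over WebDav...
curl -u helen -T test.txt http://plans.helenshydrogen.be/test.txt
# ...and fetch it back again
curl -u helen http://plans.helenshydrogen.be/test.txt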

That's all you should need to do to set up a simple WebDav share. Remember that this is a starting point, and not an ending point - there are a few big holes in the above that you'll need to address, depending on your use case (for example, I haven't included the setup of https / encryption - try letsencrypt for that).

Here are the connection details for the above for a few different clients:

  • Ubuntu / Nautilus: (Go to "Other Locations" and paste this into the "Connect to Server" box) dav://plans.helenshydrogen.be/
  • Windows: (Go to "Map Network Drive" and paste this in) http://plans.helenshydrogen.be/

Did this work for you? Have any problems? Got instructions for a WebDav client not listed here? Let me know in the comments!

Website Integrations #3: Twitter cards

The twitter logo. (The posts featured in the above images are this one about my new Raspberry Pi 3, and the latest coding conundrums evolved post).

You have arrived in the third of three parts in my mini-series on how I implemented rich snippets. In the last two parts I tackled open graph and becoming an oEmbed provider. In this part, I'll be talking a bit about twitter cards, and how I implemented them.

Twitter's take on the problem seems to be much simpler than Facebook's, which makes for easy implementing :D As with the other two protocols, they decided to have multiple different types of, well, in this case, cards. I decided to implement the summary card type. Like open graph, it adds a bunch of <meta /> tags to the header. Sigh. Anyway, here are the property names I needed to implement:

  • twitter:card - The type of card. In my case this is set to summary
  • twitter:site - This one's confusing. Although it's called 'site', it should actually be set to your own twitter handle - mine is @SBRLabs.
  • twitter:title - The title of the content. Practically identical to open graph's og:title.
  • twitter:description - The description of the content. The same as og:description.
  • twitter:image - A url pointing to an image that should be displayed next to the title and description. Unlike Facebook's open graph, twitter appears to support https urls here with no problem at all.

Since after implementing open graph I already had 90% of the infrastructure and calculations in place already, throwing together values for the above wasn't too difficult. Here's an example set of twitter card <meta /> tags generated by the updated code:

<meta property="twitter:card" content="summary" />
<meta property="twitter:site" content="@SBRLabs" />
<meta property="twitter:title" content="Running Prolog on Linux" />
<meta property="twitter:description" content="Hello! I hope you had a nice restful Easter. I've been a bit busy this last 6 months, but I've got a holiday at the moment, and I've just received a .... (click to read more)" />
<meta property="twitter:image" content="https://starbeamrainbowlabs.com/blog/images/20151015-learning-swi-prolog-banner.svg" />

Easy peasy. Next up was testing time. Thankfully, Twitter made this easy too by providing an official testing tool. Interestingly, they whitelist domains based on whether the webmaster has run a url through their tool - so if you want twitter cards to show up, make sure you plug at least one of your website's page urls through their tool.

After a few tweaks, I got this:

The whitelisted message from the twitter card tester.

With that, my work was complete. This brings us to the end of my mini-series on rich-snippet integrations (unless I've missed a protocol O.o Comment below if I have)! I hope you've found it useful. If you have (or even if you haven't!) please let me know in the comments below :D

Profiling PHP with XDebug

(This post is a fork of a draft version of a tutorial / guide originally written whilst at my internship.)

The PHP and xdebug logos.

I've recently been looking into xdebug's profiling function, and I've just been tasked with writing up a guide on how to set it up and use it, from start to finish - so I thought I'd share it here.

While I've written about xdebug before in my An easier way to debug PHP post, I didn't end up covering the profiling function - I had difficulty getting it to work properly. I've managed to get it working now - this post documents how I did it. While this is written for a standard Debian server, the instructions can easily be applied to other servers.

For the uninitiated, xdebug is an extension to PHP that aids in the debugging of PHP code. It consists of 2 parts: The php extension on the server, and a client built into your editor. With these 2 parts, you can create breakpoints, step through code and more - though these functions are not the focus of this post.

To start off, you need to install xdebug. SSH into your web server with a sudo-capable account (or just use root, though that's bad practice!), and run the following command:

sudo apt install php-xdebug

Windows users will need to download it from here and put it in their PHP extension directory. Users of other linux distributions and windows may need to enable xdebug in their php.ini file manually by loading it as a zend extension (windows users will need zend_extension=xdebug.dll; linux systems use zend_extension=xdebug.so instead).

Once done, xdebug should be loaded and working correctly. You can verify this by looking at the php information page. To see this page, put the following in a php file and request it in your browser:

<?php
phpinfo();
?>

If it's been enabled correctly, you should see something like this somewhere on the resulting page:

Part of the php information page, showing the xdebug section.

With xdebug setup, we can now begin configuring it. Xdebug gets configured in php.ini, PHP's main configuration file. Under Virtualmin each user has their own php.ini because PHP is loaded via CGI, and it's usually located at ~/etc/php.ini. To find it on your system, check the php information page as described above - there should be a row with the name "Loaded Configuration File":

The 'loaded configuration file' directive on the php information page.

Once you've located your php.ini file, open it in your favourite editor (or type sensible-editor php.ini if you want to edit over SSH), and put something like this at the bottom:

[xdebug]
xdebug.remote_enable=1
xdebug.remote_connect_back=1
xdebug.remote_port=9000
xdebug.remote_handler=dbgp
xdebug.remote_mode=req
xdebug.remote_autostart=true

xdebug.profiler_enable=false
xdebug.profiler_enable_trigger=true
xdebug.profiler_enable_trigger_value=ZaoEtlWj50cWbBOCcbtlba04Fj
xdebug.profiler_output_dir=/tmp
xdebug.profiler_output_name=php.profile.%p-%u

Obviously, you'll want to customise the above. The xdebug.profiler_enable_trigger_value directive defines a secret key we'll use later to turn profiling on. If nothing else, make sure you change this! Profiling slows everything down a lot, and could easily bring your whole server down if this secret key falls into the wrong hands (that said, simply having xdebug loaded in the first place slows things down too, even if you're not using it - so you may want to set up a separate server for development work that has xdebug installed if you haven't already). If you're not sure on what to set it to, here's a bit of bash I used to generate my random password:

dd if=/dev/urandom bs=8 count=4 status=none | base64 |  tr -d '=' | tr '+/' '-_'

The xdebug.profiler_output_dir lets you change the folder that xdebug saves the profiling output files to - make sure that the folder you specify here is writable by the user that PHP is executing as. If you've got a lot of profiling to do, you may want to consider changing the output filename, since xdebug uses a rather unhelpful filename by default. The property you want to change here is xdebug.profiler_output_name - and it supports a number of special % substitutions, which are documented here. I can recommend something like phpprofile.%t-%u.%p-%H.%R.cachegrind - it includes a timestamp and the request uri for identification purposes, while still sorting chronologically. Remember that xdebug will overwrite the output file if you don't include something that differentiates it from request to request!

With the configuration done, we can now move on to actually profiling something :D This is actually quite simple. Simply add the XDEBUG_PROFILE GET (or POST!) parameter to the url that you want to test in your browser. Here are some examples:

https://localhost/carrots/moon-iter.php?XDEBUG_PROFILE=ZaoEtlWj50cWbBOCcbtlba04Fj
https://development.galacticaubergine.de/register?vegetable=yes&mode=plus&XDEBUG_PROFILE=ZaoEtlWj50cWbBOCcbtlba04Fj

Adding this parameter to a request will cause xdebug to profile that request, and spit out a cachegrind file according to the settings we configured above. This file can then be analysed in your favourite editor, or, if it doesn't have support, an external program like qcachegrind (Windows) or kcachegrind (Everyone else).

If you need to profile just a single AJAX request or similar, most browsers' developer tools let you copy a request as a curl or wget command (Chromium-based browsers, Firefox - has an 'edit and resend' option), allowing you to resend the request with the XDEBUG_PROFILE GET parameter.
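Stripped of all the extra headers such a copy includes, the resulting command ends up looking something like this (the url and secret key are just the examples from above):

curl 'https://development.galacticaubergine.de/register?vegetable=yes&mode=plus&XDEBUG_PROFILE=ZaoEtlWj50cWbBOCcbtlba04Fj' -o /dev/null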

If you need to profile everything - including all subrequests (only those that pass through PHP, of course) - then you can set the XDEBUG_PROFILE parameter as a cookie instead, and it will cause profiling to be enabled for everything on the domain you set it on. Here's a bookmarklet that sets the cookie:

javascript:(function(){document.cookie='XDEBUG_PROFILE='+'insert_secret_key_here'+';expires=Mon, 05 Jul 2100 00:00:00 GMT;path=/;';})();

(Source)

Replace insert_secret_key_here with the secret key you created for the xdebug.profiler_enable_trigger_value property in your php.ini file above, create a new bookmark in your browser, paste it in (making sure that your browser doesn't auto-remove the javascript: at the beginning), and then click on it when you want to enable profiling.

Was this helpful? Got any questions? Let me know in the comments below!

Sources and further reading

Website Integrations #2: oEmbed

Welcome to part 2 of this impromptu miniseries! In this second part of three, I'll be showing you a little about how I set up and tested a simple oEmbed provider for my blog posts - I've seen lots of oEmbed client information out there, but not much in the way of provider (or server) implementations.

If you haven't read part one about the open graph protocol yet, then you might find it interesting.

oEmbed is a bit different to open graph in that instead of throwing a bunch of meta tags into your <head />, you instead use a special <link /> element that points interested parties in the direction of some nice tasty json. Personally, I find this approach to be more sensible and easier to handle - the kind of thing you'd expect from an open standard.

To start with, I took a read of their specification, as I did with open graph. It doesn't have as many examples as I'd have liked, and I had to keep jumping around, but it's certainly not the worst I've seen.

oEmbed is built on the idea of providers (that's me!) and consumers (the programs and websites you use). Providers, erm, provide machine-readable information about urls passed to them, and consumers take this information and display it to the user in a manner they think is appropriate.

To start with, I created a new PHP file to act as my provider over at https://starbeamrainbowlabs.com/blog/oembed.php and took a look at the different oEmbed types available - oEmbed has a type system of sorts, similar to open graph. I decided on link - while a rich type would look cool, it would be almost impossible to test with every client out there, and I can't guarantee how the html would be rendered or what space it would have either.

With that decided, I made a list of the properties that I'd need to include in the json response:

  • version - The version of oEmbed. Currently 1.0 as of the time of typing.
  • type - The oEmbed type. I chose link here.
  • title - The title of the page
  • author_name - The name of the author
  • author_url - A link to the author's homepage.
  • provider_name - The provider's name.
  • provider_url - A link to the provider's homepage. I chose my blog index, since this script will only serve my blog.
  • cache_age - How long consumers should cache the response for. I put 1 hour (3600 seconds) here, since I usually correct mistakes after posting that I've missed, and I want them to go out fairly quickly.
  • thumbnail_url - A link to a suitable thumbnail picture.
  • thumbnail_width - The width of the thumbnail image, in pixels.
  • thumbnail_height - The height of the thumbnail image, in pixels.

Then I looked at the data I'd be getting from the client. It all comes in the form of GET parameters:

  • format - Either json or xml. Personally, I only support json.
  • url - The url to send oEmbed information for.

With all the information close at hand, I spent a happy hour or so writing code, and ended up with a script that outputs something like this:

{
    "version": "1.0",
    "type": "link",
    "title": "Website Integrations #1: Open Graph",
    "author_name": "Starbeamrainbowlabs",
    "author_url": "https:\/\/starbeamrainbowlabs.com\/",
    "provider_name": "Stardust | Starbeamrainbowlabs' Blog",
    "provider_url": "https:\/\/starbeamrainbowlabs.com\/blog\/",
    "cache_age": 3600,
    "thumbnail_url": "https:\/\/starbeamrainbowlabs.com\/images\/logos\/open-graph.png",
    "thumbnail_width": 300,
    "thumbnail_height": 300
}

(See it for yourself!)
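You can also query the endpoint directly with curl if you want to see the raw response for any given post - the url parameter is simply the (url-encoded) address of the post you're interested in:

curl 'https://starbeamrainbowlabs.com/blog/oembed.php?format=json&url=https%3A%2F%2Fstarbeamrainbowlabs.com%2Fblog%2Farticle.php%3Farticle%3Dposts%252F229-Website-Integrations-1-Open-Graph.html'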

Though the specification includes requirements for satisfying 2 extra GET parameters, maxwidth and maxheight, I chose to ignore them since writing a dynamic thumbnail rescaling script is both rather complicated and requires a not insignificant amount of processing power every time it is used.

After finishing the oEmbed script, I turned my attention to one final detail: The special <link /> tag required for auto-discovery. A quick bit of PHP in the article page renderer adds something like this to the header:

<link rel="alternate" type="application/json+oembed" href="https://starbeamrainbowlabs.com/blog/oembed.php?format=json&url=https%3A%2F%2Fstarbeamrainbowlabs.com%2Fblog%2Farticle.php%3Farticle%3Dposts%252F229-Website-Integrations-1-Open-Graph.html" />

and with that, my oEmbed provider implementation is complete - but it still needs testing! Unfortunately, testing tools for oEmbed are few and far between, but I did manage to find a few:

  • oEmbed Tester - A basic testing tool. Appears to work well for the most part - except the preview. Not sure why it says "Preview not available." all the time.
  • Iframely URL Debugger - Actually a testing tool for some commercial tool or other, but it still appears to accurately test not only oEmbed, but open graph and twitter cards (more on them in the next post!) too!

After testing and fixing a few bugs, my oEmbed provider was complete! Next time, I'll be taking a look at twitter's take on the subject: Twitter cards.

Found this interesting? Comment below! Share it with a friend!

Website Integrations #1: Open Graph

The logo of the Open Graph protocol.

These days, if you share a link to a website or a blog post with a friend or on a social networking site, sometimes the link expands to a preview of the link you've just posted. Personally, I find this behaviour to be quite helpful, as it lets me get an idea as to what it is that I'm about to click on.

Unfortunately, when it comes to the code behind these previews, there are no less than 3(!) different protocols that you need to implement in order to get it to work, since facebook, twitter, and the rest of the web community haven't been talking to each other quite like they should have been.

Anyway, after implementing these 3 protocols and having a bit of trouble with them, I thought I'd write up a mini-series on the process I went through, the problems I encountered, and how I solved them. In this post, I'm going to explain Facebook's Open Graph protocol.

I decided that I'd implement these 3 protocols on my home page and each blog post's page. Open Graph was the easiest - all it requires is a bunch of meta tags. These tags are split into 2 parts - the common tags, which all page types should have, and the type-specific tags, which depend on the type of page you're implementing them on. Here's the list of common tags I implemented:

  • og:title - The title of your page
  • og:description - A short description of your page
  • og:image, og:image:url, and og:image:secure_url - The url of an image that would fit as a preview for the page
  • og:url - The url of the page (not sure why this is required, since you have to know the url in order to request the page... :P Perhaps it's to help with deduplication - I'm not sure)

These were fairly easy for my home page:

<meta property="og:title" content="Starbeamrainbowlabs" />
<meta property="og:description" content="Hi! I am a computer science student who is in their second year at Hull University. I started out teaching myself about various web technologies, and then I managed to get a place at University, where I am now." />
<meta property="og:image" content="http://starbeamrainbowlabs.com/favicon.png" />
<meta property="og:image:url" content="http://starbeamrainbowlabs.com/favicon.png" />
<meta property="og:image:secure_url" content="https://starbeamrainbowlabs.com/favicon.png" />
<meta property="og:url" content="https://starbeamrainbowlabs.com/" />

When I went to test it using Facebook's official testing tool, the biggest problem I had was that the image wouldn't show up - no matter what I did. I eventually found this stackoverflow answer which explained that Facebook doesn't support https urls in anything other than the og:image:secure_url meta tag (even though they say they do) - so changing the urls to regular http solved the problem.

Next, I took a look at the type-specific tags. There's a whole bunch of them (check out this section of the spec) - I decided on the profile type for the index page of my website here:

<meta property="og:type" content="profile" />

The profile type has a few extra specific meta tags that need setting too, so I added those:

<meta property="profile:first_name" content="Starbeamrainbowlabs" />
<meta property="profile:last_name" content="Tjovik" />
<meta property="profile:username" content="Starbeamrainbowlabs" />

With that done, I turned my attention to my blog posts. Since the page is rendered in PHP (and typing out all those meta tags was rather annoying), I created a teensy little framework to output the meta tags for me:

// Collect the meta tags as property => content pairs
$metaTags = [];
$metaTags["property"] = "value";

// Render each pair out as a <meta> tag, one per line
$renderedMetaTags = "";
foreach($metaTags as $metaKey => $metaValue)
    $renderedMetaTags .= "\t<meta property=\"$metaKey\" content=\"$metaValue\" />\n";
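
For illustration, using it looks something like this - note that $article and its properties here are just made-up placeholders, not my real code; the actual values come from wherever your page's data lives:

// Illustrative only - $article and its fields are placeholders
$metaTags["og:title"] = $article->title;
$metaTags["og:description"] = $article->excerpt;
$metaTags["og:url"] = $article->url;
$metaTags["og:type"] = "article";

// ...then, somewhere in the page's <head>:
echo($renderedMetaTags);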

Now I can add as many meta tags as I like, with a fraction of the typing - and it looks neater too :D With that done, I implemented the basic meta tags. Here's some example output from the last post:

<meta property="og:title" content="4287 Reasons why your comments weren't posted" />
<meta property="og:description" content="I don't get a lot of real comments on here from what I can tell, as you've probably noticed. I don't particularly mind (though it's always awesome whe.... (click to read more)" />
<meta property="og:image" content="http://starbeamrainbowlabs.com/blog/images/20170406-Spammer-Mistakes.png" />
<meta property="og:image:url" content="http://starbeamrainbowlabs.com/blog/images/20170406-Spammer-Mistakes.png" />
<meta property="og:image:secure_url" content="https://starbeamrainbowlabs.com/blog/images/20170406-Spammer-Mistakes.png" />
<meta property="og:url" content="https://starbeamrainbowlabs.com/blog/article.php?article=posts%2F228-4287-Reasons-Your-Comments-Were-Not-Posted.html" />

That wasn't too tough. Next, I looked at the list of types again, and chose the article type for my blog posts.

<meta property="og:type" content="article" />

Like the profile type earlier, the article type also comes with a few type-specific meta tags (what they mean by not fitting into a 'vertical' I have no idea). I decided not to implement all the type-specific meta tags available here, since not all of them were practical to implement. Here's some more example output for the new tags:

<meta property="article:author" content="https://starbeamrainbowlabs.com/" />
<meta property="article:published_time" content="2017-04-08T12:56:46+01:00" />

Unfortunately, the article published time is actually really awkward to get hold of (even though it's outputted at the bottom of every article, it's only marked up as HTML microdata), so I went with the 'last modified' time instead. Hopefully that doesn't cause too many issues later - though I can always change it :P
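
For reference, getting hold of a correctly-formatted 'last modified' timestamp is a one-liner in PHP - something along these lines (the article path here is just a placeholder):

// Illustrative only - the path is a placeholder
$lastModified = date("c", filemtime("posts/example-post.html"));
// date("c") gives an ISO 8601 string like the one in the example output above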

With that (and a final test), it looked like my Open Graph implementation was working as intended. Next time, I'll show you how I implemented a simple oEmbed provider.

4287 Reasons why your comments weren't posted

I don't get a lot of real comments on here from what I can tell, as you've probably noticed. I don't particularly mind (though it's always awesome when I do get one!) - but what I do mind about is the spam. Since February 2015, I've gotten 4287 spam comments. 4287! It's actually quite silly, when you think about it.

The other day I was fiddling with the code behind this blog (posts about that coming soon!), and I discovered a log I implemented ages ago that records each and every spammer, and the mistake they made - so I thought I'd share some statistics here, along with some tips for dealing with spam yourself (I've posted about tactics before here).

Here's an extract from my logs (full logs available on request):

[ Sun, 02 Apr 2017 23:17:25 +0100] invalid comment | ip: 94.181.153.194 | name: ghkkll | articlepath: posts/120-Cpu-Registers.html | mistake: shortcomment
[ Mon, 03 Apr 2017 02:16:58 +0100] invalid comment | ip: 191.96.242.17 | name: exercise pants | articlepath: posts/010-Gif-Renderer.html | mistake: invalidkey
[ Mon, 03 Apr 2017 02:16:58 +0100] invalid comment | ip: 191.96.242.17 | name: exercise pants | articlepath: posts/010-Gif-Renderer.html | mistake: invalidkey
[ Mon, 03 Apr 2017 02:16:59 +0100] invalid comment | ip: 191.96.242.17 | name: exercise pants | articlepath: posts/010-Gif-Renderer.html | mistake: invalidkey
[ Mon, 03 Apr 2017 02:16:59 +0100] invalid comment | ip: 191.96.242.17 | name: exercise pants | articlepath: posts/010-Gif-Renderer.html | mistake: invalidkey

Since the output format I chose is nice and regular, I could use a quick bit of bash magic to whip up some statistics:

cat failedcomments.log | sed -e 's/^.*mistake\: //' | grep -iv '\[' | sort | uniq -c | sort -nr

(explanation, courtesy of explainshell.com)

That gave me this output:

Count  Reason
 3922  invalidkey
  148  website
   67  nokey
   67  noarticleid
   56  noname
   17  shortcomment
    4  badarticlepath
    3  longcomment
    2  invalidemail
    1  shortname
My first thoughts here were firstly along the lines of "wow, that's a lot of spam", secondly "that comment key is working really well!", and thirdly "I didn't realise how helpful that fake website field is". Still, though I had a table, I thought a visualisation might help to put things into perspective.

A pair of pie charts showing a breakdown of which mistakes spammers made most often. Explained below.

There - much better :D As you might have suspected if you've been following my blog here for a while, having an invalid comment key is the most common mistake spammers make.

The comment key is a hidden field in the comment form that is actually a transformed timestamp of the time you loaded the page. Working it backwards, I can work out how long it took you to submit a comment from first loading the page.
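
As a rough sketch of the general idea (this isn't my exact transformation - just an illustrative one):

// Illustrative sketch only - not the exact transformation I use
// When rendering the form, embed an encoded copy of the current time:
$commentKey = base64_encode(strrev((string)time()));
echo("<input type='hidden' name='key' value='$commentKey' />");

// When a comment comes in, undo the transformation and check the delta:
$loadedAt = (int)strrev(base64_decode($_POST["key"]));
$timeTaken = time() - $loadedAt;
if($timeTaken < 10) // Unnaturally fast - probably a bot
    exit("Denied - that was too quick!");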

Using a companion log file to the one I generated the above pie chart from, I've calculated that 3558 potential comments were submitted within 10 seconds of loading the page! No ordinary human is that fast.... (especially considering you probably want to read the article before commenting!) - they have to be bots. Here's a graph to illustrate the dropoff (the time is in seconds):

A graph illustrating how many bots commented on my blog unnaturally fast.

Out of the other reasons that people failed, "website" was the second most common mistake, with ~3.45% of spammers getting caught out by it. This mistake refers to another of my little spam traps - defense in depth is always good! This particular one is a regular website address field, hidden from human visitors via some fancy CSS. Curious, I decided to investigate further - and what I found was fascinating.
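
Stripping out the specifics of my own setup, a honeypot field like this generally looks something like the following sketch (the class name and exact markup here are examples, not what I actually use):

// Illustrative honeypot sketch - names are examples only
// In the form; the .validation class gets hidden with CSS (however you choose to do that):
echo("<div class='validation'>");
echo("    <label for='website'>Website</label>");
echo("    <input type='url' id='website' name='website' value='' />");
echo("</div>");

// On submission: a human never sees the field, so it should come back empty
if(!empty($_POST["website"]))
    exit("Denied - looks like spam."); // Bots tend to fill in every field they can find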

About 497 spammers entered an invalid website address (i.e. one that doesn't start with http) into the website box - which I really can't understand, since it's got a (hidden) label and an appropriate name and type to match - and 90 of them decided that "seo plugin" was a brilliant thing to fill it with! It's important to note that spammers who got caught by the invalid comment key filter above are included in these statistics too - here's the bash command I used:

grep -i '"website"' rawsubmits.jsonlog  | sed -e 's/^.*"website": "//' -e 's/",//' -e 's/\\\///' | uniq | egrep -iv '^http' | wc -l

Other examples include "watch live sports free", "Samantha", "just click the following web site", long strings of html-encoded unicode characters (Japanese, I think, after decoding one), and more. Perfectly baffling, if you ask me (if you can shed some light on this, please comment below!).

57 spambots forgot their own name. This could be because the box you put your name in below has a name of 'name', but an id of 'namebox' - which may have caused some confusion for some of the more stupid bots.

After all that, there were 3 long comments (probably a bunch of word salad), 2 invalid email addresses that weren't caught by any filters above, and 1 short name (under 3 characters).

That's about it for this impromptu analysis of my comments log! This took far longer than I thought it would to type up. Did you find it interesting? Thinking of putting some of these techniques into practice yourself? Comment below!
