
How to prevent php-fpm from overriding your PHP-based caching logic

A while ago I added ETag support to the dynamic preview generator in Pepperminty Wiki. While it appeared to work during testing, I recently discovered that on a private instance of Pepperminty Wiki my browser didn't seem to be taking advantage of it. This got me curious, so I decided to do a little bit of digging to find out why.

It didn't take long to find the problem. For some reason, all the responses from the server had a Cache-Control: no-cache, no-store, must-revalidate header attached to them. How strange! Even more so because it was in capital letters - my convention in Pepperminty Wiki is to always make the headers lowercase.

I checked the codebase via the Project Find feature of Atom just to make sure that I hadn't left in a debugging statement or anything (I hadn't), and then I turned my attention to Nginx (engine-x) - the web server that I use on my server. Maybe it had added a caching header?

A quick grep later revealed that it wasn't responsible either - which left just one part of the system unchecked: php-fpm, the PHP FastCGI server that sits just behind Nginx and is responsible for running the various PHP scripts that power this website and other corners of my server. Another quick grep returned a whole bunch of garbage, so after doing some research I discovered that php-fpm is configured to send this header by default, and that it has to be disabled by editing your php.ini (for me it's in /etc/php/7.1/fpm/php.ini) and changing

;session.cache_limiter = nocache

to be explicitly set to an empty string, like so:

session.cache_limiter = ''

This appears to have solved my problem for now - allowing me to regain control over how the resources I send back via PHP are cached. Hopefully this saves someone else the hassle of pulling their entire web server stack apart and putting it back together again in the future :P
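As an aside, if editing php.ini isn't an option (shared hosting, for instance), it looks like the same thing can be done from PHP itself on a per-script basis by calling session_cache_limiter() before the session is started. Here's a rough sketch of that approach - I haven't deployed it myself, and the $etag variable below is just a stand-in for whatever caching logic your script already has:

<?php
// Stop PHP from sending its default Cache-Control / Pragma / Expires headers
// for this request. This must be called *before* session_start().
session_cache_limiter('');
session_start();

// ....then we're free to send our own caching headers as usual
header('etag: "' . $etag . '"'); // $etag: stand-in for your own caching logic
header('cache-control: max-age=3600');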

Found this helpful? Still having issues? Got a better way of solving the problem? Post a comment below!

Deterring spammers with a comment key system

I recently found myself reimplementing the comment key system I use on this blog (I posted about it here) for another purpose. Being more experienced now, my new implementation (which I should really put into use on this blog actually) is stand-alone in a separate file - so I'm blogging about it here both to help out anyone who reads this other than myself - and for myself, as I know I'll forget otherwise :P

The basic algorithm hasn't changed much since I first invented it: take the current timestamp, apply a bunch of arbitrary transformations to it, put it in a hidden field in the comment form, and then reverse the transformations on the comment key the user submits as part of the form to discover how long they had the page loaded for. Bots will have had it loaded for either less than 10-ish seconds, or more than 24 hours. Humans will be somewhere in the middle - at least according to my observations!

Of course, any determined spammer can easily bypass this system if they spend even a little bit of time analysing the system - but I'm banking on the fact that my blog is too low-value for a spammer to bother reverse-engineering my system to figure out how it works.

This time, I chose to use simple XOR encryption, followed by reversing the string, followed by base64 encoding. It should be noted that XOR encryption is certainly not secure - but in this case it doesn't really matter. If my website becomes a high-enough value target for that to matter, I'll investigate proper AES encryption - which will probably be a separate post in and of itself, as a quick look revealed that it's rather involved, and will probably require quite a bit of research and experimentation to get working correctly.

Let's take a look at the key generation function first:

function key_generate($pass) {
    $new_key = strval(time());
    // Repeat the passcode so that it's at least as long as the timestamp we're XORing it against
    $pass_enc = str_repeat($pass, (strlen($new_key) / strlen($pass)) + 1);
    $new_key = $new_key ^ $pass_enc;
    return base64_encode(strrev($new_key));
}

As I explained above, this first XORs the timestamp against a provided 'passcode' of sorts, then reverses it, base64 encodes it, and returns it. I discovered that I needed to repeat the passcode to make sure it's at least as long as the timestamp - because otherwise it cuts the timestamp short! Longer passcodes are certainly always desirable, but I wanted to make sure I addressed the issue here - just in case I need to lift this algorithm for a future project.

Next up is the decoding algorithm, that reverses the transformations we apply above:

function key_decode($key, $pass) {
    $key_dec = strrev(base64_decode($key));
    // Repeat the passcode so that it's long enough to XOR against the whole decoded key
    $pass_dec = str_repeat($pass, (strlen($key_dec) / strlen($pass)) + 1);
    return intval($key_dec ^ $pass_dec);
}

Very similar. Again, the XOR passphrase has to be repeated to make it long enough to apply to the whole encoded key without inadvertently chopping some off the end. Additionally, we convert the timestamp back into an integer - since it's the number of seconds since the UNIX epoch (1st January 1970).

With the ability to create and decode keys, let's write a helper method to make the verification process a bit easier:

function key_verify($key, $pass, $min_age, $max_age) {
    $age = time() - key_decode($key, $pass);
    return $age >= $min_age && $age <= $max_age;
}

It's fairly self-explanatory, really. It takes an encoded key, decodes it, and verifies that its age lies between the specified bounds. Here's the code in full (it updates every time I update the code in the GitHub Gist):

(Above: The full comment key code. Can't see it? Check it out on GitHub Gist here.)
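For completeness, here's a rough sketch of how these functions might be wired into an actual comment form. The passcode, field name, and bounds below are just example values - adjust to taste:

<?php
$pass = "some arbitrary passcode";
$min_age = 10;           // Anything faster than 10 seconds is probably a bot
$max_age = 24 * 60 * 60; // ....as is anything slower than 24 hours

if($_SERVER["REQUEST_METHOD"] === "POST") {
    // Check the key that came back with the submitted comment
    if(!key_verify($_POST["comment_key"] ?? "", $pass, $min_age, $max_age)) {
        http_response_code(403);
        exit("Sorry, your comment didn't look quite right. Please try again.");
    }
    // ....save the comment here....
}
else {
    // Embed a freshly generated key in the comment form as a hidden field
    echo('<input type="hidden" name="comment_key" value="' . key_generate($pass) . '" />');
}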

Website Integrations (Mini!) Series List

Since I ended up doing a mini-series on the various website integrations I implemented, I thought that you might find a (mini-)series list useful. Here it is:

That's just about all I've got to mention here. Do you have any suggestions and / or requests on what I should blog about? Let me know below in the comments!

Website Integrations #3: Twitter cards

The twitter logo. (The posts featured in the above images are this one about my new Raspberry Pi 3, and the latest coding conundrums evolved post).

You have arrived in the third of three parts in my mini-series on how I implemented rich snippets. In the last two parts I tackled open graph and becoming an oEmbed provider. In this part, I'll be talking a bit about twitter cards, and how I implemented them.

Twitter's take on the problem seems to be much simpler than Facebook's, which makes for easy implementing :D Like in the other two protocols too, they decided to have multiple different types of, well, in this case, cards. I decided to implement the summary card type. Like open graph, it adds a bunch of <meta /> tags to the header. Sigh. Anyway, here are the property names I needed to implement:

  • twitter:card - The type of card. In my case this is set to summary
  • twitter:site - This one's confusing. Although it's called 'site', it should actually be set to your own twitter handle - mine is @SBRLabs.
  • twitter:title - The title of the content. Practically identical to open graph's og:title.
  • twitter:description - The description of the content. The same as og:description.
  • twitter:image - A url pointing to an image that should be displayed next to the title and description. Unlike Facebook's open graph, twitter appears to support https urls here with no problem at all.

Since after implementing open graph I already had 90% of the infrastructure and calculations in place already, throwing together values for the above wasn't too difficult. Here's an example set of twitter card <meta /> tags generated by the updated code:

<meta property="twitter:card" content="summary" />
<meta property="twitter:site" content="@SBRLabs" />
<meta property="twitter:title" content="Running Prolog on Linux" />
<meta property="twitter:description" content="Hello! I hope you had a nice restful Easter. I've been a bit busy this last 6 months, but I've got a holiday at the moment, and I've just received a .... (click to read more)" />
<meta property="twitter:image" content="" />

Easy peasy. Next up was testing time. Thankfully, Twitter made this easy too by providing an official testing tool. Interestingly, they whitelist domains based on whether the webmaster has run a url through their tool - so if you want twitter cards to show up, make sure you plug at least one of your website's page urls through their tool.

After a few tweaks, I got this:

The whitelisted message from the twitter card tester.

With that, my work was complete. This brings us to the end of my mini-series on rich-snippet integrations (unless I've missed a protocol O.o Comment below if I have)! I hope you've found it useful. If you have (or even if you haven't!) please let me know in the comments below :D

Website Integrations #2: oEmbed

Welcome to part 2 of this impromptu miniseries! In this second part of three, I'll be showing you a little about how I set up and tested a simple oEmbed provider for my blog posts - I've seen lots of oEmbed client information out there, but not much in the way of provider (or server) implementations.

If you haven't read part one about the open graph protocol yet, then you might find it interesting.

oEmbed is a bit different to open graph in that, instead of throwing a bunch of meta tags into your <head />, you use a special <link /> element that points interested parties in the direction of some nice tasty json. Personally, I find this approach to be more sensible and easier to handle - the kind of thing you'd expect from an open standard.

To start with, I took a read of their specification, as I did with open graph. It doesn't have as many examples as I'd have liked, and I had to keep jumping around, but it's certainly not the worst I've seen.

oEmbed is built on the idea of providers (that's me!) and consumers (the programs and website you use). Providers, erm, provide machine-readable information about urls passed to them, and consumers take this information provided to them and display it to the user in a manner they think is appropriate.

To start with, I created a new PHP file to act as my provider and took a look at the different oEmbed types available - oEmbed has a type system of sorts, similar to open graph. I decided on the link type - while the rich type would look cool, it would be almost impossible to test with every client out there, and I can't guarantee how the html would be rendered or what space it would have either.

With that decided, I made a list of the properties that I'd need to include in the json response:

  • version - The version of oEmbed. Currently 1.0 as of the time of typing.
  • type - The oEmbed type. I chose link here.
  • title - The title of the page
  • author_name - The name of the author
  • author_url - A link to the author's homepage.
  • provider_name - The provider's name.
  • provider_url - A link to the provider's homepage. I chose my blog index, since this script will only serve my blog.
  • cache_age - How long consumers should cache the response for. I put 1 hour (3600 seconds) here, since I usually end up correcting mistakes I've missed after posting, and I want the fixes to go out fairly quickly.
  • thumbnail_url - A link to a suitable thumbnail picture.
  • thumbnail_width - The width of the thumbnail image, in pixels.
  • thumbnail_height - The height of the thumbnail image, in pixels.

Then I looked at the data I'd be getting from the client. It all comes in the form of GET parameters:

  • format - Either json or xml. Personally, I only support json.
  • url - The url to send oEmbed information for.

With all the information close at hand, I spent a happy hour or so writing code, and ended up with a script that outputs something like this:

    "version": "1.0",
    "type": "link",
    "title": "Website Integrations #1: Open Graph",
    "author_name": "Starbeamrainbowlabs",
    "author_url": "https:\/\/\/",
    "provider_name": "Stardust | Starbeamrainbowlabs' Blog",
    "provider_url": "https:\/\/\/blog\/",
    "cache_age": 3600,
    "thumbnail_url": "https:\/\/\/images\/logos\/open-graph.png",
    "thumbnail_width": 300,
    "thumbnail_height": 300

(See it for yourself!)

Though the specification includes requirements for satisfying 2 extra GET parameters, maxwidth and maxheight, I chose to ignore them, since writing a dynamic thumbnail rescaling script is rather complicated and would require a not insignificant amount of processing power every time it's used.
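To give you an idea of the overall shape of the thing, here's a heavily simplified sketch of an oEmbed provider endpoint. It's not my actual implementation - in particular, look_up_article() is a hypothetical function standing in for however you map a url to an article's details:

<?php
$format = $_GET["format"] ?? "json";
$url = $_GET["url"] ?? null;

if($format !== "json") {
    http_response_code(501); // Not implemented - we only do json here
    exit("Error: Only the json format is supported.");
}
if($url === null) {
    http_response_code(400);
    exit("Error: No url was specified.");
}

// Hypothetical: fetch the title, thumbnail, etc. for the given url
$article = look_up_article($url);
if($article === null) {
    http_response_code(404);
    exit("Error: No article was found at that url.");
}

header("content-type: application/json");
echo(json_encode([
    "version" => "1.0",
    "type" => "link",
    "title" => $article["title"],
    "cache_age" => 3600,
    "thumbnail_url" => $article["thumbnail_url"],
    "thumbnail_width" => $article["thumbnail_width"],
    "thumbnail_height" => $article["thumbnail_height"]
    // ....plus the author_* and provider_* properties from the list above
]));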

After finishing the oEmbed script, I turned my attention to one final detail: The special <link /> tag required for auto-discovery. A quick bit of PHP in the article page renderer adds something like this to the header:

<link rel="alternate" type="application/json+oembed" href="" />

and with that, my oEmbed provider implementation is complete - but it still needs testing! Unfortunately, testing tools for oEmbed are few and far between, but I did manage to find a few:

  • oEmbed Tester - A basic testing tool. Appears to work well for the most part - except the preview. Not sure why it says "Preview not available." all the time.
  • Iframely URL Debugger - Actually a testing tool for some commercial tool or other, but it still appears to accurately test not only oEmbed, but open graph and twitter cards (more on them in the next post!) too!

After testing and fixing a few bugs, my oEmbed provider was complete! Next time, I'll be taking a look at twitter's take on the subject: Twitter cards.

Found this interesting? Comment below! Share it with a friend!

Website Integrations #1: Open Graph

The logo of the Open Graph protocol.

These days, if you share a link to a website or a blog post with a friend or on a social networking site, sometimes the link expands to a preview of the link you've just posted. Personally, I find this behaviour to be quite helpful, as it lets me get an idea as to what it is that I'm about to click on.

Unfortunately, when it comes to the code behind these previews, there are no less than 3(!) different protocols that you need to implement in order to get it to work, since facebook, twitter, and the rest of the web community haven't been talking to each other quite like they should have been.

Anyway, after implementing these 3 protocols and having a bit of trouble with them, I thought I'd write up a mini-series on the process I went through, the problems I encountered, and how I solved them. In this post, I'm going to explain Facebook's Open Graph protocol.

I decided that I'd implement these 3 protocols on my home page and each blog post's page. Open Graph was the easiest - all it requires is a bunch of meta tags. These tags are split into 2 parts - the common tags, which all page types should have, and the type-specific tags, which depend on the type of page you're implementing them on. Here's the list of common tags I implemented:

  • og:title - The title of your page
  • og:description - A short description of your page
  • og:image, og:image:url, and og:image:secure_url - The url of an image that would fit as a preview for the page
  • og:url - The url of the page (not sure why this is required, since you have to know the url in order to request the page... :P Perhaps it's to help with deduplication - I'm not sure)

These were fairly easy for my home page:

<meta property="og:title" content="Starbeamrainbowlabs" />
<meta property="og:description" content="Hi! I am a computer science student who is in their second year at Hull University. I started out teaching myself about various web technologies, and then I managed to get a place at University, where I am now." />
<meta property="og:image" content="" />
<meta property="og:image:url" content="" />
<meta property="og:image:secure_url" content="" />
<meta property="og:url" content="" />

When I went to test it using Facebook's official testing tool, the biggest problem I had was that the image wouldn't show up - no matter what I did. I eventually found this stackoverflow answer which explained that Facebook doesn't support https urls in anything other than the og:image:secure_url meta tag (even though they say they do) - so changing the urls to regular http solved the problem.

Next, I took a look at the type-specific tags. There's a whole bunch of them (check out this section of the spec) - I decided on the profile type for the index page of my website here:

<meta property="og:type" content="profile" />

The profile type has a few extra specific meta tags that need setting too, so I added those:

<meta property="profile:first_name" content="Starbeamrainbowlabs" />
<meta property="profile:last_name" content="Tjovik" />
<meta property="profile:username" content="Starbeamrainbowlabs" />

With that done, I turned my attention to my blog posts. Since the page is rendered in PHP (and typing out all those meta tags was rather annoying), I created a teensy little framework to output the meta tags for me:

$metaTags = [];
$metaTags["property"] = "value";

$renderedMetaTags = "";
foreach($metaTags as $metaKey => $metaValue)
    $renderedMetaTags .= "\t<meta property=\"$metaKey\" content=\"$metaValue\" />\n";
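Filling it out then looks something like this (a simplified example - $title and $description are stand-ins for values calculated elsewhere, and running them through htmlentities() keeps stray quotes from breaking the markup):

$metaTags["og:title"] = htmlentities($title);             // stand-in: calculated elsewhere
$metaTags["og:description"] = htmlentities($description); // stand-in: calculated elsewhere
$metaTags["og:type"] = "article";

// ....and $renderedMetaTags gets dropped into the page's <head> when rendering.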

Now I can add as many meta tags as I like, with a fraction of the typing - and it looks neater too :D With that done, I implemented the basic meta tags. Here's some example output from the last post:

<meta property="og:title" content="4287 Reasons why your comments weren't posted" />
<meta property="og:description" content="I don't get a lot of real comments on here from what I can tell, as you've probably noticed. I don't particularly mind (though it's always awesome whe.... (click to read more)" />
<meta property="og:image" content="" />
<meta property="og:image:url" content="" />
<meta property="og:image:secure_url" content="" />
<meta property="og:url" content="" />

That wasn't too tough. Next, I looked at the list of types again, and chose the article type for my blog posts.

<meta property="og:type" content="article" />

Like the profile type earlier, the article type also comes with a few type-specific meta tags (what they mean by not fitting into a 'vertical' I have no idea). I decided not to implement all the type-specific meta tags available here, since not all of them were practical to implement. Here's some more example output for the new tags:

<meta property="article:author" content="" />
<meta property="article:published_time" content="2017-04-08T12:56:46+01:00" />

Unfortunately, the article published time is really awkward to get hold of (even though it's outputted at the bottom of every article, marked up with html microdata), so I went with the 'last modified' time instead. Hopefully it doesn't cause too many issues later - though I can always change it :P
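For reference, getting the last modified time into the ISO 8601 format that open graph expects is a one-liner in PHP - something like this, assuming the article's source file path lives in a hypothetical $articlePath variable:

// "c" is PHP's ISO 8601 date format, e.g. 2017-04-08T12:56:46+01:00
$metaTags["article:published_time"] = date("c", filemtime($articlePath));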

With that (and a final test), it looked like my Open Graph implementation was working as intended. Next time, I'll show you how I implemented a simple oEmbed provider.

Useful Links

4287 Reasons why your comments weren't posted

I don't get a lot of real comments on here from what I can tell, as you've probably noticed. I don't particularly mind (though it's always awesome when I do get one!) - but what I do mind is the spam. Since February 2015, I've gotten 4287 spam comments. 4287! It's actually quite silly, when you think about it.

The other day I was fiddling with the code behind this blog (posts about that coming soon!), and I discovered that ages ago I implemented a log that records each and every spammer, and the mistake they made - so I thought I'd share some statistics here, along with some tips for dealing with spam yourself (I've posted about tactics before here).

Here's an extract from my logs (full logs available on request):

[ Sun, 02 Apr 2017 23:17:25 +0100] invalid comment | ip: | name: ghkkll | articlepath: posts/120-Cpu-Registers.html | mistake: shortcomment
[ Mon, 03 Apr 2017 02:16:58 +0100] invalid comment | ip: | name: exercise pants | articlepath: posts/010-Gif-Renderer.html | mistake: invalidkey
[ Mon, 03 Apr 2017 02:16:58 +0100] invalid comment | ip: | name: exercise pants | articlepath: posts/010-Gif-Renderer.html | mistake: invalidkey
[ Mon, 03 Apr 2017 02:16:59 +0100] invalid comment | ip: | name: exercise pants | articlepath: posts/010-Gif-Renderer.html | mistake: invalidkey
[ Mon, 03 Apr 2017 02:16:59 +0100] invalid comment | ip: | name: exercise pants | articlepath: posts/010-Gif-Renderer.html | mistake: invalidkey

Since the output format I chose is nice and regular, I could use a quick bit of bash magic to whip up some statistics:

cat failedcomments.log | sed -e 's/^.*mistake\: //' | grep -iv '\[' | sort | uniq -c | sort -nr

(explanation, courtesy of )

That gave me this output:

Count Reason
3922 invalidkey
148 website
67 nokey
67 noarticleid
56 noname
17 shortcomment
4 badarticlepath
3 longcomment
2 invalidemail
1 shortname

My first thoughts here were firstly along the lines of "wow, that's a lot of spam", secondly "that comment key is working really well!", and thirdly "I didn't realise how helpful that fake website field is". Still, though I had a table, I thought a visualisation might help to put things into perspective.

A pair of pie charts showing a breakdown of which mistakes spammers made most often. Explained below.

There - much better :D As you might have suspected if you've been following my blog here for a while, having an invalid comment key is the most common mistake spammers make.

The comment key is a hidden field in the comment form that is actually a transformed timestamp of the time you loaded the page. Working it backwards, I can work out how long it took you to submit a comment from first loading the page.

Using a companion log file to the one that I generated the above pie chart from, I've calculated that 3558 potential comments were submitted within 10 seconds of loading the page! No ordinary humans are that fast (especially considering you probably want to read the article before commenting!) - they have to be bots. Here's a graph to illustrate the dropoff (the time is in seconds):

A graph illustrating how many bots commented on my blog unnaturally fast.

Out of the other reasons that people failed, "website" was the second most common mistake, with ~3.45% of spammers getting caught out on it. This mistake refers to another of my little spam traps - defense in depth is always good! This particular one is an ordinary-looking website address field that's hidden away with some fancy CSS. Curious, I decided to investigate further - and what I found was fascinating.

About 497 spammers entered an invalid website address (i.e. one that doesn't start with http) into the website box - which I really can't understand, since it's got a (hidden) label and an appropriate name and type to match - 90 of which decided that "seo plugin" was a brilliant thing to fill it with! It's important to note here that spammers who got caught by the invalid comment key filter above are included in these statistics - here's the bash command I used here:

grep -i '"website"' rawsubmits.jsonlog  | sed -e 's/^.*"website": "//' -e 's/",//' -e 's/\\\///' | uniq | egrep -iv '^http' | wc -l

Other examples include "watch live sports free", "Samantha", "just click the following web site", long strings of html-encoded unicode characters (japanese I think, after decoding one), and more. Perfectly baffling, if you ask me (if you can shed some light on this one, please comment below!).

57 spambots forgot their own name. This could be because the box you put your name in below has a name of 'name', but an id of 'namebox' - which may have caused some confusion for some of the more stupid bots.

After all that, there were 3 long comments (probably a bunch of word salad), 2 invalid email addresses that weren't caught by any filters above, and 1 short name (under 3 characters).

That's about it for this impromptu analysis of my comments log! This took far longer than I thought it would to type up. Did you find it interesting? Thinking of putting some of these techniques into practice yourself? Comment below!

I now have a public website status page!

My new status page! Just recently Uptime Robot (the awesome service that I use to monitor my server's uptime) have released a new feature: Public status pages! Status pages appear to be free (for now), so I've gone and set one up. Now all of you can see what's up with my website if it's down.

They even allow you to point a (sub)domain at it too. I did this too, so you can visit my status page at

SnoozeSquad.js - Finally a decent lazy image loader

It's been on my todo list for positively ages, but I've finally gotten around to replacing the existing lazy image loading on this website (not on the blog yet, sorry!) with a new one of my own devising.

Lazy image loading is a technique in which you only load images on a given webpage if they are near the user's field of view. This saves bandwidth by preventing images that are never seen from being downloaded.

Since I've been unable to find a good, solid, reliable lazy image loading script on the web, I thought it best to post about it here so that you can use it too.

Link: SnoozeSquad (Direct download)

Website Updates February 2016: Syntax highlighting updates & more

I was going to test some coursework, but I got distracted and ended up updating a few things around here instead. Here's the list:

  1. Rewrote build system for CSS / Javascript minification
  2. Replaced layzr.js with [be]Lazy.js
  3. Updated Prism, the syntax highlighting library I use around here

The only change you should notice is the syntax highlighting here on my blog. All the other changes are behind the scenes (to take advantage of HTTP/2).

Prolog source code is highlighted now:

% Get the nth element
% From
nthelement(0, [ Head | _ ], Head).
nthelement(N, [ _ | Tail ], Result) :-
    N > 0,
    NRecurse is N - 1,
    nthelement(NRecurse, Tail, Result).

There are some nice additions to the syntax highlighting algorithm, too! The most noticeable are the line numbers. There are also some extra tooltips:

html, body { font-size: 100%; }
body {
    font-family: sans-serif;
    background: linear-gradient(45deg, #b81831, #1a0871);
    color: rgb(187, 28, 59);

    transform: rotate(86deg);
    animation: 3s ease-in-out slidein;
}

I will likely update this post a few more times as I continue to test the new configuration.
