Website Integrations #1: Open Graph

The logo of the Open Graph protocol.

These days, if you share a link to a website or a blog post with a friend or on a social networking site, sometimes the link expands to a preview of the link you've just posted. Personally, I find this behaviour to be quite helpful, as it lets me get an idea as to what it is that I'm about to click on.

Unfortunately, when it comes to the code behind these previews, there are no fewer than 3(!) different protocols you need to implement in order to get them to work, since Facebook, Twitter, and the rest of the web community haven't been talking to each other quite like they should have been.

Anyway, after implementing these 3 protocols and having a bit of trouble with them, I thought I'd write up a mini-series on the process I went through, the problems I encountered, and how I solved them. In this post, I'm going to explain Facebook's Open Graph protocol.

I decided that I'd implement these 3 protocols on my home page and each blog post's page. Open Graph was the easiest - all it requires is a bunch of meta tags. These tags are split into 2 parts - the common tags, which all page types should have, and the type-specific tags, which depend on the type of page you're implementing them on. Here's the list of common tags I implemented:

  • og:title - The title of your page
  • og:description - A short description of your page
  • og:image, og:image:url, and og:image:secure_url - The url of an image that would fit as a preview for the page
  • og:url - The url of the page (not sure why this is required, since you have to know the url in order to request the page... :P Perhaps it's to help with deduplication - I'm not sure)

These were fairly easy for my home page:

<meta property="og:title" content="Starbeamrainbowlabs" />
<meta property="og:description" content="Hi! I am a computer science student who is in their second year at Hull University. I started out teaching myself about various web technologies, and then I managed to get a place at University, where I am now." />
<meta property="og:image" content="http://starbeamrainbowlabs.com/favicon.png" />
<meta property="og:image:url" content="http://starbeamrainbowlabs.com/favicon.png" />
<meta property="og:image:secure_url" content="https://starbeamrainbowlabs.com/favicon.png" />
<meta property="og:url" content="https://starbeamrainbowlabs.com/" />

When I went to test it using Facebook's official testing tool, the biggest problem I had was that the image wouldn't show up - no matter what I did. I eventually found this Stack Overflow answer, which explained that Facebook doesn't support https urls in anything other than the og:image:secure_url meta tag (even though they say they do) - so changing the urls to regular http solved the problem.

Next, I took a look at the type-specific tags. There's a whole bunch of them (check out this section of the spec) - I decided on the profile type for the index page of my website here:

<meta property="og:type" content="profile" />

The profile type has a few extra specific meta tags that need setting too, so I added those:

<meta property="profile:first_name" content="Starbeamrainbowlabs" />
<meta property="profile:last_name" content="Tjovik" />
<meta property="profile:username" content="Starbeamrainbowlabs" />

With that done, I turned my attention to my blog posts. Since the page is rendered in PHP (and typing out all those meta tags was rather annoying), I created a teensy little framework to output the meta tags for me:

$metaTags = [];
$metaTags["property"] = "value"; // one entry per meta tag

$renderedMetaTags = "";
// Render each tag on its own line (note the trailing \n)
foreach($metaTags as $metaKey => $metaValue)
    $renderedMetaTags .= "\t<meta property=\"$metaKey\" content=\"$metaValue\" />\n";
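
For example, populating it with the common tags might look something like this (the values here are invented for illustration):

$metaTags["og:title"] = "Example post title";
$metaTags["og:description"] = "A short description of the example post.";
$metaTags["og:url"] = "https://example.com/example-post";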

Now I can add as many meta tags as I like, with a fraction of the typing - and it looks neater too :D With that done, I implemented the basic meta tags. Here's some example output from the last post:

<meta property="og:title" content="4287 Reasons why your comments weren't posted" />
<meta property="og:description" content="I don't get a lot of real comments on here from what I can tell, as you've probably noticed. I don't particularly mind (though it's always awesome whe.... (click to read more)" />
<meta property="og:image" content="http://starbeamrainbowlabs.com/blog/images/20170406-Spammer-Mistakes.png" />
<meta property="og:image:url" content="http://starbeamrainbowlabs.com/blog/images/20170406-Spammer-Mistakes.png" />
<meta property="og:image:secure_url" content="https://starbeamrainbowlabs.com/blog/images/20170406-Spammer-Mistakes.png" />
<meta property="og:url" content="https://starbeamrainbowlabs.com/blog/article.php?article=posts%2F228-4287-Reasons-Your-Comments-Were-Not-Posted.html" />

That wasn't too tough. Next, I looked at the list of types again, and chose the article type for my blog posts.

<meta property="og:type" content="article" />

Like the profile type earlier, the article type also comes with a few type-specific meta tags (what they mean by not fitting into a 'vertical' I have no idea). I decided not to implement all of the ones available here, since not all of them were practical to implement. Here's some more example output for the new tags:

<meta property="article:author" content="https://starbeamrainbowlabs.com/" />
<meta property="article:published_time" content="2017-04-08T12:56:46+01:00" />

Unfortunately, the article published time is actually rather awkward to get hold of (even though it's output at the bottom of every article), so I went with the 'last modified' time instead. The published time is marked up with HTML microdata. Hopefully it doesn't cause too many issues later - though I can always change it :P
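
For reference, producing an ISO 8601 timestamp like the one above from a file's last-modified time is a one-liner in PHP ($articleFilePath below is a hypothetical variable name - my actual code differs):

$metaTags["article:published_time"] = date("c", filemtime($articleFilePath));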

With that (and a final test), it looked like my Open Graph implementation was working as intended. Next time, I'll show you how I implemented a simple oEmbed provider.

4287 Reasons why your comments weren't posted

I don't get a lot of real comments on here from what I can tell, as you've probably noticed. I don't particularly mind (though it's always awesome when I do get one!) - but what I do mind about is the spam. Since February 2015, I've gotten 4287 spam comments. 4287! It's actually quite silly, when you think about it.

The other day I was fiddling with the code behind this blog (posts about that coming soon!), and I discovered that I'd implemented a log ages ago that records each and every spammer, and the mistake they made - and I thought I'd share some statistics here, and some tips for dealing with spam yourself (I've posted about tactics before here).

Here's an extract from my logs (full logs available on request):

[ Sun, 02 Apr 2017 23:17:25 +0100] invalid comment | ip: 94.181.153.194 | name: ghkkll | articlepath: posts/120-Cpu-Registers.html | mistake: shortcomment
[ Mon, 03 Apr 2017 02:16:58 +0100] invalid comment | ip: 191.96.242.17 | name: exercise pants | articlepath: posts/010-Gif-Renderer.html | mistake: invalidkey
[ Mon, 03 Apr 2017 02:16:58 +0100] invalid comment | ip: 191.96.242.17 | name: exercise pants | articlepath: posts/010-Gif-Renderer.html | mistake: invalidkey
[ Mon, 03 Apr 2017 02:16:59 +0100] invalid comment | ip: 191.96.242.17 | name: exercise pants | articlepath: posts/010-Gif-Renderer.html | mistake: invalidkey
[ Mon, 03 Apr 2017 02:16:59 +0100] invalid comment | ip: 191.96.242.17 | name: exercise pants | articlepath: posts/010-Gif-Renderer.html | mistake: invalidkey

Since the output format I chose is nice and regular, I could use a quick bit of bash magic to whip up some statistics:

cat failedcomments.log | sed -e 's/^.*mistake\: //' | grep -iv '\[' | sort | uniq -c | sort -nr

(In short: the sed strips everything up to and including the mistake marker, the grep throws away any lines that didn't contain one, and sort | uniq -c | sort -nr counts each unique mistake and ranks them in descending order - explanation courtesy of explainshell.com.)

That gave me this output:

Count  Reason
 3922  invalidkey
  148  website
   67  nokey
   67  noarticleid
   56  noname
   17  shortcomment
    4  badarticlepath
    3  longcomment
    2  invalidemail
    1  shortname

My first thoughts here were along the lines of "wow, that's a lot of spam", then "that comment key is working really well!", and finally "I didn't realise how helpful that fake website field is". Still, though I had a table, I thought a visualisation might help to put things into perspective.

A pair of pie charts showing a breakdown of which mistakes spammers made most often. Explained below.

There - much better :D As you might have suspected if you've been following my blog here for a while, having an invalid comment key is the most common mistake spammers make.

The comment key is a hidden field in the comment form that is actually a transformed timestamp of the time you loaded the page. Working it backwards, I can work out how long it took you to submit a comment from first loading the page.
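
I won't reveal the exact transformation I use here (that would rather defeat the point!), but here's a minimal sketch of the general idea - the XOR mask and the 10-second check below are stand-ins of my own invention:

// When rendering the comment form, embed a lightly-obfuscated timestamp
$mask = 0x5f3a2c1d; // stand-in for the real (secret) transformation
$commentKey = dechex(time() ^ $mask);
echo("<input type='hidden' name='key' value='$commentKey' />");

// When the form comes back, reverse the transformation to recover the time
// the page was loaded, and work out how long the visitor took
$loadedAt = hexdec($_POST["key"]) ^ $mask;
$timeTaken = time() - $loadedAt;
if($timeTaken < 10) // nobody reads and comments that quickly....
    exit("invalid comment: too fast to be human");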

Using a companion log file to the one that I generated the above pie chart from, I've calculated that 3558 potential comments were submitted within 10 seconds of loading the page! No ordinary human is that fast (especially considering you probably want to read the article before commenting!) - they have to be bots. Here's a graph to illustrate the dropoff (the time is in seconds):

A graph illustrating how many bots commented on my blog unnaturally fast.

Out of the other reasons that people failed, "website" was the second most common mistake, with ~3.45% of spammers getting caught out on it. This mistake refers to another of my little spam traps - defense in depth is always good! This particular one is a regular website address hidden field, which is hidden via some fancy CSS. Curious, I decided to investigate further - and what I found was fascinating.
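
Before I get to what I found, here's a rough sketch of how a honeypot field like this works (illustrative only - my real markup and CSS are a little different):

// An ordinary-looking field that humans never see, because CSS such as
// .offscreen { position: absolute; left: -9999px; } hides it from view.
// Bots that blindly fill in every field fall straight into the trap.
echo("<input type='text' name='website' class='offscreen' />");

// Server-side: no human could have filled this in, so anything in it is spam
if(!empty($_POST["website"]))
    exit("invalid comment: website honeypot filled in");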

About 497 spammers entered an invalid website address (i.e. one that doesn't start with http) into the website box - which I really can't understand, since it's got a (hidden) label and an appropriate name and type to match - 90 of which decided that "seo plugin" was a brilliant thing to fill it with! It's important to note here that spammers who got caught by the invalid comment key filter above are included in these statistics - here's the bash command I used here:

grep -i '"website"' rawsubmits.jsonlog  | sed -e 's/^.*"website": "//' -e 's/",//' -e 's/\\\///' | uniq | egrep -iv '^http' | wc -l

Other examples include "watch live sports free", "Samantha", "just click the following web site", long strings of html-encoded unicode characters (Japanese I think, after decoding one), and more. Perfectly baffling, if you ask me (if you can shed some light on this one, please comment below!).

57 spambots forgot their own name. This could be because the box you put your name in below has a name of 'name', but an id of 'namebox' - which may have caused some confusion for some of the more stupid bots.

After all that, there were 3 long comments (probably a bunch of word salad), 2 invalid email addresses that weren't caught by any filters above, and 1 short name (under 3 characters).

That's about it for this impromptu analysis of my comments log! This took far longer than I thought it would to type up. Did you find it interesting? Thinking of putting some of these techniques into practice yourself? Comment below!

Signing email with GPG/PGP in Evolution

Recently I've moved to a new laptop, and for the longest time I couldn't figure out why I wasn't able to sign my messages with gpg any more (I'm on keybase as sbrl). Turns out the problem was that gpg didn't 'trust' my private key. This post documents how I fixed it:


# First, import your private key into gpg (you've probably done this already)
gpg --import < secret_key.priv
# Then, get gpg to edit your private key, and ask it to trust your private key
gpg --edit-key toaster5@waffletoast.net
> trust
> 5
> y
> quit

Once done, you can select your private key in Evolution's preferences (SHIFT + CTRL + S).

Did this help you out? Still having issues? Let me know in the comments below!

zram: A seriously cool way to do more with your ram

Got memory issues? Keep running out of ram? Want to store a ridiculously large file in ram because your disk (or network) is slow? If you're running linux, then look no further than zram - a hidden gem available since 3.14.

It's a little bit underdocumented, but thankfully easy to figure out, so I'm gathering everything I can find about it into one place.

zram is a kernel module for linux version 3.14 and above that allows you to use a portion of your ram to create virtual block devices (like /dev/sda1 or /dev/sdb3 for example) that actually compress data you put on them and store it in ram. It's installed by default (at least in ubuntu 16.10!) - just not enabled - so you should already have access to it if you're running a fairly recent kernel.

To start using it, you first have to enable it and load it into your running kernel. Despite how dangerous that might sound, it's actually quite safe (I think :P). Obviously, you'll need sudo (or 'Administrator' for Windows fans) privileges to do this - and if you want it to persist across reboots, you'll have to edit some file like /etc/rc.local or similar (though writing an init.d/upstart/systemd script/service file would probably be a better way to do it).

Here's the command to load the required kernel module:

sudo modprobe zram num_devices=2

In the above, I load the zram kernel module, and tell it to create 2 virtual devices - /dev/zram0 and /dev/zram1 (why it's called modprobe when it actually loads the module I have no idea). You can have as many devices as you want here - they don't do anything until they are initialised.

Next, as you may have guessed, you need to initialise one of the new virtual devices in order to use it. That's fairly easy too:

sudo zramctl --find --streams 2 --size 2GB --algorithm lz4

I've specified a few more options here, so I'll go through them in turn.

  • --find - This tells it to use the next available and uninitialised device. It'll echo the path to the device it ended up using to your terminal once it's done initialising.
  • --streams 2 - Sets the number of compression streams the device has. I'd recommend setting this to something around the number of cpu cores you have.
  • --size 2GB - Here I tell it the size of the virtual disk I want it to create. At most the device will use this amount of memory. I've specified 2GB here, but you should allocate a different amount based on your available ram - KB and MB are supported as suffixes here too.
  • --algorithm lz4 - This is the compression algorithm I told it to use. Currently it only supports 'lzo' and 'lz4' - I'm not sure which one is better. If you do, let me know in the comments :-)

Now that the zram device is all initialised, we can start doing things with it! Need a place to put a few files? Try this command:

sudo mke2fs -t ext4 -O ^has_journal -L "zram device" /dev/zram0

In the above I format /dev/zram0 with an ext4 filesystem. Note that I also disable journaling - journaling is a system that ensures integrity in the event of an unplanned shutdown or loss of power - great for persistent drives like the one holding your operating system, but not so useful for one that'll get deleted automatically on shutdown anyway :-)

Perhaps you need a bit of extra speedy swap space because you don't have quite enough ram? That's easy too:

sudo mkswap --label 'zram swap partition' /dev/zram0
sudo swapon --priority 10 /dev/zram0
swapon

In the above I format /dev/zram0 to be a swap partition, and then turn it on. The final command lists the swap devices that are currently active. You can learn more about swap partitions here.

That concludes this post. Got any cool uses for this? Found something I've missed? Write a comment below!

The HTTPS version of my website is insecure? Nonsense!

A chrome privacy warning.

I'm still rather ill, but I wanted to post about an issue I've just had with my website. Upon visiting my website in the latest version of chrome beta (57 as of the time of typing), I discovered that chrome had decided that the connection was 'insecure'. It didn't tell me precisely what the problem was (even in the developer tools :-) - why would I possibly need to know that? - only that it considered it insecure.

After googling around a bit, I didn't find any specific articles on the subject - I suspect Google's recent move to start considering regular http connections insecure is swamping all the relevant articles in the search results.

The big clue came when I discovered that one of my subdomains that uses a Let's Encrypt certificate works as expected. You see, the main website actually used a StartSSL certificate. My running theory is that even though my certificate was an SHA2 certificate, chrome decided that it was not trustworthy because there was an SHA1 certificate in the trust chain somewhere.

The fix: Replace all my existing StartSSL certificates with Let's Encrypt ones. It seems to have fixed the issue for now. I also discovered that Let's Encrypt certificates can also be used in mail servers (i.e. SMTP and IMAP) too - so I don't have to go and fiddle about with finding an alternative certificate provider.

In future, it would certainly be helpful if Google actually told people precisely what they were going to do before they did it....!

Was this useful? Could it be improved? Would you like a Let's Encrypt tutorial? Let me know in the comments below!

Learning Prolog Series List

The new Learning Prolog banner!

I'm rather ill at the moment, but I had the idea of creating a banner for the Learning Prolog series I did a while ago, and at the same time I ended up realising that I never posted a full series list for it. This banner has been something to occupy my mind with while I recover, at least.

Anyway, here's the series list:

What's a package manager? An introduction

A selection of package manager logos. From left to right: Bower, npm, ruby gems, nuget, and pypi

If you haven't written much code outside of university yet, then you probably won't have heard of the concept, but today package managers are everywhere. Every language and ecosystem has one. C♯ has Nuget, Javascript has npm and Bower, and Debian has apt. Even PHP now has one, called composer.

If you're thinking something along the lines of "Why do I need someone to take my parcels to the post office for me?", don't worry! I intend to explain what a package manager is and why / when you would want to use one in this post.

How many times have you downloaded a zip of someone else's library, and manually extracted it into your project? Perhaps it inflated the size of your version control repository, and slowed things down. This is just one of the problems that a package manager is designed to solve. Let me explain.

Imagine that you've just built a house. You've got all the walls and floors in place, and all that's left is to call the gas man and get some central heating installed. You call the gas man, and he comes out with his equipment and gets ready to install a brand new central heating system when he asks you about an electricity supply.

Oh, gosh! You totally forgot about that. Then you have to call the electrician and get him to come out and install the electricity supply. When he comes out, he asks whether you've got planning permission... and, well, you get the picture.

Imagine if you could just say "install this library!", and it would install it, handling all the dependencies for you. Imagine if you could store a list of libraries and their versions, such that you don't have to commit all your libraries to version control - saving on space and speeding up your repository. What if you could issue a single command, and all your libraries could be brought up to the latest version? Now, with a package manager, you can!
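
To make that concrete with PHP's composer: after a single "composer require monolog/monolog" in your project folder (Monolog is a popular logging library, chosen here purely as an example), using it - dependencies and all - looks like this:

// composer downloads the library and everything it depends on into vendor/,
// and one require pulls in an autoloader for the whole lot at once
require("vendor/autoload.php");

$log = new Monolog\Logger("example");
$log->pushHandler(new Monolog\Handler\StreamHandler("app.log"));
$log->warning("Hello from a package-managed library!");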

Most programming languages and operating systems you come across will have one, and it's usually a good idea to make use of it. To end this post, I'm going to list a few common languages, their associated package managers, and a link to a tutorial on how to use it.

  • C♯ - Nuget (Visual Studio tutorial)
  • Javascript - npm (tutorial, should work for Linux, Mac, and Windows users)
  • Ruby - RubyGems (tutorial, Linux users should have the gem command installed already along with Ruby)
  • Python - pip (tutorial - pip comes with Python on both Linux and Windows, and installs packages from pypi, the Python Package Index)
  • Debian - apt (tutorial, should work for most Debian-based linux distributions - apt is probably the command behind your software manager gui. Not technically a programming language library package manager, but really useful all the same.)
  • PHP - composer (tutorial)

Need help? Want to say hello? Post below!

PixelBot Part 2: Devices need protocols, apparently

A selection of the technologies I'm using to put together my PixelHub server.

So there I was. I'd just got home, turned on my laptop, opened the Arduino IDE and Monodevelop, and then.... nothing. I knew I wanted my PixelBot to talk to the PixelHub I'd started writing, but I was confused as to how I could make it happen.

In this kind of situation, I realised that although I knew what I wanted them to do, I hadn't figured out the how. As it happens, when you're trying to get one (or more, in this case) different devices to talk to each other, there's something rather useful that helps them all to speak the same language: a protocol.

A protocol is a specification that defines the language that different devices use to talk to each other and exchange messages. Defining one before you start writing a networked program is probably a good idea - I find it particularly helpful to write a specification for the protocol that the program(s) I'm writing will use, especially if their function is complicated.

To this end, I've ended up spending a considerable amount of time drawing up the PixelHub Protocol - a specification document that defines how my PixelHub server is going to talk to a swarm of PixelBots. It might seem strange at first, but I decided on a (mostly) binary protocol.

Upon closer inspection though, (I hope) it makes a lot of sense. Since the Arduino is programmed in C++ and has a limited amount of memory, it doesn't have any of the standard string manipulation functions that you're used to in C♯. Since C++ is undoubtedly the harder of the 2 to write, I decided to make the C++ side easier to write rather than the C♯ side. Messages on the Arduino side come in as a byte array, so (in theory) it should be easy to pick out certain known parts of the array and cast them into various different fundamental types.
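
To illustrate what handling a binary message actually involves (this is a generic example I've made up, not the real PixelHub message format), here's how one might pack and unpack a simple type + length + payload message in PHP:

// Sending side: a 1 byte message type, then a 2 byte big-endian payload
// length, then the payload itself
$payload = "\x0a\xff\x00"; // e.g. some colour data
$message = pack("Cn", 0x01, strlen($payload)) . $payload;

// Receiving side: pull the known header fields back out of the raw bytes,
// then slice off the payload using the decoded length
$header = unpack("Ctype/nlength", $message);
$payload = substr($message, 3, $header["length"]);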

With the specification written, the next step in my PixelBot journey is to actually implement it, which I'll be posting about in the next entry in this series!

How to update your linux kernel version on a KimSufi server

(Or why PHP throws random errors in the latest update)

Hello again!

Since I had a bit of a time trying to find some clear information on the subject, I'm writing this blog post so that it might help others. Basically, last night I updated the packages on my server (the one that runs this website!). There was a PHP update, but I didn't think much of it.

This morning, I tried to access my ownCloud instance, only to discover that it was throwing random errors and refusing to load. I'm running PHP version 7.0.16-1+deb.sury.org~xenial+2. It was spewing errors like this one:

PHP message: PHP Fatal error:  Uncaught Exception: Could not gather sufficient random data in /srv/owncloud/lib/private/Security/SecureRandom.php:80
Stack trace:
#0 /srv/owncloud/lib/private/Security/SecureRandom.php(80): random_int(0, 63)
#1 /srv/owncloud/lib/private/AppFramework/Http/Request.php(484): OC\Security\SecureRandom->generate(20)
#2 /srv/owncloud/lib/private/Log/Owncloud.php(90): OC\AppFramework\Http\Request->getId()
#3 [internal function]: OC\Log\Owncloud::write('PHP', 'Uncaught Except...', 3)
#4 /srv/owncloud/lib/private/Log.php(298): call_user_func(Array, 'PHP', 'Uncaught Except...', 3)
#5 /srv/owncloud/lib/private/Log.php(156): OC\Log->log(3, 'Uncaught Except...', Array)
#6 /srv/owncloud/lib/private/Log/ErrorHandler.php(67): OC\Log->critical('Uncaught Except...', Array)
#7 [internal function]: OC\Log\ErrorHandler::onShutdown()
#8 {main}
  thrown in /srv/owncloud/lib/private/Security/SecureRandom.php on line 80" while reading response header from upstream, client: x.y.z.w, server: ownc

That's odd. After googling around a bit, I found this page on the Arch Linux bug tracker. I'm not using arch (Ubuntu 16.04.2 LTS actually), but it turned out that this comment shed some much-needed light on the problem.

Basically, the PHP developers have changed the way they ask the Linux kernel for random bytes. They now use the getrandom() kernel function instead of /dev/urandom as they did before. The trouble is that getrandom() was introduced in linux 3.17, and I was running OVH's custom 3.14.32-xxxx-grs-ipv6-64 kernel.
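
If you want to check whether a system is affected, asking PHP for some random bytes directly should reproduce the problem (a quick test of my own devising - run it in a standalone script or with php -r):

// On an affected kernel, this throws the same "Could not gather sufficient
// random data" exception that appears in the logs above
try {
    var_dump(random_int(0, 63));
} catch(Exception $error) {
    echo("Affected! " . $error->getMessage() . "\n");
}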

Thankfully, after a bit more digging, I found this article. It suggests installing the kernel you want and moving one of the grub config file generators to another directory, but I found that simply changing the permissions did the trick.

Basically, I did the following:


apt update
apt full-upgrade
apt install linux-image-generic
chmod -x /etc/grub.d/06_OVHkernel
update-grub
reboot

The above first updates everything on the system, and then installs the linux-image-generic package - the pseudo-package that always depends on the latest stable kernel image available.

Next, I remove execute privileges on the file /etc/grub.d/06_OVHkernel. This is the file that gives the default installed OVH kernel priority over any other installed kernels, so it's important to exclude it from the grub configuration process.

Lastly, I update my grub configuration with update-grub and then reboot. You need to make sure that you update your grub configuration file, since if you don't it'll still use the old OVH kernel!

With that all done, I'm now running 4.4.0-62-generic according to uname -a. If you follow these steps yourself, make sure you have a backup! While I am happy to try and help you out in the comments below, I'm not responsible for any consequences that may arise as a result of following this guide :-)

Coding Conundrums Evolved 5: Space Transmissions

A pretty procedurally generated nebula. Code not written by me! (Banner from here with the seed 64oekm9bi4cg.)

This post is part of a biweekly series of programming problems, targeted at those learning to program using C# at University. The problems will vary in difficulty - some will be rather easy, and some will be quite challenging! Don't worry if you can't solve a problem just yet - come back to it in a few months and it'll seem a lot easier than it did before.

Coding Conundrums Evolved is back for another episode! I can't promise that these will be a regular feature on here due to the amount of time they take to put together, but I had a good idea for another one that I really couldn't turn down :D

The 2 new elligon-class starships, called the Lilai and the Inveinya, are on a joint mission deep in the newly-discovered Tenobulet Nebula. Due to the thickness of the gas clouds surrounding the 2 ships, they are having difficulty communicating: talking over a video-link is causing the quantum neural-network based interference analysis system to drain all their available power clearing up the incoming data streams.

To combat this, the genius communications officer Mr. Ümafa has devised the Dassma System. The system allows the 2 ships to communicate by sending text-based messages, reducing power usage by sending a quantum coaxial error correction stream in parallel to the actual message stream.

Anyway, Mr. Ümafa needs your help as the newly-hired systems programming engineer, since all the other programmers onboard are busy scratching their heads over a new asymmetric inverse shift register algorithm whilst writing a firmware upgrade for the quadrilateral energy transmission stabilisers to increase their efficiency.

Unfortunately, his Dassma System can only transmit messages in short chunks of 128 characters, and he's finding it terribly cumbersome to keep splitting up the long messages that the crew keep asking him to send.

Given a long string as input, write a program that splits the input into multiple chunks that are at most 128 characters long, and outputs each chunk on a separate line.

Helpful Hints

  • Perform every step of the process on paper first.
  • Draw a flowchart of what your program needs to do.
  • If you're having trouble, step through your code line by line and inspect its state at each point.

Challenge

Mr. Ümafa is impressed with your program, but he's got an additional request or two. Unfortunately, Communications Officer Bramaar of the Inveinya keeps mixing up the different message parts, and he finds it annoying when it splits a message in the middle of a word.

Mr. Ümafa would like you to upgrade your program so that it only splits on spaces, and so that it labels each of the chunks with their number in the sequence.

The solution archive is here. I'll release the password to this challenge in the comments in 2 weeks' time.
