
How to set up a WebDav share with Nginx

I've just been setting up a WebDav share on a Raspberry Pi 3 for my local network (long story), and since it was a bit of a pain to set up (I had to combine a bunch of different tutorials out there to make mine work), I thought I'd share how I did it here.

I'll assume you have a Raspberry Pi all set up and up-to-date in headless mode, with ufw as your firewall (if you need help with this, post in the comments below or check out the Raspberry Pi Stack Exchange). To start with, we need to install the nginx-full package:

sudo apt update
sudo apt install nginx-full
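
If you'd like to double-check that the WebDAV modules actually made it into the build (a quick sanity check, not one of the original steps), something like this should list the relevant configure flags:

nginx -V 2>&1 | grep -i dav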

Note that we need the nginx-full package here, because the nginx-extras and plain nginx packages don't include the required additional WebDAV support modules. Next, we need to configure Nginx. Nginx's configuration files live at /etc/nginx/nginx.conf, and in /etc/nginx/conf.d. I did something like this for my nginx.conf:

user www-data;
worker_processes 4;
pid /run/nginx.pid;

events {
    worker_connections 768;
    # multi_accept on;
}

http {

    ##
    # Basic Settings
    ##

    sendfile on;
    tcp_nopush on;
    tcp_nodelay on;
    keepalive_timeout 65;
    types_hash_max_size 2048;
    # server_tokens off;

    # server_names_hash_bucket_size 64;
    # server_name_in_redirect off;

    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    ##
    # SSL Settings
    ##

    ssl_protocols TLSv1 TLSv1.1 TLSv1.2; # Dropping SSLv3, ref: POODLE
    ssl_prefer_server_ciphers on;

    ##
    # Logging Settings
    ##

    access_log /var/log/nginx/access.log;
    error_log /var/log/nginx/error.log;

    ##
    # Gzip Settings
    ##

    gzip on;

    gzip_vary on;
    gzip_proxied any;
    gzip_comp_level 6;
    gzip_buffers 16 8k;
    gzip_http_version 1.1;
    gzip_types text/plain text/css application/json application/javascript text/xml application/xml application/xml+rss text/javascript;

    ##
    # Virtual Host Configs
    ##

    include /etc/nginx/conf.d/*.conf;
    include /etc/nginx/sites-enabled/*;
}

Not many changes here. Then, I created a file called 0-webdav.conf in the conf.d directory, and this is what I put in it:

server {
    listen 80;
    listen [::]:80;

    server_name plans.helenshydrogen.be;

    auth_basic              realm_name;
    auth_basic_user_file    /etc/nginx/.passwords.list;

    dav_methods     PUT DELETE MKCOL COPY MOVE;
    dav_ext_methods PROPFIND OPTIONS;
    dav_access      user:rw group:rw all:r;

    client_body_temp_path   /tmp/nginx/client-bodies;
    client_max_body_size    0;
    create_full_put_path    on;

    root /mnt/hydroplans;
}

Now this is where the magic happens. The dav_access directive tells nginx to allow everyone to read, but only logged in users to write to the share. This isn't actually particularly relevant, because of the auth_basic and auth_basic_user_file directives, which tell nginx to require people to login to the share before they are allowed to access it.

It's also important to note that I found that Windows (10, at least) didn't like the basic authentication - even though Ubuntu's Nautilus accepted it just fine - so I had to comment that bit out :-(
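
If you'd like to try getting Windows to cooperate anyway, the workaround that's usually suggested (I haven't verified it against this setup, so treat it as an assumption) is to tell the Windows WebDAV redirector to allow basic authentication over plain HTTP, and then restart the WebClient service - from an administrator command prompt:

rem 2 = allow basic auth over both SSL and non-SSL connections
reg add HKLM\SYSTEM\CurrentControlSet\Services\WebClient\Parameters /v BasicAuthLevel /t REG_DWORD /d 2 /f
net stop WebClient
net start WebClient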

If you do still want authentication (hey, maybe you'll have better luck than I did :P), then you'll need to set up the passwords file. Here's how you create it:


echo -n 'helen:' | sudo tee /etc/nginx/.passwords.list
openssl passwd -apr1 | sudo tee -a /etc/nginx/.passwords.list 
Password:

The above creates a user called helen, and asks you to type a password. If you're adding another user to the file, simply change the first tee to be tee -a to avoid overwriting the first one.
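
For example, adding a second (entirely hypothetical) user called bob would look like this:

echo -n 'bob:' | sudo tee -a /etc/nginx/.passwords.list
openssl passwd -apr1 | sudo tee -a /etc/nginx/.passwords.list
Password: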

With that all configured, it's time to test the configuration file, and, if we're lucky, restart nginx!


sudo nginx -t
sudo systemctl restart nginx
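
If you're using ufw as suggested at the start, don't forget to let web traffic through as well (an extra step not covered above):

sudo ufw allow 80/tcp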

That's all you should need to do to set up a simple WebDav share. Remember that this is a starting point, and not an ending point - there are a few big holes in the above that you'll need to address, depending on your use case (for example, I haven't included the setup of https / encryption - try letsencrypt for that).

Here are the connection details for the above for a few different clients:

  • Ubuntu / Nautilus: (Go to "Other Locations" and paste this into the "Connect to Server" box) dav://plans.helenshydrogen.be/
  • Windows: (Go to "Map Network Drive" and paste this in) http://plans.helenshydrogen.be/
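
If you'd rather mount the share from a terminal instead, the davfs2 package should also do the job (I haven't covered it above, so consider this a hedged extra - /mnt/plans is just an arbitrary mount point):

sudo apt install davfs2
sudo mkdir -p /mnt/plans
sudo mount -t davfs http://plans.helenshydrogen.be/ /mnt/plans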

Did this work for you? Have any problems? Got instructions for a WebDav client not listed here? Let me know in the comments!

Website Integrations (Mini!) Series List

Since I ended up doing a mini-series on the various website integrations I implemented, I thought that you might find a (mini-)series list useful. Here it is:

  • Website Integrations #1: Open Graph
  • Website Integrations #2: oEmbed
  • Website Integrations #3: Twitter cards

That's just about all I've got to mention here. Do you have any suggestions and / or requests on what I should blog about? Let me know below in the comments!

Website Integrations #3: Twitter cards

The twitter logo. (The posts featured in the above images are this one about my new Raspberry Pi 3, and the latest coding conundrums evolved post).

You have arrived in the third of three parts in my mini-series on how I implemented rich snippets. In the last two parts I tackled open graph and becoming an oEmbed provider. In this part, I'll be talking a bit about twitter cards, and how I implemented them.

Twitter's take on the problem seems to be much simpler than Facebook's, which makes for easy implementing :D Like the other two protocols, they decided to have multiple different types of - well, in this case - cards. I decided to implement the summary card type. Like open graph, it adds a bunch of <meta /> tags to the header. Sigh. Anyway, here are the property names I needed to implement:

  • twitter:card - The type of card. In my case this is set to summary
  • twitter:site - This one's confusing. Although it's called 'site', it should actually be set to your own twitter handle - mine is @SBRLabs.
  • twitter:title - The title of the content. Practically identical to open graph's og:title.
  • twitter:description - The description of the content. The same as og:description.
  • twitter:image - A url pointing to an image that should be displayed next to the title and description. Unlike Facebook's open graph, twitter appears to support https urls here with no problem at all.

Since I already had 90% of the infrastructure and calculations in place after implementing open graph, throwing together values for the above wasn't too difficult. Here's an example set of twitter card <meta /> tags generated by the updated code:

<meta property="twitter:card" content="summary" />
<meta property="twitter:site" content="@SBRLabs" />
<meta property="twitter:title" content="Running Prolog on Linux" />
<meta property="twitter:description" content="Hello! I hope you had a nice restful Easter. I've been a bit busy this last 6 months, but I've got a holiday at the moment, and I've just received a .... (click to read more)" />
<meta property="twitter:image" content="https://starbeamrainbowlabs.com/blog/images/20151015-learning-swi-prolog-banner.svg" />

Easy peasy. Next up was testing time. Thankfully, Twitter made this easy too by providing an official testing tool. Interestingly, they whitelist domains based on whether the webmaster has run a url through their tool - so if you want twitter cards to show up, make sure you plug at least one of your website's page urls through their tool.

After a few tweaks, I got this:

The whitelisted message from the twitter card tester.

With that, my work was complete. This brings us to the end of my mini-series on rich-snippet integrations (unless I've missed a protocol O.o Comment below if I have)! I hope you've found it useful. If you have (or even if you haven't!) please let me know in the comments below :D

Website Integrations #2: oEmbed

Welcome to part 2 of this impromptu miniseries! In this second part of three, I'll be showing you a little about how I set up and tested a simple oEmbed provider for my blog posts - I've seen lots of oEmbed client information out there, but not much in the way of provider (or server) implementations.

If you haven't read part one about the open graph protocol yet, then you might find it interesting.

oEmbed is a bit different to open graph in that instead of throwing a bunch of meta tags into your <head />, you instead use a special <link /> element that points interested parties in the direction of some nice tasty json. Personally, I find this approach to be more sensible and easier to handle - the kind of thing you'd expect from an open standard.

To start with, I took a read of their specification, as I did with open graph. It doesn't have as many examples as I'd have liked, and I had to keep jumping around, but it's certainly not the worst I've seen.

oEmbed is built on the idea of providers (that's me!) and consumers (the programs and websites you use). Providers, erm, provide machine-readable information about urls passed to them, and consumers take this information and display it to the user in a manner they think is appropriate.

To start with, I created a new PHP file to act as my provider over at https://starbeamrainbowlabs.com/blog/oembed.php and took a look at the different oEmbed types available - oEmbed has a type system of sorts, similar to open graph. I decided on link - while a rich type would look cool, it would be almost impossible to test with every client out there, and I can't guarantee how the html would be rendered or what space it would have either.

With that decided, I made a list of the properties that I'd need to include in the json response:

  • version - The version of oEmbed. Currently 1.0 as of the time of typing.
  • type - The oEmbed type. I chose link here.
  • title - The title of the page
  • author_name - The name of the author
  • author_url - A link to the author's homepage.
  • provider_name - The provider's name.
  • provider_url - A link to the provider's homepage. I chose my blog index, since this script will only serve my blog.
  • cache_age - How long consumers should cache the response for. I put 1 hour (3600 seconds) here, since I usually correct mistakes after posting that I've missed, and I want them to go out fairly quickly.
  • thumbnail_url - A link to a suitable thumbnail picture.
  • thumbnail_width - The width of the thumbnail image, in pixels.
  • thumbnail_height - The height of the thumbnail image, in pixels.

Then I looked at the data I'd be getting from the client. It all comes in the form of GET parameters:

  • format - Either json or xml. Personally, I only support json.
  • url - The url to send oEmbed information for.

With all the information close at hand, I spent a happy hour or so writing code, and ended up with a script that outputs something like this:

{
    "version": "1.0",
    "type": "link",
    "title": "Website Integrations #1: Open Graph",
    "author_name": "Starbeamrainbowlabs",
    "author_url": "https:\/\/starbeamrainbowlabs.com\/",
    "provider_name": "Stardust | Starbeamrainbowlabs' Blog",
    "provider_url": "https:\/\/starbeamrainbowlabs.com\/blog\/",
    "cache_age": 3600,
    "thumbnail_url": "https:\/\/starbeamrainbowlabs.com\/images\/logos\/open-graph.png",
    "thumbnail_width": 300,
    "thumbnail_height": 300
}

(See it for yourself!)
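
For reference, a bare-bones provider endpoint along these lines might look something like the sketch below. It's not my exact script - lookup_post() in particular is a hypothetical helper that resolves a blog post url to its title and thumbnail:

<?php
$format = isset($_GET["format"]) ? $_GET["format"] : "json";
$url = isset($_GET["url"]) ? $_GET["url"] : null;

// Only json is supported here, and a url is required
if($format !== "json" || $url === null) {
    http_response_code(400);
    exit("Error: Only json is supported, and a url must be provided.");
}

$post = lookup_post($url); // hypothetical helper
if($post === null) {
    http_response_code(404);
    exit("Error: No post was found at that url.");
}

header("content-type: application/json");
echo(json_encode([
    "version" => "1.0",
    "type" => "link",
    "title" => $post["title"],
    "author_name" => "Starbeamrainbowlabs",
    "author_url" => "https://starbeamrainbowlabs.com/",
    "provider_name" => "Stardust | Starbeamrainbowlabs' Blog",
    "provider_url" => "https://starbeamrainbowlabs.com/blog/",
    "cache_age" => 3600,
    "thumbnail_url" => $post["thumbnail_url"],
    "thumbnail_width" => 300,
    "thumbnail_height" => 300
], JSON_PRETTY_PRINT));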

Though the specification includes requirements for satisfying 2 extra GET parameters, maxwidth and maxheight, I chose to ignore them since writing a dynamic thumbnail rescaling script is both rather complicated and requires a not insignificant amount of processing power every time it is used.

After finishing the oEmbed script, I turned my attention to one final detail: The special <link /> tag required for auto-discovery. A quick bit of PHP in the article page renderer adds something like this to the header:

<link rel="alternate" type="application/json+oembed" href="https://starbeamrainbowlabs.com/blog/oembed.php?format=json&url=https%3A%2F%2Fstarbeamrainbowlabs.com%2Fblog%2Farticle.php%3Farticle%3Dposts%252F229-Website-Integrations-1-Open-Graph.html" />

and with that, my oEmbed provider implementation is complete - but it still needs testing! Unfortunately, testing tools for oEmbed are few and far between, but I did manage to find a few:

  • oEmbed Tester - A basic testing tool. Appears to work well for the most part - except the preview. Not sure why it says "Preview not available." all the time.
  • Iframely URL Debugger - Actually a testing tool for some commercial tool or other, but it still appears to accurately test not only oEmbed, but open graph and twitter cards (more on them in the next post!) too!

After testing and fixing a few bugs, my oEmbed provider was complete! Next time, I'll be taking a look at twitter's take on the subject: Twitter cards.

Found this interesting? Comment below! Share it with a friend!

Website Integrations #1: Open Graph

The logo of the Open Graph protocol.

These days, if you share a link to a website or a blog post with a friend or on a social networking site, sometimes the link expands to a preview of the link you've just posted. Personally, I find this behaviour to be quite helpful, as it lets me get an idea as to what it is that I'm about to click on.

Unfortunately, when it comes to the code behind these previews, there are no less than 3(!) different protocols that you need to implement in order to get it to work, since facebook, twitter, and the rest of the web community haven't been talking to each other quite like they should have been.

Anyway, after implementing these 3 protocols and having a bit of trouble with them, I thought I'd write up a mini-series on the process I went through, the problems I encountered, and how I solved them. In this post, I'm going to explain Facebook's Open Graph protocol.

I decided that I'd implement these 3 protocols on my home page and each blog post's page. Open Graph was the easiest - all it requires is a bunch of meta tags. These tags are split into 2 parts - the common tags, which all page types should have, and the type-specific tags, which depend on the type of page you're implementing them on. Here's the list of common tags I implemented:

  • og:title - The title of your page
  • og:description - A short description of your page
  • og:image, og:image:url, and og:image:secure_url - The url of an image that would fit as a preview for the page
  • og:url - The url of the page (not sure why this is required, since you have to know the url in order to request the page... :P Perhaps it's to help with deduplication - I'm not sure)

These were fairly easy for my home page:

<meta property="og:title" content="Starbeamrainbowlabs" />
<meta property="og:description" content="Hi! I am a computer science student who is in their second year at Hull University. I started out teaching myself about various web technologies, and then I managed to get a place at University, where I am now." />
<meta property="og:image" content="http://starbeamrainbowlabs.com/favicon.png" />
<meta property="og:image:url" content="http://starbeamrainbowlabs.com/favicon.png" />
<meta property="og:image:secure_url" content="https://starbeamrainbowlabs.com/favicon.png" />
<meta property="og:url" content="https://starbeamrainbowlabs.com/" />

When I went to test it using Facebook's official testing tool, the biggest problem I had was that the image wouldn't show up - no matter what I did. I eventually found this stackoverflow answer which explained that Facebook doesn't support https urls in anything other than the og:image:secure_url meta tag (even though they say they do) - so changing the urls to regular http solved the problem.

Next, I took a look at the type-specific tags. There's a whole bunch of them (check out this section of the spec) - I decided on the profile type for the index page of my website here:

<meta property="og:type" content="profile" />

The profile type has a few extra specific meta tags that need setting too, so I added those:

<meta property="profile:first_name" content="Starbeamrainbowlabs" />
<meta property="profile:last_name" content="Tjovik" />
<meta property="profile:username" content="Starbeamrainbowlabs" />

With that done, I turned my attention to my blog posts. Since the page is rendered in PHP (and typing out all those meta tags was rather annoying), I created a teensy little framework to output the meta tags for me:

$metaTags = [];
$metaTags["property"] = "value"; // add a key => value pair for each meta tag

$renderedMetaTags = "";
foreach($metaTags as $metaKey => $metaValue)
    $renderedMetaTags .= "\t<meta property=\"$metaKey\" content=\"$metaValue\" />\n";
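
Using it for the common tags then looks something like this (a sketch with hypothetical variable names, not my exact code):

$metaTags["og:title"] = $article->title;
$metaTags["og:description"] = $article->excerpt;
$metaTags["og:image"] = $article->previewImageUrl;
$metaTags["og:url"] = $article->url;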

Now I can add as many meta tags as I like, with a fraction of the typing - and it looks neater too :D With that done, I implemented the basic meta tags. Here's some example output from the last post:

<meta property="og:title" content="4287 Reasons why your comments weren't posted" />
<meta property="og:description" content="I don't get a lot of real comments on here from what I can tell, as you've probably noticed. I don't particularly mind (though it's always awesome whe.... (click to read more)" />
<meta property="og:image" content="http://starbeamrainbowlabs.com/blog/images/20170406-Spammer-Mistakes.png" />
<meta property="og:image:url" content="http://starbeamrainbowlabs.com/blog/images/20170406-Spammer-Mistakes.png" />
<meta property="og:image:secure_url" content="https://starbeamrainbowlabs.com/blog/images/20170406-Spammer-Mistakes.png" />
<meta property="og:url" content="https://starbeamrainbowlabs.com/blog/article.php?article=posts%2F228-4287-Reasons-Your-Comments-Were-Not-Posted.html" />

That wasn't too tough. Next, I looked at the list of types again, and chose the article type for my blog posts.

<meta property="og:type" content="article" />

Like the profile type earlier, the article type also comes with a few type-specific meta tags (what they mean by not fitting into a 'vertical' I have no idea). I decided not to implement all the type-specific meta tags available here, since not all of them were practical to implement. Here's some more example output for the new tags:

<meta property="article:author" content="https://starbeamrainbowlabs.com/" />
<meta property="article:published_time" content="2017-04-08T12:56:46+01:00" />

Unfortunately, the article published time is actually really awkward to get hold of (even though it's outputted at the bottom of every article), so I went with the 'last modified' time instead. The published time is marked up with HTML microdata. Hopefully it doesn't cause too many issues later - though I can always change it :P

With that (and a final test), it looked like my Open Graph implementation was working as intended. Next time, I'll show you how I implemented a simple oEmbed provider.


Pepperminty Wiki Turns 2!

Pepperminty Wiki turns 2(!) :D

2 years ago today, I decided that I'd build a wiki. At the time, I was unsatisfied with all the current solutions out there - they were either too bulky, old and unmaintained, or just otherwise not quite right. I decided to build something different: An entire wiki in a single file. One single file that you can drop onto your web server and have it just work.

Although I've had to rework and rewrite a few things along the way, development has generally gone ok. I've found that as it's grown, I've needed to change the way I design it slightly - it's been a great learning experience!

In May this year, I managed to get Pepperminty Wiki into Awesome Self Hosted, a list of cool pieces of software that you can host on your own server - a list that has ~12,000 stars on GitHub.

Fast forward to the present, and I find myself with an awesome wiki - that's still all contained in a single file. It's powered by Parsedown. It supports multiple users, page protection, sub pages, full text search (!), customisable themes, tags, redirects, and more! It's also got a whole host of configurable settings - allowing you to customise how your wiki works to your liking. Best of all, it's got a flexible module based system - so anyone can come along and write a new module to extend its functionality.

If this sounds like the kind of thing you'd like to use yourself, just head over to the getting your own copy section on its page (for the lazy, the online downloader is here).

Going forwards, I'd like to add a commenting system to let people comment on pages on a wiki. I'd like to add edit previews. I'd like to add a GUI for the settings file. I've got so many ideas that it's difficult to choose which to do next :D

Thank you, everyone for the last 2 years. Here's to another 2 amazing fun filled years! I don't intend to stop development any time soon :D

An easier way to debug PHP

A nice view of some trees taken by Mythdael I think, the awesome person who designed this website :D

Recently at my internship I've been writing quite a bit of PHP. The language itself is OK (I mean it does the job), but it's beginning to feel like a relic of a bygone era - especially when it comes to debugging. Up until recently I've been stuck with using echo() and var_dump() calls all over the place in order to figure out what's going on in my code - that's the equivalent of debugging your C♯ ACW with Console.WriteLine() O.o

Thankfully, whilst looking for an alternative, I found xdebug. Xdebug is like Visual Studio's debugging tools for C♯ (albeit in a more primitive form). It lets you add breakpoints and step through your PHP code one line at a time - inspecting the contents of variables in both the local and global scope as you go. It improves the standard error messages generated by PHP, too - adding stack traces and colour to the output in order to make it much more readable.

Best of all, I found a plugin for my primary web development editor, Atom. It's got some simple (ish) instructions on how to set up xdebug too - it didn't take me long to figure out how to put it to use.

I'll assume you've got PHP and Nginx already installed and configured, but this tutorial looks good (just skip over the MySQL section) if you haven't yet got it installed. This should work for other web servers and configurations too, but make sure you know where your php.ini lives.

XDebug consists of 2 components: The PHP extension for the server, and the client that's built into your editor. Firstly, you need to install the server extension. I've recorded an asciicast (terminal recording) to demonstrate the process:

(Above: An asciinema recording demonstrating how to install xdebug. Can't see it? Try viewing it on asciinema.org.)
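
The exact values vary from setup to setup, but the php.ini additions end up looking roughly like this (xdebug 2.x directive names - treat the host as a placeholder to point at your own development machine):

; Load the extension and enable remote debugging
zend_extension=xdebug.so
xdebug.remote_enable=1
xdebug.remote_host=192.168.0.10
xdebug.remote_port=9000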

After that's done, you should be able to simply install the client for your editor (I use php-debug for atom personally), add a breakpoint, load a php page in your web browser, and start debugging!

If you're having trouble, make sure that your server can talk directly to your local development machine. If you're sitting behind any routers or firewalls, make sure they're configured to allow traffic through on port 9000 and to forward it on to your machine.

More automatic reposting!

I've had automatic tweeting about blog posts I put up on here for ages now. I've also had a mailing list set up for a while too. Today it occurred to me that not everyone uses twitter or wants an email, and so I've just set up a few more automatic reposting systems (thanks to IFTTT!).

  • If you know me through facebook, I've set it up to post about my blog posts on my timeline.
  • If you use reddit, then I've set up a new subreddit that IFTTT will automatically post links to my new blog posts in.

That brings the total number of channels through which you can get notified about my blog posts up to 5(!)

I'm still tweaking and perfecting the new systems - please comment if you notice anything that's off or could be improved. Also, let me know if there's a channel that I haven't covered that you'd like me to set up.

Client certificates and HTTPS: A deep dive into the world of certificates

Recently I've been checking out client side certificates in order to secure access to various secret parts of this website. In order to do this, I found myself learning more than I ever thought I would about certificates and certification authorities, so I thought I'd share it here.

Before I get into how client side certificates work, I should start at the beginning. In order to serve pages over HTTPS, a web server must have both a private key and a certificate. The private key does the encryption, and the certificate verifies the identity of the web server (so you know you're connecting to the real thing and not a fake).

The certificate is signed by a Certification Authority. A certification authority (CA) is an organisation that has a Root Certificate, which is a certificate that's explicitly trusted by your operating system and browser. If a chain of valid certificates can be made from the web server's certificate back to a trusted root certificate, then we know that the web server's certificate can be trusted. Since a picture is worth 1000 words, here's a diagram describing the above:

A diagram of the hierarchy of a CA.

The web server isn't the only one who can have a certificate. The client can also have a certificate, and the web server can choose whether or not it likes the certificate that the client presents based on which certificates it can be linked back to in the signed certificate chain.

This presents some interesting possibilities. We can create our own certification authority and use it to issue client certificates. Then we can tell our web server that it should only accept client certificates that can be linked back to our own certification authority.

For the certificate creation process, I can recommend tinyca2 (direct apt install link), which should be available in the repositories of most linux distributions. It's got a great GUI that makes the process relatively painless - this tutorial is pretty good at explaining it, although you'll need to read it more than once to understand everything. XCA is pretty good too, although a tad more complex. Once you've created your CA and certificates, come back here.
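
If you'd rather skip the GUI and drive openssl directly (not the route I took, so treat this as a hedged alternative), the rough shape of the process looks like this:

# Generate the CA's private key and self-signed root certificate
openssl genrsa -out ca.key 4096
openssl req -x509 -new -key ca.key -sha256 -days 3650 -out ca.crt

# Generate a client key and a certificate signing request, then sign the request with the CA
openssl genrsa -out client.key 4096
openssl req -new -key client.key -out client.csr
openssl x509 -req -in client.csr -CA ca.crt -CAkey ca.key -CAcreateserial -days 365 -out client.crt

# Bundle the client certificate and key into a .p12 file for importing into a browser
openssl pkcs12 -export -in client.crt -inkey client.key -out client.p12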

The Nginx logo.

Next up is configuring our web server. I'm using Nginx in this tutorial (that's the web server that's currently behind this website!), but there are guides elsewhere for Apache 2, Lighttpd, Hiawatha (For Hiawatha it's the RequiredCA configuration directive in the manual!), and probably others too!

Configuring Nginx to accept client certificates signed by our CA is actually fairly straight forward, despite the lack of information on the internet currently. I'm going to assume that you've already got a working Nginx webserver online that supports HTTPS (if you haven't, try checking out this, this, and this). To do this, grab a copy of your root CA's certificate (excluding the private key, of course) and upload it to your server. Then, open up your Nginx configuration file in your favourite text editor (I use sudo nano) and add the following lines to your http block:

ssl_client_certificate  /path/to/your/root/certificate.pem;
ssl_verify_client       on;
ssl_verify_depth        2;

Let's go through these one at a time. The ssl_client_certificate directive tells Nginx where to find our root certificate, which it should use to verify the certificate of connecting clients. ssl_verify_client tells Nginx that it should perform verification of all clients connecting via HTTPS, and that nobody is allowed to connect unless they have a certificate signed by the above root CA. Finally, the ssl_verify_depth parameter instructs Nginx that if a client's certificate isn't directly signed by our root CA then it should follow the certificate chain down to a depth of 2.

Once done, you should only be able to connect to your webserver if you have an appropriate certificate installed in your browser. This is nice and all, but what if we only want to apply this to a single subdomain? Or even a single folder? This presents a problem: Nginx only supports client certificates on everything, or nothing at all.

Thankfully, there's a workaround. First, change ssl_verify_client from on to optional, and then add the following to the server or location block you want to verify certificates for:

if ($ssl_client_verify != SUCCESS) {
    return 403;
}

I've found through trial and error that the whitespace is important. If it doesn't work, double check it and try again! The above snippet simply checks to see if the user has connected with a valid certificate, and will send an HTTP 403 error if they haven't.

That concludes this tutorial. Did it work for you? Did you find it useful? Did you read this far? Comment below!


The web needs encrypting, but that's not the whole story

There's been a lot of talk about 'encrypting the web' recently, so I decided to post about it myself. In this post I'll explain the problem, the solution, and why the solution may cause more problems than it solves.

The problem itself has actually been around since the web was designed, but it hasn't really been a huge problem until recently. Let's consider a legitimate origin server that serves HTML files about animals, such as bunnies.html. Since it doesn't have anything to hide (who'd want to steal pictures of bunnies, anyway?), it serves these files over regular HTTP, not HTTPS.

Suppose that our animal website becomes hugely popular overnight, and has thousands of users browsing it per hour. Let's also suppose that someone has managed to gain control of the network that connects our origin server's network to the wider internet. Since this someone is smart, rather than sending a DoS attack or a ton of spam from the compromised network, they fiddle about with the responses that our origin server is sending out, turning our unsuspecting users' browsers into the source of a massive DDoS attack or worse! A prime example of this was the attack on GitHub last year.

A diagram demonstrating a MITM attack.

If a website's communications aren't encrypted, it allows anyone to inspect and tamper with both requests and responses. You can't even guarantee that you're connecting to the server you think you are and not a cleverly designed imitation!

HTTPS solves all of these problems. It encrypts communications between the client and server, preventing messages from being inspected or tampered with. It also verifies the identity of the server, which is why you need a certificate in order to serve things over HTTPS.

The other side

At the beginning of this post, I said that HTTPS isn't the whole story. Obviously it solves the problem at hand, right? Yes it does, but it also brings two other issues to the table: complicating the setup process, and breaking links. The second of these problems is easy to deal with - we can just set up automatic redirects that send you to the HTTPS version of a site.
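
In Nginx, for instance, such a redirect only takes a few lines (example.com here is just a placeholder):

server {
    listen 80;
    listen [::]:80;
    server_name example.com;
    return 301 https://$host$request_uri;
}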

The first problem is decidedly more difficult to solve. HTTPS is difficult to set up, and if it becomes the default, it could reduce the accessibility of setting up and running your own website. Thankfully it isn't required yet.

At the moment, nobody has a complete solution to this issue. Let's Encrypt is a brilliant new service that makes obtaining SSL certificates easy, provided that you can either fiddle with your web server's config yourself, or are willing to let a script do it for you - but it doesn't help you set it up correctly (another excellent read).

The other slightly related issue is that users often mistake 'HTTPS' to mean 'secure'. While this is true of the communications between their computer and the server, it doesn't stop whoever runs the server from setting the root password to 1234 and storing passwords in plain text.

If you're still here, thank you for reading! Hopefully you now understand some of the issues surrounding web security at the moment. Please post a comment down below if you have anything to say (-:

