Cluster, Part 11: Lock and Key | Let's Encrypt DNS-01 for wildcard TLS certificates

Welcome one and all to another cluster blog post! Cluster blog posts always take a while to write, so sorry for the delay. As is customary, let's start this post off with a list of all the parts in the series so far:

With that out of the way, in this post we're going to look at obtaining a wildcard TLS certificate using the Let's Encrypt DNS-01 challenge. We want this because you need a TLS certificate to serve HTTPS without lighting everyone's browsers up with warnings like a Christmas tree.

The DNS-01 challenge is an alternative to the default HTTP-01 challenge you may already be familiar with.

Unlike the HTTP-01 challenge, which proves you have access to a single domain by automatically placing a file on your web server, the DNS-01 challenge proves you have control over an entire domain - thus allowing you to obtain a wildcard certificate, which is valid not only for your domain, but for all possible subdomains too! This should save a lot of hassle - but it's important we keep it secure too.

As with regular Let's Encrypt certificates, we'll also need to ensure that our wildcard certificate we obtain will be auto-renewed, so we'll be setting up a periodic task on our Nomad cluster to do this for us.

If you don't have a Nomad cluster, don't worry. It's not required, and I'll be showing you how to do it without one too. But if you'd like to set one up, I recommend part 7 of this series.

In order to complete the DNS-01 challenge successfully, we need to automatically place a DNS record in our domain. This can be done via an API, if your DNS provider has one and it's supported. Personally, I have the domain name I'm using for my cluster (mooncarrot.space.) with Gandi. We'll be using certbot, which has a plugin system for different DNS API providers, to perform the DNS-01 challenge.

We'll be installing the challenge provider we need with pip3 (a Python 3 package manager, as certbot is written in Python), so you can find an up-to-date list of challenge providers over on PyPi here: https://pypi.org/search/?q=certbot-dns

If you don't see a plugin for your provider, don't worry. I couldn't find one for Gandi, so I added my domain name to Cloudflare and followed the setup to change the name servers for my domain name to point at them. After doing this, I can now use the Cloudflare API through the certbot-dns-cloudflare plugin.

With that sorted, we can look at obtaining that TLS certificate. I opted to put certbot in a Docker container here so that I can run it through a Nomad periodic task. This also proved to be a useful way to test the process out, as I hit a number of snags that made things interesting.

The first order of business is to install certbot and the associated plugins. You'd think that simply running sudo apt install certbot certbot-dns-cloudflare would do the job, but you'd be wrong.

As it turns out, it does install that way, but it installs an older version of the certbot-dns-cloudflare plugin that requires you to give it the Global API Key from your Cloudflare account, which has permission to do anything on your account!

That's no good at all, because if the key gets compromised an attacker could edit any of the domain names on our account they like, which would quickly turn into a disaster!

Instead, we want to install the latest version of certbot and the associated Cloudflare DNS plugin, which support regular Cloudflare API Tokens, upon which we can set restrictive permissions to only allow it to edit the one domain name we want to obtain a TLS certificate for.

I tried multiple different ways of installing certbot in order to get a version recent enough to accept an API token. The one that worked for me was a script called certbot-auto, which you can download from here: https://dl.eff.org/certbot-auto.

Now we have a way to install certbot, we also need the Cloudflare DNS plugin. As I mentioned above, we can do this using pip3, a Python package manager. In our case, the pip3 package we want is certbot-dns-cloudflare - incidentally it has the same name as the outdated apt package that would have made life so much simpler if it had supported API tokens.

Now we have a plan, let's start to draft out the commands we'll need to execute to get certbot up and running. If you're planning on following this tutorial on bare metal (i.e. without Docker), go ahead and execute these directly on your target machine. If you're following along with Docker though, hang on because we'll be wrapping these up into a Dockerfile shortly.

First, let's install certbot:

sudo apt install curl ca-certificates
cd some_permanent_directory;
curl -sS https://dl.eff.org/certbot-auto -o certbot-auto
chmod +x certbot-auto
sudo ./certbot-auto --debug --noninteractive --install-only

Installation with certbot-auto comprises downloading a script and executing it with a bunch of flags. Next up, we need to shoe-horn our certbot-dns-cloudflare plugin into the certbot-auto installation. This requires some interesting trickery, because certbot-auto uses something called virtualenv to install itself and all its dependencies locally into a single directory.

sudo apt install python3-pip
cd /opt/eff.org/certbot/venv
source bin/activate
pip install certbot-dns-cloudflare
deactivate

In short, we cd into the certbot-auto installation, activate the virtualenv local environment, install our dns plugin package, and then exit out of the virtual environment again.

With that done, we can finally add a convenience symlink so that the certbot command is in our PATH:

sudo ln -s /opt/eff.org/certbot/venv/bin/certbot /usr/bin/certbot

That completes the certbot installation process. Then, to use certbot to create the TLS certificate, we'll need an API token, as mentioned earlier. Navigate to the API Tokens part of your Cloudflare profile and create one, and then create an INI file in the following format:

# Cloudflare API token used by Certbot
dns_cloudflare_api_token = "YOUR_API_TOKEN_HERE"

...replacing YOUR_API_TOKEN_HERE with your API token of course.
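It's worth locking the permissions on this file down too, as certbot will complain if it's world-readable (and anyone who can read the token can edit your DNS records):

chmod 0600 path/to/credentials.ini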

Finally, with all that in place, we can create our wildcard certificate! Do that like this:

sudo certbot certonly --dns-cloudflare --dns-cloudflare-credentials path/to/credentials.ini -d 'bobsrockets.io,*.bobsrockets.io' --preferred-challenges dns-01

It'll ask you a bunch of interactive questions the first time you do this, but follow it through and it should issue you a TLS certificate (and tell you where it stored it). Actually utilising it is beyond the scope of this post - we'll be tackling that in a future post in this series.

For those following along on bare metal, this is where you'll want to skip to the end of the post. Before you do, I'll leave you with a quick note about auto-renewing your TLS certificates. Do this:

sudo certbot renew
sudo systemctl reload nginx postfix

...on a regular basis, replacing nginx postfix with a space-separated list of services that need reloading after you've renewed your certificates. A great way to do this is to set up a cron job.
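For example, a root crontab entry (edit it with sudo crontab -e) might look something like this - a sketch only; adjust the schedule and the list of services to taste:

# Attempt renewal twice a day - certbot only actually renews certificates that are close to expiry
17 3,15 * * *    certbot renew --quiet && systemctl reload nginx postfix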

Sweeping things under the carpet

For the Docker users here, we aren't quite finished yet: We need to package this mess up into a nice neat Docker container where we can forget about it :P

Some things we need to be aware of:

  • certbot has a number of data directories it interacts with that we need to ensure don't get wiped when Docker removes instances of our container.
  • Since I'm serving the shared storage of my cluster over NFS, we can't have certbot running as root as it'll get a permission denied error when it tries to access the disk.
  • While curl and ca-certificates are needed to download certbot-auto, they aren't needed by certbot itself - so we can avoid installing them in the resulting Docker container by using a multi-stage Dockerfile.

To save you the trouble, I've already developed just such a Dockerfile that takes all of this into account. Here it is:

ARG REPO_LOCATION
# ARG BASE_VERSION

FROM ${REPO_LOCATION}minideb AS builder

RUN install_packages curl ca-certificates \
    && curl -sS https://dl.eff.org/certbot-auto -o /srv/certbot-auto \
    && chmod +x /srv/certbot-auto

FROM ${REPO_LOCATION}minideb

COPY --from=builder /srv/certbot-auto /srv/certbot-auto

RUN /srv/certbot-auto --debug --noninteractive --install-only && \
    install_packages python3-pip

WORKDIR /opt/eff.org/certbot/venv
RUN . bin/activate \
    && pip install certbot-dns-cloudflare \
    && deactivate \
    && ln -s /opt/eff.org/certbot/venv/bin/certbot /usr/bin/certbot

VOLUME /srv/configdir /srv/workdir /srv/logsdir

USER 999:994
ENTRYPOINT [ "/usr/bin/certbot", \
    "--config-dir", "/srv/configdir", \
    "--work-dir", "/srv/workdir", \
    "--logs-dir", "/srv/logsdir" ]

A few things to note here:

  • We use a multi-stage Dockerfile here to avoid installing curl and ca-certificates in the resulting Docker image.
  • I'm using minideb as a base image that resides on my private Docker registry (see part 8). For the curious, the script I use to do this is located on my personal git server here: https://git.starbeamrainbowlabs.com/sbrl/docker-images/src/branch/master/images/minideb.
    • If you don't have minideb pushed to a private Docker registry, replace minideb with bitnami/minideb in the above.
  • We set the user and group certbot runs as to 999:994 to avoid the NFS permissions issue.
  • We define 3 Docker volumes /srv/configdir, /srv/workdir, and /srv/logsdir to contain all of certbot's data that needs to be persisted and use an elaborate ENTRYPOINT to ensure that we tell certbot about them.

Save this to a file called Dockerfile in a new directory, and build it:

sudo docker build --no-cache --pull --tag "certbot" .;

...if you have a private Docker registry with a local minideb image you'd like to use as a base, do this instead:

sudo docker build --no-cache --pull --tag "myregistry.seanssatellites.io:5000/certbot" --build-arg "REPO_LOCATION=myregistry.seanssatellites.io:5000/" .;

In my case, I do this on my CI server:

laminarc queue docker-rebuild IMAGE=certbot

How I set that up will be the subject of a future post. Part of the answer is located in my docker-images Git repository, but the other part is in my private continuous integration Git repo (but rest assured I'll be talking about it and sharing it here).

Anyway, with the Docker container built we can now obtain our certificates with this monster of a one-liner:

sudo docker run -it --rm -v /mnt/shared/services/certbot/workdir:/srv/workdir -v /mnt/shared/services/certbot/configdir:/srv/configdir -v /mnt/shared/services/certbot/logsdir:/srv/logsdir certbot certonly --dns-cloudflare --dns-cloudflare-credentials path/to/credentials.ini -d 'bobsrockets.io,*.bobsrockets.io' --preferred-challenges dns-01

The reason this is so long is that we need to mount the 3 different volumes into the container that contain certbot's data files. If you're running a private registry, don't forget to prefix certbot there with registry.bobsrockets.com:5000/.

Don't forget also to update the Docker volume locations on the host here to point at empty directories owned by 999:994.
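Something like this should do the trick, assuming the host paths from the one-liner above:

sudo mkdir -p /mnt/shared/services/certbot/{configdir,workdir,logsdir}
sudo chown -R 999:994 /mnt/shared/services/certbot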

Even if you want to run this on Nomad, I still advise that you execute this manually. This is because the first time you do so it'll ask you a bunch of questions interactively (which it doesn't do on subsequent times).

If you're not using Nomad, this is the point you'll want to skip to the end. As before with the bare-metal users, you'll want to add a cron job that runs certbot renew - just in your case inside your Docker container.
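A sketch of what that cron job's command might look like, reusing the volume mounts from the one-liner above (the Dockerfile's ENTRYPOINT already passes the data directory flags for us):

sudo docker run --rm -v /mnt/shared/services/certbot/workdir:/srv/workdir -v /mnt/shared/services/certbot/configdir:/srv/configdir -v /mnt/shared/services/certbot/logsdir:/srv/logsdir certbot renew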

Nomad

For the truly intrepid Nomad users, we still have one last task to complete before our work is done: Auto-renewing our certificate(s) with a Nomad periodic task.

This isn't really all that complicated, I found. Here's what I came up with:

job "certbot" {
    datacenters = ["dc1"]
    priority = 100
    type = "batch"

    periodic {
        cron = "@weekly"
        prohibit_overlap = true
    }

    task "certbot" {
        driver = "docker"

        config {
            image = "registry.service.mooncarrot.space:5000/certbot"
            labels { group = "maintenance" }
            entrypoint = [ "/usr/bin/certbot" ]
            command = "renew"
            args = [
                "--config-dir", "/srv/configdir/",
                "--work-dir", "/srv/workdir/",
                "--logs-dir", "/srv/logsdir/"
            ]
            # To generate a new cert:
            # /usr/bin/certbot --work-dir /srv/workdir/ --config-dir /srv/configdir/ --logs-dir /srv/logsdir/ certonly --dns-cloudflare --dns-cloudflare-credentials /srv/configdir/__cloudflare_credentials.ini -d 'mooncarrot.space,*.mooncarrot.space' --preferred-challenges dns-01

            volumes = [
                "/mnt/shared/services/certbot/workdir:/srv/workdir",
                "/mnt/shared/services/certbot/configdir:/srv/configdir",
                "/mnt/shared/services/certbot/logsdir:/srv/logsdir"
            ]
        }
    }
}

If you want to use it yourself, replace the various references to things like the private Docker registry and the Docker volumes (which require "docker.volumes.enabled" = "True" in clientoptions in your Nomad agent configuration) with values that make sense in your context.

Having inspected the logs and watched the TLS certificate expiry times, I have some confidence that this is working as intended. Save it to a file called certbot.nomad and then run it:

nomad job run certbot.nomad

Conclusion

If you've made it this far, congratulations! We've installed certbot and used the Cloudflare DNS plugin to obtain a DNS wildcard certificate. For the more adventurous, we've packaged it all into a Docker container. Finally for the truly intrepid we implemented a Nomad periodic job to auto-renew our TLS certificates.

Even if you don't use Docker or Nomad, I hope this has been a helpful read. If you're interested in the rest of my cluster build, why not go back and start reading from part 1? All the posts in my cluster series are tagged with "cluster" to make them easier to find.

Unfortunately, I haven't managed to determine a way to import TLS certificates into Hashicorp Vault automatically. I've stalled a bit on the Vault front (permissions and policies are wildly complicated), so it's unlikely I'll be touching Vault in future posts any time soon (if anyone has an alternative that is simpler and easier to understand / configure, please comment below).

Despite this, in future posts I've got a number of topics lined up I'd like to talk about:

  • Configuring Fabio (see part 9) to serve HTTPS and force-redirect from HTTP to HTTPS (status: implemented)
  • Implementing HAProxy to terminate port forwarding (status: initial research)
  • Password protecting the private docker registry, Consul, and Nomad (status: on the todo list)
  • Semi-automatic docker image rebuilding with Laminar CI (status: implemented)

In the meantime, please comment below if you liked this post, are having issues, or have any suggestions. I'd love to hear if this helped you out!

Sources and Further Reading

Switching TOTP providers from Authy to andOTP

Since I first started using 2-factor authentication with TOTP (Time based One Time Passwords), I've been using Authy to store my TOTP secrets. This has worked well for a number of years, but recently I decided that I wanted to change. This was for a number of reasons:

  1. I've acquired a large number of TOTP secrets for various websites and services, and I'd like a better way of sorting the list
  2. Most of the web services I have TOTP secrets for don't have an icon in Authy - and there are only so many times you can repeat the 6 generic colours before it becomes totally confusing
  3. I'd like the backups of my TOTP secrets to be completely self-hosted (i.e. completely on my own infrastructure)

After asking on Reddit, I received a recommendation to use andOTP (F-Droid, Google Play). After installing it, I realised that I needed to export my TOTP secrets from Authy first.

Unfortunately, it turns out that this isn't an easy process. Many guides tell you to alter the code behind the official Authy Chrome app - and since I don't have Chrome installed (I'm a Firefox user :D), that's not particularly helpful.

Thankfully, all is not lost. During my research I found the authy project on GitHub, which is a command-line app - written in Go - that temporarily registers as a 'TOTP provider' with Authy and then exports all of your TOTP secrets to a standard text file of URIs.

These can then be imported into whatever TOTP-supporting authenticator app you like. Personally, I did this by generating QR codes for each URI and scanning them into my phone. The URIs, when converted to QR codes, are in the same format as the ones you originally scanned on each website when first setting up 2FA. This makes importing them easy - even out of a walled garden.

Generating all those QR codes manually isn't much fun though, so I automated the process. This was pretty simple:

#!/usr/bin/env bash
exec 3<&0; # Copy stdin
while read -r url; do
    echo "${url}" | qr --error-correction=H;
    read -rp "Press enter to continue" <&3; # Read from fd 3, since the loop has overridden stdin
done <secrets.txt;

The exec 3<&0 bit copies the standard input to file descriptor 3 for later. Then we enter a while loop, and read in the file that contains the secrets and iterate over it.

For each line, we convert it to a QR code that's displayed in the terminal using VT-100 ANSI escape codes, via the Python program qr.
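If you don't have the qr command, it comes from the qrcode package on PyPI - at least, that's where mine came from:

pip3 install --user qrcode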

Finally, after generating each QR code we pause until the enter key is pressed, so that the QR codes are displayed one at a time. We read from file descriptor 3 here (which we copied earlier), because inside the while loop the standard input is the file we're reading line-by-line, not the keyboard.

With my secrets migrated, I set to work changing the labels, images, and tags for each of them. I'm impressed by the number of different icons it supports - and since it's open-source if there's one I really want that it doesn't have, I'm sure I can open a PR to add it. It also encrypts the TOTP secrets database at rest on disk, which is pretty great.

Lastly came the backups. It looks like andOTP is pretty flexible when it comes to backups - supporting plain text files as well as various forms of encrypted file. I opted for the latter, with GPG encryption instead of a password or PIN. I'm sure it'll come back to bite me later when I struggle to decrypt the database in an emergency because I find the gpg CLI terribly difficult to use - perhaps I should take multiple backups encrypted with a long and difficult password too.

To encrypt the backups with GPG, you need to have a GPG provider installed on your phone. It recommended that I install OpenKeychain for managing my GPG private keys on Android, which I did. So far it seems to be functioning as expected - additionally providing me with a mechanism by which I can easily encrypt and decrypt files and perform other GPG-related tasks... if only it were this easy in the Linux terminal!

Once set up, I saved my encrypted backups directly to my Nextcloud instance, since it turns out that in Android 10 (or maybe earlier? I'm not sure), if you have the Nextcloud app installed it appears as a file system provider when saving things. I'm certainly not complaining!

While I'm still experimenting with my new setup, I'm pretty happy with it at the moment. I'm still considering how I can make my TOTP backups even more secure while not compromising the '2nd factor' nature of the thing, so it's possible I might post again in the future about that.

Next on my security / privacy todo list is to configure my Keepass database to use my Solo for authentication, and possibly figure out how I can get my phone to pretend to be a keyboard to input passwords into machines I don't have my password database configured on :D

Found this interesting? Got a suggestion? Comment below!

Solo hardware security key review

Sometime last year (I forget when), I backed a Kickstarter that promised the first open-source hardware security key to support FIDO2. Since the people running the Kickstarter had done this before for an older standard, I decided to back it.

Last week they finally arrived, and the wait was totally worth it! I got one with a USB type C connector (in yellow below), and one with a regular type A connector that also supports NFC (in red, for use with my phone).

Before I get into why they are so awesome, it's probably a good idea if we take a small step back and look at what a hardware security key does and why it does it.

My Solos!

In short, a hardware security key has a unique secret key baked into it that you can't extract. If I understand it correctly, this is sometimes known as a physically unclonable function (correct me in a comment if I'm wrong). It makes use of this secret key for authentication purposes by way of a chain of protocols, which are collectively known as FIDO2.

A diagram showing the different FIDO2 protocols. It's basically WebAuthn between browser and OS, and CTAP2 between OS and hardware security key

There are 2 important protocols here: WebAuthn that the browser provides to web pages to interact with hardware devices, and CTAP2 - which allows the browser to interface with the hardware security key through a channel that the operating system provides (be that over USB, NFC, Bluetooth, or some other means).

FIDO2 is new. Like, very very new. To this end, browsers and websites don't yet have full support for it. Those that do don't always enable it by default (in Firefox you've got to set security.webauth.u2f, security.webauth.webauthn, and security.webauth.webauthn_enable_usbtoken to true, but I think these will be set by default in a coming update) or incorrectly 'detect' support by sniffing the user-agent string (cough I'm looking at you, GitHub and Facebook cough).

Despite this, when it is supported it works fabulously. Solo goes a long way to making the process as painless as possible - supporting both CTAP (for the older U2F protocol) and CTAP2 (which is part of the FIDO2 protocol suite). It's designed well (though the cases on the NFC-enabled version, called the Solo Tap, are a bit on the snug side), and since it's open source you can both inspect and contribute to the firmware to improve the Solo and add new features for everyone to enjoy.

Extra features like direct access to the onboard TRNG (true random number generator) are really nice to have - and the promise of more features to come makes it even better. I'm excited to see what new capabilities my Solo will gain with future updates!

In the future I want to take a deeper dive into WebAuthn and implement support in applications I've written (e.g. Pepperminty Wiki). It looks like it might be quite complicated, but I'll post here when I've figured it out.

Password Protect: Secure?

Everyone knows about passwords. Most people I've met react to creating a new account with a variety of negative emotions - mainly due to the annoying issue of having to create a new password (or even worse, re-use an old one!). Most people I've met also reuse at least one password several times, too!

This is obviously a bad thing, but what can we do about it? Perhaps, while we're at it we can solve that awkward problem of forgetting which password you've used too.

Before we tackle those questions, it's important to discuss what makes a good password. Perhaps you think of some of these rules:

A laughably bad password policy that demands a password between 8 and 10 characters in length.

(Source: this post on Password Shaming)

  • It has to contain a number (not always)
  • Some symbols make it more secure (it depends)
  • Changing it often is a good security measure (not as good as you'd think)
  • It shouldn't be longer than 32 characters (no! If longer is an option, take it!)
  • Swapping out letters for numbers or symbols makes it more secure (these substitutions can be guessed by attackers)

Here's the big question though: What do any of these password rules achieve? Well, we want to ensure that only the owner of an account can access it.

What could be so bad?

Ok, so we've got our goal. Make sure only the owner of an account can access it. Sounds simple, right? Just let the user enter a secret that only they know, and then we can check if they still know that secret in the future in order to verify that they are the right person attempting to access the account.

This brings a number of problems:

  1. The user has to remember their password (more on this later)
  2. As a service provider, we have to store their password securely so that a hacker can't steal it

Point #1 here is bad enough (more on this later and what you can do about it though), but #2 is a real issue. A safe is only as secure as its lock, so how do we make sure we store passwords such that nobody can steal them?

The answer: Make it so that even we can't read them! This sounds silly, but it really does work. By using a technique called hashing, we can apply a series of complicated transformations to an input string - leaving us with an unintelligible output string.

Why is this useful? Because it's repeatable. By hashing the same string twice, we can get exactly the same output - allowing us to check if a user has entered their password correctly without actually storing it in plain-text! Very cool.
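To illustrate, we can hash the same string twice at the terminal with sha256sum (part of GNU coreutils) and observe identical output both times:

echo -n 'hunter2' | sha256sum
echo -n 'hunter2' | sha256sum
# Both print the same 64-character hex digest - but there's no practical
# way to recover 'hunter2' from it.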

We're not out of the woods yet though. Picking the right hashing algorithm can be tricky. No doubt you've heard about sha1 - and maybe even sha2 and sha3. They stand for secure hashing algorithm, right? What if we used one of those?

More problems, unfortunately. It's too fast. Yep! My laptop, upon which I'm writing this blog post, can hash 157MB of data per second per core. I've got 4 cores, so that's 628MB/s if I maxed them all out. If I managed to steal the hash of your password, then I could try every single combination of characters (including non-printable ones) in the following amount of time:

Length (chars) Time to crack
1 ~0.04µs
2 ~12µs
3 ~3ms
4 ~803ms
5 ~204s
6 ~14.5 hours
7 ~154 days
8 ~107.5 years
9 ~27423 years
10 ~7 million years

It gets worse. With new innovations, hashing algorithms such as the SHA series (don't even start on MD5) can now be run in massive parallel on a (or several) graphics card, slashing these times by several orders of magnitude:

Length (chars) Time to Crack
8 1 minute
9 2 hours
10 1 week
11 2 years
12 2 centuries

(Source: This Coding Horror Blog Post)

I don't even have to buy a top-of-the-range card any more, either. I can rent one for as little as ~35p per hour from Google Cloud Compute - well within the reach of almost any script kiddie or wannabe hacker.

Many people also use the same password for several different accounts - and attackers exploit this mercilessly. If they've gained access to one of your accounts, they'll also try the same password against other online services to see if they can gain access to any other accounts that belong to you.

(If you're wondering, SHA2 and SHA3 are actually secure - they just aren't for hashing passwords :P)

Furthermore, the particularly determined have gone to the trouble of pre-generating what we call rainbow tables. These tables contain pre-generated hashes for millions and millions of different combinations of characters - reducing the time required to crack a password to almost nothing!

Doing something about it

Obviously, if there wasn't anything we could do about it, everyone's accounts would have been hacked by now. Thankfully, this isn't the case. We can combat the threat of brute-forcing by using longer passwords (at least 12 characters - and preferably 16+), and that of rainbow tables by utilising a salt.

Basically, a salt is a long and unintelligible string that's added to the password before it's hashed - and it's different for every password. Then, to check whether an entered password matches the stored one, we simply re-read the plain-text salt from storage and hash the newly entered password with it.
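Here's a conceptual sketch of the idea at the terminal (a real system would use a library for this, not shell commands):

salt="$(openssl rand -hex 16)";   # Generate a fresh random salt for this password
echo -n "${salt}correct horse battery staple" | sha256sum;
# Store the salt alongside the resulting hash. To verify a login attempt,
# re-read the salt and repeat the calculation with the entered password.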

Over the Rainbow

Pretty good right? Not so fast. Even though we've prevented attackers from using a rainbow table against our password hashes, we still have to store the salt in plain text, so our attacker can still try millions of different passwords a second if they manage to steal our password hashes.

The solution: Slow them down! Using bcrypt, we can specify a work factor when hashing a password. Higher work factors mean that it takes longer to hash a password. By making it so that it takes a consistent ~1 second to hash a password, we can limit our attacker to trying 1 password per second instead of a few million. Furthermore, the bcrypt algorithm doesn't translate onto graphics cards very well at all - further increasing the amount of time it takes to crack our password hashes!
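You can get a feel for the work factor with the htpasswd tool from the apache2-utils package - an illustration of bcrypt's tunable cost, not a recommendation for how to store website passwords:

# -n prints to stdout, -b takes the password from the command line,
# -B selects bcrypt, and -C 12 sets the work factor. Raise -C and watch it slow down.
htpasswd -nbBC 12 bob 'correct horse battery staple'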

Combined with a salt from earlier, this is a pretty good way of storing passwords. There are other variants of this general algorithm too - including the newer Argon2 algorithm whose work factor increases the amount of memory required to calculate a hash as well as the amount of CPU power.

Still others include scrypt and PBKDF2 - the likes of which are discussed here (also see this answer).

Just getting started

This isn't the end of the road though. Far from it - we're just getting started. Next, the best hashing algorithm in the world doesn't help you if your password is terrible. There are many lists of common passwords extracted from data breaches from around the world - so if your password is in that list, you can expect your account to be hacked in seconds!

If you're unsure if your password is good enough, then there are online tools available that you can use to measure the strength of your passwords. It will even tell you if your password is on any of the common password lists.

Even so, there are other methods employed by those with questionable intent to gain access to your account. For example, why bother cracking your password when they can install a keylogger on your computer, or look over your shoulder to pull out your password as you're typing it?

Some simple steps are all that's required to keep keyloggers at bay. Firstly, keeping your computer (and all the software you have installed) up-to-date is critical for patching security holes. If you're using Linux, then you've already got a brilliant way of doing this - so long as you install all your software through your package manager.

If you're on Windows, making sure any auto-update options are turned on and installing updates when prompted is very important. Though it's not installed by default, Chocolatey brings Linux-style package management to your computer - allowing you to mass-update all the software you've got installed at once.

Secondly, Windows users should make sure they have an anti-virus program installed (Windows 10 comes with one built in, so no need to worry there) with up-to-date virus definitions. If you're not on Windows 10 yet, then Windows Defender can be installed and enabled. Alternatively, Avast does the job well enough (though somewhat noisily with lots of notifications). Both are free, so there's no excuse :P

Password Management

As I mentioned earlier in this post, re-using passwords is a very bad idea. According to dashlane, the average number of accounts registered against a single email address is a staggering 118! Having to remember a different password for all 118 accounts is clearly not sustainable (dashlane also notes that the average number of 'forgot password' emails per inbox in 2020 is estimated to be 22), so is there anything we can do about it?

Yes, as it turns out. With the rise in the number of online accounts people have, so has the number of tools at your disposal to manage them. The best way I've found to manage my passwords is with a password manager. The general idea is that you create a password database that's encrypted with a (super-secure) master password, and then you store the credentials for all of your different accounts inside it.

Many password managers come with helper tools that automagically type them into a box at the touch of a button - eliminating the need to remember them at all. This means that you can use much longer and more secure passwords - protecting your online accounts from intrusion.

Personally, my preference is Keepass 2 - as it lets me save my password database to disk - so that I can set up my own automatic backup system (don't forget this step!) against my own infrastructure. Many others exist though too. This article has a great discussion on the merits (and dangers) of using one, along with recommendations at the end.

Of course, it's a trade-off. In using a password database, all of your passwords will be in one place (albeit encrypted with your master password). If an attacker gets hold of it and cracks the password, then it's game over! Thankfully, several techniques are employed to ensure the security of such a database. Firstly, a password hashing algorithm (as discussed above) is used to transform the master password before encrypting the database, adding a work factor. Secondly, keeping your database stored in a secure location (such as a flash drive in a safe place, or a secure remote server that you own) can also help.

Given that the chance of being burgled in the UK is about 2.5% (source), but the chance of being hacked is about 1 in 3 (source), I think I'd rather take my chances with a password database to keep my online accounts secure.

If you aren't comfortable with storing your passwords digitally yet though, fear not! You could buy a password book from loads of different online stores - and even most good high-street stationers such as W.H. Smith for under £10. Although storing really long and unintelligible passwords isn't really viable, you can still have a different one for each site - making your online accounts much more secure.

Beyond the Password

Having a good password is a great start, but is there anything else we can do? Well yes, as it turns out. Enter stage left: 2-factor authentication.

Given those statistics about burglary and hacking, ideally we want to tie our account security to the burglary statistic rather than the hacking one. 2-factor authentication does just this: it makes it such that you require not only something that you know (your password) to access your account, but also something that you have, such as your phone - or a small flash-drive-like device called a hardware security key.

Linking the two is a system of 6-digit codes. Basically, on your phone you scan a QR code representing a lump of data. This data is then used to generate a 6-digit code that you enter into the online service after entering your password. This code changes every 30 seconds according to a complicated algorithm, so the server can also generate a code to see if it matches the one you entered.
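You can watch this in action with the oathtool program from the oath-toolkit package, using a made-up base32 secret like the ones embedded in those QR codes:

# Prints the current 6-digit code for this secret; run it again 30 seconds later for a different one
oathtool --totp -b 'JBSWY3DPEHPK3PXP'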

In this fashion, attackers are prevented from accessing your account unless they not only crack your password, but also actively infect your phone.

(Above: A few hardware security keys. Onlykey is featured on the left, whilst a selection of Yubikeys are on the right.)

If you're really paranoid, special 'security keys' exist that can store your 2nd-factor information - and generate the codes securely without the source data ever leaving your device (though those are a separate topic for a different post - comment below if you'd like me to do a writeup on them or something!)

Conclusion

Phew. If you've reached the end of this post, then congratulations! We've covered a lot of content. We've looked a little at what makes a good password, and why that is. We've investigated how attackers attempt to gain access to your account, and what you can do to protect yourself. We've also considered the merits of various password management strategies. Finally, we've looked at 2-factor authentication, and how it is far more secure than even the strongest password.

As always, this post is a starting point - not an ending point! I'll list some useful articles below for further reading. I'd also recommend you consider taking the time to secure your online accounts better - for example changing their passwords, enabling 2-factor authentication, and implementing a password management strategy.

Found this interesting? Spotted a mistake? Got a great tip of your own? Comment below!

Sources and Further Reading

Securing a Linux Server Part 2: SSH

Wow, it's been a while since I posted something in this series! Last time, I took a look at the Uncomplicated Firewall, and how you can use it to control the traffic coming in (and going out) of your server. This time, I'm going to take a look at steps you can take to secure another vitally important part of most servers: SSH. Used by servers and their administrators across the world to talk to one another, if someone manages to get in who isn't supposed to, they could do all kinds of damage!

The first, and easiest, thing we can do to improve security is to prevent the root user logging in. If you haven't done so already, you should create a new user on your server, set a good password, and give it superuser privileges. Log in with the new user account, and then edit /etc/ssh/sshd_config, finding the line that says something like

PermitRootLogin yes

....and change it to

PermitRootLogin no

Once done, restart the ssh server. Your config might be slightly different (e.g. it might be PermitRootLogin without-password) - but the principle is the same. This adds an extra barrier to getting into your server, as now attackers must not only guess your password, but your username as well (some won't even bother, and just keep trying to log in to the root account :P).
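On a modern Ubuntu or Debian box, restarting it looks something like this (an assumption - the service is called sshd on some other distributions):

sudo systemctl restart ssh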

Next, we can move SSH to a non-standard port. Some might argue that this isn't a good security measure to take and that it doesn't actually make your server more secure, but I find that it's still worth doing for 2 reasons: defence in depth, and preventing excessive CPU load from all the dumb bots that try to get in on the default port. With that in mind, let's make another modification to /etc/ssh/sshd_config. Make sure you test at every step you take - if you lock yourself out, you'll have a hard time getting back in again....

Port 22

Change 22 in the above to any other number between about 1 and 65535. Next, make sure you've allowed the new port through your firewall! If you're using ufw, my previous post (link above) gives a helpful guide on how to do this. Once done, restart your SSH server again - and try logging in before you close your current session. That way if you make a mistake, you can fix through your existing session.

Once you're confident that you've got it right, you can close port 22 on your firewall.
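With ufw and a hypothetical new port of 2222, those two steps look something like this:

sudo ufw allow 2222/tcp
# ...then, once you've confirmed you can log in on the new port:
sudo ufw delete allow 22/tcp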

So we've created a new user account with a secure password (tip: use a password manager if you have trouble remembering it :-)), disabled root login, and moved the ssh port to another port number that's out of the way. Is there anything else we can do? Turns out there is.

Passwords are not the only way we can authenticate against an SSH server. Public-private keypairs can be used too - and, used correctly, they are much more secure and convenient than passwords. You can generate your own public-private keypair like so:

ssh-keygen -t ed25519

It will ask you a few questions, such as a password to encrypt the private key on disk, and where to save it. Before going any further, the new public key needs to be copied to the server you want to log into.
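A minimal sketch of doing that with ssh-copy-id, assuming the default key filename and a hypothetical user account bob:

ssh-copy-id -i ~/.ssh/id_ed25519.pub bob@bobsrockets.com

With the public key in place on the server, we need to tell ssh to use the new public-private keypair. This is fairly easy to do, actually (though it took me a while to figure out how!). Simply edit ~/.ssh/config (or create it if it doesn't exist), and create (or edit) an entry for your ssh server, making it look something like this: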

Host bobsrockets.com
    Port            {port_number}
    IdentityFile    {path/to/private/keyfile}

It's the IdentityFile line that's important. The port line simply makes it such that you can type ssh bobsrockets.com (or whatever your server is called) and it will figure out the port number for you.

With a public-private keypair now in use, there's just one step left: disable password-based logins. I'd recommend trialling it for a while first to make sure you haven't messed anything up - because once you disable it, if you lose your private key, you won't be getting back in again any time soon!

Again, open /etc/ssh/sshd_config for editing. Find the line that starts with PasswordAuthentication, and comment it out with a hash symbol (#), if it isn't already. Directly below that line, add PasswordAuthentication no.
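The relevant part of /etc/ssh/sshd_config should end up looking something like this:

#PasswordAuthentication yes
PasswordAuthentication no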

Once done, restart ssh for a final time, and check it works. If it does, congratulations! You've successfully secured your SSH server (to the best of my knowledge, of course). Got a tip I haven't covered here? Found a mistake? Let me know in a comment below!

Signing email with GPG/PGP in Evolution

Recently I've moved to a new laptop, and for the longest time I haven't been able to figure out why I couldn't sign my messages with gpg any more (I'm on keybase as sbrl). Turns out the problem was that gpg didn't 'trust' my private key. This post documents how I fixed it:


# First, import your private key into gpg (you've probably done this already)
gpg --import <secret_key.priv
# Then, get gpg to edit your private key, and ask it to trust your private key
gpg --edit-key toaster5@waffletoast.net
> trust
> 5
> y

Once done, you can then select your private key in Evolution in the preferences (SHIFT + CTRL + S).

Did this help you out? Still having issues? Let me know in the comments below!

UFW and Samba

Today I have another post for you about Samba. Today I found that people couldn't actually access the samba shares I had set up (I must have forgotten to test them). They were getting a weird "The Network Path was not found" error. Strange. After looking into it, I found that I hadn't unblocked the right ports in ufw. You see, Samba operates using two listeners: one called smbd, and another called nmbd. I had forgotten to read the output of netstat -peanut correctly, and I missed a few ports.

For future reference (and for others having the same problem), here's the list of commands you need to enter in order to use shared folders with Samba correctly:

sudo ufw allow 139/tcp
sudo ufw allow 445/tcp
sudo ufw allow 137/udp
sudo ufw allow 138/udp
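Alternatively, some Samba packages register a ufw application profile that covers all 4 ports at once. Check with sudo ufw app list - if a profile called Samba shows up, this should work too (an assumption; I used the explicit rules above):

sudo ufw allow Samba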

Hopefully it doesn't take you as long to fix your problem as it did mine...!

Protecting your Privacy Online: Reviewing which Advertisers Track your Interests

This is a different kind of post about privacy on the web.

The youronlinechoices.com website

Protecting your privacy whilst browsing online is important. Most if not all advertisers will track you and the websites you visit to try and work out what you are interested in - The amount of information that advertising networks can accumulate is astonishing! I currently have an AdBlocker installed with a tracking prevention list enabled, but that isn't always enough to stop all the advertising networks from tracking you all the time.

Recently I have come across a website called youronlinechoices.com - a website for people in the EU that allows you to review and control which out of 96(!) advertising networks are currently tracking you and your interests. I found that about two dozen advertisers were tracking me and was able to stop them.

The Internet needs adverts in order for it to stay free for all (so disable your adblocker on the sites you want to support), but one also needs to prevent one's personal information from falling into the wrong hands.

Securing a Linux Server Part 1: Firewall

Welcome to a new tutorial series, where I will show you what I have learnt so far about making sure that your linux server (and desktop too!) are secure so that nobody can get in (easily) and assume control.

Disclaimer: This tutorial series will not cover everything, and should not be taken to be comprehensive. There will probably be some mistakes in this post too. Check other guides online or consult a professional to make sure that your machine is secure. Please suggest improvements or point out mistakes in the comments.

To start this tutorial series off, I will talk about firewalls. Firewalls control how data is allowed to travel in and out of your computer. In Ubuntu, a firewall called ufw, the 'uncomplicated firewall', is already present. It acts as a nice frontend to iptables, which I find difficult to understand and use. We will be using ufw as our firewall.

I have done an asciinema recording on a virtual machine of this whole process:

Enabling the firewall

Ufw by default allows all outgoing connections and denies all incoming connections. This means that if you are using ssh to connect to your server, you will need to open the appropriate ports first before enabling ufw. Do that like this:

~$ sudo ufw allow 22/tcp

Ufw will automatically configure iptables to allow incoming connections on port 22 that use tcp. I will talk more about allowing and denying different connections later.

Just in case ufw blocks your ssh connection and you are unable to get back in, you can use another program called at to schedule the disabling of the ufw so that you can get back in again. If you don't have it installed, you can install it with sudo apt-get install at.

~$ sudo at -vM now +10 minutes
ufw disable
^D

Where ^D stands for CTRL + D. Now that you have it set such that ufw will disable itself in 10 minutes time, we go ahead and turn ufw on:

~$ sudo ufw enable

It will warn you that this may disrupt any existing ssh connections you have open. Reply yes to this. Once it has been enabled successfully, you should check that you can still ssh into your server (if that is the method that you are using to control it). If yes, great! If not, ufw will disable itself in 10 minutes and then you can try again.

Now that we have ufw enabled, we can cancel the at job we created to disable ufw. Type sudo atq to list the jobs you have scheduled, and sudo atrm <number> to remove it, where <number> is the number of the job that you want to delete.

You may also want to check the status of ufw to make sure that it is enabled, or to get a list of the rules that are currently in force. You can do that like this:

~$ sudo ufw status
Status: active

To                         Action      From
--                         ------      ----
80/tcp                     ALLOW       Anywhere
80/tcp (v6)                ALLOW       Anywhere

Allowing connections

Allowing connections through the firewall is easy. Simply type something like this:

~$ sudo ufw allow 80/tcp

Ufw will automatically configure iptables, in this example, to allow all connections on port 80 that use tcp. It will also configure it appropriately for both ipv4 and ipv6. Replace 80 with the port number you want to allow, and tcp with udp if needed. Ufw also understands several protocol names, and can configure itself accordingly:

~$ sudo ufw allow http
~$ sudo ufw allow imap

Denying connections

Denying all connections on a given port is very similar. Simply type something like this:

~$ sudo ufw deny 4722/tcp

The above would deny all tcp connections on port 4722.

You can also prevent a particular ip from gaining access to your server:

~$ sudo ufw deny from 123.123.123.123

The above would block all packets from the ip address 123.123.123.123. It works with IPv6 addresses too:

~$ sudo ufw deny from 2607:f8b0:4003:c05::65

The above would block all packets from the ip address 2607:f8b0:4003:c05::65, which just happens to belong to Google.

Port Ranges

You can open a range of ports with a colon:

~$ sudo ufw allow 60000:61000/udp

The above will allow udp connections on any port in the range 60,000 - 61,000 (the ports used for mosh).

Deleting Rules

Deleting rules can be done like this:

~$ sudo ufw delete allow 4724/tcp

The above would delete the rule(s) allowing tcp connections on port 4724.

Summary

In this post, I have shown you how to activate and configure a simple firewall that is bundled with Ubuntu. Next time, I will talk about securing your ssh daemon.

If you spotted a mistake in this post, have a suggestion, or are having trouble following along, please leave a comment below.

Other useful posts

These posts helped me to understand and use the uncomplicated firewall:

Generating Session Tokens with PHP

Recently I needed to generate random hex strings to act as session tokens for Blow Worm. Using session tokens means that you send the login credentials once, and then the server hands out a session token to use instead of the password for the rest of that session. In theory this is more secure than sending the password to the server every time.

The problem with generating random session tokens is that you need a secure random number generator, so that hackers can't attempt to guess the random numbers and hence guess the session tokens (that would be bad).

The way I did it (please leave a comment below if this is insecure!) is as follows:

  1. Generate ~128 bits of randomness using the OpenSSL function openssl_random_pseudo_bytes(). This randomness generator is apparently better than rand() and mt_rand().
  2. Hash that resulting randomness with SHA256 to ensure a constant session key length.

The PHP code I am currently using is as follows:

$sessionkey = hash("sha256", openssl_random_pseudo_bytes($session_key_length));

I thought that I would share this here since it took me a little while to look up how to do this. If anyone has a better way of doing this, I will gladly take suggestions and give full credit.
