
systemquery, part 1: encryption protocols

Unfortunately, my autoplant project is taking longer than I anticipated to set up and debug. In the meantime, I'm going to talk about systemquery - another (not so) little project I've been working on in my spare time.

As I've acquired more servers of various kinds (mostly consisting of Raspberry Pis), I've found myself with an increasing need to get a high-level overview of the status of all the servers I manage. At the moment, this need is satisfied by the web-based dashboard for my monitoring system (collectd - while I haven't blogged about my setup directly, I have posted about it here and here), called Collectd Graph Panel (sadly now abandonware, but still very useful):

This is great and valuable, but if I want to ask questions like "are all apt updates installed?", or "what's the status of this service on all hosts?", or "which hosts haven't I upgraded to Debian bullseye yet?", or "is this mount still working?", I currently have to SSH into every host to find the information I'm looking for.

To solve this problem, I discovered the tool osquery. Osquery is a tool to extract information from a network of hosts with SQL-like queries. This is just what I'm looking for, but unfortunately it does not support the armv7l architecture - which most of my cluster currently runs on - thereby making it rather useless to me.

Additionally, from looking at the docs it seems to be extremely complicated to set up. Finally, it does not seem to have a web interface. While not essential, it's a nice-to-have.

To this end, I decided to implement my own system inspired by osquery, and I'm calling it systemquery. I have the following goals:

  1. Allow querying all the hosts in the swarm at once
  2. Make it dead-easy to install and use (just like Pepperminty Wiki)
  3. Make it peer-to-peer and decentralised
  4. Make it tolerate random failures of nodes participating in the systemquery swarm
  5. Make it secure, such that any given node must first know a password before it is allowed to join the swarm, and all network traffic is encrypted

As a stretch goal, I'd also like to implement a mesh message routing system too, so that it's easy to connect multiple hosts in different networks and monitor them all at once.

Another stretch goal I want to work towards is implementing a nice web interface that provides an overview of all the hosts in a given swarm.

Encryption Protocols

With all this in mind, the first place to start is to pick a language and platform (Javascript + Node.js) and devise a peer-to-peer protocol by which all the hosts in a given swarm can communicate. My vision here is to encrypt everything using a join secret. Such a secret would lend itself rather well to a symmetrical encryption scheme, as it could act as a pre-shared key.

A number of issues stood in the way of actually implementing this though. At first, I thought it best to use Node.js' built-in TLS-PSK (stands for Transport Layer Security - Pre-Shared Key) implementation. Unlike regular TLS which uses asymmetric cryptography (which works best in client-server situations), TLS-PSK uses a pre-shared key and symmetrical cryptography.

Unfortunately, although Node.js advertises support for TLS-PSK, it isn't actually implemented or is otherwise buggy. This not only leaves me with the issue of designing an encryption protocol, but also:

  1. The problem of transferring binary data
  2. The problem of perfect forward secrecy
  3. The problem of actually encrypting the data

Problem #1 here turned out to be relatively simple. I ended up abstracting away a raw TCP socket into a FramedTransport class, which implements a simple protocol that sends and receives messages in the form <length_in_bytes><data....>, where <length_in_bytes> is a 32 bit unsigned integer.
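
To illustrate the framing idea, here's a minimal sketch of the sort of thing FramedTransport does (the names here are made up for illustration - this isn't the actual systemquery code):

// Minimal sketch of length-prefixed framing over a TCP socket.
// Messages are sent as <length_in_bytes><data....>, where the length
// is a 32 bit unsigned integer in network (big-endian) byte order.
const { EventEmitter } = require("events");

class FramedSocket extends EventEmitter {
    constructor(socket) {
        super();
        this.socket = socket;
        this.buffer = Buffer.alloc(0);
        this.socket.on("data", chunk => this.handle_chunk(chunk));
    }

    // msg is a Buffer containing the message to send
    send(msg) {
        const header = Buffer.alloc(4);
        header.writeUInt32BE(msg.length, 0);
        this.socket.write(Buffer.concat([ header, msg ]));
    }

    // Accumulate incoming bytes, emitting complete frames as they arrive
    handle_chunk(chunk) {
        this.buffer = Buffer.concat([ this.buffer, chunk ]);
        while(this.buffer.length >= 4) {
            const length = this.buffer.readUInt32BE(0);
            if(this.buffer.length < 4 + length) break; // frame not complete yet
            this.emit("message", this.buffer.subarray(4, 4 + length));
            this.buffer = this.buffer.subarray(4 + length);
        }
    }
}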

With that sorted and the nasty buffer manipulation safely abstracted away, I could turn my attention to problems 2 and 3. Let's start with problem 3 here. There's a saying when programming things relating to cryptography: never roll your own. Existing implementations are usually much more rigorously checked for security flaws than anything you could write yourself.

In the spirit of this, I sought out an existing implementation of a symmetric encryption algorithm, and found tweetnacl. Security audited, it provides what looks to be a secure symmetric encryption API, which is the perfect foundation upon which to build my encryption protocol. My hope is that by simply exchanging messages I've encrypted with a secure existing algorithm, I can reduce the risk of a security flaw.
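
To give a feel for the API, here's a rough sketch of encrypting and decrypting a single message with tweetnacl's secretbox (just an illustration - in a real protocol the key would be derived from the join secret rather than generated randomly like this):

// Symmetric encryption with tweetnacl's secretbox primitive
const nacl = require("tweetnacl");

const key = nacl.randomBytes(nacl.secretbox.keyLength);     // 32 bytes
const nonce = nacl.randomBytes(nacl.secretbox.nonceLength); // 24 bytes - must be unique per message

const plaintext = Buffer.from("hello, swarm!", "utf8");
const ciphertext = nacl.secretbox(plaintext, nonce, key);

// The nonce isn't secret, so it can be sent alongside the ciphertext
const decrypted = nacl.secretbox.open(ciphertext, nonce, key); // null if tampered with
console.log(Buffer.from(decrypted).toString("utf8")); // hello, swarm!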

This is a good start, but there's still the problem of forward secrecy to tackle. To explain: perfect forward secrecy means that even if an attacker records your conversation and later learns your encryption key (in this case the join secret), they are still unable to decrypt your data.

This is achieved by using session keys and a key exchange algorithm. Instead of encrypting the data with the join secret directly, we use it only to encrypt the initial key-exchange process, which then allows 2 communicating parties to exchange a session key, which is used to encrypt all data from then on. By re-running the key-exchange process and generating new session keys at regular intervals, forward secrecy can be achieved: even if the attacker learns a session key, it does not help them to obtain any other session keys, because even knowledge of the key exchange algorithm messages is not enough to derive the resulting session key.

Actually implementing this in practice is another question entirely however. I did some research though and located a pre-existing implementation of JPAKE on npm: jpake.

With this in hand, the problem of forward secrecy was solved for now. The jpake package provides a simple API by which a key exchange can be done, so then it was just a case of plugging it into the existing system.
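
I won't reproduce the jpake package's API here (see its documentation for that), but to show the general shape of what a key exchange achieves, here's a sketch using Node.js' built-in ECDH instead. Note that this is not J-PAKE - it's only here to illustrate how 2 parties can derive the same session key without ever transmitting it:

// Generic key-exchange illustration using Node.js' built-in ECDH.
// Each party only ever sends its *public* value over the network.
const crypto = require("crypto");

const alice = crypto.createECDH("prime256v1");
const bob = crypto.createECDH("prime256v1");

// Each party generates a keypair and sends the public part to the other
const alice_public = alice.generateKeys();
const bob_public = bob.generateKeys();

// Both sides independently compute the same shared secret
const alice_session_key = alice.computeSecret(bob_public);
const bob_session_key = bob.computeSecret(alice_public);

console.log(alice_session_key.equals(bob_session_key)); // true
// Re-run the exchange at regular intervals to rotate session keys,
// which is what provides forward secrecy.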

Where next?

After implementing an encryption protocol as above (please do comment below if you have any suggestions), the next order of business was to implement a peer-to-peer swarm system where agents connect to the network and share peers with one another. I have the basics of this implemented already: I just need to test it a bit more to verify it works as I intend.

It would also be nice to refactor this system into a standalone library for others to use, as it's taken quite a bit of effort to implement. I'll be holding off on doing this until it's more stable, however, as refactoring it now would just slow down development while it has yet to stabilise.

On top of this system, the plan is to implement a protocol by which any peer can query any other peer for system information, and then create a command-line interface for easily querying it.

To make querying flexible, I plan on utilising some form of in-memory database that is populated with queries to other hosts based on the tables mentioned in the user's query. SQLite3 is the obvious choice here, but I'm reluctant to choose it as it requires compilation upon installation - and given that I have experienced issues with this in the past, I feel this has the potential to limit compatibility with some system configurations. I'm going to investigate some other in-memory database libraries for Javascript - giving preference to those which are both light and devoid of complex installation requirements (pure JS is best if I can manage it, I think). If you know of a pure Javascript in-memory database that has a query syntax, do let me know in the comments below!

As for querying system information directly, that's an easy one. I've previously found systeminformation - which seems to have an API to fetch pretty much anything you'd ever want to know about the host system!
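
For example, a quick sketch of what pulling a few values out of systeminformation might look like (treat this as illustrative - property names may differ slightly between versions):

// Quick sketch of fetching host information with the systeminformation package
const si = require("systeminformation");

async function report() {
    const [ os, load, mem, fs ] = await Promise.all([
        si.osInfo(),       // distro, kernel, arch, hostname, ...
        si.currentLoad(),  // CPU load
        si.mem(),          // RAM usage
        si.fsSize()        // mounted filesystems and how full they are
    ]);
    console.log(`${os.hostname}: ${os.distro} ${os.release}, ` +
        `load ${load.currentLoad.toFixed(1)}%, ` +
        `${(mem.available / 1024 ** 3).toFixed(1)} GiB RAM available, ` +
        `${fs.length} filesystems`);
}

report();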


How to contribute code to git repositories that aren't hosted on GitHub

With just over 48 million public repositories (and growing fast [^repos]), GitHub is pretty much the de-facto place to host code, as pretty much everyone has an account there. By far the most useful feature GitHub provides is the ability to open pull requests (PRs).

Not all code repositories are hosted on GitHub, however - and these repositories do not get the same exposure, and hence level of participation and collaboration, that those on GitHub do. This is due in no small part (though other reasons exist too), I suspect, to the fact that contributing to these repositories is unfortunately more complicated than opening a PR.

It needn't be this way though - so in this post I'll show you how to unlock the power of contributing code to quite literally any project that is under git version control. While knowledge of your command line is necessary, basic familiarity will suffice (see also my blog post on learning your terminal). I'll also assume that you have git installed, and that Windows users have already opened Git Bash and navigated to the cloned repository in question with cd.

Step 0: Making your changes

This is the easy part. After cloning the repository in the normal way, make a new branch for your changes. GUI users should be able to navigate their interfaces. For those using the command line, run this from the branch you want to branch from:

git switch -c new_branch_name

Then, make your changes in the usual way.

Step 1: Find contact details

Once you have your changes, you need to find somewhere to send them. This is different for every repository, but here are some common places to check for contact details:

  • The project's README file
  • The project's website (if it has one)
  • Track down the author's name on other websites
  • The email addresses attached to commits

Step 2: Make a patch file

Now that you've found a place to send your contribution to, we need to pack it into a nice neat box that can be transported (usually via email as an attachment). Doing so is fairly simple. You need to first identify the hashes of the commits you want to include. Do that with this command:

git log --oneline --graph --decorate

You might get some output that looks a bit like this:

* c443459 (HEAD -> some-patch) wireframe/corner_set: fix luacheck warnings
* 3d12345 //smake: fix luacheck warnings
* 4c7bb6a //sfactor: fix luacheck warnings; fix crash
* 67b4495 (main, origin/main) README: add link to edit full reference
* ee46507 fixup
* 58933c6 README: Update command list
* 6c49b9d fixup again
* 364de73 fixup

In your terminal it will probably be coloured. The 7 digit hexadecimal value (e.g. 4c7bb6a) there is the commit hash. Copy the commit hash of the oldest and the newest commits in question, and then do this:

git format-patch --stdout OLDHASH..NEWHASH >somefilename.patch

...replacing OLDHASH and NEWHASH with the oldest and newest commit hashes respectively. If the newest commit hash is the latest commit on the branch, then the keyword HEAD can also be used instead.

Step 3: Submit patch file

Now that you have a patch file, you can send it to the author. By email, instant messaging, or avian carriers - any means of communication will do!

This is all there is to it. If you've received such a patch and are unsure about what to do though, keep reading.

But what happens if I receive a contribution?

If you've received a patch file generated by the above method and don't know what to do with it, read on! You may have received a patch file for a variety of reasons:

  • Someone's interested in improving your project
  • You've previously sent a contribution to someone else, and they've sent back a patch of their own along with a code review of things you need to change or improve

Either way, it's easy to apply it to your git repository. First, make sure you have the branch in question you want to apply the commits to checked out. Then, download the patch file, and do this:

git am path/to/somefile.patch

...this will apply the commits contained within to the currently checked out branch for you. If you're unsure about what they contain, don't forget that you can always open the patch file in your text editor and inspect it, or do this to see a quick summary:

grep Subject: path/to/somefile.patch

Once a patch file is applied, you can handle things in the usual way - for example you'll probably want to use git push to push the commit(s) to your remote, or perhaps git rebase -i to clean them up first.

Conclusion

In this post, I've shown you how to create and apply patch files. This is extremely useful when sending patches to code repositories that are either on servers where you can't create an account to open a pull request (e.g. Gitea), or that simply don't have a pull request system at all. It can even be used in extreme situations where a given code repository doesn't have a central remote server at all - this is surely where git gets its reputation as a distributed version control system.

Sources and further reading

[^repos]: Ref https://github.com/search?q=is:public as of 2022-01-06

Cluster, Part 12: TLS for Breakfast | Configuring Fabio for HTTPS

Hey there, and happy new year 2022! It's been a little while, but I'm back now with another blog post in my cluster series. In this shorter post, I'm going to show you how I've configured my Fabio load balancer to serve HTTPS.

Before we get started though, I can recommend visiting the series list to check out all the previous parts in this series, as a number of them give useful context for this post.

In the last post, I showed you how to set up certbot / Let's Encrypt in a Docker container. Building on this, we can now reconfigure Fabio (which we set up in part 9) to take in the TLS certificates we are now generating. I'll be assuming that the certificates are stored on the NFS share you've got set up (see part 8) for this post. In the future I'd love to use Hashicorp Vault for storing these certificates, but as of now I've found Hashicorp Vault to be far too complicated to set up, so I'll be using the filesystem instead.

Configuring Fabio to use HTTPS is actually really quite simple. Open /etc/fabio/fabio.properties for editing, and at the beginning insert a line like this:

proxy.cs = cs=some_name_here;type=file;cert=/absolute/path/to/fullchain.pem;key=/absolute/path/to/privkey.pem

cs stands for certificate store, and this tells Fabio about where your certificates are located. some_name_here is a name you'd like to assign to your certificate store - this is used to reference it elsewhere in the configuration file. /absolute/path/to/fullchain.pem and /absolute/path/to/privkey.pem are the absolute paths to the fullchain.pem and privkey.pem files from Let's Encrypt. These can be found in the live directory in the Let's Encrypt configuration directory, in the subdirectory for the domain in question.

Now that Fabio knows about your new certificates, find the line that starts with proxy.addr. In the last tutorial, we configured this to have a value of :80;proto=http. proxy.addr can take a comma-separated list of ports to listen on, so append the following to the existing value:

:443;proto=https;cs=some_name_here;tlsmin=tls12

This tells Fabio to listen on TCP port 443 for HTTPS requests, and also tells it which certificate store to use for encryption. We also set the minimum TLS version supported to TLS 1.2 - but you should set this value to 1 version behind the current latest version (check this page for that). For those who want extra security, you can also add the tlsciphers="CIPHER,LIST" argument too (see the official documentation for more information - cross-referencing it with ssl-config.mozilla.org is a good idea).

Now that we have this configured, this should be all you need to enable HTTPS! That was easy, right?

We still have a little more work to do though to make HTTPS the default and to redirect all HTTP requests to HTTPS. We can do this by adding a route to the Consul key-value store under the path fabio/config. You can do this either via the web interface, by creating a new key under fabio/config and pasting the following in & saving it:

route add route_name_here example.com:80 https://example.com$path opts "redirect=308"

Alternatively, through the command line:

consul kv put fabio/config/some_name_here 'route add some_name_here example.com:80 https://example.com$path opts "redirect=308"'

No need to restart Fabio - it should pick routes up automatically. I have found, however, that I occasionally need to restart it if it doesn't pick up some changed routes as fast as I'd like.

With this, we now have automatic HTTPS setup and configured! Coming up in this series:

  • Using Caddy as an entrypoint for port forwarding on my router (status: implemented; there's an awesome plugin for single sign-on, and it's amazing in other ways too) - this replaces the role HAProxy was going to play that I mentioned in part 11
  • Password protecting Docker, Nomad, and Consul (status: on the todo list)
  • Semi-automatic docker image rebuilding with Laminar CI (status: implemented)


Happy Christmas 2021!

Happy Christmas to everyone here! I hope you have a restful break and a great new year :-)

I've got a few things potentially coming up in the new year, from another quick cluster blog post to a post on my new GPU server I bought - and I'm sure I'll accidentally end up writing all sorts of other posts too! I'll also of course be continuing to blog about my PhD experience - which seems to be starting to come together! :D

As always, if you have a thing you're interested in that I'm doing and I haven't yet blogged about (or even something you're confused on in a previous post!), please do let me know in a comment below and I'll do my best to blog about it :_)

A frosted white bauble with green foliage and purple and pink flowers, against a backdrop of a Christmas tree

Cluster Series List

Hey there - I hope you've had a happy Christmas and a great new year! I'm writing this before both, so I'm still looking forward to it :D

It's been a while since I did a series list here. This time, it's for my Cluster series of blog posts, which has reached 11 parts so far! Despite publishing this series list, this series is not over! I intend to post more parts to this series in the future. I'm just posting this as a convenient place to point people interested in my cluster setup.

For those not in the know, I have a cluster of Raspberry Pis which provide my primary source of compute power for continually running services on my home network - and this series documents how I've got it setup. I also have a NAS which provides redundant (and backed up, of course) storage (see here for the setup, and here for the backups), and finally a GPU server which when I look back at the archives I apparently have yet to blog about - oops!

Here's the full series list - I'll update this list as I post more parts to this series.

If I forget to update this post, please do get in touch by leaving a comment below. You can also find all the posts related to my cluster by looking at the cluster tag here on my blog.

PhD Update 11: Answers to our questions

Heya! It's time for another PhD update blog post. Sometimes, answers to the questions one asks come in the form of more questions. In this post, as I predicted in the last post, I'll be talking mainly about my work with tweets from Twitter, as I haven't yet had time to return to the Temporal CNN Autoencoder. Before we start though, here's a list of all the parts in this series so far:

As usual, none of the things I present here are finalised, and are subject to significant change as I double check everything.

Conferences part 2

In the last post, I talked about the conferences I have applied to and not applied to. In this one, I can now say that I have been accepted for the AAAI-22 Doctoral Consortium! It was going to be held in Vancouver, Canada - but has since been moved to be fully virtual and online. While I both understand the reasoning behind the decision and am relieved I don't have to travel, it is a bit of a shame that I won't get the chance to have those face-to-face conversations you don't get when in a video call.

Despite the move to being online, I'm still both excited to attend and mildly terrified about presenting the things I've been doing to a potentially large audience.

Looking forward, at the suggestion of my supervisor I plan to finish writing up that AI+HADR paper I mentioned in the last update as a journal article instead, and then submit that. While I'm working through the review process for that paper, I'll return to the Temporal Autoencoder / rainfall radar subproject and work on implementing my idea for it.

Tweet sentiment analysis

After the rather rushed analysis of the data in the last post, I've now taken the time to analyse the data more thoroughly. A number of things became apparent here, but first let's look at the research questions I've asked:

  1. Is there a more negative response to more sudden / severe floods?
  2. Can we classify images by the sentiment of the associated tweet?

Answering these questions has not been straightforward, but I'm now at a point where I have some preliminary answers (which, I stress, are not double checked and not peer reviewed).

Unfortunately, the answer to the first question there is that it can't easily be answered with the current data available. As it turns out, I have been unable to find an objective and consistent measure of how severe or sudden a flood was.

One might think that, say, the amount of insurance damages would be a good choice. This doesn't work out though, because not all flooding events have (public) damage estimates. Those that do are often measured against different goalposts: flood A might be measured in property damages, and flood B in economic impact.

Another measure I investigated was the number of homes destroyed or people displaced. This too, as it turns out, has multiple issues. For one, while multiple estimates are available for some floods, they don't always agree. For a single given flood, estimates might range from 800 to 2400 homes destroyed; they are often measured at different points in the history of the given flooding event, and it's sometimes unclear at what stage in a flood's lifecycle the estimate was made.

Even if such estimates were consistent (which they really aren't), there's another issue too: they are often limited to country borders. For example, a government agency may estimate the number of homes destroyed for their country, but not other countries. This is totally reasonable: a government is concerned first and foremost with the people within its borders. Sadly floods, storms, and hurricanes rarely discriminate across such borders. Take Storm Christoph for example: it hit the UK in January 2021, but after that it continued on and hit Scandinavia too.

The other question above is thankfully much easier to answer - it deserves its own section in this post though, so see below for that. It's not all bad news on the tweet sentiment analysis front though - I did find some unexpected results while I was analysing the data. To explain, let's look at a fancy new chart I've plotted:

(Above: Various floods and their overall sentiment - both with replies included and excluded.)

This fancy chart shows the overall sentiment of a number of different flooding events, with replies included (going down) and excluded (going up). What's fascinating here is that it appears to suggest that the replies contribute significantly towards the percentage of positive tweets. Perhaps people are most likely to tweet words of encouragement (e.g. "stay safe :hugs:")?

With this in mind, I correlated the total number of tweets made in a flood with the overall sentiment (as a percentage), and got a Pearson correlation coefficient of -0.54, which indicates a medium correlation. Apparently, if more people tweet about a flood, it's more likely to have a more positive overall sentiment. If replies are excluded, it works out to -0.31, which would indicate a weaker negative correlation.

Image classification

Another thing I've been working on is classifying images associated with tweets using the sentiment of the tweet as a label. With roughly ~175K images to work with, this has proved to be a useful exercise - resulting in ~75.23% validation accuracy (rising to 96.9% training accuracy by epoch 50%, suggesting it's overfitting and I need more data to improve validation accuracy any further) over 9 epochs. While unfortunately I can't share a sample of positive/negative images predicted by the model due to data privacy rules, I can talk about the structure of the model and show a confusion matrix.

I started out with a Compact Convolutional Transformer model, as it looks very cool and has some significant benefits over more traditional model architectures - but there's a bug in my implementation somewhere I can't spot, and it only yields 10% to 20% accuracy on Fashion MNIST. To avoid wasting too much time, I'm now using a prebuilt ResNet50 initialised with random weights that takes in images at a size of 128 x 128 pixels (images larger or smaller than this are automatically resized without preserving aspect ratio).

While the accuracy of the model is slightly lower than that of the model that predicts the sentiment of the tweets themselves, looking at samples that I quickly threw together with a bit of Bash and ImageMagick and the confusion matrix (see below) reveals that it is in fact doing something useful. Here's that Bash 2-liner:

shuf path/to/tweets-labelled.tsv | awk '/positive$/ { print("/absolute/path/to/media_dir/" $1); }' | head -n64 >/tmp/sample-pos.txt;
montage @/tmp/sample-pos.txt -geometry 960x540+10+10 -tile 8x8 /tmp/sample-pos.jpeg

The @path/to/file.txt syntax in the montage command call there reads a list of filepaths from a file instead of directly specifying them on the command line. By replacing positive in the awk filter with negative and sample-pos.txt with sample-neg.txt, the same procedure can also generate a random sample for the negative category too.

From looking at a single sample of 64 positive images and 64 negative images generated using the method above, positive images generally include:

  • Cats (by far the most popular of course)
  • Cupcakes
  • People, some of which are helping others in a flood

Whereas negative images generally include:

  • Floods
  • Damage to homes, buildings, etc

My next immediate step here is to plot a confusion matrix to better understand how the model is performing, as I'm slightly concerned that it's ignoring the minority class (in this case, positive tweets). I've mostly completed this already, but it's just not quite ready to show here yet as I need to double check and revise some stuff.

Of course, given that the source dataset is very noisy (social media data generally is) and relatively difficult for AI models to understand, I think this is a good result.

From here, if I have time I'd like to combine this image classification model with the earlier tweet sentiment analyser model to create a single model that can more accurately predict the sentiment of both the text of a tweet and the associated image at the same time. To do this, I'm probably going to investigate and use CLIP - more on this in a future post.

Conclusion

While I still haven't done anything with the Temporal Autoencoder due to other priorities, I'm hoping to return to it once I've wrapped up this social media section of the project. I have made significant progress on analysing the social media data - both the textual tweets and the associated images, and I plan to combine the models I've trained to classify both text and images into a single model. It's not the end of the road yet though: while I've found some answers, they are just leading to more questions.

Encryption demystified: What to use and when

The other day, I found myself explaining different types of encryption, how they work, and what they are used for to someone in my lab implementing a secure system. During this process, I ended up creating a series of fancy diagrams in draw.io - so I thought I'd write it up into a proper demystification blog post.

To start us off here, let's define encryption. Encryption is the process of transforming a given input block of data (of arbitrary length) using some kind of secret key into a form that is then completely unreadable. Any adversary obtaining a block of encrypted data encrypted with a suitably strong key (and algorithm) is not able to read or understand the data at all - except perhaps to infer its original length.

Conversely, decryption is the process of undoing the encryption process with the same (or different, in some cases) key to get back the original data.

For the purposes of this blog post, we will assume:

  1. The encryption algorithms in question are perfect with no known weaknesses
  2. Keys used to encrypt and/or decrypt are very strong and can't be cracked

Each of these is a field in its own right that could quite easily take many blog posts to fully explore.

From the perspective of a developer, there are 3 different basic places one needs to be aware of. Others certainly exist, but to avoid making this post too long I'll just be covering the following 3:

  1. Device encryption
  2. Transport layer encryption
  3. End-to-end encryption

If there's any other encryption scheme you'd like me to cover, please leave a comment below and I'll try my best to explain it in a separate post.

Device encryption

First up is device encryption. Most modern operating systems for phones and PCs alike support device encryption:

  1. Windows
  2. Linux
  3. Android
  4. iOS

Not sure on macOS since I don't own one, but I'd be surprised if it didn't. The purpose of device encryption is that when the device is powered off, all data is stored physically on disk in an encrypted format, making it unreadable should the device be physically stolen - thereby protecting all data stored on it.

This is accomplished in a layered fashion. Let's explain it with a diagram:

A vertical layered diagram explaining device encryption. Physical block devices on the bottom, software applications on the top.

Although they may have different names for it, most operating systems have a concept of a "block device". Such a device is capable of storing a given number of bytes of data. Such devices need not be physical disks: they can instead be virtual. For example, zram presents block devices that store data compressed in RAM.

We can make use of this to encrypt hard drives. An encryption layer such as LUKS on Linux presents a virtual block device to the operating system, which encrypts all data written to it before saving it back to the physical disk by which it is backed.

On boot, the encryption layer is initialised by the operating system and it asks the user for a password. Upon being given the correct password, the encryption layer is activated, and the operating system can then both request data blocks from the virtual block device (which causes the encryption layer to fetch the encrypted block from disk and then decrypt it before passing it to the requester) and write data blocks back to the virtual block device (whereby the encryption layer will encrypt the new data block before writing it to disk).

Even for operating systems such as Windows (e.g. BitLocker) and iOS, which don't expose block devices in the same way that Linux does, the same principles I've explained here apply.

When the device is powered off, the key that was being stored in memory is wiped (it's stored in RAM, and RAM requires power to store data) and the data is secured.

Transport layer encryption

Another place encryption is commonly encountered is when transferring data to and from remote hosts over the Internet. Since the Internet is untrusted, it becomes rather a problem when one wants to transfer personal information such as passwords, bank card numbers, and location information across the Internet, in that such data could be stolen or modified in transit.

To solve this problem, the Transport Layer Security (TLS) protocol was invented. The purpose of TLS is to provide a secure connection between 2 hosts using authenticated encryption that has the following properties:

  1. Eavesdroppers are unable to read data being transmitted
  2. Attackers are unable to successfully modify any data in transit without the destination host knowing about it
  3. The 2 hosts communicating with each other can verify each other's identities 1

Although TLS itself is a protocol that is usually spoken over TCP, because it provides a generic bidirectional pipe through which any binary data can be transmitted and received, it is commonly used to wrap around other protocols to secure them. Examples include:

  1. HTTP: Hypertext Transfer Protocol (used in web browsers)
  2. SMTP: Simple 2 Mail Transfer Protocol (used for sending and receiving emails)
  3. IMAP: Internet Message Access Protocol (used for accessing email inboxes)
  4. XMPP: Extensible Messaging and Presence Protocol (a federated messaging protocol used for instant messaging) 3

....and many others. There's a reason it's so prevalent: The most important rule when dealing with encryption and security is to never roll your own. Follow the standards, and use existing crypto libraries for your platform. Don't implement your own, as it's much more difficult than it appears to ensure your system is actually secure.
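
To show what "wrapping" a protocol in TLS looks like from a developer's perspective, here's a minimal sketch in Javascript (Node.js) of speaking plain HTTP over a TLS socket - the application just reads and writes bytes as normal, and the TLS layer handles certificate checking and encryption underneath:

// Speaking a plain-text protocol (HTTP here) over TLS with Node.js' built-in tls module
const tls = require("tls");

const socket = tls.connect({ host: "example.com", port: 443, servername: "example.com" }, () => {
    // By this point the handshake is complete and the server's certificate has been verified
    socket.write("GET / HTTP/1.1\r\nHost: example.com\r\nConnection: close\r\n\r\n");
});

socket.on("data", chunk => process.stdout.write(chunk));
socket.on("end", () => console.log("\n[connection closed]"));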

Here's a diagram of how it works:

End-to-end encryption

The last form of encryption I'm going to talk about is also perhaps the most misunderstood: end-to-end encryption.

End-to-end encryption is useful when you have 3 parties involved in the equation - usually 2 clients and a server. Suppose Alice and Bob have a messaging app on their phone that sends messages through an intermediary server (perhaps performing store-and-forward functions), but they do not want the server to be able to read their message. The solution here is end-to-end encryption, which prevents the intermediary server from being able to read the message.

Here's a diagram to explain what I mean:

End-to-end encryption is accomplished by using asymmetric cryptography. Asymmetric encryption - unlike symmetric encryption - uses 2 keys instead of 1, and these keys also have to possess special properties, so you can't just generate some cryptographically secure random numbers and call it a day 4.

In asymmetric encryption, you have a public key which can only encrypt data, and a private key which can then decrypt the data. An example of this in practice is GPG, which is extensively used e.g. by apt (the package manager on some Linux systems).

In the diagram above, the sender encrypts the message with the public key that belongs to the receiver. They then send the message to the server, who forwards it on to the receiver. The receiver then decrypts the message with the private key (sometimes called a secret key).

In this way, the server is never able to read the content of the message. If the receiver wanted to reply to the sender, the same would happen in reverse. The receiver would need to ask the sender to securely transmit their public key to them, which they could then use to encrypt a message to send back.

In practice, every client involved in an end-to-end encryption system will generate their own keypair that consists of a public and a private key. They will then advertise their public key to everyone, allowing anyone to encrypt a message that only they can decrypt (an example of this: my GPG key can be found here).
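
Here's a minimal sketch of the "public key encrypts, private key decrypts" idea, using RSA via Node.js' built-in crypto module purely for illustration - real end-to-end messaging systems are considerably more sophisticated than this:

// Asymmetric encryption illustration: only the holder of the private key can read the message
const crypto = require("crypto");

// The receiver generates a keypair and advertises the public key to everyone
const { publicKey, privateKey } = crypto.generateKeyPairSync("rsa", { modulusLength: 2048 });

// The sender encrypts a message with the receiver's *public* key...
const ciphertext = crypto.publicEncrypt(publicKey, Buffer.from("meet at 7pm?", "utf8"));

// ...the intermediary server only ever sees ciphertext...

// ...and only the receiver's *private* key can decrypt it
const plaintext = crypto.privateDecrypt(privateKey, ciphertext);
console.log(plaintext.toString("utf8")); // meet at 7pm?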

It is important to avoid confusing end-to-end encryption with transport layer encryption. Indeed, end-to-end encryption is absolutely no substitute for transport layer encryption, because an application may for example need to authenticate with the intermediary server before being allowed to transmit end-to-end encrypted messages.

Transport layer encryption:

  1. Allows 2 parties to communicate with each other securely
  2. Does not prevent the receiver from reading received data (even if device encryption is employed)

End-to-end encryption:

  1. Requires 3 parties to be involved in order to be effective
  2. Ensures that 2 parties can communicate securely through an intermediary party
  3. Requires that 2 parties wishing to communicate must first securely exchange their public keys and be confident that the public keys they have received actually belong to the other party they wish to communicate with
  4. Can be significantly complicated to implement

Conclusion

In this post, we've looked at 3 types of encryption, how they work, and when they are useful. To summarise:

  1. Device encryption protects data from physical theft
  2. Transport layer encryption protects data in transit between 2 communicating parties talking to each other directly
  3. End-to-end encryption protects the communications of 2 parties who are talking through 1 or more intermediary parties

Each of these is useful in different situations - and most likely they are already solved problems. Do not implement any of these yourself. Use well known, battle-tested libraries and programs for your platform that are regularly receiving updates instead.

While I've simplified this a lot in writing this post (we'd be here all week if I didn't!), I hope you've found it helpful (even if you're still a bit confused). This is a starting point, not an ending point - if this kind of thing interests you, I can recommend researching it further and playing around with some practical implementations thereof.

Please do comment below (especially if you've spotted a mistake)! It's very motivating to hear that the things I write here are actually helpful to people.


  1. In TLS, this is done using certificates. Each host has a list of certificate authorities (CAs) it trusts, and when a connection is initiated between a client and a server during the handshake, certificates signed by these CAs are exchanged securely and checked. In practice, generally only the server sends a certificate, which is then checked by the client - for example in HTTPS in web browsers. Server-to-server connections in a federated system (e.g. email) however give an opportunity to put this mutual authentication into action.

  2. SMTP is not simple. While it was simple once upon a time, unfortunately it was not designed with the modern web and security in mind (given that it was first invented in 1981, I'm not surprised). Since it was invented, a large number of additions (both standardised and otherwise) have been adopted, significantly complicating it. Setting up a mail server correctly and ensuring your emails are delivered properly is not a simple task. 

  3. See Snikket for a server, and Conversations for an Android client. See also the full client list

  4. Use a crypto library like your programming language's crypto built-ins. If your language doesn't have a built-in module and you've tried checking your package manager, try libsodium, bearssl, or openssl

Autoplant, Part 1: Overview

At a recent plant sale at my University, I bought myself both a parlour palm and an areca palm for my desk. They look lovely, but I always worry that I'm going to forget to water them.

The palms on my desk, with a blurred background

Having some experience with arduino already, I decided to resolve the issue by implementing an arduino-based system to monitor my new plants and log the resulting data in my existing collectd-based monitoring system (see also CGP, which I use - sadly it's abandonware, so any suggestions for alternatives are greatly appreciated) that I use for the various servers and systems that I manage.

The eventual aim is to completely automate the watering process too - but before I can do that I need to first get the monitoring up and running so that I can calibrate the sensors, so this is what I'll be focusing on in this post.

Circuit design

To do this, I've used a bunch of parts I have lying around (plus a few new ones), and wired up a NodeMCU v0.9 to some sensors:

The circuit I've designed. See below for explanation.

(Above: The circuit I've wired up. See the Fritzing file.)

A full list of parts can be found at the end of this post, along with links to where I got them from. The sensors I'm using are:

  • 2 x Capacitive soil moisture sensors
  • 2 x Liquid level sensors
  • 1 x BME280 that I had lying around and thought "why not?"

Both the soil sensors and the liquid level sensors give out an analogue signal (shown in orange), but unfortunately the NodeMCU v0.9 (based on the ESP8266) only has a single analogue pin, so I bought myself a CD4051 from switch electronics (link at the bottom of this post) to act as a multiplexer. Given 3 digital wires to act as a channel selector (shown in purple), it allows you to select between 8 different analogue channels - perfect for my use-case. You can see it in the diagram at the left-hand side of the larger breadboard.

While the Fritzing design for a USB breakout board isn't exactly the same as the ones I have, I plan on using them to transport power, ground, and the 2 analogue outputs from the 2 sensors for the plant on the other side of my desk.

The other component here is a BME280 I have lying around to monitor temperature, humidity, and air pressure. This isn't required, but since I have one lying around anyway I thought it was a good idea to use it.

Networking

With the circuit designed and implemented (still got to finalise the USB breakout boards), the next thing to organise is the transport and logging for the data generated. MQTT is easy to use on the Arduino because of PubSubClient, so that's what I decided on using.

Time for another diagram!

A brief overview of the networking setup I'll be going for with the backend. Subject to change.

(Can't see the above? Try a PNG instead.)

If you've followed my blog here for a while, you might remember that I have a cluster that's powered by a bunch of Raspberry Pis running Hashicorp Nomad and Consul.

On this cluster - amongst many other things - I have an MQTT server (check out my earlier post on how to set one up) running inside a Docker container as a Nomad task. Connections to my Mosquitto MQTT server are managed by Fabio, which reverse-proxies connections to Mosquitto. Fabio exposes 2 ports by which one can connect to the MQTT server:

  • TCP port 1883: Unencrypted MQTT
  • TCP port 8883: (TLS encrypted) MQTTS

In doing so, Fabio terminates TLS so that Mosquitto doesn't need to have access to my TLS certificates. For those interested, the proxy.addr in your fabio.properties file is actually a comma-separated list, and to achieve TLS termination for MQTTS you can add something like this to proxy.addr (make sure to use an up-to-date cipher list):

:1883;proto=tcp,:8883;proto=tcp;cs=mooncarrot;tlsmin=tls12;tlsciphers="TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305"

Then if we continue working backwards, the next piece of the puzzle is port forwarding on my router. While Fabio exposes both MQTT and MQTTS, I only port-forward MQTTS and not unencrypted MQTT. My autoplant system will only be communicating over MQTTS, so it doesn't make sense to expose the less secure unencrypted port to the world too.

From here, no matter where in the world my autoplant monitoring system is located it will still be able to send sensor values back to be logged.

Working forwards again from the Mosquitto MQTT server, the Arduino is going to send MQTT messages to the sensors/data topic with messages that look something like this (but minified):

{
    "id": "plant-a",
    "sensor": "soil",
    "value": 2.41
}

The MQTT plugin for Collectd doesn't support JSON input though, so I'm going to write a broker that translates messages from the JSON format above to the format that Collectd likes. My reasoning for this is twofold:

  1. I also want to log data to a tab-separated-values file for later long-term analysis
  2. The collectd format is somewhat complicated and has the potential to mess up other parts of my monitoring system, so I want to do some extra validation on incoming data from remote sensors

I haven't fully decided on which language I'm going to write this validating broker in, but I'm thinking it might end up being a shell script. Weird as it sounds, I found a cool MQTT library for Bash called bish-bosh which I want to try out. My second choice here would be Rust, but I unfortunately can't find a (preferably pure-rust) MQTT(S) library, which I'm finding rather strange.

Either way, if possible I'm going to package up the completed implementation of this broker into a Docker container and write a quick Hashicorp Nomad job file to get it running on my cluster, so that it benefits from the redundancy of my Nomad cluster. Then, with collectd listening on another topic, it can transparently bridge the 2. I'm not quite sure how collectd's MQTT plugin actually works though, so this shell script may end up being a child process of my main collectd server using the exec plugin instead.
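
To give a rough idea of the shape such a translation broker might take (regardless of which language it ends up being written in), here's a sketch in Javascript using the mqtt package from npm. The topic name, JSON fields, and the exact collectd PUTVAL line below are assumptions that would need checking against the real setup:

// Rough sketch of the translation broker: subscribe to the sensor topic,
// validate incoming JSON, and re-emit it in a collectd-friendly form.
const mqtt = require("mqtt");

const client = mqtt.connect("mqtts://mqtt.example.com:8883", {
    username: "autoplant",
    password: process.env.MQTT_PASSWORD
});

client.on("connect", () => client.subscribe("sensors/data"));

client.on("message", (_topic, payload) => {
    let msg;
    try { msg = JSON.parse(payload.toString("utf8")); }
    catch (_err) { return; } // drop anything that isn't valid JSON

    // Extra validation so that bad data from a remote sensor can't upset collectd
    if(typeof msg.id !== "string" || typeof msg.sensor !== "string"
        || typeof msg.value !== "number" || !Number.isFinite(msg.value)) return;

    // 1. Append to a tab-separated-values file for long-term analysis (fs.appendFile)
    // 2. Forward to collectd - e.g. via the exec plugin's PUTVAL format on stdout
    //    (the identifier / type used here is an assumption)
    console.log(`PUTVAL "${msg.id}/autoplant-${msg.sensor}/gauge" interval=60 N:${msg.value}`);
});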

Conclusion

In this post, I've outlined my solution to a seemingly simple problem of watering plants automatically (because all simple problems need complex solutions :P). With an arduino-based system, I'm going to send messages via MQTTS containing sensor data, which will then be passed to a backend for processing. In future posts in this series, I want to (in no particular order):

  • Go through the code for the arduino I've written
  • Look at the code I have yet to write for the translation broker
  • Explore the 3d printed parts I have yet to design for various purposes
  • Show off the final completed circuit in action
  • Look at some initial statistics collected by the sensors
  • Start to play with pumping water around and different holding containers / designs, etc

Not all of these warrant a separate post, but there are definitely multiple posts to come on this project :D

Full parts list

  • 1 x NodeMCU v0.9 (anything with WiFi will do, I just had an old one lying around)
  • 1 x Half size breadboard
  • 1 x Mini breadboard (the smallest I had)
  • 2 x Capacitive soil moisture sensor (source, significantly cheaper if you're willing to wait ages)
  • 2 x Waveshare liquid level sensor (source, original website)
  • 1 x CD4051 8:1 analogue multiplexer (source - breadboard compatible)
  • 2 x USB type a breakouts
  • 1 x BME 280 (I got mine cheap on AliExpress)
  • Lots of jumper wires

Backing up with tar, curl, and SFTP with key-based authentication

I have multiple backup strategies, from restic (which was preceded by duplicity) to btrfs snapshots that I sync over ssh. You can never have too many backups though (especially for your most valuable data that can't be easily replaced), so in this post I want to share another of the mechanisms I employ.

Backup systems have to suit the situation at hand, and in this case I have a personal git server which I back up daily to Backblaze B2. In order to be really absolutely sure that I don't lose it though, I also back it up to my home NAS (see also the series that I wrote on it). As you might have guessed by the title of this post, it takes backups using tar. I have recently upgraded it to transfer these backups over SFTP (SSH File Transfer Protocol).

Given that the sftp command exists, one might wonder why I use curl instead. Unfortunately, sftp as far as I can tell does not support uploading a file passed in through stdin - which is very useful when you have limited disk space on the source host! But using curl, we can pipe the output of tar directly to curl without touching the disk.

Documentation is sadly rather sparse on using curl to upload via SFTP, so it took some digging to figure out how to do it using SSH keys. SSH keys are considerably more secure than using a password (and a growing number of my systems are setup to disallow password authentication altogether), so I'll be using SSH key based authentication in this post.

To start, you'll need to generate a new SSH keypair. I like to use ed25519:

ssh-keygen -t ed25519

When prompted, choose where you want to save it to (preferably with a descriptive name), and then do not put a password on it. This is important, because at least in my case I want this to operate completely autonomously without any user input.

Then, copy the public SSH key to your remote server (I strongly recommend using an account that is locked to be SFTP-only and no shell access - this tutorial seems to be good at explaining the steps involved in doing this), and then on the device doing the backing up do a test to both make sure it works and add the remote server to the known_hosts file:

sudo -u backupuser bash
ssh -i path/to/keyfile -T remoteuser@remotehost

Now we've got our SSH / SFTP setup done, we can do the backup itself:

ionice -c Idle nice -n20 tar --create --exclude-tag .BACKUP_IGNORE --gzip --file - path/to/dir_to_backup | curl -sS --user "remoteuser:" --key "path/to/sshkey_ed25519" --pubkey "path/to/sshkey_ed25519.pub" -T - "sftp://example.com/path/on/remote/upload_filename.tar.gz"

Let's break this down a bit:

  • ionice -c Idle nice -n20: Lower the priority of the backup job - both for CPU and disk. Optional.
  • tar --create --exclude-tag .BACKUP_IGNORE --gzip --file - path/to/dir_to_backup: An example tar command that writes the archive to its standard output. Use whatever you want here.
  • --user "remoteuser:": The remoteuser bit there is the user to login to the remote host with. The bit after the colon is technically the password, but we're leaving that blank 'cause we're using SSH keys instead.
  • --key "path/to/sshkey_ed25519": The path to the SSH private key.
  • -T -: Upload the standard input instead of a file on disk
  • --pubkey "path/to/sshkey_ed25519.pub": The path to the SSH public key.
  • sftp://example.com/path/on/remote/upload_filename.tar.gz: The host to upload to and path thereon to upload the standard input to. If you need to specify a custom port here, do sftp://example.com:20202/path/blah/.... instead, where 20202 is your custom port number.

Personally, I'm using this technique with an SSH tunnel, so my variant of the above command looks a bit like this (extra bits around the edges stripped away for clarity):

git_backup_user="sftpbackups";
git_backup_location="sftp://localhost:20204/git-backups";
git_backup_key="path/to/sshkey_ed25519";
upload_filename="git-$(date +"%Y-%m-%d").tar.gz";

nice -n20 tar --create --exclude-tag .BACKUP_IGNORE --gzip --file - git/{data,gitea,repos}/ www/blog | curl -sS --user "${git_backup_user}:" --key "${git_backup_key}" --pubkey "${git_backup_key}.pub" -T - "${git_backup_location}/${upload_filename}"

That's it for this post. If you've got any questions or comments, please post them below.

lnav basics tutorial

Last year, I blogged about lnav. lnav is a fantastic tool for analysing log files, and after getting a question from CrimsonTome I thought I'd write up a longer-form tutorial on the basics of using it, as I personally find it exceedingly useful.

A screenshot of lnav at work

I'll be using an Ubuntu Server 20.04 instance for this tutorial, but anything Linuxy will work just fine. As mentioned in my previous post, it's available in the default repositories for your distribution. For apt-based systems, install like so:

sudo apt install lnav

Adjust for your own package manager. For example, pacman-based distributions should do this:

sudo pacman -S lnav

lnav operates on 1 or more input files. It's common to use logrotate to rotate log files, so this is what I'd recommend to analyse all your logs of a particular type in 1 go (here I analyse generic syslog logs):

lnav /var/log/syslog*

On your system you may need to sudo that. Once you've got lnav started, you may need to wait a moment for it to parse all the log files - especially if you have multi-million line logfiles.

After it's finished loading, we can get to analysing the logs at hand. The most recent logs appear at the bottom, and you'll notice that lnav will have coloured various parts of each log message - the reason for this will become apparent later on. lnav should also livestream new log lines from disk as they are written.

Use the arrow keys or scroll up / down to navigate log messages.

lnav operates via a command palette system, which may already be familiar to you if you use GitHub's Atom IDE (https://atom.io/) or Sublime Text (which is apparently where the feature originated). In lnav's case, it's also crossed with a simple shell. Let's start with the most important command: :filter-out.

To execute a command, simply start typing. Commands in lnav are prefixed with a colon :. :filter-out takes a regular expression as its only argument, and filters out and hides all log lines which match the given regular expression. Sticking with our earlier syslog theme, here's an example:

:filter-out kernel:

You'll notice that once you've finished typing :filter-out, lnav will show you some help in a pane at the bottom of the screen showing you how to use that command.

:filter-out has a twin that's also useful to remember: :filter-in. Unlike :filter-out, :filter-in does the opposite - anything that doesn't match the specified pattern is hidden from view. Very useful if you know what kind of log messages you're looking for, and they are a (potentially very small) subset of a much larger and more unstructured log file.

:filter-in dovecot:

To delete all existing filters and reset the view, hit Ctrl + R.

lnav has many other built-in commands. Check out the full reference here: https://docs.lnav.org/en/latest/commands.html.

The other feature that lnav comes with is also the most powerful: SQLite3 support. By parsing common log file formats (advanced users can extend lnav by defining their own custom formats, but the specifics of how to do this are best left to the lnav documentation), it can enable you to query your log files by writing arbitrary SQLite queries!

To understand how to query a file, first hit the p key. This will show you how lnav has parsed the log line at the top of the screen (scroll as normal to look at different lines, and hit p again to hide). Here's an example:

Using this information, we can then make an SQL query against the data. Press semicolon ; to open the SQL query prompt, and then enter something like this:

SELECT * FROM syslog_log WHERE log_procname == "gitea";

....hit the enter key when you're done composing your query, and the results should then appear! You can scroll through them just like you do with the regular log viewer - you just can't use :filter-in and :filter-out until you leave the query results window with the q key (this would be a really useful feature though!).

If you're running lnav on your Nginx logs (located in /var/log/nginx/ by default), then I find this query to be of particular use:

SELECT COUNT(cs_referer) AS count, cs_referer FROM access_log GROUP BY cs_referer ORDER BY COUNT(cs_referer) DESC

That concludes this basic tutorial on lnav. There are many more features that lnav offers:

  • :filter-expr for filtering the main view by SQL query
  • Analysing files on remote hosts over SSH
  • Search logs for a given string (press / and start typing)
  • Too many others to list here

Check out the full documentation here: https://docs.lnav.org/
