PhD Update 4: Hyper optimisation and frustration

Hello there again! It's been longer than I anticipated since the last proper post in this series. Before I continue, here's a list of all the (proper) posts in this series so far:

I haven't managed to get as much done since last time as I was hoping (partly due to the fact that I'm currently having to work from home, which is more challenging than I expected), but I have finished my implementation of the Temporal CNN, and am now working on hyperparameter optimisation. I've also fixed a number of issues in my rainfall radar data downloader and processing programs - which I'll talk about in more detail below.

HAIL-CAESAR and the iterative improvements

Someone at the University recently approached me (if you are reading this and have a blog, comment below and I'll update) to ask if they could use my rainfall radar data downloader program to download some rainfall radar data for their project. Naturally, I helped them out. This turned out to be a great thing for me as well, as with their help I managed to uncover a number of very nasty issues with the data pipeline I had been building up to that point:

  • The hydro index file that HAIL-CAESAR uses was completely scrambled
  • The date on the data downloaded was a month out
  • The data downloaded was (and still is) rotated by 90° on disk
  • The data was out by a factor of 32

While fixing each of these bugs was a (relatively) simple process, I can't help but wonder how they managed to escape my notice until (for all but 1 of them) someone else told me about them.

The other issue was that because of the amount of data I'm working with, it took forever to re-run the program to test to see if I had managed to fix the problem - and if I had, I'd encounter another problem. This long iteration process makes implementing a new feature or fixing a bug a very time-consuming process.

Despite fixing all these issues, I'm still experiencing issues with my latest refactoring of the rainfall radar data downloader (namely a hang in the event system when reading tar files). My current thinking is that I'm going to completely reimplement it (using snippets from the old program) if I need to use it again in the future, as it is currently neither particularly efficient (it's single-threaded) nor easy to bugfix (it's pretty complicated).

I've got an idea for a parallel system that processes each tar file separately first, and then only once all the tar files have been converted are they strung together into the actual files the existing implementation currently spits out.

Temporal CNN delight

Last time, I had just started my implementation of a Temporal CNN. This is now pretty much complete, and I've also been able to run it and get some results! Check out this graph:

A graph showing the root mean squared error while training my Temporal CNN implementation - more details below.

This graph shows the root mean squared error when training on 1000 time steps of data (about 3 days 11 hours or so). Epochs are along the X axis, and the root mean squared error is on the Y axis.

A few things to note here. Firstly, the implementation I've come up with essentially does video-to-image translation. The original model in the paper I've linked to demonstrates a classification task (specifically land use over time) - so what I'm doing is a little different.

Secondly, I've omitted the root mean squared error for the first epoch. It was so high that it made the rest of the graph impossible to see - hence the omission.

I'm pretty pleased with this result so far - as I have a nice downwards curve indicating that the model is (probably) learning something useful.

I am still rather nervous about the output though, as due to the way I've implemented the network I haven't actually been able to 'see' the output of the network at all as an image yet. Doing so would take a while to implement, so I haven't done so for now (although I really should do this soon). It would be really cool to see a short video (maybe at ~10fps) of the network output as the epochs move forwards to visualise the network training process.

Hyperparameter frustrations

Lastly, at the suggestion of my supervisor I've been working on hyperparameter optimisation. In short, this consists of training the model with random combinations of hyperparameters and seeing which ones work best.

A hyperparameter is a tunable parameter that controls an aspect of a model. In my case, I have 2 key hyperparameters I need to tune:

  • Filter count: CNN layers in Tensorflow.js have a filter count associated with them. I theorise that increasing this will increase the model's ability to learn spatial information.
  • Temporal depth: The number of time steps to push through the model at once. Increasing this will allow the model to make predictions based on events that occur further in the past.

My eventual aim here is to create a heatmap that has the above hyperparameters along the X and Y axes, and the colour showing the accuracy of the model that was trained - similar to the one I created previously.

To do this, I implemented a program that tries random combinations of hyperparameters - but never the same combination twice. It starts the model in a subprocess and passes the chosen filter count and temporal depth values in as CLI arguments, which the child process picks up, parses, and then trains a model based on. This CLI is the same one I developed to generate the graph in the previous section of this blog post.

This approach has the advantage that it isolates the model in a subprocess, so when the subprocess exits and a new one spawns for the next combination of hyperparameters, the environment is completely clean and there isn't anything that might interfere with it.
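
To illustrate the idea, here's a rough sketch of the spawning logic (the script name and CLI argument names here are invented for illustration - the real interface differs):

import child_process from 'child_process';

function train_model(filter_count, temporal_depth) {
    return new Promise((resolve, reject) => {
        // Hypothetical script name & CLI arguments - adjust to taste
        const child = child_process.spawn("node", [
            "src/index.mjs", "train",
            "--filter-count", filter_count.toString(),
            "--temporal-depth", temporal_depth.toString()
        ], { stdio: "inherit" }); // Let the child log straight to our terminal
        child.on("exit", (code) => {
            if(code === 0) resolve();
            else reject(new Error(`Child process exited with code ${code}`));
        });
    });
}

Because each combination runs in a brand-new process, any GPU or memory state left over from the previous run can't leak into the next one.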

Unfortunately though, while I set off a run of this implementation before I took a 'holiday' - and even checked on it to ensure it was running as expected (multiple times) - it still managed to crash when I wasn't looking.

After some debugging, I discovered that the problem was that the model ran out of memory while training. This was something I had expected - and the reason I used the --unhandled-rejections=strict option for Node.js, which tells Node.js to crash and exit when an UnhandledPromiseRejection is thrown - like this one:

2020-08-04 15:31:26.174395: W tensorflow/core/framework/op_kernel.cc:1651] OP_REQUIRES failed at conv_grad_ops_3d.cc:1783 : Resource exhausted: OOM when allocating tensor with shape[8,2,104,348,210] and type float on /job:localhost/replica:0/task:0/device:GPU:0 by allocator GPU_0_bfc
(node:62355) UnhandledPromiseRejectionWarning: Error: Invalid TF_Status: 8
Message: OOM when allocating tensor with shape[8,2,104,348,210] and type float on /job:localhost/replica:0/task:0/device:GPU:0 by allocator GPU_0_bfc
    at Object.<anonymous> (<anonymous>)
.....

Unfortunately though, I used this flag on the parent process (that drives the hyperparameter optimisation) and not the child process - leading to a situation whereby the child process crashed due to the aforementioned error and just hangs around doing nothing. Even more frustratingly, the solution is as simple as doing a quick export NODE_OPTIONS="--unhandled-rejections=strict" before running the hyperparameter optimisation program to ensure that the flag propagates to the child processes......

Very frustrating indeed - especially considering I calculate that it will take multiple weeks to gather enough data to create a meaningful heatmap.

Conclusion

Reading back over this post, I have got more done than I expected. I've started and finished my Temporal CNN implementation, and fixed lots of bugs in the existing code.

However, the long iteration times to test code I've written (despite using a small slice of the dataset to test with), the large datasets I'm working with, having to work on a system remotely via SSH by pushing and pulling code with git (many times), working from home all the time, and the continued bugs I've been facing and will likely continue to face have caused and are causing significant unexpected slowdowns moving forwards.

At least the VPN is no longer dropping out every 5 minutes!

Sources and Further Reading

Partitioning and mounting a new disk using LVM

As I've been doing my PhD, I've been acquiring quite a lot of data that needs storing. To this end, I have acquired a new 2 TiB hard drive for my Lab PC. Naturally, this necessitates formatting it so that I can use it. Since I've been using LVM (Logical Volume Management) for my OS disk, I decided to use it for my new disk too.

Unfortunately, I don't currently have GUI access to my Lab PC - instead for the past few months I've been limited to SSH access (which is still much better than nothing at all), so I can't really use any GUI tool to do this for me.

This provided me with a perfect opportunity to get into LVM through the terminal instead. As it turns out, it's not actually that bad. In this post, I'm going to take you through the process of formatting a fresh disk: from creating a partition table to mounting the LVM logical volume.

Let's start by partitioning the disk. For this, we'll use the fdisk CLI tool (install it on Debian-based systems with sudo apt install fdisk if it's not available already). It should be obvious, but for this tutorial root access is required to execute pretty much all the commands we'll be using.

Start fdisk like so:

sudo fdisk /dev/sdX

Replace X with the index of your disk (try lsblk - no sudo required - to list your disks). fdisk works a bit like a shell. You enter letters (or short sequences) followed by hitting enter to give it commands. Enter the following sequence of commands:

  • g: Create new GPT partition table
  • n: Create new partition (allow it to fill the disk)
  • t: Change partition type
    • L: List all known partition types
    • 31: Change to a Linux LVM partition
  • p: Preview final partition setup
  • w: Write changes to disk and exit

Some commands need additional information - fdisk will prompt you here.

With our disk partitioned, we now need to get LVM organised. LVM has 3 different key concepts:

  • Physical Volumes: The physical disk partitions it should use as a storage area (e.g. we created an appropriate partition type above)
  • Volume Groups: Groups of 1 or more Physical Volumes
  • Logical Volumes: The volumes that you use, format, and mount - they are stored in Volume Groups.

To go with these, there are also 3 different classes of commands that LVM exposes: pv* commands for Physical Volumes, vg* for Volume Groups, and lv* for Logical Volumes.

With respect to Physical Volumes, these are physical partitions on disk. For legacy MSDOS partition tables, these must have a partition type of 8e. For newer GPT partition tables (such as the one we created above), these need the partition id 31 (Linux LVM) - as described above.
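
If you prefer to initialise the Physical Volume explicitly first, pvcreate is the command for the job (as far as I'm aware, vgcreate will initialise it for you automatically if you skip this step):

sudo pvcreate /dev/sdXY
sudo pvdisplay

....replacing /dev/sdXY with the partition you created above. pvdisplay simply lists the Physical Volumes that LVM currently knows about.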

Next, we create a new volume group that holds our physical volume:

sudo vgcreate vg-pool-name /dev/sdXY

....replacing /dev/sdXY with the partition you want to add (again, lsblk is helpful here). Don't forget to change the name of the Volume Group to something more descriptive than vg-pool-name - though keeping the vg prefix is recommended.

If you like, you can display the current Volume Groups with the following command:

sudo vgdisplay

Then, create a new logical volume that uses all of the space in the new volume group:

sudo lvcreate -l 100%FREE -n lv-rocket-blueprints vg-pool-name

Again, replace vg-pool-name with the name of your Volume Group, and lv-rocket-blueprints with the desired name of your new logical volume. tldr (for which I review pull requests) has a nice page on lvcreate. If you have a tldr client installed, simply do this to see it:

tldr lvcreate

With our logical volume created, we can now format it. I'm going to format it as ext4, but you can format it as anything you like.

sudo mkfs.ext4 /dev/vg-pool-name/lv-rocket-blueprints

As before, replace vg-pool-name and lv-rocket-blueprints with your Volume Group and Logical Volume names respectively.

Finally, mount the newly formatted partition:

sudo mkdir /mnt/rocket-blueprints
sudo mount /dev/vg-pool-name/lv-rocket-blueprints /mnt/rocket-blueprints

You can mount it anywhere - though I'd recommend mounting it to somewhere in /mnt.

Auto-mounting LVM logical volumes

A common thing many (myself included) want to do is automatically mount an LVM partition on boot. This is fairly trivial to do with /etc/fstab.

To start, find the logical volume's id with the following command:

sudo blkid

It should be present as UUID="THE_UUID_HERE". Pick out only the UUID of the logical volume you want to automount here. As a side note, using the UUID is generally a better idea than the name, because the name of the partition (whether it's an LVM partition or a physical /dev/sdXY partition) might change, while the UUID always stays the same.

Before continuing, ensure that the partition is unmounted:

sudo umount /mnt/rocket-blueprints

Now, edit /etc/fstab (e.g. with sudo nano /etc/fstab), and append something like the following to the bottom of the file:

UUID=THE_UUID_YOU_FOUND_HERE    /mnt/rocket-blueprints  ext4    defaults,noauto 0   2

Replace THE_UUID_YOU_FOUND_HERE with the UUID you located with blkid, and /mnt/rocket-blueprints with the path to where you want to mount it to. If an empty directory doesn't already exist at the target mount point, don't forget to create it (e.g. with sudo mkdir /mnt/rocket-blueprints).

Save and close /etc/fstab, and then try mounting the partition using the /etc/fstab definition:

sudo mount /mnt/rocket-blueprints

If it works, edit /etc/fstab again and replace noauto with auto to automatically mount it on boot.

That's everything you need to know to get up and running with LVM. I've included my sources below - in particular check out the howtogeek.com tutorial, as it's not only very detailed, but it also has a cheat sheet containing most of the different LVM commands that are available.

Found this useful? Still having issues? Got a suggestion? Comment below!

Sources and further reading

Ensuring a Linux machine's network connection stays up with Bash

Recently, I had the unpleasant experience of my Lab machine at University dropping offline. It has a tendency to do this randomly - and normally I'd just reboot it myself, but since I'm working from home at the moment it meant that I couldn't go in to fix it. This unfortunately meant that I was stuck waiting for a generous technician to go in and reboot it for me.

With access now restored I decided that I really didn't want this to happen again, so I've written a simple Bash script to resolve the issue.

It works by checking for an Internet connection every hour by pinging starbeamrainbowlabs.com - and if it doesn't manage to do so successfully, then it will reboot. A simple concept, but I discovered a number of things that needed considering while writing it:

  1. To avoid detecting transient network issues, we should make multiple attempts before giving up and rebooting
  2. Those multiple attempts need to be delayed to be effective
  3. We mustn't reboot more than once an hour to avoid getting into a 'reboot loop'
  4. If we're running an experiment, we need a way to temporarily delay it from doing its checks that will resume automatically
  5. We could try and diagnose the network error or turn the networking off and on again, but if it gets stuck halfway through then we're locked out (very undesirable) - so it's easier / safer to just reboot

With these considerations in mind, I came up with this: ensure-network.sh (link to part of a GitHub Gist, as it's quite long)

This script requires Bash version 4+ and has a number of environment variables that can configure its behaviour:

Environment Variable Description
CHECK_EXTERNAL_HOST The domain name or IP address to ping to check the connection
CHECK_INTERVAL The interval to check the connection in seconds
CHECK_TIMEOUT Wait at most this long for a reply to our ping
CHECK_RETRIES Retry this many times before giving up and rebooting
CHECK_RETRY_DELAY Delay this many seconds in between retries
CHECK_DRY_RUN If true, then don't actually reboot (useful for testing)
CHECK_REBOOT_DELAY Leave at least this many minutes in between reboots
CHECK_POSTPONE_FILE If this file exists and has a recent last-modified time (mtime), don't actually reboot
CHECK_POSTPONE_MAXAGE The maximum age in minutes of the CHECK_POSTPONE_FILE to consider it fresh and avoid rebooting

With these environment variables, it covers the first 4 points in the above list. To expand on CHECK_POSTPONE_FILE, if I'm running an experiment for example and I don't want it to reboot in the middle of said experiment, then I can simply run touch /path/to/postpone_file to delay network connection-related reboots for 7 days (by default). After this time, it will automatically start rebooting again if it drops off the network. This ensures that it will always restart monitoring eventually - as if I had a more manual system I'd forget to re-enable it and then lose access.

Another consideration is that the /var/cache directory must exist. This is because an empty tracking file is created there to keep track of when the last network connection-related reboot occurred.
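
To give a flavour of how it works, here's a heavily simplified sketch of the core loop - the real script (linked above) also handles default values for these variables, the postpone file, reboot rate-limiting via the tracking file in /var/cache, and dry runs:

#!/usr/bin/env bash
# Simplified sketch only - see the GitHub Gist for the real script

check_connection() {
    local i;
    for ((i=0; i<CHECK_RETRIES; i++)); do
        # -c 1: send a single ping; -W: wait at most CHECK_TIMEOUT seconds for a reply
        if ping -c 1 -W "${CHECK_TIMEOUT}" "${CHECK_EXTERNAL_HOST}" >/dev/null 2>&1; then
            return 0;
        fi
        sleep "${CHECK_RETRY_DELAY}";
    done
    return 1;
}

while :; do
    sleep "${CHECK_INTERVAL}";
    if ! check_connection; then
        # The real script consults the postpone file & the time of the last reboot first
        reboot;
    fi
done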

With the script written, the next step is to have it run automatically on boot. For systemd-based systems such as my lab machine, a systemd service is the order of the day. This is relatively simple:

[Unit]
Description=Reboot if the network connection is down
After=network.target

[Service]
Type=simple
# Because it needs to be able to reboot
User=root
Group=root
EnvironmentFile=-/etc/default/ensure-network
ExecStartPre=/bin/sleep 60
ExecStart=/bin/bash "/usr/local/lib/ensure-network/ensure-network.sh"
SyslogIdentifier=ensure-access
StandardError=syslog
StandardOutput=syslog

[Install]
WantedBy=multi-user.target

(View the latest version in the GitHub Gist)

This assumes that the ensure-network.sh script is located at /usr/local/lib/ensure-network/ensure-network.sh. It also allows for an environment file to optionally be created at /etc/default/ensure-network, so that you can customise the parameters. Here's an example environment file:

CHECK_EXTERNAL_HOST=example.com
CHECK_INTERVAL=60

The above example environment file checks against example.com every minute instead of the default starbeamrainbowlabs.com every hour. You can, of course, specify any (or all) of the environment variables detailed above in the environment file if you wish.

That completes my setup - so hopefully I don't encounter any more network-related issues that lock me out of accessing my lab machine remotely! To install it yourself, you can do this:

# Create the directory for the script to live in
sudo mkdir /usr/local/lib/ensure-network
# Download the script & service file
sudo curl -L -o /usr/local/lib/ensure-network/ensure-network.sh https://gist.githubusercontent.com/sbrl/08e13f2ceedafe35ac7f8dbdfb8bfde7/raw/cc5ab4226472c08b09e448a257256936cc749193/ensure-network.sh
sudo curl -L -o /etc/systemd/system/ensure-network.service https://gist.githubusercontent.com/sbrl/08e13f2ceedafe35ac7f8dbdfb8bfde7/raw/adf5ed4009b3e1a09f857936fceb3581897072f4/ensure-network.service
# Start the service & enable it on boot
sudo systemctl daemon-reload
sudo systemctl start ensure-network.service
sudo systemctl enable ensure-network.service

You might need to replace the URLs there with the latest ones that download the raw content from the GitHub Gist.

Did you find this useful? Got a suggestion to make it better? Running into issues? Comment below!

PhD Update 4: Ginormous Data

Hello again! In the last PhD update blog post, I talked about patching HAIL-CAESAR to improve performance and implementing a Temporal Convolutional Neural Net (Temporal CNN).

Since making that post, I've had my PhD Panel 1 (very useful, thanks to everyone who was on that panel!). I've also got an initial - albeit untested - implementation of a Temporal CNN. I've also been wrangling lots of data in more ways than one. I'm definitely seeing the Big Data aspect of my project title now.

HAIL-CAESAR

I ran HAIL-CAESAR initially at 50m per pixel. This went ok and generated lots of data, but in 2 weeks of real time it barely hit 43 days worth of simulation time! The other issue I discovered was due to the way I compressed the output of HAIL-CAESAR: for some reason it compressed the output files before HAIL-CAESAR had finished writing to them. This resulted in the data being cut off randomly in the output files for each time step.

Big problem - clearly another approach is in order.

To tackle these issues, I've done several things. Firstly, I patched HAIL-CAESAR again to support writing the output water depth files to the standard output. As a refresher, they are actually identical in format to the heightmap, which looks a bit like this:

ncols 4
nrows 3
xllcorner 400000
yllcorner 300000
cellsize 1000
1 2 3 4
1 1 2 3
0 1 1 2

The above is a 4x3 grid of points, with the bottom-left corner being at (400000, 300000) on the Ordnance Survey National Grid (I know, latitude / longitude would be so much better, but all the data I'm working with is on the OS national grid :-/). Each point represents a 1km square area.

To this end, I realised that it doesn't actually matter if I concatenate multiple files in this format together - I can guarantee that I can tell them apart. As soon as I detect a metadata line that the current file has already declared, I know that we've started reading the next file along. With this in mind, I implemented a new Terrain50.ParseStream() function: an async generator that takes a stream and iteratively yields the Terrain50 instances it parses out of it. In this way, I can split 1 big continuous stream back up again into the individual parts.
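
As an illustration of the splitting logic, the idea looks something like this (a simplified sketch, not the actual Terrain50.ParseStream() implementation - it yields blocks of raw lines rather than parsed instances):

import readline from 'readline';

const METADATA_KEYS = [ "ncols", "nrows", "xllcorner", "yllcorner", "cellsize" ];

// Start a new block whenever a metadata key we've already seen appears again
async function* split_concatenated(stream) {
    const reader = readline.createInterface({ input: stream, crlfDelay: Infinity });
    let seen = new Set(), current = [];
    for await (const line of reader) {
        const key = line.trim().split(/\s+/)[0];
        if(METADATA_KEYS.includes(key) && seen.has(key)) {
            // This metadata line belongs to the next file in the stream
            yield current;
            current = []; seen.clear();
        }
        if(METADATA_KEYS.includes(key)) seen.add(key);
        current.push(line);
    }
    if(current.length > 0) yield current;
}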

By patching HAIL-CAESAR such that it outputs the data in 1 continuous stream, it also means that I can pipe it to a single compression program. This has 2 benefits:

  • It avoids the "compressing the individual files before HAIL-CAESAR is ready" problem (the observant might note that inotifywait would solve this issue neatly too, but it isn't installed on Viper)
  • It allows for more efficient compression, as the compression program can use data from other time step files as context

Finding a compression tool was next. I wanted something CPU efficient, because I wanted to ensure that the maximum number of CPU cycles were dedicated to HAIL-CAESAR for running the simulation, rather than compressing the output it generates - since it is the bottleneck after all.

I ended up using lz4, an extremely fast compression algorithm. It compiles easily too, which is nice as I needed to compile it from source automatically on Viper.
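
Usage-wise, lz4 behaves much like gzip in a pipe: it compresses its standard input to its standard output. The HAIL-CAESAR invocation below is purely illustrative, but the general shape of the command looks like this:

./HAIL-CAESAR parameter_file.txt | lz4 > water_depths.asc.lz4
# ....and to inspect the output again later:
lz4 -d < water_depths.asc.lz4 | head -n 20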

With all this in place, I ran HAIL-CAESAR again 2 more times. The first run was at the same resolution as before, and generated 303 GiB (!) of data.

The second run was at 500m per pixel (10 times lower resolution), which generated 159 GiB (!) of data and, by my calculations, managed to run through ~4.3 years in simulation time in 5 days of real time. Some quick calculations suggest that to get through all 13 years of rainfall radar data I have it would take just over 11 days, so since I've got everything setup already, I'm going to be contacting the Viper administrators to ask about running a longer job to allow it to complete this process if possible.

Temporal CNN Preprocessing

The other major thing I've been working on since the last post is the Temporal CNN. I've already got an initial implementation setup, and I'm currently in the process of ironing out all the bugs in it.

I ran into a number of interesting bugs. One of these was to do with incorrectly specifying the batch size (due to a typo), which resulted in the null values you may have noticed in the model summary in the last post. With those fixed, it looks much more sensible:

_________________________________________________________________
Layer (type)                 Output shape              Param #   
=================================================================
conv3d_1 (Conv3D)            [32,2096,3476,124,64]     16064     
_________________________________________________________________
conv3d_2 (Conv3D)            [32,1046,1736,60,64]      512064    
_________________________________________________________________
conv3d_3 (Conv3D)            [32,521,866,28,64]        512064    
_________________________________________________________________
pooling (AveragePooling3D)   [32,521,866,1,64]         0         
_________________________________________________________________
reshape (Reshape)            [32,521,866,64]           0         
_________________________________________________________________
conv2d_output (Conv2D)       [32,517,862,1]            1601      
_________________________________________________________________
reshape_end (Reshape)        [32,517,862]              0         
=================================================================
Total params: 1041793
Trainable params: 1041793
Non-trainable params: 0
_________________________________________________________________

This model is comprised of the following:

  • 3 x 3D convolutional layers
  • 1 x pooling layer to average out the temporal dimension
  • 1 x reshaping layer to remove the redundant dimension
  • 1 x 2D convolutional layer that will produce the output
  • 1 x reshaping layer to remove another redundant dimension

I'll talk about this model in more detail in a future post in this series once I've managed to get it running and I've played around with it a bit.

Another significant bug I ran into was to do with stacking tensors like an image. I ended up asking on Stack Overflow: How do I reorder the dimensions of a rank 3 tensor in Tensorflow.js?

The input to the above model is comprised of a sliding window that moves along the rainfall radar time steps. Each time step contains a 2D array, representing the amount of rain that has fallen in a given area. This needs to be combined with the heightmap, so that the AI model knows what the terrain that the rain is falling on looks like.

The heightmap doesn't change, but I'm including a copy of it with every rainfall radar time step because of the way the 3D convolutional layer works in Tensorflow.js. 2D convolutional layers in Tensorflow.js, for example, take in a 2D array of data as a tensor. They can also take in multiple channels though, much like pixels in an image. The pixels in an image might look something like this:

R1 G1 B1 A1 R2 G2 B2 A2 R3 G3 B3 A3 .....

As you might have seen in the Stack Overflow answer I linked to above, Tensorflow.js does support stacking multiple 2D tensors in this fashion. It is unfortunately extremely slow however. It is for this reason that I've been implementing a multi-process program to preprocess the data to do this stacking in advance.
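
For reference, the stacking itself looks something like this in Tensorflow.js (the shapes here are made up, and this is the approach that proved too slow on my data):

import * as tf from '@tensorflow/tfjs-node'; // or tfjs-node-gpu

// 2 rank-2 tensors of shape [height, width] - e.g. a rainfall radar frame & the heightmap
const rainfall = tf.randomNormal([ 100, 100 ]);
const heightmap = tf.randomNormal([ 100, 100 ]);

// Stack them as channels along a new trailing axis, giving shape [height, width, 2]
const stacked = tf.stack([ rainfall, heightmap ], 2);

// Equivalent route via transpose: [2, height, width] -> [height, width, 2]
const reordered = tf.transpose(tf.stack([ rainfall, heightmap ]), [ 1, 2, 0 ]);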

As I'm writing this though, I've finally understood what the dataFormat option in the conv3d and conv2d layers is for, and I think I might have been barking up the wrong tree......

What's next

From here, I'm going to investigate that dataFormat option for the TemporalCNN - it would hugely simplify my setup and remove the need for me to preprocess the data beforehand, since stacking tensors directly 1 after another is very quick - it's just stacking them along a different dimension that's slow.

I'm also hoping to do a longer run of that 500m per pixel HAIL-CAESAR simulation. More data is always good, right? :P

After I've looked into the dataFormat option, I'd really like to get the Temporal CNN set off training and all the bugs ironed out. I'm so close I can almost taste it!

Finally, if I have time, I want to go looking for a baseline model of sorts. By this, I mean an existing model that's the closest thing to the task I'm doing - even though they might not be as performant or designed for my specific task.

Found this interesting? Got a suggestion of something I could do better? Confused about something I've talked about? Comment below!

PhD Aside: Reading a file descriptor line-by-line from multiple Node.js processes

Phew, that's a bit of a mouthful. We're taking a short break from the cluster series of posts (though those will be back next week I hope), because I've just run into a fascinating problem, the solution to which I thought I'd share here - since I didn't find a solution elsewhere on the web.

For my PhD, I've got a big old lump of data, and it all needs preprocessing before I train an AI model (or a variant thereof, since I'm effectively doing video-to-image translation). Unfortunately, one of the preprocessing steps is really slow. And because I'll naturally be training my AI for multiple epochs, the problem is multiplied.....

The solution, of course, is to do all the preprocessing up front such that I can just read the data in and push it directly into a Tensor in the right format. However, doing this on such a large dataset would take forever if I did the items 1 by 1. The thing is that Javascript isn't inherently multithreaded. I like this quote, as it describes the situation rather well:

In Javascript everything runs in parallel... except your code

--Felix Geisendörfer

In other words, when Node.js is reading or writing to and from the network, disk, or other places, it can do lots of things at the same time because it does them asynchronously. The Javascript that gets executed though is only done on a single thread.

This is great for io-bound tasks (such as a web server), as Node.js (a Javascript runtime) can handle many requests at the same time. On a side note, this is also the reason why Nginx is more efficient than Apache (because Nginx is event based too like Javascript, unlike Apache which is thread based).

It's not so great though for CPU bound tasks, such as the one I've got on my hands. All is not lost though, because Node.js has a number of useful functions inbuilt that we can use to tackle the issue.

Firstly, Node.js has a clever forking system. By using child_process.fork(), a single Node.js process can create multiple copies of itself to act as workers:

// main.js
import child_process from 'child_process';
import os from 'os';

let workers = [];

for(let i = 0; i < os.cpus().length; i++) {
    workers.push(
        child_process.fork("worker.mjs")
    );
}
// worker.js
console.log(`Hello, world from a child process!`);

Very useful! The next much more sticky problem though is how to actually preprocess the data in a performant manner. In my specific case, I'm piping the data in from a shell script that decompresses a number of gzip archives in a specific order (as of the time of typing I have yet to implement this).

Because this is a single pipe we're talking about here, the question now arises of how to allow all the child processes to access the data that's coming in from the standard input of the master process.

I've actually encountered an issue like this one before. I initially tried reading it in on the master process, and then using worker.send(message) to send it to the worker processes for processing. This didn't end up working very well, because the master process became a bottleneck as it couldn't read from the standard input and send stuff to the workers fast enough.

With this in mind, I came up with a new plan. In Node.js, when you're forking to create a worker process, you can supply it with some custom file descriptors upon initialisation. So long as it has at least an IPC (inter-process communication) channel for passing messages back and forth with the .send() method and .on("message", (message) => ....) listeners, it doesn't actually care what you do with the others.

Cue file descriptor cloning:


// main.js
import child_process from 'child_process';
import os from 'os';

let workers = [];

for(let i = 0; i < os.cpus().length; i++) {
    workers.push(
        child_process.fork("worker.mjs", [], {
            stdio: [ 0, 1, 2, "ipc" ] // Clone stdin (0), stdout (1), & stderr (2); "ipc" keeps the messaging channel
        })
    );
}

The key line here is the stdio option (line 10 of the block above). Here we tell it to clone file descriptors 0, 1, and 2 - which refer to stdin, stdout, and stderr respectively. This allows the worker processes direct access to the master process' stdin, stdout, and stderr.

With this, we can read from the same pipe with as many worker processes as we like - so long as they do so 1 at a time.

With this sorted, it gives rise to the next issue: reading line-by-line. Packages exist on npm (such as nexline, my personal favourite) to read from a stream line-by-line, but they have the unfortunate side-effect of maintaining a read buffer. While this is great for performance, it's not so great in my situation because it ends up scrambling the input! This is because said read buffer would be local to each worker process, so when the next worker along reads, it will skip a random number of bytes and start reading from the next bit along.

This means that I need to implement a custom method that reads a single line from a given file descriptor without maintaining a read buffer. I came up with this:

import fs from 'fs';

//  .....

// Global buffer to avoid unnecessary memory churn
let buffer = Buffer.alloc(4096);
function read_line_unbuffered(fd) {
    let i = 0;
    while(true) {
        let bytes_read = fs.readSync(fd, buffer, i, 1);
        if(bytes_read !== 1 || buffer[i] == 0x0A) {
            if(i == 0 && bytes_read !== 1) return null; // Nothing read at all - we've hit the end of the input
            return buffer.toString("utf-8", 0, i); // This is not inclusive, so we can abuse it to trim the \n off the end
        }

        i++;
        if(i == buffer.length) {
            let new_buffer = Buffer.alloc(Math.ceil(buffer.length * 1.5)); // Buffer.alloc, as new Buffer() is deprecated
            buffer.copy(new_buffer);
            buffer = new_buffer;
        }
    }
}

I read from the given file descriptor character by character directly into a buffer. As soon as it detects a new line character (\n, or character code 0x0A), it returns the new line. If we run out of space in the buffer, then we create a new larger one, copy the old buffer's contents into it, and keep going.

I maintain a global buffer here, because this helps to avoid unnecessary memory churn. In my case, the lines I'm reading in are rather long (hence the need to clone the file descriptor in the first place), and if I didn't keep a shared buffer I'd be allocating and deallocating a new pretty large buffer every time.

This also has the nice side-effect that we keep the largest buffer we've had to use so far around for next time, avoiding the need for subsequent copies to larger and larger buffers.

Finally, we can also guarantee that it won't be a problem if we call this multiple times, because as I explained above Javascript is single-threaded, so if we call the function multiple times in quick succession each read will happen 1 after another.

With this chain of Node.js features, we can read a large amount of data from a pipe and efficiently process its content. The trick from here is to implement a proper messaging and locking system to avoid reading from the stream at the same time, and to avoid writing to the standard output at the same time.
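
As a rough sketch of the kind of locking I mean (this isn't the actual implementation - the message names and the process_line() function are invented), the master process can hand a 'read token' to 1 worker at a time over the IPC channel:

// In the master process: grant the read token to 1 waiting worker at a time
let queue = [], token_held = false;

function grant_next() {
    if(token_held || queue.length === 0) return;
    token_held = true;
    queue.shift().send({ type: "read_token" });
}

for(let worker of workers) {
    worker.on("message", (msg) => {
        if(msg.type === "read_request") { queue.push(worker); grant_next(); }
        if(msg.type === "read_done") { token_held = false; grant_next(); }
    });
}

// In the worker process: ask for the token, read a single line, then hand it back
process.on("message", (msg) => {
    if(msg.type !== "read_token") return;
    const line = read_line_unbuffered(0); // 0 = stdin, cloned from the master
    process.send({ type: "read_done" });  // Release the token before doing the slow bit
    if(line === null) return;             // End of input
    process_line(line);                   // process_line() is hypothetical
    process.send({ type: "read_request" }); // Then ask for another turn
});
process.send({ type: "read_request" });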

Taking this further, I ended up with this:

(Licence: Mozilla Public Licence 2.0)

This correctly ensures that only 1 worker process reads from the stream at the same time. It doesn't do anything with the result though except log a message to the console, but when I implement that I'll implement a similar messaging system to ensure that only 1 process writes to the output at once.

On that note, my data is also ordered, so I'll have to implement a complicated cache / ordering system to ensure that I write the results to the standard output in the same order I read them in. When I do implement that, I'll probably blog about it too....

The main problem I still have with this solution is that I'm reading from the input stream just 1 byte at a time. I haven't done any proper testing, but I'm pretty sure that doing so will be really slow. I'm not sure I can avoid this and read a few KiBs at a time though, because I don't currently know of any way to put the extra characters back into the input stream.

If anyone has a solution to that that increases performance, I'd love to know. Leave a comment below!

PhD Update 3: Simulating simulations with some success

Hey there! Welcome to another PhD update blog post. The last time I posted, I was still working away at getting the rainfall radar data downloader working as intended.

Thankfully, since then I've managed to get it to complete (wow, that took much longer than expected) - and I've now turned my attention to running the physics-based simulation, and beginning to implement the AI(s) that will (hopefully) implicitly learn the parameters of the model in question.

Physics-based simulation patching

Getting to this point, as you might imagine, wasn't quite as straight-forward as I initially thought. The physics-based model I'm (currently) using is HAIL-CAESAR, a (supposedly) high-performance version of CAESAR-Lisflood (yes, it's on SourceForge shudder). Unfortunately, the format for the rainfall data that it takes in is especially space inefficient - after writing a converter, I found that my 4.5GiB compressed JSON stream files (1 per year) would have turned into about 66GB of uncompressed ASCII (theoretically speaking, by my calculations)! I don't have that much disk space free - so clearly another approach is in order.

This approach I speak of is convincing HAIL-CAESAR to take the data in via the standard input. I initially tried using a FIFO (also known as a named pipe), but I ran into this bug in Node.js.

HAIL-CAESAR doesn't support taking the data in on the standard input by default though, so I had to patch it to add support. I did this by getting it to interpret the filename "-" to mean "use the standard input instead", which from my previous experience seems to be an unofficial convention that lots of other programs follow. Perhaps at some point soon I should consider contributing my patch back to HAIL-CAESAR for others to enjoy.

Heightmap tweaking

With that sorted, I also had to mess around with the heightmap I obtained (I got this through my University's "Digimap" service thingy) to get it to be precisely the same size as the rainfall radar data I have.

It turned out that the service I got the heightmap from isn't smart enough to give you precisely the bit you asked for - instead giving you just the tiles that overlap the area you specify. In the end I found myself with ~170 separate tiles (some of which I had to get after the fact because I found I was missing some) - so I ended up implementing a program to stitch them all back together again.

That program ended up becoming much more complete as a standalone tool than I thought it would. I'm pretty sure that these heightmap files I've been dealing with are in a standard format, but I'm not aware of its name (if you know, I'd love to hear from you - post a comment below!). It's for these reasons that I ended up releasing it as a pair of packages on npm.

You can find them here:

  • terrain50 - OS Terrain 50 manipulation library
  • terrain50-cli - Command-line interface for the above to make it easy to manipulate heightmaps from the command line

I'll probably make a separate blog post about them at some point soon. For the curious, the API docs (there's a link in the README of the library package too) are automatically updated with my Laminar CI setup :D

Tensor trouble

It is with a considerable amount of anticipation that I'm finally reaching the really interesting part of this experiment. This week, I've started work on implementing a Temporal Convolutional Neural Network (see also this paper). A Temporal CNN is a network type I discovered recently that takes advantage of multiple 3-dimensional CNN layers to allow a CNN-based model to learn temporal-based relationships in a dataset.

I'm not sure how well it's going to work on my particular dataset, given that the existing papers I've found on it use it for classification-based tasks, but I'm pretty hopeful that, with some tweaking, it should perform pretty well. While I haven't yet finished writing up the dataset input logic, I have implemented the core model using the Tensorflow.js layers API:

(embedded asciicast recording of the model implementation)

In the end I've decided to give Tensorflow.js another go (I don't think I've mentioned it before, but I attempted to use it for my Master's summer project, and it didn't work out so well), since I realised that I've implemented a good portion of the data processing code in Javascript (Node.js) already (as mentioned above). Interestingly, HAIL-CAESAR spits out files in the same format as the heightmap I've been working with, which makes processing even easier!

What's next

From here, I intend to finish up my Temporal CNN implementation and get it running on the data I have so far from the HAIL-CAESAR model (which unfortunately isn't a lot - so far I've only got ~8K 5-minute time-steps worth of output which, if I'm calculating correctly, is just 29 days worth of simulation). I'm probably going to have to swap HAIL-CAESAR out at some point though, because it's really slow. Or perhaps I just don't know how to use it properly (maybe I should find someone more experienced with it and ask them first).

Anyway, I'm also going to try implementing a model inspired by the Google rainfall radar nowcasting paper I mentioned in my last post in this series. With both of these implemented, I can start to compare them and see which one is better suited for the task of flood prediction. I might even implement the Grid LSTM model I saw too.

In addition, I have my PhD panel 1 review coming up soon too - so apparently I've got a list of things I need to do to prepare for that - including writing a ~5K word report. I'll probably do this pretty soon - I don't want to be rushing it at the last minute.

Found this interesting? Got a suggestion? Want to say hi? Comment below!

PhD Update 2: The experiment, the data, and the supercomputers

Welcome to another PhD update post. Since last time, a bunch of different things have happened - which I'll talk about here. In particular, 2 distinct strands have become evident: The reading papers and theory bit - and the writing code and testing stuff out bit.

At the moment, I'm focusing much more heavily on the writing code and experimental side of things, as I've recently gained access to the 1km resolution rainfall radar dataset from CEDA. While I'm not allowed to share any data that I've now got, I'm pretty sure I'm safe to talk about how terribly it's formatted.

The data itself is organised with 1 directory per year, and 1 file per day inside those directories. Logical so far, right? Each of those files is a tar archive, inside which are the binary files that contain the data itself - with 1 file for every 5 minutes. These are compressed with gzip - which seems odd, since they could probably gain greater compression ratios if they were tarred first and compressed second (then the compression algorithm would be able to compress based on the similarities between files too).

The problems arise when you start to parse out the binary files themselves. They are in a proprietary format - which has 3 different versions that don't conform to the (limited) documentation. To this end, it's been proving somewhat of a challenge to parse them and extract the bits I'm interested in.

To tackle this, I've been using Node.js and a bunch of libraries from npm (noisy pirate mutiny? nearest phase modulator? nasty popsicle machine? nah, it's probably node package manager):

  • binary-parser - For parsing the binary files themselves. Allows you to define the format of the file programmatically, and it'll parse it out into a nice object you can then manipulate (see the sketch just after this list).
  • gunzip-maybe - A streaming library that unzips a gzip-compressed stream
  • @icetee/ftp - An FTP client library for downloading the files (I know that FTP is insecure, that's all they offer at this time :-/)
  • tar-stream - For parsing tar files
  • nnng - Stands for No! Not National Grid!. Helps with the conversion between OS national grid references and regular longitude latitude.
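
To give a flavour of binary-parser, defining a format looks something like this (the field names and layout here are completely made up - the real radar format is rather more complicated):

import { Parser } from 'binary-parser';

// Hypothetical format: a big-endian 16-bit record count, followed by that many 32-bit values
const example_parser = new Parser()
    .uint16be("record_count")
    .array("records", { type: "uint32be", length: "record_count" });

// some_buffer would be a Buffer containing the decompressed binary data
const result = example_parser.parse(some_buffer);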

Aside from the binary file format, I encountered 3 main issues:

  1. The data is only a rectangle when using ordnance survey national grid references
  2. There's so much data, it needs to be streamed from the remote server
  3. Generating a valid gzip file is harder than you expect

Problem 1 here took me a while to figure out. Since as I mentioned the documentation is rather limited, I spent much longer than I would have liked attempting to parse the data in latitude longitude and finding it didn't work.

Problem 2 was rather interesting. Taking a cursory glance over the data beforehand revealed that each daily tar file was about 80MiB - and with roughly 5.7K days worth of data (the dataset appears to go back to May 2004-ish), it quickly became clear that I couldn't just download them all and process them later.

It is for this reason that I chose Node.js in the first place for this. For those who haven't encountered it before, it's Javascript for the server - and it's brilliant for 2 main use-cases: networking and streaming data. Both of which were characteristics of the problem at hand - so the answer was obvious.

I'm still working on tweaking and improving my final solution, but as it stands, after implementing the extractor on its own, I've also implemented a wrapper that streams the tar archives from the FTP server, stream-reads the tar archives, streams the files in the tar archives into a gzip decompressor, parses the result, and then streams the interesting bits to disk as a disk object via a gzip compressor.

That's a lot of streams. The great part about this is that I don't accidentally end up with huge chunks of binary files in memory. The only bits that can't be streamed are the binary file parser and the bit that extracts the interesting bits.
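
To sketch what that chain looks like (parse_and_save() and ftp_download_stream here are stand-ins for the real components):

import { pipeline } from 'stream';
import tar from 'tar-stream';
import gunzip from 'gunzip-maybe';

const extract = tar.extract();
extract.on("entry", (header, entry_stream, next) => {
    // Each tar entry is a gzip-compressed binary radar file
    pipeline(
        entry_stream,
        gunzip(),
        parse_and_save(header.name), // Hypothetical writable stream: parses the binary format & writes the interesting bits out
        (error) => next(error)
    );
});

// ftp_download_stream stands in for the download stream returned by the FTP client
pipeline(ftp_download_stream, extract, (error) => {
    if(error) console.error("Failed to process tar archive:", error);
});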

I'm still working on the last issue, but I've been encountering nasty problems with the built-in zlib gzip compressor transformation stream. When I send a SIGINT (Ctrl + C) to the Node.js process, it doesn't seem to want to finish writing the gzip file correctly - leading to invalid gzip files with chunks missing from the end.

Since the zlib gzip transformation stream is so badly documented, I've ended up replacing it with a different solution that spawns a gzip child process instead (so you've got to have gzip installed on the machine you're running the script on, which shouldn't be a huge deal on Linux). This solution is better, but still requires some tweaks because it transpires that Node.js automatically propagates signals it receives to child processes - before you've had a chance to tie up all your loose ends. Frustrating.

Even so, I'm hopeful that I've pretty much got it to a workable state for now - though I'll need to implement a daemon-type script at some point to automatically download and process the new files as they are uploaded - it is a living dataset that's constantly being added to after all.

Papers

The other strand (that's less active at the minute) is reading papers. Last time, I mentioned the summary papers I'd read, and the direction I was considering reading in. Since then, I've both read a number of new papers and talked to a bunch of very talented people on-campus - so I've got a little bit of a better idea as to the direction I'm headed in now.

Firstly, I've looked into a few cutting-edge recurrent neural network types:

  • Grid LSTMs - Basically multi-dimensional LSTMs
  • Diluted LSTMs - Makes LSTMs less computationally intensive and better at learning long-term relationships
  • Transformer Neural Networks - more reading required here
  • NARX Networks

Many of these recurrent neural network structures appear to show promise for mapping floods. The last experiment into a basic LSTM didn't go too well (it turned out to be hugely computationally expensive), but learning from that experiment I've got lots of options for my next one.

A friend of mine managed to track down the paper behind Google's AI blog post - which turned out to be an interesting read. It transpires that despite the bold words in the blog post, the paper is more of an initial proposal for a research project - rather than a completed project itself. Most of the work they've done is actually using a traditional physics-based model - which they've basically thrown Google-scale compute power at to make pretty graphs - which they've then critically evaluated and identified a number of areas in which they can improve. They've been a bit light on details - which is probably because they either haven't started - or don't want to divulge their sekrets.

I also saw another interesting paper from Google entitled "Machine Learning for Precipitation Nowcasting from Radar Images", which I found because it was reported on by Ars Technica. It describes the short-term forecasting of rain from rainfall radar in the US (I'm in the UK) using a convolutional neural network-based model (specifically U-Net, apparently - I have yet to read up on it).

The model they use is comprised in part of convolutional neural network (CNN) layers that downsample 256x256 tiles to a smaller size, and then upscale it back to the original size. It has some extra connections that skip part of the model too. They claim that their model manages to improve on existing approaches for up to 6 hours in advance - so their network structure seems somewhat promising as inspiration for my own research.

Initial thoughts include theories as to whether I can use CNN layers like this to sandwich a more complex recurrent layer of some description that remembers the long-term relationships? I'll have to experiment.....

Found this interesting? Got a suggestion? Confused on a point? Comment below!

PhD Update 1: Directions

Welcome to my first PhD update post. I intend to post these at bimonthly intervals. In the last post, I talked a bit about my PhD project that I'm doing and my initial thoughts. Since then, I've done heaps of investigation into a number of different potential directions I could take the project. For reference, my PhD title is actually as follows:

Using the Internet of Things, Big Data, and AI to dynamically map flood risk.

There are 3 main elements to this project:

  • Big Data
  • Artificial Intelligence (AI)
  • The Internet of Things (IoT)

I'm pretty sure that each of them will have an important role to play in the final product - even if I'm not sure what those roles are just yet :P

Particularly of concern at the moment is this blog post by Google. It talks about how they've managed to significantly improve flood forecasting with AI, along with a seriously impressive visualisation to back it up - but I can't find a paper on it anywhere. I'm concerned that anything I try to do in the area won't be useful if they are already streets ahead of everyone else like that.

I guess one of the strong points I should try to hit is the concept of explainable AI if possible.

All the data sources!

As it stands right now, I'm currently evaluating various different potential data sources that I've managed to gain access to. My aim here is to evaluate how useful they will be in solving the wider problem - and whether they are useful enough to be worth investigating further.

Environment Agency

Some great people from the environment agency came into University recently to chat with us about what they did. The discussion we had was very interesting - but they also asked if there was anything they could do to help our PhD projects out.

Seeing the opportunity, I jumped at the chance to get a hold of some of their historical datasets. They actually maintain a network of high-quality sensors across the country that monitor everything from rainfall to river statistics. While they have a real-time API that you can use to download recent measurements, it doesn't appear to go back further than March 2017. To this end, I asked for data from 2005 up to the end of 2017, so that I could get a clearer picture of the 2007 and 2013 floods for AI training purposes.

So far, this dataset has proved very useful, at least initially, as a testbed for training various kinds of AI as I learn PyTorch (see my recent post for how that has been going - I've started with a basic LSTM first). For reference, an LSTM is a neural network architecture that is good at processing time-series data - but is quite computationally expensive to run.

Met Office

I've also been investigating the datasets that the Met Office provide. These chiefly appear to be in the form of their free DataPoint API. Particularly of interest are their rainfall radar images, which are 500x500 pixels and are released every 15 minutes. Sadly they are only available for a few hours at best, so you have to grab them fast if you want to be able to analyse particularly interesting ones later.

Annoyingly though, their API does not appear to give any hints as to the bounding boxes of these images - and neither can I find any information about this online. I posted in their support forum, but it doesn't appear that anyone actually monitors it - so at this point I suspect that I'm unlikely to receive a response. Without knowing the (lat, lng) co-ordinates of the images produced by the API, they are little more use than pretty wall art.

Internet of Things

On the Internet of Things front, I'm already part of Connected Humber, which has a network of sensors set up that are monitoring everything from air quality to temperature, humidity, and air pressure. While these things aren't directly related to my project, the dataset that we're collecting as a group may very well come in handy as an input to a model of some description.

I'm pretty sure that I'll need to setup some additional custom sensors of my own at some point (probably soonish too) to collect the measurement readings that I'm missing from other pre-existing datasets.

Reading a library

Whilst I've been doing this, I've also been reading up a storm. I've started by reading into traditional physics-based flood modelling simulations (such as caesar-lisflood) - which appear to fall into a number of different categories, which also have sub-categories. It's quite a rabbit hole - but apparently I'm diving all the way down to the very bottom.

The most interesting paper on this subject I found was this one from 2017. It splits physics-based models up into 3 categories:

  • Empirical models (i.e. ones that just display sensor readings, calculate some statistics, and that's about it)
  • Hydrodynamic models - the best-known models that simulate water flow etc - can be categorised as either 1D, 2D, or 3D - also very computationally expensive - especially in higher dimensions
  • Simplified conceptual models - don't actually simulate water flow, but efficient enough to be used on large areas - also can be quite inaccurate with complex terrain etc.

As I'm going to be using artificial intelligence as the core of my project, it quickly became evident that this is just stage-setting for the actual kind of work I'll be doing. After winding my way through a bunch of other less interesting papers, I found my way to this paper from 2018 next, which is similar to the previous one I linked to - just for AI and flood modelling.

While I haven't yet had a chance to follow up on all the interesting papers referenced, it has a number of interesting points to keep in mind:

  • Artificial Intelligences need lots of diverse data points to train well
  • It's important to measure a trained network's ability to generalise what it's learnt to other situations it hasn't seen yet

The odd thing about this paper is that it claims that regular neural networks were better than recurrent neural network structures - despite the fact that it is only citing a single old 2013 paper (which I haven't yet read). This led me on to read a few more papers - all of which were mildly interesting and had at least something to do with neural networks.

I certainly haven't read everything about flood modelling and AI yet, so I've got quite a way to go in this department. Also of interest are 2 newer neural network architectures, which I'm currently reading about.

Next steps

I want to continue to read about the above neural networks. I also want to implement a number of the networks I've read about in PyTorch to continue to learn the library.

Lastly, I want to continue to find new datasets to explore. If you're aware of a dataset that I haven't yet talked about on here, comment below!

PyTorch and the GPU: A tale of graphics cards

Recently, I've been learning PyTorch - which is an artificial intelligence / deep learning framework in Python. While I'm not personally a huge fan of Python, it seems to be the only library of its kind out there at the moment (and Tensorflow.js has terrible documentation) - so it would seem that I'm stuck with it.

Anyway, as I've been trying to learn it I inevitably came to the bit where I need to learn how to take advantage of a GPU to accelerate the neural network training process. I've been implementing a few test networks to see how it performs (my latest one is a simple LSTM, loosely following this tutorial).

In PyTorch, this isn't actually done for you automatically. The basic building blocks of PyTorch are tensors (potentially multi-dimensional arrays that hold data). Each tensor is bound to a specific compute device - by default the CPU, in which case the data is stored in regular RAM. To do the calculations on a graphics card, you need to move the data into the GPU's own memory so that the GPU can access it and do the calculation. The same goes for any models you create - they have to be explicitly loaded onto the GPU in order to run the calculations in the right place. Thankfully, this is fairly trivial:

import torch

tensor = torch.rand(3, 4)          # create a tensor in regular RAM (on the CPU)
tensor = tensor.to(COMPUTE_DEVICE) # copy it to the target compute device

....where COMPUTE_DEVICE is the PyTorch device object you want to load the tensor onto. I found that this works quite well for determining which device the data should be loaded onto:

COMPUTE_DEVICE = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
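The same pattern applies to models too. As a quick illustrative sketch (the nn.Linear layer here is just a stand-in for a real model), moving a model and its input to the same device looks like this:

import torch
import torch.nn as nn

model = nn.Linear(4, 1).to(COMPUTE_DEVICE) # stand-in for a real model
batch = torch.rand(16, 4).to(COMPUTE_DEVICE)

prediction = model(batch) # the calculation now runs on COMPUTE_DEVICE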

Unfortunately, PyTorch (like most other AI frameworks out there) only supports a technology called CUDA for GPU acceleration. This is a proprietary Nvidia technology - which means that you can only use Nvidia GPUs for accelerated deep learning. Since I don't actually own an Nvidia GPU (far too expensive, and in my current laptop I have an AMD Radeon R7 M445 - and I don't plan on spending large sums of money to replace a perfectly good laptop), I've been investigating hardware at my University that I can use for development purposes - since this is directly related to my PhD after all.

Initially, I found a machine with an Nvidia GeForce GTX 650 in it. If you run torch.cuda.is_available(), it will tell you if CUDA is available or not:

print(torch.cuda.is_available()) # Prints True if CUDA is available

.....but, as always, there's got to be a catch. Just because CUDA is available doesn't mean that PyTorch can actually use it. After a bunch of testing, it transpired that PyTorch only supports CUDA devices with a compute capability of 3.5 or higher - and the GTX 650 has a compute capability of just 3.0. You can see where this is going. I found this webpage helpful - it lists all of Nvidia's GPUs and their CUDA compute capabilities.
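Alternatively, PyTorch can report the compute capability of a detected device directly - here's a quick sketch, assuming device 0 is the GPU in question:

if torch.cuda.is_available():
    major, minor = torch.cuda.get_device_capability(0)
    print(f"Compute capability: {major}.{minor}") # e.g. 3.0 for a GTX 650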

You can also get PyTorch to tell you more about the CUDA device it has found:

def display_compute_device():
    """Displays information about the compute device that PyTorch is using."""

    # plain print() with end="" stands in for my own log() helper here
    print(f"Using device: {COMPUTE_DEVICE}", end="")
    if COMPUTE_DEVICE.type == 'cuda':
        print(" {0} [Memory: {1}GB allocated, {2}GB cached]".format(
            torch.cuda.get_device_name(0),
            round(torch.cuda.memory_allocated(0)/1024**3, 1),
            round(torch.cuda.memory_cached(0)/1024**3, 1) # renamed to memory_reserved() in newer PyTorch versions
        ))

    print()

If you execute the above function, it will tell you more about the compute device it has found. Note that you can actually make use of multiple compute devices at the same time - I just haven't done any research into that yet.

Crucially, it will also generate a warning message if your CUDA device is too old. To this end, I'll be doing some more investigation into the resources that the Department of Computer Science has available for PhD students to use....

If anyone knows of an artificial intelligence framework that can take advantage of any GPU (e.g. via OpenCL, oneAPI, or other similar technologies), do get in touch. I'm very interested to explore other options.

Starting my PhD on the mapping of flooding

Specifically, using new technologies such as AI and the Internet of Things to map and predict where it's going to flood in real-time.

This year, I'm starting a 3 year funded PhD on dynamic flood risk mapping, as part of a cluster of water-related PhDs that are all being funded at the same time. I've got some initial ideas as to the direction I'm going to take it too, which I'd like to talk a little bit about in this post.

I've got no shortage of leads as to potential data sources I can explore to achieve this. Just some of the avenues I'm exploring include:

  • Monitoring street drains
  • Fitting local council vehicles with sensors
  • Analysing geotagged tweets with natural language processing techniques
  • Predicting rainfall with aggregate mobile phone signal strength information
  • Vehicle windscreen wipers
  • Setting up static sensors? Not sure on this one.

Also, I've talked to a few people and thought of some pre-existing data sources that might come in useful:

  • Elevation maps
  • Vegetation maps
    • Normalised difference vegetation index - Map of vegetation density that's already available
    • Normalised difference water index - detects water on the surface of the earth - including that contained within leaves
  • River levels

Finally, I've been thinking about what I'm going to be doing with all this data I'd potentially be collecting. First and foremost, I'm going to experiment with InfluxDB as a storage mechanism. Apparently it's supposed to be able to handle high read and write loads, so it sounds suitable at first glance.
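As a rough sketch of what writing a reading to InfluxDB might look like with the influxdb Python client (against an InfluxDB 1.x instance - the database, measurement, tag, and field names here are all made up for illustration):

from influxdb import InfluxDBClient

client = InfluxDBClient(host="localhost", port=8086, database="flooding")

client.write_points([{
    "measurement": "river_level",             # hypothetical measurement name
    "tags": { "station": "example_station" }, # hypothetical tag
    "fields": { "level_m": 1.42 },            # hypothetical reading
    "time": "2020-01-01T12:00:00Z"
}])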

Next, I'm probably going to wind up training an AI - possibly incrementally - to predict flooding. Unlike my summer project, I'm probably going to be using a more cutting-edge and exotic AI architecture.

I suspect I might end up with a multi-level system too - whereby I pre-analyse the incoming data and process it into a format that the AI will take. For example, if I end up using geotagged social media posts, those will very likely filter through an AI of some description that does the natural language processing first - the output of which will be (part of) the input (or training output?) for the next AI in the chain.

I've given some thought to training data too. My current thinking is that while some data sources might be used as inputs to a network of interconnected AIs, others are more likely to take on a corrective role - i.e. improving the accuracy of the AI and correcting the model to fit a situation as it unfolds.

All this will not only require huge amounts of data, but will also require a sophisticated system that's capable of training an AI on past datasets as if they were happening in real-time. I suppose in a way the training process is a bit like chronological history lessons at speed - catching the AI up to the present day so that it can predict flood risk in real-time.
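To make the "chronological history lessons" idea a little more concrete, here's a minimal sketch of the kind of replay mechanism I have in mind - the record format (a dict with a timestamp key) is an assumption for illustration:

def replay(records):
    """Yield historical records in chronological order, as if they were arriving live."""
    for record in sorted(records, key=lambda record: record["timestamp"]):
        yield record

# Hypothetical usage: feed each replayed record to an incremental training step
# for record in replay(historical_records):
#     training_step(record)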

Before all this though, my first and foremost task will be to analyse what people have done already to map and predict flood risk. Only by understanding the current state of things will I be able to improve on what's already out there.

Found this interesting? Got any tips? Comment below!
