
Resizing Encrypted LVM Partitions on Linux

I found recently that I needed to resize some partitions on my new laptop, as the Ubuntu installer helpfully decided to create only a 1GB swap partition - which is nowhere near enough for hibernation (you need a swap partition that's at least as big as your computer's RAM in order to hibernate). Unfortunately, resizing my swap partition didn't allow me to hibernate successfully in the end, but I thought I'd still document the process here for future reference, should I ever need to do it again.

The key problem with resizing one's root partition is that you can't resize it without unmounting it, and you can't unmount it without turning off your computer. To get around this, we need to use a live distribution of Ubuntu. It doesn't actually matter how you boot into this - personally my preferred method is a multiboot USB flash drive, but you could just as well flash the latest Ubuntu ISO to a flash drive directly.
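
For example, flashing an ISO to a flash drive from the terminal looks something like this (the ISO filename here is just an example, and triple-check the target device name first - dd will happily overwrite the wrong disk without asking):

sudo dd if=ubuntu-20.10-desktop-amd64.iso of=/dev/sdX bs=4M status=progress conv=fsync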

Before you start though, it's worth mentioning that you really should have a solid backup strategy. While everything will probably be fine, there's a chance you'll make a mistake and wind up losing a lot of data. My favourite website that illustrates this is The Tao of Backup - everyone who uses a computer (technically minded or not) should read it. Another way to remember it is the 3-2-1 rule: 3 backups, in 2 locations, with 1 off-site (i.e. in a different physical location).

Anyway, once you've booted into a live Ubuntu environment, open the terminal and start a root shell. Your live distribution should come with LUKS and LVM already, but just in case it doesn't, execute the following:

sudo apt update && sudo apt install -y lvm2 cryptsetup

I've talked about LVM recently, when I was setting up an LVM-managed partition on an extra data hard drive for my research data. If you've read that post, then the process here may feel a little familiar to you. In this case though, we're interacting with a pre-existing LVM setup that's encrypted with LUKS, rather than setting up a new one. The overall process looks a bit like this:

A flowchart showing the process we're going to follow. In short: open luks → LVM up → make changes → LVM down → close luks → reboot

With this in mind, let's get started. The first order of business is unlocking the LUKS encryption on the drive. This is done like so:

sudo modprobe dm-crypt
sudo cryptsetup luksOpen /dev/nvme0n1p3 crypt1

The first command there ensures that the LUKS kernel module is loaded if it isn't already, and the second unlocks the LUKS-encrypted drive. Replace /dev/nvme0n1p3 with the path to your encrypted partition - /dev/sda1, for instance. The second command will prompt you for the passphrase to unlock the drive.

It's worth mentioning here, before continuing, the difference between physical partitions and LVM partitions. Physical partitions are those found in the partition table on the physical disk itself - the kind you'd see in a partition manager like GParted.

LVM partitions - for the purpose of this blog post - are those exposed by LVM. They are virtual partitions that don't have a physical counterpart on disk and are handled internally by LVM. As far as I know, you can't ask LVM easily where it stores them on disk - this is calculated and managed automatically for you.
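
Once LVM is up (which we'll do in a moment), LVM's reporting commands give a quick read-only overview of how the physical and logical sides relate on your own system - handy if you're unsure what you're looking at:

sudo pvs    # physical volumes: the real partitions LVM sits on top of
sudo vgs    # volume groups: pools of space built from physical volumes
sudo lvs    # logical volumes: the 'virtual partitions' LVM exposes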

In order to access our logical LVM partitions, the next step is to bring up LVM. To do this, we need to get LVM to re-scan the available physical partitions since we've just unlocked the one we want it to use:

sudo vgscan --mknodes

Then, we activate it:

sudo vgchange -ay

At this point, we can now do our maintenance and make any changes we need to. A good command to remember here is lvdisplay, which lists all the available LVM partitions and their paths:

sudo lvdisplay

In my case, I have /dev/vgubuntu/root and /dev/vgubuntu/swap_1. tldr-pages (for which I'm a maintainer) has a number of great LVM-related pages that were contributed relatively recently which are really helpful here. For example, to resize a logical LVM partition to be a specific size, do something like this:

sudo lvresize -L 32G /dev/vgubuntu/root

To extend a partition to fill all the remaining available free space, do something like this:

sudo lvextend -l +100%FREE /dev/vgubuntu/root

After growing a partition, don't forget to run resize2fs. It ensures that the ext4 filesystem on top matches the new size of the logical LVM partition (if you're shrinking rather than growing, the filesystem needs to be shrunk before the logical volume instead - lvresize's --resizefs flag handles the ordering for you in either direction):

sudo resize2fs /dev/vgubuntu/root

In all of the above, replace /dev/vgubuntu/root with the path to the logical LVM partition in question, of course.
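
To illustrate how these fit together, here's a sketch of what growing one's swap space at the expense of the root partition might look like, using the volume names from my system above (it's only a sketch, so adjust the names and sizes to suit, and double-check everything before hitting enter):

sudo lvresize --resizefs -L -16G /dev/vgubuntu/root   # shrink the ext4 filesystem, then the logical volume
sudo lvresize -L +16G /dev/vgubuntu/swap_1            # grow the swap logical volume
sudo mkswap /dev/vgubuntu/swap_1                      # recreate the swap signature at the new size

Note that mkswap generates a new UUID, so if your /etc/fstab refers to the swap space by UUID, you'll need to update it afterwards.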

Once you're done making changes, stop LVM and close the LUKS-encrypted disk, to ensure that all the changes are flushed to disk properly. This is done like so:

sudo vgchange -an
sudo cryptsetup luksClose crypt1

With that, you're done! You can now reboot / shutdown from inside the live Ubuntu environment and boot back into your main operating system. All done!

Found this helpful? Encountering issues? Comment below! It really helps my motivation.

The Spaceship | Why open-source is important

It was a day like any other: Captain Ralph of the small cargo ship SS carryalot was taking a shipment of bavarian noodles to Epsilon V in time for their annual noodle festival with his partner, Egbert the Engineer.

Unfortunately though, he was having some FTL drive trouble. An hour or two previously he had docked with the layover station Alpha 9 to refuel, but shortly afterwards the FTL drive abruptly cut out - leaving them stranded in space - and displayed an ominous-looking triangle logo on the maintenance screen.

After trying the steps in the troubleshooting guide, he and his engineer were no further forward, so they called the official helpline of Shade Co, the manufacturer of the engine, and explained the problem. The Shade Co representative offered to organise a tow to the nearest official Shade Co repair station to investigate the issue, but noted that they would charge a fee for this service.

Grumbling about delays, Ralph accepted the offer. Once they reached the repair station, a Shade Co engineer investigated the issue. A quick fix was not to be found though, as the Shade Co engineer said that there wasn't anything wrong with the engine mechanically, and that Ralph must have installed some unofficial firmware.

In fact, Ralph and Egbert had not installed any unofficial firmware, as a Shade Co software developer later verified. After a number of days of debugging and testing by that developer (who had to be flown in all the way from Delta IV), it was discovered that the engine had actually been infected with ransomware, which had taken over and encrypted all the engine's hyperparameters through a bug in the firmware that had been introduced 5 years prior - yet nobody had noticed the issue previously.

Since the Shade Co engine was configured to only accept official firmware signed with the Shade Co firmware signing key, the software developer had to get the engineer to extract the engine, and send it back to HQ for analysis.

Fed up with the situation with Shade Co, Ralph got a new Teklacki engine fitted on the double. The new engine had the open-source light-rider firmware installed, and he took out a support contract with Teklacki - just in case anything went wrong.

Thankfully, Ralph and Egbert were able to deliver their cargo of bavarian noodles a few days late, but in time for the finale of the noodle festival.

5 copies of a pixel art spaceship launching from a launch pad, acting as a horizontal rule. Artwork by me - ask for permission before reproducing

It sounds like a problem straight out of a science-fiction novel, but Ralph's issue is one that is unfortunately all too common. I can't remember who said it, but I'll always remember this quote I heard while doing my undergraduate Computer Science course:

All software that is sufficiently complex to be useful, will contain bugs

This is true - but in the case of closed-source software (as in the fictional Shade Co's FTL drive firmware), nobody else is able to audit the software and check it for security issues - or any other kind of bug, for that matter. Open software (and open hardware) is a critically important issue, and one I'm rather passionate about.

Of course, open-source software can contain bugs too (e.g. the heartbleed bug in OpenSSL), but the difference is both in the probability that such bugs will be found, and in the choice and control that users have over how they use and configure such software.

Support contracts, which are popular with commercial closed-source software, are essential to businesses - but they don't have to be exclusive to closed-source software. Canonical, for example, provide commercial support for Ubuntu (a distribution of Linux), along with their Landscape management service.

Another example of this problem is the recent issues with the Intel Management Engine. For those not in the know, Intel CPUs run a complex piece of closed-source firmware called the Intel Management Engine. Bugs in this mean that a clever attacker can access everything running on your computer at all times - yet for all the security audits I'm sure they will have done, nobody else (such as independent security researchers) can see the source code. This leaves it in a race between malicious entities and security researchers doing black-box testing to see who finds a flaw first.

The same goes with hardware as well as software. If the design of the hardware is open, then it makes it possible for others to analyse it, find and fix bugs, and contribute new features.

These are just some of the reasons that I really believe that open-source is the answer to a number of key issues with software and hardware today. It's not a perfect silver bullet - unmaintained open-source projects can be a nightmare (just not as much of a nightmare as an abandoned closed-source program), and governance of an open-source project can be really challenging (thanks so much to @waldyrious for teaching me about this while I've been a maintainer for tldr-pages!).

All things considered though, I believe it's a better approach than locking software behind paywalls and hoping that nobody will discover a flaw in it. After all, security through obscurity is never a good idea.

Pixel art by me: Ask before copying or reproducing.

Running multiple local versions of CUDA on Ubuntu without sudo privileges

I've been playing around with Tensorflow.js for my PhD (see my PhD Update blog post series), and I had some ideas that I wanted to test out on my own that aren't really related to my PhD. In particular, I've found this blog post to be rather inspiring - where the author sets up a character-based recurrent neural network to generate text.

The idea of transcoding all those characters to numerical values and back seems like too much work and too complicated just for a quick personal project though, so my plan is to try and develop a byte-based network instead, in the hopes that I can not only teach it to generate text as in the blog post, but valid Unicode as well.

Obviously, I can't really use the University's resources ethically for this (as it's got nothing to do with my University work) - so since I got a new laptop recently with an Nvidia GeForce RTX 2060, I thought I'd try and use it for some machine learning instead.

The problem here is that Tensorflow.js requires CUDA 10.0 specifically, but since I'm running Ubuntu 20.10 with all the latest patches installed, I have CUDA 11.1. A quick search of the apt repositories on my system reveals nothing to suggest I can install older versions of CUDA alongside the newer one, so I had to devise another plan.

I discovered some months ago (while working with Viper - my University's HPC - for my PhD) that you can actually extract - without sudo privileges - the contents of the CUDA .run installers. By then fiddling with your PATH and LD_LIBRARY_PATH environment variables, you can get any program you run to look for the CUDA libraries elsewhere instead of loading the default system libraries.

Since this is the second time I've done this, I thought I'd document the process for future reference.

First, you need to download the appropriate .run installer for the CUDA libraries. In my case I need CUDA 10.0, so I've downloaded mine from here:

https://developer.nvidia.com/cuda-10.0-download-archive?target_os=Linux&target_arch=x86_64&target_distro=Ubuntu&target_type=runfilelocal

Next, we need to create a new subdirectory and extract the .run file into it. Do that like so:

cd path/to/runfile_directory;
mkdir cuda-10.0
./cuda_10.0.130_410.48_linux.run --extract=${PWD}/cuda-10.0/

Make sure that the current working directory contains no spaces, and preferably no other special characters either. Also, adjust the file and directory names to suit your situation.

Once done, this will have extracted 3 sub-installers - which also have the suffix .run. We're only interested in CUDA itself, so we only need to extract the one that starts with cuda-linux. Do that like so (adjusting file / directory names as before):

cd cuda-10.0;
./cuda-linux.10.0.130-24817639.run -noprompt -prefix=$PWD/cuda;
rm *.run;
mv cuda/* .;
rmdir cuda;

If you run ./cuda-linux.10.0.130-24817639.run --help, the output is actually somewhat deceptive - since there's a typo in the help text! I've corrected it in the command above though. Once done, this should leave the CUDA libraries in a subdirectory next to the original .run file:

+ /path/to/some_directory/
    + ./cuda_10.0.130_410.48_linux.run
    + cuda-10.0/
        + version.txt
        + bin/
        + doc/
        + extras/
        + ......

Now, it's just a case of fiddling with some environment variables and launching your program of choice. You can set the appropriate environment variables like this:

export PATH="/absolute/path/to/cuda-10.0/bin:${PATH}";
if [[ ! -z "${LD_LIBRARY_PATH}" ]]; then
    export LD_LIBRARY_PATH="/absolute/path/to/cuda-10.0/lib64:${LD_LIBRARY_PATH}";
else
    export LD_LIBRARY_PATH="/absolute/path/to/cuda-10.0/lib64";
fi

You could save this to a shell script (putting #!/usr/bin/env bash before it as the first line, and then running chmod +x path/to/script.sh), and then execute it in the context of the current shell for example like so:

source path/to/activate-cuda-10.0.sh
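
To double-check that the extracted copy is the one being picked up, you can ask nvcc (the CUDA compiler driver that lives in the bin/ directory we extracted) for its version - it should report release 10.0 rather than whatever is installed system-wide:

nvcc --version    # should mention "release 10.0"
which nvcc        # should point into the extracted cuda-10.0/bin directory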

Many deep learning applications that use CUDA also use CuDNN, a deep learning library provided by Nvidia for accelerating deep learning applications. The archived versions of CuDNN can be found here: https://developer.nvidia.com/rdp/cudnn-archive

When downloading (you need an Nvidia developer account, but thankfully this is free), pay attention to the CuDNN version being requested in the error messages generated by your application, and take care to pick the build that matches the version of CUDA you're using.

When you download, select the "cuDNN Library for Linux" option. This will give you a tarball, which contains a single directory cuda. Extract the contents of this directory over the top of your CUDA directory from following my instructions above, and it should work as intended. I used my graphical archive manager for this purpose.
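
If you'd prefer to do it from the terminal instead, something along these lines should work - the tarball name here is just an example, so adjust it to match the file you actually downloaded:

cd path/to/runfile_directory;
tar -xzvf cudnn-10.0-linux-x64-v7.6.5.32.tgz -C cuda-10.0/ --strip-components=1;    # strips the leading cuda/ directory so the files land in cuda-10.0/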

PhD Update 6: The road ahead

Hey, I'm back early with another post in my PhD series! Turns out there was a bit of a mix-up last time, and I misplaced update 4 - so I've renamed the duplicate update 4 to update 5. Before we continue, here are all the posts in this series so far:

Earlier today, I had my PhD panel 2. For those reading who don't know, the primary assessments for a PhD (at least in my case) take the form of 6 panels - in which you write a big report, and then your supervisors get together with you and discuss it. The even numbered ones at the end of each year are the more important ones, I'm led to believe - so I was understandably nervous.

Thankfully, it went well, and I ended up writing a report that's a third longer than my undergraduate dissertation at ~9k words O.o Anyway, I wanted to make another post in this series, as the process of writing the report (and research plan) for my panel has made me look at the bigger picture of where my PhD is going, and how it's all going to tie together - and I wanted to share this here.

Temporal CNN

In the last few posts, I've given some insights into the process I've been working through to train a Temporal CNN to predict the output of HAIL-CAESAR. This is starting to reach its conclusion - though there are a number of tasks I have yet to complete to close this chapter of the story. In particular, I've (finally!) got the results from the hyperparameter optimisation I've been doing. Let's check out a heatmap:

The aforementioned heatmap - explained below.

I plotted the above with GNUPlot, but ran into a number of issues (of course) while generating it, which I'd like to briefly discuss here. If you're interested in a more detailed discussion of this, please get in touch (I'll also need to know your real name and where you're working / studying, just in case) via a private communication method - such as my email address on my website.

Firstly, those who are particularly perceptive will notice the rather strange filename for the above image. As indicated, I ran into some instability in my early stopping algorithm. For each epoch, the algorithm I implemented looked at the error from 3 epochs ago - and if that was greater than the error value for the epoch that had just finished, it allowed training to continue. Unfortunately, due to the nature of the task at hand, this sometimes caused it to stop training too soon. Further analysis of the results revealed that it sometimes allowed training to continue for too long as well (which is reflected in the above chart), but I'm baffled on that one.

For this reason, all hyperparameter combinations that didn't train for quite long enough were omitted in the above graph.

Secondly, there's a large area of the chart that isn't filled in. This is because both increasing the number of filters and the temporal depth increases memory usage - and the area that's unfilled is the area in which it failed to train because it ran out of GPU memory (I used 4 x Nvidia GeForce V100 - thanks very much to my department for providing this!).

Finally, the most important thing to note is that after reviewing other similar models, I discovered that this really isn't the best way to evaluate the performance of this kind of model. Rather, it would be better to treat it as a classification-based task instead. This can be achieved by creating a number of different bins for the water depth values, and assigning a probability to each pixel for each water depth bin. The technical name for this is cross-entropy loss as far as I'm aware, which in my case I've been tending to prefix with pixel-based.

In this fashion, I can use things like a confusion matrix (I can't find a good explanation of this, so I'll talk about it in a future blog post) to evaluate the performance of the model - which allows me to see the model's strengths and weaknesses. What is it good at, and where does it struggle? I hope to find out - though I'm a little nervous about it, as I haven't yet looked into how to generate one with Tensorflow.js, or tested my initial cross-entropy loss implementation with my kind of dataset.

Having a probability-based system like this should also make the output more useful. By this, I mean that probabilities assigned to different water depths should be easier to interpret than a single value that gives no indication of how the model came to its conclusion, or how uncertain it is about the result.

Let's take a look at 1 more set of graphs before we move on:

4 graphs showing the training / validation root mean squared error - see below for explanation.

In this set of 4 graphs, I've taken the training and validation root mean squared error from 4 different combinations of hyperparameters. These rather effectively illustrate the early stopping instability mentioned above (top left), and they also demonstrate some instability in the training process (all graphs). I'm guessing the latter is probably caused by insufficient training data - this I can remedy quite easily, as I have plenty more besides the 5K time steps I trained on (~1.5M time steps, to be precise), but in the interests of time I limited the training dataset so that I could train a variety of different models.

It also shows the shortcomings of the root mean squared error approach I've been using so far - is an average error of 2 biased towards deeper or shallower water? What can it predict well, and what does it struggle with?

My supervisor also wants me to write a publication on the Temporal CNN stuff I've done when it's complete, so that's going to be an interesting new experience for me too (I'm both excited and terribly nervous at the same time).

Social Media Analysis

Moving forwards, I'm hoping to complement my Temporal CNN model with some social media analysis (subject to ethical approval of course - filling out the paperwork for this is part of my task list for the rest of this week). If I get the go-ahead, my intention is to bring some natural language processing AI model to the subject of flood mapping using social media - as I've noticed that there's.... limited existing material on this so far.

I have yet to read up properly on the subject, but most recently I've found this paper rather interesting. It uses an unsupervised model to identify the topic(s) a tweet is talking about, which they then follow up with some statistical analysis.

I'm a little fuzzy on how they identify where tweets are talking about (especially since this paper mentions that only a very small percentage of tweets have a geotag attached, and even of these a fraction will be wrong or misleading for 1 reason or another), but the researchers put together a rather nice map by chaining together several statistical techniques I'm currently unfamiliar with. It shows hot spots and cold spots, which indicate where the damage from an earthquake has occurred.

If possible, it would be very nice indeed to plot a flood on a map (stretch goal: in real-time!). I anticipate this to be a key issue I'll need to pay attention to.

In terms of my most immediate starting point, I'm going to be doing some more reading on the subject, and then have a chat with my supervisor about the next steps (she's particularly knowledgeable about natural language processing :D).

Conclusion

In conclusion, I'm starting to come to the end of the Temporal CNN chapter of my PhD, although I suspect it's going to keep coming back over and over again to say hello. Moving forwards, I'm hoping to complement work I've done so far with some social media analysis (subject to ethical approval) using AI-based Natural Language Processing - which will probably consist of improving an existing model or something more directly computer science related (my existing work is classified more as a contribution to environmental sciences, apparently).

Look out for more posts in this series in the future, as I'm sure I'll have plenty to talk about soon (I might do another one in a month's time, or it might end up being nearer 2 months depending on circumstances).

Proteus VIII Laptop from PC Specialist in Review

Recently I bought a new laptop from PC Specialist. Unfortunately I've lost the original quote / specs that were sent to me, but it was a Proteus VIII. It has the following specs:

  • CPU: Intel i7-10875H
  • RAM: 32 GiB DDR4 2666MHz
  • Disk: 1 TiB SSD (M.2; nvme)
  • GPU: Nvidia GeForce RTX 2060

In this post, I want to give a review now that I've had the device for a short while. I'm still experiencing some teething issues (more on those later), but I've experienced enough of the device to form an opinion on it. This post will also serve as a sort-of review of the installation process of Ubuntu too.

It arrived in good time - thankfully I didn't have any issues with their choice of delivery service (DPD in my area have some problems). I did have to wait a week or 2 for them to build the system, but I wasn't in any rush, so this was fine for me. The packaging it arrived in was ok. It came in a rather large cardboard box, inside which there was some plastic padding (sad face), inside which there was another smaller cardboard box. There's work to be done in the eco-friendly department, but on the whole it was good here.

I ordered without an operating system, as my preferred operating system is Ubuntu (the latest version is currently 20.10 Groovy Gorilla). The first order of business here was the OS installation. This went fine - but only after I could actually get the machine to boot! It turns out that despite it appearing to have support for booting from USB flash drives as advertised in the boot menu, this feature doesn't actually work. I tried the following:

  • The official Ubuntu ISO flashed to a USB 3 flash drive
  • A GRUB installation on a USB 3 flash drive
  • A GRUB installation on a USB 2 flash drive
  • Ubuntu 20.10 burned to a DVD in an external DVD drive (ordered with the laptop)

....and only the last one worked. I've worked with a diverse range of different devices, but never have I encountered one that completely refused to boot from a USB drive. Clearly some serious work is required on the BIOS. The number of different settings in the BIOS was also somewhat limited compared to other systems I've poked around on, but I can't give any specific examples of things that were missing (other than a setting to toggle the virtualisation extensions, which was on by default) - so I guess it doesn't matter all that much. The biggest problem is the lack of USB flash drive boot support - that was really frustrating.

When installing Ubuntu this time around, I decided to try enabling LVM (Logical Volume Management, it's very cool I've discovered) and a LUKS encrypted hard drive. Although I've encountered these technologies before, this will be my first time using them regularly myself. Thankfully, the Ubuntu installer did a great job of setting this up automatically (except the swap partition, which was too small to hibernate, but I'll talk about that in a moment).

Once installed, I got to doing the initial setup. I'm particularly picky here - I use the Unity 7.5 Desktop (yes, I know Ubuntu now uses the GNOME shell, and no, I haven't yet been able to get along with it). I'll skip over the details of the setup here, as it's not really relevant to the review. I will mention though that I'm using X11 rather than Wayland at the moment - and that I have the proprietary Nvidia driver installed (version 450 at the time of typing).

Although I've had a discrete graphics card before (most recently an AMD Radeon R7 M445, and an Nvidia 525M), this is the first time I've had one that's significantly more powerful than the integrated graphics built into the CPU. My experience with it so far is mostly positive (it's rather good at rendering in Blender, but I have yet to stress it significantly), and in some graphical tests it gives significantly higher frame rates than the integrated graphics. If you use the proprietary graphics drivers, I recommend going into the Nvidia X server settings (accessed through the launcher) → PRIME Profiles, changing it to "On-Demand", and then rebooting. This will prolong your battery life and reduce the noise from the fans by using the integrated graphics by default, but allow you to run select applications on the GPU (see my recent post on how to do this).

It's not without its teething issues though. I think I'm just unlucky, but I have yet to set up a system with an Nvidia graphics card where I haven't had some kind of problem. In this case, it's screen flickering. To alleviate this somewhat, I found and followed the instructions in this Ask Ubuntu answer. I also found I had to enable the "Force synchronization between X and GLX" workaround (and maybe another one as well, I can't remember). Even with these enabled, I sometimes still get flickering after it resumes from suspend / standby.

Speaking of standby mode, I've found that this laptop does not like hibernation at all. I'm unsure as to whether this is just because I'm using LVM + LUKS, or whether it's an issue with the device more generally, but if I try sudo pm-hibernate from the terminal, the screen flashes a bit, the mouse cursor disappears, and then the fan spins up - with the screen still on and all my windows apparently still open.

I haven't experimented with the quirks / workarounds provided yet, but I guess this ties into the earlier BIOS problems - there are clearly some issues there that need to be resolved.

This hibernation issue also ties into the upower subsystem, in that even if you tell it (in both the Unity and GNOME desktop shells) to "do nothing" on low battery, it will forcefully turn the device off - even if you're in the middle of typing a sentence! I think this is because upower doesn't seem to have an option for suspend or "do nothing" in /etc/UPower/UPower.conf. I'm still investigating this issue (if you have any suggestions, please do get in touch!).
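
For reference, the setting in question appears to be CriticalPowerAction - and if I'm reading the upower documentation correctly, it only accepts the following values, none of which are "suspend" or "do nothing":

# /etc/UPower/UPower.conf (excerpt)
# Valid values are apparently PowerOff, Hibernate, and HybridSleep
CriticalPowerAction=HybridSleep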

Despite these problems, the build quality seems good. It's certainly nice having a metal frame, as it feels a lot more solid than my previous laptop. The keyboard feels great too - the feedback from pressing the keys enhances the feeling of a solid frame. The keyboard is backlit as well, which makes for a more pleasant experience in dimly lit rooms (though proper lighting is a must in any workspace).

The layout of the keyboard feels a little odd to me. It's a UK keyboard yes (I use a UK keyboard myself), but it doesn't have dedicated Home / End / Page Up / Page Down keys - these are built into the number pad at the right hand side of the keyboard. It's taken some getting used to toggling the number lock every time I want to use these keys, which increases cognitive load.

It does have a dedicated SysRq key though (which my last laptop didn't have), so now I can follow articles like this one and use the SysRq feature to talk to the Linux kernel directly in case of a lock-up or crash (I have had the screen freeze on me once or twice - I later discovered this was because it had attempted to hibernate and failed, and I also ran into this problem, which I have yet to find a resolution to), or in case I accidentally set off a program that eats all of the available RAM.

The backlight of the keyboard goes from red at the left-hand side to green in the middle, and blue at the right-hand side. According to the PC Specialist forums, there's a driver you can install to control this, but the installation seems messy - and it would probably need recompiling every time you install a new kernel, since DKMS (Dynamic Kernel Module Support) isn't used. I'm ok with the default for now, so I haven't bothered with this.

The touchpad does feel ok. It supports precision scrolling, has a nice feel to it, and isn't too small, so I can't complain about it.

The laptop doesn't have an inbuilt optical drive, which is another first for me. I don't use optical disks often, but it was nice having a built-in drive for this in previous laptops. An external one just feels clunky - but I guess I can't complain too much because of the extra components and power that are built-in to the system.

The airflow of the system - as far as I can tell so far, is very good. Air comes in through the bottom, and is then pushed out again through the back and the back of the sides by 2 different fans. These fans are, however, rather noisy at times - and have taken some getting used to as my previous Dell laptop's fans were near silent until I started to stress the system. The noise they make is also slightly higher pitched too, which makes it more noticeable - and sound like a jet engine (though I admit I've never heard a real one in person, and I'm also somewhat hypersensitive to sound) when at full blast. Curiously, there's a dedicated key on the keyboard that - as far as I can tell - toggles between the normal on-demand fan mode and locking the fans at full blast. Great to quickly cool down the system if the fans haven't kicked in yet, but not so great for your ears!

I haven't tested the speakers much, but from what I can tell they are appropriately placed in front of the keyboard just before the hinge for the screen - which is a much better placement than on the underside at the front in my last laptop! Definitely a positive improvement there.

I wasn't sure based on the details on the PC specialist website, but the thickness of the base is 17.5mm at the thickest point, and 6mm for the screen - making ~23.5mm in total (although my measurements may not be completely accurate).

To summarise, the hardware I received was great - overlooking a few pain points such as the BIOS and poor keyboard layout decisions. Some work is still needed on environmental issues and sustainability, but packaging was on the whole ok. Watch out for the delivery service, as my laptop was delivered by DPD who don't have a great track record in my area.

Overall, the hardware build quality is excellent. I'm not sure if I can recommend them yet, but if you want a new PC or laptop they are certainly not a bad place to look.

Found this helpful? Got a suggestion? Want to say hi? Comment below!

Run a Program on your dedicated Nvidia graphics card on Linux

I've got a new laptop, and in it I have an Nvidia graphics card. The initial operating system installation went ok (Ubuntu 20.10 Groovy Gorilla is my Linux distribution of choice here), but after I'd done a bunch of configuration and initial setup tasks it inevitably came around to the point in time that I wanted to run an application on my dedicated Nvidia graphics card.

Doing so is actually really quite easy - it's figuring out the how that was the hard bit! To that end, this is a quick post to document how to do this so I don't forget next time.

Before you continue, I'll assume here that you're using the proprietary Nvidia drivers. If not, consult your distribution's documentation on how to install and enable them.

With Nvidia cards, it depends greatly on what you want to run as to how you go about doing this. For CUDA applications, you need to install the nvidia-cuda-toolkit package:

sudo apt install nvidia-cuda-toolkit

...and then there will be an application-specific menu you'll need to navigate to tell it which device to use.
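
As a quick sanity check that the driver and toolkit can see the card, nvidia-smi is useful; many CUDA programs also respect the CUDA_VISIBLE_DEVICES environment variable if you need to pin them to a specific GPU (the program name below is just a placeholder):

nvidia-smi
CUDA_VISIBLE_DEVICES=0 ./some_cuda_program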

For traditional graphical programs, the process is different.

In the NVIDIA X Server Settings, go to PRIME Profiles, and then ensure NVIDIA On-Demand mode is selected. If not, select it and then reboot.

The NVIDIA X Server Settings dialog showing the correct configuration as described above.

Then, to launch a program on your Nvidia dedicated graphics card, do this:

__NV_PRIME_RENDER_OFFLOAD=1 __GLX_VENDOR_LIBRARY_NAME=nvidia command_name arguments

In short, the __NV_PRIME_RENDER_OFFLOAD environment variable must be set to 1, and the __GLX_VENDOR_LIBRARY_NAME environment variable must be set to nvidia. If you forget the latter, or try using DRI_PRIME=1 instead of both of these (as I advised in my previous post about AMD dedicated graphics cards), you'll get an error like this:

libGL error: failed to create dri screen
libGL error: failed to load driver: nouveau

The solution is to set the above environment variables instead of DRI_PRIME=1. Thanks to this Ask Ubuntu post for the answer!
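
To check which GPU actually ends up doing the rendering, glxinfo (from the mesa-utils package) is handy - with the environment variables set, the renderer string should name the Nvidia card rather than the integrated graphics:

sudo apt install mesa-utils
__NV_PRIME_RENDER_OFFLOAD=1 __GLX_VENDOR_LIBRARY_NAME=nvidia glxinfo | grep "OpenGL renderer"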

Given that I know I'll be wanting to use this regularly, I've created a script in my bin folder for it. It looks like this:

#!/usr/bin/env bash
# Ask the proprietary driver to handle rendering via PRIME render offload
export __NV_PRIME_RENDER_OFFLOAD=1;
export __GLX_VENDOR_LIBRARY_NAME=nvidia;

# Run whatever command was passed in - "$@" is quoted so arguments containing spaces survive
"$@";

Save the above to somewhere in your PATH (e.g. your bin folder, if you have one). Don't forget to run chmod +x path/to/file to mark it executable.
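
Then, assuming you called the script something like nvidia-run (the name is entirely up to you), launching a program on the dedicated card is as simple as:

nvidia-run glxgears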

Found this useful? Does this work on other systems and setups too? Comment below!

Adventurous Accounts: The Pepperminty Wiki Android App

I've found recently that there are a growing number of things I haven't blogged about on here yet because I either haven't finished them, or I have run into difficulties. I'd like to start blogging about these projects here and there, so in this post I'm going to talk about the Pepperminty Wiki Android App.

For those not in the know, Pepperminty Wiki is a wiki engine I've built myself from scratch that is all packed into a single file using a custom build system. It's pretty stable, and I'm committed to supporting it in the long term - as I have more than 1 rather important wiki hosted using the software myself.

Anyway, as a companion to this, I have also implemented a simple Android app that reads information from the Pepperminty Wiki REST API. It's read-only at the moment, but long-term I'd like to build an app that can write back changes as well.

3 screenshots of the Pepperminty Wiki Android App

In its current form, it works quite well actually. It's functional (although there are a few bugs lurking around). You can even download it from the Google Play Store: https://play.google.com/store/apps/details?id=com.sbrl.peppermint

I haven't really worked on it for a while though, and I'd like to talk about why that is.

Trying to implement additional features is where things get complicated though. The app is written in Kotlin, which I found much easier to use than Java (example: spawning a new thread is 3 lines instead of 30!). For some crazy reason when I was reading the guidance on building such an app, one of the guidelines was to keep the class structure as flat as possible.

This was a bad idea.

For so many reasons as well. Without using lots of files and multiple classes, a codebase gets difficult to follow and build upon. Taking this advice has severely impacted my ability to continue to add new features to the project, so a large refactor is needed to bring this under control. Add to this the complexity and ambiguity of the Android APIs themselves (how about an OnListFragmentInteractionListener, or an android.support.design.widget.BottomNavigationView? The API for displaying a list of things is also particularly complicated), and you can see how it can spiral out of control.

In addition, I've discovered that I have some serious problems updating it for Android 10. When you build an Android app, you have to regularly update the target platform version (I forget the exact name) - otherwise you get nagging emails from Google Play about this (I've had several already). I'm pretty sure you can't use newer APIs either if you don't keep the platform version updated.

It's been a while since I've tried to update it so I can't remember the exact error messages, but I do remember doing some pretty extensive searching around on the Internet and having no luck.

To this end, the app works quite well in its current form, but at some point (eventually) I'm going to make another attempt at writing a replacement for it. To do this though, I'd like to find some kind of framework that eases the process just a little bit. Building an Android app at the moment is unhelpfully complicated, and a larger time investment than I can currently afford given the regular maintenance required: updates have to be made just to keep the app buildable, and they don't seem to serve much purpose other than creating extra work - since updating the dependencies and build system behind an Android app often breaks everything, which you then have to fix.

This is different from the maintenance for Pepperminty Wiki itself, which is still regular - but constitutes fixing bugs as they crop up and occasionally implementing new features - and it has settled into a delightfully sedate pace that allows me to follow my inspiration. I find this to be much more enjoyable.

If anyone has any suggestions of alternative approaches I could try, I'm definitely interested. Comment below!

Lua in Review 2

The Lua logo

Back in 2015, I reviewed the programming language Lua. A few months ago I rediscovered the maze generation implementation I ported as part of that post, and since then I've been writing quite a bit of Lua - so I thought I'd return to the thoughts in that original post and write another language review now that I've had some more experience with the language.

For those not in the know, Lua is a lightweight scripting language. You can find out more here: https://www.lua.org/

In the last post, I mentioned that Lua is very lightweight. I still feel this is true today - and it has significant advantages in that the language is relatively simple to understand and get started in - and feels very predictable in how it functions.

It is often said that Lua is designed to be embedded in other programs (such as to provide a modding interface to a game, for example) - and this certainly seems to hold true. Lua definitely seems to be well-suited for this kind of use-case.

The lightweightness comes at a cost though. The first of these is the standard library. Compared to other languages such as C♯ and even Javascript, the standard library sucks. At least half of the time you find yourself reimplementing some algorithm that should have been packaged with the language itself:

  • Testing if a string starts with a given substring
  • Rounding a number to the nearest integer
  • Making a shallow copy of a table

Do you want to do any of these? Too bad, you'll have to implement them yourself in Lua. While these really aren't a big deal, my point here is that with functions like these it can be all too easy to make a mistake when implementing them, and then your code has a bug in it. If you find and fix an obscure edge case for example, that fix will only apply to your code and not the hundreds of other ad-hoc implementations other developers have had to cook up to get things done, leading to duplicated and wasted effort.

A related issue I'm increasingly finding is that of the module system and the lack of reusable packages. In Lua, if you want to import code from another file as a self-contained module, you use the require function, like this:

local foo = require("foo")

The above will import code from a file named foo.lua. However, this module import here is done relative to the entrypoint of your program, and not the file that's requesting the import, leading to a number of issues:

  • If you want to move a self-contained subsection of a codebase around, suddenly you have to rewrite all the imports of not only the rest of the codebase (as normal), but also of all the files in the subdirectory you've just moved
  • You can't have a self-contained 'package' of code that, say, you have in a git submodule - because the code in the submodule can't predict the path to the original entrypoint of your program relative to itself

While LuaRocks attempts to alleviate this issue to some extent (and I admit I haven't yet investigated it in any great detail), as far as I can tell it installs packages globally, which doesn't help if you're writing some Lua that is going to be embedded inside another program, as the global package may or may not be available. Even if it is available, it's debatable as to whether you'd be allowed to import it anyway, since many embedded environments have restrictions in place here for security purposes.

Despite these problems, I've found Lua to be quite a nice language to use (if a little on the verbose side, due to syntactical structure and the lack of a switch statement). Although it's not great at getting out of your way and letting you get on with your day (Javascript is better at this I find), it does have some particularly nice features - such as returning multiple values from a single function (which mostly makes up for the lack of exceptions), and some cute table definition syntax.

It's not the kind of language you want to use for your next big project, but it's certainly worth experimenting with to broaden your horizons and learn a new language that forces you to program in a significantly different style than you would perhaps use normally.

New website for Pepperminty Wiki

By now, Pepperminty Wiki is quite probably my longest running project - and I'm absolutely committed to continuing to support and improve it over time (I use it to host quite a lot of very important information myself).

As part of this, one of the things I'm always looking to improve is the installation process and the first impression users get when they first visit Pepperminty Wiki. Currently, the project's main public face is its GitHub repository. This is great (as it shows people that we're open-source), but it isn't particularly user-friendly for those who are less technically inclined.

To this end, I've built a shiny new website to introduce people to Pepperminty Wiki and the features it has to offer. I've been thinking about this for a while, and I realised that despite the fact that I haven't incremented the version number to v1.0 yet (as of the time of posting, the latest stable release is v0.22), Pepperminty Wiki is actually pretty mature, easy to deploy and use, and stable.

The new website for Pepperminty Wiki (link below)

(Above: The new Pepperminty Wiki website. Check it out here!)

The stability is a new one for me, as it isn't something I've traditionally put much of a focus on - instead focusing on educational purposes. Development of Pepperminty Wiki has sort of fallen into a pattern of 2-3 releases per year - each of which is preceded by one or more beta releases. I always leave at least 1 week between releasing a beta and the subsequent stable release to give myself and beta testers (of which Pepperminty Wiki has some! If you're reading this, I really appreciate it) time to spot any last-minute issues.

Anyway, the website can be found here: https://peppermint.mooncarrot.space/

Share it with your friends! :D

The initial plan was to buy a domain name like pepperminty.wiki for it, but after looking into the prices (~£36.29 per year) I found it was waaay too expensive for a project that I'm not earning a penny from working on (of course, if you're feeling that way inclined I have a Liberapay setup if you'd like to contribute towards server costs, but it's certainly not required).

Instead, I used a subdomain of one of my existing domains, mooncarrot.space (I use this one mostly for personal web app instances on my new infrastructure I'm blogging about in my cluster series), which is a bit shorter and easier to spell/say than starbeamrainbowlabs.com if you're not used to it.

After a few false starts, I settled on using Eleventy as my static site generator of choice. I'm not making use of all its features (not even close), but I've found it fairly easy to use and to understand how it ticks - and also flexible enough that it works with me, rather than attempting to force me into a particular way of working.

Honourable mentions here include Hugo (a great project, but if I recall correctly I found it confusing and complicated to set up and use), and documentation (an epic documentation generator for JS projects, but not suitable for this type of website - check out some of the docs I auto-generate via my Laminar CI setup: powahroot, applause-cli, terrain50).

The Pepperminty Wiki website light theme

(Above: The light theme for the website - which one you see depends on your system preference, as I use the prefers-color-scheme media query here. Personally I prefer the dark theme myself, as it's easier on my eyes)

The experience of implementing the website was an interesting one. Never having built a website to 'sell' something before (even if this is for a thing that's free), I found the most challenging part of the experience to be determining what text to use to appropriately describe the features of Pepperminty Wiki.

From the beginning I sort of had a vision for how I wanted the website to look. I wanted an introductory bit at the top (with a screenshot at a cool angle!), followed by a bit that explained the features, then some screenshots with short descriptions, followed finally by a download section. I also wanted it to be completely mobile-friendly.

A screenshot of the website as viewed by a mobile device

(Above: A screenshot of the website as viewed by a mobile device. The Firefox Developer Tools were useful for simulating this)

For the most part, this panned out quite well. Keeping the design relatively simple enabled me to support mobile devices as I went along, with minimal tweaks needed at the end of the process (mobile support really needs to be part of the initial design process).

The cool screenshot at the top and the fancy orange buttons you'll see in various places across the site were especially fun to put together - the iterative process of adding CSS rules to bring the look I had in my head to life was very satisfying. I think I'll use the same basic principle behind the fancy buttons again elsewhere (try hovering over them and clicking them to see the animations).

The bottom of the website, showing the fancy orange buttons

(Above: The bottom of the website, showing the fancy orange buttons)

I did contemplate the idea of using a CSS framework for the website, but not having seriously used one before for a personal project, combined with the advent of CSS grid, led to the decision to go without a framework once again (I'll learn one eventually, I'm sure).

So far my experience with frameworks is that they just get in the way when you want to do something that wasn't considered when the framework was built, but I suppose that given their widespread use elsewhere that I really should make an effort to learn at least one framework to get that experience (any suggestions in the comments are welcome).

All in all the experience of building the Pepperminty Wiki website was an enjoyable one. It took a number of hours over a number of days to put together (putting the false starts aside), but I feel as though it was definitely worth it.

Find the website here: https://peppermint.mooncarrot.space/

If I end up moving it at a later date, I'll ensure there's a redirect in place so the above link won't break.

Found this useful? Got a comment about or a suggestion to improve the website? Comment below! I'd love to hear from you.

Stitching videos from frames with ffmpeg (and audio/video editing tricks)

ffmpeg is awesome. In case you haven't heard, ffmpeg is a command-line tool for editing and converting audio and video files. It's taken me a while to warm up to it (the command-line syntax is pretty complicated), but since I've found myself returning to it again and again I thought I'd blog about it so you can use it too (and also for my own reference :P).

I have 2 common tasks I perform with ffmpeg:

  1. Converting audio/video files to a more efficient format
  2. Stitching PNG images into a video file

I've found that most tasks will fall into 1 of the 2 boxes. Sometimes I want to do something more complicated, but I'll usually just look that up and combine the flags I find there with the ones I usually use for one of the 2 above tasks.

horizontal rule made up of an orange rotating maze cube

Stitching PNG images into a video

I don't often render an animation, but when I do I always render to individual images - that way if it crashes half way through I can more easily restart where I left off (and also divide the workload more easily across multiple machines if I need it done quickly).

Individual image files are all very well, but they have 2 problems:

  1. They take up lots of space
  2. You can't easily watch them like a video

Solving #1 is relatively easy with optipng (sudo apt install optipng), for example:

find . -iname "*.png" -print0 | xargs -0 -P4 -n1 optipng -preserve

I cooked that one up a while ago and I've had it saved in my favourites for ages. It finds all png images in the current directory (recursively) and optimises them with optipng.

To resolve #2, we can use ffmpeg as I mentioned earlier in this post. Here's one I use:

ffmpeg -r 24 -i path/to/frame_%04d.png -c:v libvpx-vp9 -b:v 2000k -crf 31 path/to/output.webm

A few points on this one:

  • -r 24: This sets the frames per second.
  • -i path/to/frame_%04d.png: The path to the png images to stitch. %04d refers to 4 successive digits in a row that count up from 1 - e.g. 0001, 0002, etc. %d would mean 1, 2, ...., and %02d for example would mean 01, 02, 03....
  • -c:v libvpx-vp9: The codec. We're exporting to webm, and vp9 is the latest available codec for webm video files.
  • -b:v 2000k: The bitrate. Higher values mean more bits will be used per second of video to encode detail. Raise to increase quality.
  • -crf 31: The encoding quality. Should be a number between 0 and 63 - higher means lower quality and a smaller filesize.

Read more about encoding VP9 webm videos here: Encode/VP9- ffmpeg
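
Once it's done, ffprobe (which ships alongside ffmpeg) is a quick way of checking the codec, resolution, and duration of the resulting file:

ffprobe -hide_banner path/to/output.webm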

Converting audio / video files

I've talked about downmuxing audio before, so in this post I'm going to be focusing on video files instead. The above command for encoding to webm can be used with minimal adjustment to transcode videos from 1 format to another:

ffmpeg -hide_banner -i "path/to/input.avi" -c:v libvpx-vp9 -c:a libopus -crf 31 -b:v 0 "path/to/output.webm";

In this one, we handle the audio as well as the video all in 1 go, encoding the audio with the opus codec, and the video as before. This is particularly useful if you've got a bunch of old video files generated by an old camera. I've found that old cameras often like to save videos as raw uncompressed AVI files, and that I can reclaim a significant amount of disk space if I transcode them to something more efficient.

Naturally, I cooked up a one-liner that finds all relevant video files recursively in a given directory and transcodes them to 1 standard efficient format:

find . -iname "*.AVI" -print0 | nice -n20 xargs --verbose -0 -n1 -I{} sh -c 'old="{}"; new="${old%.*}.webm"; ffmpeg -hide_banner -i "${old}" -c:v libvpx-vp9 -c:a libopus -crf 30 -b:v 0 "${new}" && rm "${old}"';

Let's break this down. First, let's look at the commands in play:

  • find: Locates the target files to transcode
  • nice: Runs the following command at a lower CPU priority, so it doesn't get in the way of whatever else I'm doing
  • xargs: Runs the following command over and over again for every line of input
  • sh: A shell, to execute multiple commands in sequence
  • ffmpeg: Does the transcoding
  • rm: Deletes the old file if transcoding completes successfully

The old="{}"; new="${old%.*}.webm"; bit is a bit of gymnastics to pull in the path to the target file (the -I {} tells xargs where to substitute it in) and determine the new filepath by replacing the file extension with .webm.
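
The ${old%.*} syntax there is standard shell parameter expansion - it strips the shortest suffix matching .* (i.e. the file extension). A quick demonstration:

old="holiday/video_0001.AVI"
echo "${old%.*}.webm"    # prints holiday/video_0001.webm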

I've found that this does the job quite nicely, and I can set it running and go off to do something else while it happily sits there and transcodes all my old videos, reclaiming lots of disk space in the process.

Found this useful? Got a cool command for processing audio/video/image files in bulk? Comment below!
