
I bought a 3d printer! | Ender 3 v2 in review

Hey there! Recently, I bought an Ender 3 v2 3d printer, and now that I've used it enough I can talk about how I've found it. In this post, I'll be covering my thoughts on the 2 parts of 3d printing: the printer itself, and the slicing software that you run models through to turn them into G-code that the 3d printer understands.

(Above: A photo of my ender 3 v2 in my loft.)

Let's start with the printer itself. Compared to my earlier efforts, it is immediately apparent that the design of the printer is considerably more robust. The frame has 2 pillars that are pretty much impossible to screw together incorrectly (and if you somehow manage it, it becomes obvious at later steps in the assembly). The drive belt does not come pre-installed though, which I found to be the most frustrating part of the build.

The instruction booklet was noticeably more sparse and unclear than the Axis 3d instructions though - at some points I found myself having to look up additional assembly guidance elsewhere in order to understand them.

Once assembled, the printer was fairly easy to use. It has a colour display that, as I understand it, has a significantly higher resolution than those on previous models by Creality, which allows room for things like icons that greatly enhance the usability of the printer. I suspect that this upgrade may be due to the presence of a 32 bit microprocessor (likely an ARM) rather than the 8 bit one found in previous models, which is bound to come with more RAM etc. as standard.

While the printer does come with a small amount of filament, I recommend buying a reel or 2 to use with it. Because the filament that comes with it is not on a reel, it easily tangles into nasty knots. I'm going to empty a reel of other filament first, then wind the white filament that came with the printer onto the empty reel.

Loading the filament takes practice. The advice in the booklet to cut a 45° angle on the end of the filament is really important, as otherwise it's impossible to push the filament past the NEMA motor-driven loading mechanism. You also have to get the end of the filament at just the right angle to catch the opening of the PTFE tubing that leads to the hot end nozzle. This does get easier with time though - personally I think I need to make or purchase an extra clip-on light, because my printer is in my loft and it can be difficult to see what I'm doing when changing the filament.

Before you print, you have to manually level the bed of the printer. This is done by adjusting the wheels under the 4 corners of the build plate until a piece of plain paper on the build plate just gently scratches the tip of the nozzle. If the wheel in 1 corner doesn't appear to go far enough, try coming back to it after doing the other corners first - adjusting the wheels in the other corners moves the corner in question as well. For this reason, it's also recommended to go around 2-3 times to make sure it's all level before beginning.

Printing itself is fairly simple. You insert the microSD card containing the G-code, preheat the nozzle to the right temperature, use the auto-home feature, and then select the file you want to print using the menu. I found that it's absolutely essential to make sure that the build plate itself is as far back as possible - just touching the end-stop - as otherwise you get nasty loud belt grinding noises when it runs through the preamble at the beginning of the G-code, which causes the hot end to move all the way to the front of the build plate and back again.
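For reference, the preamble in question is generated by the slicer. A sketch of roughly what one looks like is below - the commands are standard Marlin G-code, but the temperatures and coordinates are made-up placeholders that will vary with your slicer profile:

M140 S60                        ; start heating the bed
M104 S200                       ; start heating the nozzle
G28                             ; auto-home all axes
M190 S60                        ; wait for the bed to reach temperature
M109 S200                       ; wait for the nozzle to reach temperature
G1 X0.1 Y20 Z0.3 F5000          ; travel to the start of the prime line
G1 X0.1 Y200 Z0.3 F1500 E15     ; extrude a prime line along the edge of the plate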

Once a print is complete, I've found the supplied scraping tool to be sufficient to extract prints from the print bed. It's much easier to wait between 5 and 10 minutes for the heated bed to cool down before attempting to scrape it off - many prints can just be lifted off the bed with no scraping required (tested with some PLA filament).

Speaking of the build plate, it has a glass surface on top. My research suggested that this leads to a much more even surface on the bottom of prints, and I've certainly found this to be true. While you do have to be careful not to scratch it, the glass build plate the Ender 3 v2 comes with as standard is a nice addition to the printer.

To summarise, the Ender 3 v2 is a really nice, solid printer. It's well built and relatively easy to set up and use, though filament organisation and anti-tangling will be the first project you work on when you start printing.

(Above: 4 calibration cats! 3 in blue and 1 in white - the 3 in blue have some stringing issues. I'll definitely be printing more of these :D)

Ultimaker Cura

In order to print things, you need to use a slicer, which takes a 3D model (e.g. in a .obj or .stl file) and converts it into the G-code the printer executes. My choice here is Ultimaker Cura. While it's in the default Ubuntu repositories, I found the AppImage on GitHub to be more up-to-date, so I packaged it into my apt repository.

Since Cura doesn't appear to have explicit support for the Ender 3 v2 just yet, I've been using the Ender 3 Pro profile instead, which seems to work just fine.

Cura has a large number of features, which are reasonably well organised for preparing prints. You can import the aforementioned .obj or .stl files and apply various transformations to imported models, such as translation, scaling, and rotation. Cura also helpfully auto-snaps the bottom of a model to the virtual build plate for you, so you don't have to worry about getting the alignment right.

Saving and loading project files is annoying. It asks you to specify the place you want to save a project to every time you hit the save button, and doesn't remember the location of the project file you last saved to (unlike, say, a text editor, GIMP, or LibreOffice), which is really frustrating. I guess it's better than crashing on save like the version of Cura in the default Ubuntu repositories though, so I'll count that as a win?

It would also be helpful if there was a system for remembering or bookmarking commonly adjusted settings. I've found that I usually need to adjust the same small set of settings over and over again, and it's a pain having to find them in the "expert" settings list or via the search bar.

The preview mode is also useful, as it shows you precisely what your printer will actually end up printing. It's great for checking that text is large / thick enough, that parts are large enough to avoid losing detail, and for double-checking that supports will print the way you expect (I recommend trying the tree mode for supports if you have to use them).

Given that I've used Blender before (exhibit a), it would be very nice to have the ability to customise keyboard shortcuts - or, even better, a Blender keyboard shortcut scheme I could enable. Hitting R, then X, then 90 is just 1 example of a whole range of keyboard shortcuts I keep trying to use in Cura, but Cura doesn't support them.

On the whole, Cura works well as a slicer. It provides many tweakable settings for adjusting things based on the filament you're using (I need to look into making a profile for the filament I use at the moment, as I'm sure the next roll of filament will require different settings). The 3d preview window is intuitive and easy to use. While the program does have some rough edges as mentioned above, these are minor issues that could easily be corrected.

Tensorflow / Tensorflow.js in Review

For my PhD, I've been using both Tensorflow.js (Tensorflow for Javascript) and more recently Tensorflow for Python (including the bundled Keras) extensively for implementing multiple different models. Given the experiences I've had so far, I thought it was high time I put my thoughts to paper so to speak and write a blog post reviewing the 2 frameworks.

Tensorflow logo

Tensorflow for Python

Let's start with Tensorflow for Python. I haven't been using it as long as Tensorflow.js, but as far as I can tell they've done a great job of ensuring it comes with batteries included. It has layers that come in an enormous number of different flavours for doing everything you can possibly imagine - including building Transformers (though I ended up implementing the time signal encoding in my own custom layer).

Building custom layers is not particularly difficult either - though you do have to hunt around a bit for the correct documentation, and I haven't yet worked out all the bugs with loading model checkpoints that use custom layers back in again.
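To give a flavour, here's a minimal sketch of a custom Keras layer, assuming Tensorflow 2.x (ScaleLayer is a made-up example of my own, not a real Tensorflow class). The get_config method is the part that matters when loading checkpoints that contain the layer back in:

import tensorflow as tf

class ScaleLayer(tf.keras.layers.Layer):
    """A toy custom layer that multiplies its input by a learnable scalar."""
    def __init__(self, initial_value=1.0, **kwargs):
        super().__init__(**kwargs)
        self.initial_value = initial_value

    def build(self, input_shape):
        # Weights are created once the input shape is known
        self.scale = self.add_weight(
            name="scale", shape=(),
            initializer=tf.keras.initializers.Constant(self.initial_value),
            trainable=True)

    def call(self, inputs):
        return inputs * self.scale

    def get_config(self):
        # Required for the layer to be serialised - and loaded back in again
        config = super().get_config()
        config.update({ "initial_value": self.initial_value })
        return config

# Loading a checkpoint that contains the layer requires custom_objects:
# model = tf.keras.models.load_model("path/to/checkpoint",
#     custom_objects={ "ScaleLayer": ScaleLayer })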

Handling data as a generic "tensor" that contains an n-dimensional slab of data is - once you get used to it - a great way of working. It's not something I would recommend to the beginner however - rather, I would recommend checking out Brain.js. It's easier to set up, and also more transparent / easier to understand what's going on.

Data preprocessing however is where things start to get complicated. Despite a good set of API reference docs to refer to, it's not clear how one is supposed to implement a performant data preprocessing pipeline. There are multiple methods for doing this (tf.data.Dataset, tf.keras.utils.Sequence, and others), and I have as of yet been unable to find a definitive guide on the subject.
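For what it's worth, the tf.data.Dataset route looks something like the sketch below - assuming Tensorflow 2.4+ (older versions spell it tf.data.experimental.AUTOTUNE), and with images standing in for a hypothetical numpy array of training images:

import tensorflow as tf

def preprocess(image):
    # An example per-item transformation: normalise pixel values to [0, 1]
    return tf.cast(image, tf.float32) / 255.0

dataset = (tf.data.Dataset.from_tensor_slices(images)
    .shuffle(1000)                                          # shuffle with a 1000-item buffer
    .map(preprocess, num_parallel_calls=tf.data.AUTOTUNE)   # preprocess items in parallel
    .batch(32)
    .prefetch(tf.data.AUTOTUNE))                            # overlap preprocessing with training

# The result can then be fed straight to Keras:
# model.fit(dataset, epochs=10)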

Other small inconsistencies are also present, such as the Keras website and the Tensorflow API docs both documenting the Keras API, which in and of itself appears to be an abstraction of the Tensorflow API.... it gets confusing. Some love for the docs more generally is also needed, as I found the wording in places ambiguous as to what it meant - so I ended up guessing and having to work it out by experimentation.

By far the biggest issue I encountered though (aside from the data preprocessing pipeline, which is really confusing and frustrating) is that a highly specific version of CUDA is required for each version of Tensorflow. Thankfully, there's a table of CUDA / CuDNN versions to help you out, but it's still pretty annoying that you have to have a specific version. Blender manages to be CUDA-enabled while supporting enough different versions of CUDA that I haven't had an issue on stock Ubuntu with the proprietary Nvidia drivers and the system CUDA version, so why can't Tensorflow do it too?

Tensorflow.js

This brings me on to Tensorflow.js, the Javascript bindings for libtensorflow (the underlying C++ library). This also has the specific version of CUDA issue, but in addition the version requirement documented in the README is often wrong, leaving you to make random wild guesses as to which version is required!

Despite this flaw, Tensorflow.js fixes a number of inconsistencies in Tensorflow for Python - I suspect because it was written after Tensorflow for Python was first implemented. The developers have effectively learnt valuable lessons from the Python version of Tensorflow, which has resulted in a coherent and cohesive API that makes much more sense than the API in Python. A great example of this is tf.data.Dataset: the data preprocessing pipeline in Tensorflow.js is well designed and easy to use. The Python version could learn a lot from this.
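As a sketch of what I mean (using the tfjs-node bindings; the numbers are arbitrary):

const tf = require("@tensorflow/tfjs-node");

const dataset = tf.data.array([ 1, 2, 3, 4, 5, 6, 7, 8 ])
    .map(item => item / 8) // per-item preprocessing
    .shuffle(8)            // shuffle with a buffer
    .batch(4);             // collect items into batches

// A dataset can then be fed directly to a model:
// await model.fitDataset(dataset, { epochs: 10 });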

While Tensorflow.js doesn't have quite the same set of features (a number of prebuilt layers that exist in Python don't yet exist in Tensorflow.js), it still provides a reasonable set of features that satisfy most use-cases. I have noticed a few annoying inconsistencies in the loss functions and how they behave though - for example, I implemented an autoencoder in Tensorflow.js, but it only returned black and white pixels - whereas Tensorflow for Python returned greyscale as intended.

Aside from improving CUDA support and adding more prebuilt layers, the other thing that Tensorflow.js struggles with is documentation. It has comparatively few guides compared to Tensorflow for Python, and it also has an ambiguity problem in the API docs. This could be mostly resolved by being slightly more generous with explanations as to what things are and do. Adding some examples to the docs in question would also help - as would fixing the bug where the navigation pane does not highlight the item you're currently viewing (DevDocs support would be awesome too).

Conclusion

Tensorflow for Python and Tensorflow.js are feature-filled and performant frameworks for machine learning and processing large datasets with GPU acceleration. Many tutorials are provided to help newcomers to the frameworks, but once you've followed a tutorial or 2 you're very much left on your own. A number of caveats and difficulties such as CUDA versions and confusing APIs / docs make mastering the frameworks difficult.

Proteus VIII Laptop from PC Specialist in Review

Recently I bought a new laptop from PC Specialist. Unfortunately I've lost the original quote / specs that were sent to me, but it was a Proteus VIII. It has the following specs:

  • CPU: Intel i7-10875H
  • RAM: 32 GiB DDR4 2666MHz
  • Disk: 1 TiB SSD (M.2; NVMe)
  • GPU: Nvidia GeForce RTX 2060

In this post, I want to give a review now that I've had the device for a short while. I'm still experiencing some teething issues (more on those later), but I've experienced enough of the device to form an opinion on it. This post will also serve as a sort-of review of the installation process of Ubuntu too.

It arrived in good time - thankfully I didn't have any issues with their choice of delivery service (DPD in my area have some problems). I did have to wait a week or 2 for them to build the system, but I wasn't in any rush so this was fine for me. The packaging it arrived in was ok. It came in a rather large cardboard box, inside which there was some plastic padding (sad face), inside which there was another smaller cardboard box. There's work to be done in the eco-friendly department, but on the whole it was good here.

I ordered without an operating system, as my preferred operating system is Ubuntu (the latest version is currently 20.10 Groovy Gorilla). The first order of business was the OS installation. This went fine - but only after I could actually get the machine to boot! It turns out that despite it appearing to have support for booting from USB flash drives as advertised in the boot menu, this feature doesn't actually work. I tried the following:

  • The official Ubuntu ISO flashed to a USB 3 flash drive
  • A GRUB installation on a USB 3 flash drive
  • A GRUB installation on a USB 2 flash drive
  • Ubuntu 20.10 burned to a DVD in an external DVD drive (ordered with the laptop)

....and only the last one worked. I've worked with a diverse range of different devices, but never have I encountered one that completely refused to boot from a USB drive. Clearly some serious work is required on the BIOS. The number of different settings in the BIOS was also somewhat limited compared to other systems I've poked around on, but I can't give any specific examples of things that were missing (other than a setting to toggle the virtualisation extensions, which was on by default) - so I guess it doesn't matter all that much. The biggest problem is the lack of USB flash drive boot support - that was really frustrating.

When installing Ubuntu this time around, I decided to try enabling LVM (Logical Volume Management, it's very cool I've discovered) and a LUKS encrypted hard drive. Although I've encountered these technologies before, this will be my first time using them regularly myself. Thankfully, the Ubuntu installer did a great job of setting this up automatically (except the swap partition, which was too small to hibernate, but I'll talk about that in a moment).

Once installed, I got to doing the initial setup. I'm particularly picky here - I use the Unity 7.5 desktop (yes, I know Ubuntu now uses the GNOME shell, and no, I haven't yet been able to get along with it). I'll skip over the details of the setup, as it's not really relevant to the review. I will mention though that I'm using X11 rather than Wayland at the moment - and that I have the proprietary Nvidia driver installed (version 450 at the time of typing).

Although I've had a discrete graphics card before (most recently an AMD Radeon R7 M445, and an Nvidia 525M), this is the first time I've had one that's significantly more powerful than the integrated graphics built into the CPU. My experience with this so far is mostly positive (it's rather good at rendering in Blender, but I have yet to stress it significantly), and in some graphical tests it gives significantly higher frame rates than the integrated graphics. If you use the proprietary graphics drivers, I recommend going into the Nvidia X server settings (accessed through the launcher) → PRIME Profiles, changing it to "On-Demand", and then rebooting. This will prolong your battery life and reduce the noise from the fans by using the integrated graphics by default, while still allowing you to run select applications on the GPU (see my recent post on how to do this).
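With the On-Demand profile active, individual programs can then be pushed onto the Nvidia GPU with the driver's render offload environment variables. A sketch (glxgears is just a convenient test program from mesa-utils):

__NV_PRIME_RENDER_OFFLOAD=1 __GLX_VENDOR_LIBRARY_NAME=nvidia glxgears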

It's not without its teething issues though. I think I'm just unlucky, but I have yet to set up a system with an Nvidia graphics card where I haven't had some kind of problem. In this case, it's screen flickering. To alleviate this somewhat, I found and followed the instructions in this Ask Ubuntu answer. I also found I had to enable the "Force synchronization between X and GLX" workaround (and maybe another one as well, I can't remember). Even with these enabled, I still sometimes get flickering after the machine resumes from suspend / standby.

Speaking of standby mode, I've found that this laptop does not like hibernation at all. I'm unsure as to whether this is just because I'm using LVM + LUKS, or whether it's an issue with the device more generally, but if I try sudo pm-hibernate from the terminal, the screen flashes a bit, the mouse cursor disappears, and then the fan spins up - with the screen still on and all my windows apparently still open.

I haven't experimented with the quirks / workarounds provided yet, but I guess this ties into the earlier BIOS problems - there are some clear issues there that need to be resolved.

This hibernation issue also ties into the upower subsystem, in that even if you tell it (in both the Unity and GNOME desktop shells) to "do nothing" on low battery, it will forcefully turn the device off - even if you're in the middle of typing a sentence! I think this is because upower doesn't seem to have an option for suspend or "do nothing" in /etc/UPower/UPower.conf. I'm still investigating this issue (if you have any suggestions, please do get in touch!).
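For reference, the relevant bits of /etc/UPower/UPower.conf as I currently understand them - treat the exact keys and values here as a sketch of my assumptions rather than gospel:

# What to do when the battery hits the action threshold below.
# Only PowerOff, Hibernate, and HybridSleep appear to be accepted -
# there's no Suspend or "do nothing" option as far as I can tell:
CriticalPowerAction=HybridSleep

# The percentage thresholds that trigger the low / critical / action states:
PercentageLow=10
PercentageCritical=3
PercentageAction=2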

Despite these problems, the build quality seems good. It's certainly nice having a metal frame, as it feels a lot more solid than my previous laptop. The keyboard feels great too - the feedback from pressing the keys enhances the feeling of a solid frame. The keyboard is backlit as well, which makes for a more pleasant experience in dimly lit rooms (though proper lighting is a must in any workspace).

The layout of the keyboard feels a little odd to me. It's a UK keyboard yes (I use a UK keyboard myself), but it doesn't have dedicated Home / End / Page Up / Page Down keys - these are built into the number pad at the right hand side of the keyboard. It's taken some getting used to toggling the number lock every time I want to use these keys, which increases cognitive load.

It does have a dedicated SysRq key though (which my last laptop didn't have), so now I can follow articles like this one and use the SysRq feature to talk to the Linux kernel directly in case of a lock-up or crash (I have had the screen freeze on me once or twice - I later discovered this was because it had attempted to hibernate and failed, and I also ran into this problem, which I have yet to find a resolution to), or in case I accidentally set off a program that eats all of the available RAM.

The backlight of the keyboard goes from red at the left-hand side to green in the middle, and blue at the right-hand side. According to the PC Specialist forums, there's a driver that you can install to control this, but the installation seems messy - and it would probably need recompiling every time you install a new kernel, since DKMS (Dynamic Kernel Module Support) isn't used. I'm ok with the default for now, so I haven't bothered with this.

The touchpad does feel ok. It supports precision scrolling, has a nice feel to it, and isn't too small, so I can't complain about it.

The laptop doesn't have an inbuilt optical drive, which is another first for me. I don't use optical disks often, but it was nice having a built-in drive for this in previous laptops. An external one just feels clunky - but I guess I can't complain too much because of the extra components and power that are built-in to the system.

The airflow of the system, as far as I can tell so far, is very good. Air comes in through the bottom, and is then pushed out again through the back and the rear of the sides by 2 different fans. These fans are, however, rather noisy at times - and have taken some getting used to, as my previous Dell laptop's fans were near silent until I started to stress the system. The noise they make is also slightly higher pitched, which makes it more noticeable - at full blast they sound like a jet engine (though I admit I've never heard a real one in person, and I'm also somewhat hypersensitive to sound). Curiously, there's a dedicated key on the keyboard that - as far as I can tell - toggles between the normal on-demand fan mode and locking the fans at full blast. Great for quickly cooling down the system if the fans haven't kicked in yet, but not so great for your ears!

I haven't tested the speakers much, but from what I can tell they are appropriately placed between the keyboard and the screen hinge - which is a much better placement than on the underside at the front, as in my last laptop! Definitely a positive improvement there.

I wasn't sure based on the details on the PC Specialist website, but the thickness of the base is 17.5mm at the thickest point, and 6mm for the screen - making ~23.5mm in total (although my measurements may not be completely accurate).

To summarise, the hardware I received was great - overlooking a few pain points such as the BIOS and poor keyboard layout decisions. Some work is still needed on environmental issues and sustainability, but packaging was on the whole ok. Watch out for the delivery service, as my laptop was delivered by DPD who don't have a great track record in my area.

Overall, the hardware build quality is excellent. I'm not sure if I can recommend them yet, but if you want a new PC or laptop they are certainly not a bad place to look.

Found this helpful? Got a suggestion? Want to say hi? Comment below!

Lua in Review 2

The Lua Logo

Back in 2015, I reviewed the programming language Lua. A few months ago I rediscovered the maze generation implementation I ported as part of that post, and since then I've been writing quite a bit of Lua - so I thought I'd return to the thoughts in that original post and write another language review now that I've had some more experience with the language.

For those not in the know, Lua is a lightweight scripting language. You can find out more here: https://www.lua.org/

In the last post, I mentioned that Lua is very lightweight. I still feel this is true today - and it has significant advantages in that the language is relatively simple to understand and get started in - and feels very predictable in how it functions.

It is often said that Lua is designed to be embedded in other programs (such as to provide a modding interface to a game, for example) - and this certainly seems to hold true. Lua definitely seems to be well-suited for this kind of use-case.

The lightweightness comes at a cost though. The first of these costs is the standard library. Compared to other languages such as C♯ and even Javascript, the standard library sucks. At least half of the time you find yourself reimplementing some algorithm that should have been packaged with the language itself:

  • Testing if a string starts with a given substring
  • Rounding a number to the nearest integer
  • Making a shallow copy of a table

Do you want to do any of these? Too bad, you'll have to implement them yourself in Lua. While these really aren't a big deal, my point here is that with functions like these it can be all too easy to make a mistake when implementing them, and then your code has a bug in it. If you find and fix an obscure edge case for example, that fix will only apply to your code and not the hundreds of other ad-hoc implementations other developers have had to cook up to get things done, leading to duplicated and wasted effort.
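To illustrate, here's a sketch of the kind of helpers you end up carrying from project to project (the names are my own):

-- Test if a string starts with a given substring
local function starts_with(str, start)
    return str:sub(1, #start) == start
end

-- Round a number to the nearest integer
local function round(number)
    return math.floor(number + 0.5)
end

-- Make a shallow copy of a table
local function shallow_copy(source)
    local result = {}
    for key, value in pairs(source) do
        result[key] = value
    end
    return result
end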

A related issue I'm increasingly finding is that of the module system and the lack of reusable packages. In Lua, if you want to import code from another file as a self-contained module, you use the require function, like this:

local foo = require("foo")

The above will import code from a file named foo.lua. However, this module import is resolved relative to the entrypoint of your program, and not the file that's requesting the import, leading to a number of issues (illustrated in the sketch after this list):

  • If you want to move a self-contained subsection of a codebase around, suddenly you have to rewrite all the imports of not only the rest of the codebase (as normal), but also of all the files in the subdirectory you've just moved
  • You can't have a self-contained 'package' of code that, say, you have in a git submodule - because the code in the submodule can't predict the path to the original entrypoint of your program relative to itself
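A sketch of the problem, assuming the default module search path:

-- main.lua (the entrypoint)
local worker = require("lib.worker") -- loads lib/worker.lua: fine

-- lib/worker.lua
-- This import is still resolved relative to main.lua, not lib/worker.lua,
-- so it has to be "lib.helper" - a plain require("helper") would fail here:
local helper = require("lib.helper")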

While LuaRocks attempts to alleviate this issue to some extent (and I admit I haven't yet investigated it in any great detail), as far as I can tell it installs packages globally, which doesn't help if you're writing some Lua that is going to be embedded inside another program, as the global package may or may not be available. Even if it is available, it's debatable as to whether you'd be allowed to import it anyway, since many embedded environments have restrictions in place here for security purposes.

Despite these problems, I've found Lua to be quite a nice language to use (if a little on the verbose side, due to syntactical structure and the lack of a switch statement). Although it's not great at getting out of your way and letting you get on with your day (Javascript is better at this I find), it does have some particularly nice features - such as returning multiple values from a single function (which mostly makes up for the lack of exceptions), and some cute table definition syntax.
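Multiple return values in particular make for a pleasantly lightweight error handling convention. A quick sketch:

-- Returns the parsed number, or nil and an error message
local function parse_number(str)
    local value = tonumber(str)
    if value == nil then
        return nil, "'" .. tostring(str) .. "' is not a number"
    end
    return value
end

local value, err = parse_number("42")
if err then print("error: " .. err) end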

It's not the kind of language you want to use for your next big project, but it's certainly worth experimenting with to broaden your horizons and learn a new language that forces you to program in a significantly different style than you would perhaps use normally.

Rust Review Redux

It was aaaages ago that I first reviewed Rust. For those not in the know, Rust is a next-generation compiled language (similar to Go, but this is where they diverge) developed by Mozilla - out of a need to have a safer alternative to C++ for writing key components of Firefox in.

Since then, I've obtained both a degree and a masters in computer science, and I've learnt a number of programming languages. I've been searching for a better alternative to C++ - one that's easier to use and doesn't fight you at every step - so I decided to give Rust another go.

After a few false starts, I managed to get going with building a little web app (which will probably take a while before I can really show it off here). The tooling for the compiler is pretty good once you actually get it installed - although the installer itself is truly shockingly bad:

  • rustup - Manages multiple installed versions of Rust (I haven't used it much yet; apparently it's like nvm, the Node Version Manager, but I don't use that either)
  • cargo - Orchestrates the building of your project and the installation of dependencies, which are known as crates (see the example after this list).
  • rustc - The compiler itself. You probably won't interact with it directly much - instead going through cargo most of the time.
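In practice, most of the interaction goes through cargo. A typical session looks something like this:

cargo new my-project   # scaffold a new project (a binary crate by default)
cd my-project
cargo build            # fetch & compile dependencies (crates), then the project
cargo run              # build if needed, then run the result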

Together (and with the right Atom packages installed), they make for a relatively pleasant development experience. I mention the installer in particular though, because it's awful. I noted a number of issues with it:

  • The official website forces you to download an installation script that pipes to sh
  • It will only install on a per-user basis (goodbye disk space, hello extra system config complexity)
  • It doesn't even tell you how much disk space it's going to use (which wouldn't be an issue if they just set up an apt repository....)

These issues aside, other aspects of the experience were also worthy of note. First, the error messages the Rust compiler generates are actually useful. Much better than they were the last time I really dove into Rust, they provide you with much more detail as to what's gone wrong, and there's even a special rustc --explain ERROR_CODE command you can execute to get more detail about what went wrong, why, and how to fix it.

This as a feature is certainly helpful for me as a beginner Rust programmer, but I think it's also a pretty essential feature given Rust's weirdness as a language.

I'm seriously not kidding - Rust is a nutty language. For one, classes exist.... sort of - but only as structs, which are moved rather than copied by default when passed around (again, sort of), and may not contain methods - that's the job of an impl, which is short for implementation. Implementations are a strange mix between C♯'s interfaces and multiple inheritance (as found in C++, I think). And there are traits, which I haven't really looked into fully yet, but appear to be a mix between interfaces and abstract classes..... you get the picture.
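A quick sketch of how those pieces fit together (a toy example of my own, not anything from a real codebase):

// A struct holds only data....
struct Counter {
    count: u32,
}

// ....while an impl block holds the methods associated with it.
impl Counter {
    fn new() -> Counter {
        Counter { count: 0 }
    }

    fn increment(&mut self) {
        self.count += 1;
    }
}

// A trait declares behaviour that any number of types can implement.
trait Describe {
    fn describe(&self) -> String;
}

impl Describe for Counter {
    fn describe(&self) -> String {
        format!("counted to {}", self.count)
    }
}

fn main() {
    let mut counter = Counter::new();
    counter.increment();
    println!("{}", counter.describe());
}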

Point is, all this funky strangeness that goes on in Rust makes it a very challenging language to learn. A challenge that I feel is worth persevering with, but a challenge nonetheless. Rust does have a number of very powerful features that make it worth the effort, in my opinion.

For example, it catches - at compile time - entire classes of critically nasty bugs that plague other low-level systems languages such as C and C++, like use-after-free and the really awful concurrency race conditions. That's incredible, if you ask me. Such bugs have been a serious bother to many high-profile software projects that exist today and have caused a number of security issues. Rust is a testament to what can be achieved when you start from scratch and fix these issues by designing them out of the language.

For the curious, it does this through a complex system of variable lifetimes, ownership, moves, and borrows. I don't yet understand all the details, but the system enables the Rust compiler to trace the lifetime of a variable at compile time, so you get the benefit of having a garbage collector without any of the overhead, since it's all been done at compile time and built into your program that way.
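The core idea in a tiny sketch - the commented-out line is the kind of thing the compiler refuses outright:

fn main() {
    let data = vec![1, 2, 3];
    let taken = data;           // ownership of the vector MOVES to `taken`
    // println!("{:?}", data);  // this line would not compile:
                                // "borrow of moved value: `data`"
    println!("{:?}", taken);    // the new owner can be used freely
}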

This deep understanding of how data is passed around also yields performance and efficiency benefits too. C and C++ do not have such an understanding, so there are a number of performance optimisations the Rust compiler can make that would be considered far too dangerous for gcc to do. The net result of this is that sometimes code written in Rust will actually be faster than C and C++. This is a significant accomplishment, as the speed of C and C++ has been held as the gold standard for a long time (see exhibits A and B just for starters).

These are just some of the reasons that I'm persisting with learning Rust. So far, it seems like a "slow and steady wins the race" kinda deal - in that I'm taking it one concept at a time. There's a huge amount to take in, so I can't recommend that you try and do it all at once - time to consolidate what I've learnt so far is quite important I've found.

Rust is absolutely one of the hardest languages I've tried to learn, as it reinvents a lot of concepts which have been a staple of programming languages for a long time. However, it also comes with key benefits: ease-of-use (once learnt, compared to C and C++), performance, and program execution safety at runtime (it was originally invented by Mozilla specifically to make Firefox a safer and faster browser, IIRC). To this end, I'm going to try my best to keep learning the language - and report back here at some point with cool stuff I've created (at the moment it's still in a state of flux and I'm refactoring heavily at each successive stage) :D

Edit: I've just remembered. I do currently have 2 big issues with Rust: compilation time and disk space usage. When you install a dependency, it not only builds it from source, but also recursively builds all of its dependencies from source too. Not only does this take forever, but it also eats huge volumes of disk space for breakfast!

Found this interesting? Got some helpful advice or a question about Rust? Comment below!

Why the TICK stack probably isn't for me

Recently, I've been experimenting with upgrading my monitoring system. The TICK stack consists of a number of different elements:

  • Telegraf, which collects metrics
  • InfluxDB, which stores them
  • Chronograf, the web interface that graphs them
  • Kapacitor, which handles alerting

Together, these 4 programs provide everything you need to monitor your infrastructure, generate graphs, and send alerts via many different channels when things go wrong. This works reasonably well - and to give the developers credit, the level of integration present is pretty awesome. Telegraf seamlessly inserts metrics into InfluxDB, and Chronograf is designed to integrate with the metrics generated by Telegraf.

I haven't tried Kapacitor much yet, but it has an impressive list of integrations. For reference, I've been testing the TICK stack on an old Raspberry Pi 2 that I had lying around. I also tried Grafana too, which I'll talk about later.

The problems start when we talk about the system I've been using up until now (and am continuing to use). I've got a Collectd setup going - with Collectd Graph Panel (CGP) as a web interface, which is backed by RRD databases.

CGP, while it has its flaws, is pretty cool. Unlike Chronograf, it doesn't require manual configuration when you enable new metric types - it generates graphs automatically. For a small personal home network, I don't really want to be spending hours manually specifying what all the graphs should look like for all the metrics I'm collecting. It's seriously helpful to have it done automatically.

Grafana also stumbles here. Before I installed the CK part of the TICK stack, I tried Grafana. I eventually got it going after some initial installation issues (the Raspberry Pi 2's CPU only supports up to ARMv6, and Grafana uses ARMv7l instructions, causing some awkward and unfortunate issues that were somewhat difficult to track down). While it has an incredible array of different graphs and visualisations you can configure, like Chronograf it doesn't generate any of these graphs for you automatically.

Both solutions do have an import / export system for dashboards, which allows you to share prebuilt dashboards - but this isn't the same as automatic graph generation.

The other issue with the TICK stack is how heavy it is. Spoiler: it's very heavy indeed - especially InfluxDB. It managed to max out my poor Raspberry Pi 2's CPU - and ate all my RAM too! It took quite a bit of tuning to configure it such that it didn't eat all of my RAM for breakfast and knock my SSH session offline.

I'm sure that in a business setting you'd have heaps of resources just waiting to be dedicated to monitoring everything from your mission-critical servers to your cat's lunch - but in a home setting it takes up more resources passively when it isn't even doing anything than everything else I'm monitoring..... combined!

It's for these reasons that I'm probably not going to end up using the TICK (or TIG, for that matter) stack. For the reasons I've explained above, while it's great - it's just not for me. What I'm going to use instead though, I'm not sure. Development on CGP ceased in 2017 (or probably before that) - and I've got a growing list of features I'd like to add to it - including (but not limited to) fixing the SMART metrics display, reconfiguring the length of time metrics are stored for, and fixing a super annoying bug that makes the graphs go nuts when you scroll on them on a touchpad with precise scrolling enabled.

Got a suggestion for another different system I could try? Comment below!

Quick File Management with Gossa

Recently a family member needed to access some documents at a remote location that didn't support USB flash drives. Awkward to be sure, but I did some searching around and found a nice little solution that I thought I'd blog about here.

At first, I thought about setting up Filestash - but I discovered that only installation through Docker is officially supported (if it's written in Go, then shouldn't it end up as a single binary? What's Docker needed for?).

Docker might be great, but for a quick solution to an awkward issue I didn't really want to go to the trouble of installing Docker and figuring out all the awkward plumbing problems for the first time. It definitely appeared to me that it's better suited to a setup where you're already using Docker.

Anyway, I then discovered Gossa. It's also written in Go, and is basically a web interface that lets you upload, download, and rename files (click on a file or directory's icon to rename).

(Above: A screenshot of Gossa listing the contents of my CrossCode music folder. CrossCode is awesome, and you should totally go and play it - after finishing reading this post of course :P)

Is it basic? Yep.

Do the icons look like something from 1995? Sure.

(Is that Times New Roman I spy? I hope not)

Does it do the job? Absolutely.

For what it is, it's solved my problem fabulously - and it's so easy to set up! First, I downloaded the binary from the latest release for my CPU architecture, and put it somewhere on disk:

curl -o gossa -L https://github.com/pldubouilh/gossa/releases/download/v0.0.8/gossa-linux-arm

chmod +x gossa
sudo chown root: gossa
sudo mv gossa /usr/local/bin/gossa

Then, I created a systemd service file to launch Gossa with the right options:

[Unit]
Description=Gossa File Manager (syncthing)
After=syslog.target rsyslog.service network.target

[Service]
Type=simple
User=gossa
Group=gossa
WorkingDirectory=/path/to/dir
ExecStart=/usr/local/bin/gossa -h [::1] -p 5700 -prefix /gossa/ /path/to/directory/to/serve
Restart=always

StandardOutput=syslog
StandardError=syslog
SyslogIdentifier=gossa


[Install]
WantedBy=multi-user.target

(Top tip! Use systemctl cat service_name to quickly see the service file definition for any given service)

Here I start Gossa listening on the IPv6 local loopback address on port 5700, set the prefix to /gossa/ (I'm going to be reverse-proxying it later on using a subdirectory of a pre-existing subdomain), and send the standard output & error to syslog. Speaking of which, we should tell syslog what to do with the logs we send it. I put this in /etc/rsyslog.d/gossa.conf:

if $programname == 'gossa' then /var/log/gossa/gossa.log
if $programname == 'gossa' then stop

After that, I configured logrotate by putting this into /etc/logrotate.d/gossa:

/var/log/gossa/*.log {
    daily
    missingok
    rotate 14
    compress
    delaycompress
    notifempty
    create 0640 root adm
    postrotate
        invoke-rc.d rsyslog rotate >/dev/null
    endscript
}

Very similar to the configuration I used for RhinoReminds, which I blogged about here.

Lastly, I configured Nginx on the machine I'm running this on to reverse-proxy to Gossa:

server {

    # ....

    location /gossa {
        proxy_pass http://[::1]:5700;
    }

    # ....

}

I've configured authentication elsewhere in my Nginx server block to protect my installation against unauthorised access (and you probably should too). All that's left to do is start Gossa and reload Nginx:

sudo systemctl daemon-reload
sudo systemctl start gossa
# Check that Gossa is running
sudo systemctl status gossa

# Test the Nginx configuration file changes before reloading it
sudo nginx -t
sudo systemctl reload nginx

Note that reloading Nginx is more efficient than restarting it, since it doesn't kill the process - it only reloads the configuration from disk. It doesn't matter here, but in a production environment that receives a high volume of traffic, it's a great way to make configuration changes while avoiding dropping client connections.

In your web browser, you should see something like the image at the top of this post.

Found this interesting? Got another quick solution to an otherwise awkward issue? Comment below!

Orange Pi 3 in review

An Orange Pi 3, along with its logo. Of course, I'm not affiliated with the manufacturers in any way. In fact, they are probably not aware that this post even exists

I recently bought an Orange Pi 3 (based on the Allwinner H6 chipset) to perform a graphics-based task, and I've had an interesting enough time with it that I thought I'd share my experiences in a sort of review post here.

The first problem when it arrived was to find an operating system that supports it. My initial thought was to use Devuan, but I quickly realised that practically the only operating system that supports it at the moment is Armbian.

Not to be deterred, after a few false starts I got Armbian based on Ubuntu 18.04 Bionic Beaver installed. The next order of business was to install the software I wanted to use.

For the most part, I didn't have too much trouble with this - though it was definitely obvious that the arm64 (specifically sunxi64) architecture isn't a build target that's often supported by apt repository owners. This wasn't helped by the fact that apt has a habit of throwing really weird error messages when you try to install something that exists in an apt repository, but for a different architecture.

After I got Kodi installed, the next order of business was to get it to display on the screen. I ended up managing this (eventually) with the help of a lot of tutorials and troubleshooting, but the experience was really rather unpleasant. I kept getting odd errors, like failed to load driver sun4i-drm when trying to start Kodi via an X11 server and other strangeness.

The trick in the end was to force X11 to use the fbdev driver, though I'm not entirely sure what that means or why it fixed the issue.
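For the record, the override looks something like this - a sketch, assuming a drop-in file like /etc/X11/xorg.conf.d/20-fbdev.conf:

Section "Device"
    Identifier "Allwinner framebuffer"
    Driver     "fbdev"  # render via the plain Linux framebuffer
EndSection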

Moving on, I then started to explore the other capabilities of the device. Here, too, I discovered a number of shortcomings in the software support provided by Linux, such as a lack of support for audio via HDMI and Bluetooth. I found the status matrix of the SunXI project, which is the community working to add support for the Allwinner H6 chipset to the Linux kernel.

They do note that support for the H6 chipset is currently under development and is incomplete at the moment - and I wish I'd checked on software support before choosing a device to purchase.

The other big problem I encountered was a lack of kernel headers provided by Armbian. Normally, you can install the headers for your kernel by installing the linux-headers-XXXXXX package with your favourite package manager, where XXXXXX matches the string in the linux-image-XXXXXX package that contains the kernel itself.
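On a stock Debian or Ubuntu system, for example, matching headers are normally a one-liner away:

# Install the headers matching the currently running kernel
sudo apt install linux-headers-$(uname -r)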

The lack of headers is actually kind of a problem, because it means that you can't compile any software that calls kernel functions yourself, preventing you from installing various dkms-based kernel modules that automatically recompile against the kernel you've got installed.

I ended up finding this forum thread, but the response from someone who I assume is an Armbian developer was less than stellar - they basically said that if you want kernel headers, you need to compile the kernel yourself! That's a significant undertaking, for those not in the know, and certainly not something that should be undertaken lightly.

While I've encountered a number of awkward issues that I haven't seen before, the device does have some good things worth noting. For one, it actually packs a pretty significant punch: it's much more powerful than a Raspberry Pi 3B+ (of which I have one; I bought this device before the Raspberry Pi 4 was released). This makes it an ideal choice for more demanding workloads, which a Raspberry Pi wouldn't quite be suitable for.

In conclusion, while it's a nice device, I can't recommend it to people just yet. Software support is definitely only half-baked at this point with some glaring holes (HDMI audio is one of them, which doesn't look like it's coming any time soon).

I think part of the problem is that Xunlong (the company that makes the device and others in its family) don't appear to be interested in supporting the community at all, choosing instead to dump custom low-quality firmware for people to use as blobs of binary code (which apparently doesn't work) - causing the SunXI community a lot of extra work to reverse-engineer it all and figure out how it all works before they can start implementing support in the Linux kernel.

If you're interested in buying a similar embedded board, I can recommend instead using HackerBoards to find one that suits your needs. Don't forget to check for operating system support!

Found this interesting? Thinking of buying a board yourself? Had a different experience? Comment below!

Solo hardware security key review

Sometime last year (I forget when), I backed a kickstarter that promised the first open-source hardware security key that supports FIDO2. Since the people doing the kickstarter have done this before for an older standard, I decided to back it.

Last week they finally arrived, and the wait was totally worth it! I got 1 with a USB type c connector (in yellow below), and 1 with a regular type a connector that also supports NFC (in red, for using with my phone).

Before I get into why they are so awesome, it's probably a good idea if we take a small step back and look at what a hardware security key does and why it does it.

My Solos!

In short, a hardware security key has a unique secret key baked into it that you can't extract. If I understand it correctly, this is sometimes known as a physically unclonable function (correct me in a comment if I'm wrong). It makes use of this secret key for authentication purposes by way of a chain of protocols, which are collectively known as FIDO2.

A diagram showing the different FIDO2 protocols. It's basically WebAuthn between browser and OS, and CTAP2 between OS and hardware security key

There are 2 important protocols here: WebAuthn that the browser provides to web pages to interact with hardware devices, and CTAP2 - which allows the browser to interface with the hardware security key through a channel that the operating system provides (be that over USB, NFC, Bluetooth, or some other means).
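From a web page's point of view, the WebAuthn end of that chain looks roughly like this sketch - the challenge and user details would really come from the server, and alg -7 asks for ES256:

// Ask the browser to create a new credential on the security key
const credential = await navigator.credentials.create({
    publicKey: {
        challenge: challengeFromServer, // a Uint8Array generated server-side
        rp: { name: "Example Site" },
        user: {
            id: userIdBytes,            // a Uint8Array identifying the user
            name: "user@example.com",
            displayName: "Example User"
        },
        pubKeyCredParams: [ { type: "public-key", alg: -7 } ]
    }
});
// credential.response is then sent to the server for verification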

FIDO2 is new. Like, very very new. To this end, browsers and websites don't yet have full support for it. Those that do don't always enable it by default (in Firefox you've got to set security.webauth.u2f, security.webauth.webauthn, and security.webauth.webauthn_enable_usbtoken to true, though I think these will be set by default in a coming update), or they incorrectly 'detect' support by sniffing the user-agent string (cough, I'm looking at you, GitHub and Facebook, cough).

Despite this, when it is supported it works fabulously. Solo goes a long way to making the process as painless as possible - supporting both CTAP (for the older U2F protocol) and CTAP2 (which is part of the FIDO2 protocol suite). It's designed well (though the cases on the NFC-enabled version, called the Solo Tap, are a bit on the snug side), and since it's open source you can both inspect and contribute to the firmware to improve the Solo and add new features for everyone to enjoy.

Extra features like direct access to the onboard TRNG (true random number generator) are really nice to have - and the promise of more features to come makes it even better. I'm excited to see what new capabilities my Solo will gain with future updates!

In the future I want to take a deeper dive into Webauthn and implement support in applications I've written (e.g. Pepperminty Wiki). It looks like it might be quite complicated, but I'll post here when I've figured it out.

ASP.NET: First Impressions

Admittedly, I haven't really got too far into ASP.NET (core). I've only gone through the first few tutorials or so, and based on what I've found so far, I've decided that it warrants a full first impressions blog post.

ASP.NET is fascinating, because it takes .NET's design goals centred around developer efficiency and applies them to the web domain occupied by the likes of PHP, providing a framework with which one can write a web server. Such a combination makes for a promising start - giving developers everything they need to rapidly create a web-based application that's backed by any one of a number of different types of database.

Coming part-and-parcel with the ASP.NET library is Entity Framework. Its purpose is to provide an easy mechanism by which developers can both create and query a database. I haven't really explored it much, but it appears to perform this task well.

If I were to criticise it, I'd probably say that the existing tutorials on how to use it are far too Windows- and Visual Studio-oriented. Being a Linux user, I found it somewhat of a challenge to wade through the large amount of Visual Studio-specific parts of the tutorial and piece together how it actually works - independently of the automatic code generators built into Visual Studio.

This criticism, I've found, is a running theme throughout ASP.NET and ASP.NET Core. Even the official tutorials (which, although they say you can use Visual Studio Code on macOS and Linux, don't actually make any accommodations for users of anything other than Visual Studio) lean heavily on the inbuilt code and template generators - choosing to instruct you on how to make the absolute minimum number of changes to the provided templates in order to achieve the goal of the tutorial.

This, unfortunately, leaves the reader wondering precisely how ASP.NET core works under the hood. For example, what does services.AddDefaultIdentity<IdentityUser>().AddEntityFrameworkStores<ApplicationDbContext>(); do? Or what's an IdentityUser, and how do I customise it? Why isn't ASP.NET just a NuGet package I can import? None of these things are explained.

Being the kind of person who works from the ground up, I'm increasingly finding the "all that matters is that it works" approach taken by ASP.NET - intended, ironically enough, to ease the experience for developers new to the library - rather frustrating. For me, it's hard to work with something if I don't understand what it does and how it works - so I find a tutorial that leans heavily on templates and scaffolding (don't even get me started on that) confusing and unhelpful.

To an extent, I can compare my experience starting out with ASP.NET with my experience starting out with Android development in Java. Both experiences were rather painful, and both experiences were unpleasant because of the large amount of pre-generated template code.

Having said this, in Java's case there was the additional pain of learning a new language (even if it is similar to C♯), and the irritation of constantly balancing between compile errors from not catching an exception, and being unable to find a bug because it's actually an exception that's been swallowed somewhere I can't see.

Although ASP.NET doesn't have terrible exception handling rules, it does have its fair share of issues. Its equivalent, I guess, would be the number of undocumented and difficult-to-search bugs and issues one encounters when setting it up for the first time - both on Windows (with Microsoft's own Visual Studio!) and on Linux (though, to be fair, it's only .NET Core that has issues here). The complexity of the system and the lack of decent tutorials and documentation result in a confusing and irritating experience trying to get it to work (especially on Windows).

In conclusion, I'm finding ASP.NET to be a failed attempt at bringing the amazing developer efficiency from .NET to web development, but I suspect that this is largely down to me being inexperienced with it. Hampered by unhelpful tutorials, opaque black-boxed frameworks with blurred lines between library and template (despite the fact it's apparently open-source), and heavy tie-ins with Visual Studio, I think I'll be using other technologies such as Node.js to develop web-based projects in the future.
