
Rust Review Redux

It was aaaages ago that I first reviewed Rust. For those not in the know, Rust is a next-generation compiled language (it occupies a similar space to Go, though the two diverge significantly in approach) developed by Mozilla, born out of a need for a safer alternative to C++ for writing key components of Firefox.

Since then, I've obtained both a degree and a masters in computer science, and learnt a number of programming languages along the way. I've been searching for a better alternative to C++ - one that's easier to use and doesn't fight you at every step - so I decided to give Rust another go.

After a few false starts, I managed to get going on building a little web app (which will probably take a while before I can really show it off here). The tooling for the compiler is pretty good once you actually get it installed - although the installer itself is truly shocking(ly bad):

  • rustup - Manages multiple installed versions of Rust (I haven't used it much yet; apparently it's like nvm, the Node Version Manager, but I don't use that either)
  • cargo - Orchestrates the building of your project and the installation of dependencies, which are known as crates.
  • rustc - The compiler itself. You probably won't interact with it directly much - instead going through cargo most of the time.

Together (and with the right Atom packages installed), they make for a relatively pleasant development experience. I mention the installer in particular though, because it's awful. I noted a number of issues with it:

  • The official website forces you to download an installation script that pipes to sh
  • It will only install on a per-user basis (goodbye disk space, hello extra system config complexity)
  • It doesn't even tell you how much disk space it's going to use (which wouldn't be an issue if they just set up an apt repository....)

These issues aside, other aspects of the experience were also worthy of note. First, the error messages the Rust compiler generates are actually useful. Much better than they were the last time I really dove into Rust, they provide you with much more detail as to what's gone wrong, and there's even a special rustc --explain ERROR_CODE command you can execute to get a fuller description of the error, why it happens, and how to fix it.

This as a feature is certainly helpful for me as a beginner Rust programmer, but I think it's also a pretty essential feature given Rust's weirdness as a language.

I'm seriously not kidding - Rust is a nutty language. For one, classes exist.... sort of - but only as structs, which are moved rather than copied when you pass them around by default (sort of), and may not contain methods - that's the job of an impl, which is short for an implementation. Implementations are a strange mix between C♯'s interfaces and multiple inheritance (as in C++, I think?). And then there are traits, which I haven't really looked into fully yet, but seem to be a mix between interfaces and abstract classes..... you get the picture.
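
To illustrate what I mean, here's a quick sketch I've thrown together (the names are contrived for illustration - this isn't code from my web app):

// Data lives in a struct....
struct Rectangle {
    width: f64,
    height: f64,
}

// ....but methods don't live in the struct itself - they go in an impl block
impl Rectangle {
    fn area(&self) -> f64 {
        self.width * self.height
    }
}

// A trait is a bit like an interface: a set of behaviour that types can opt into
trait Describe {
    fn describe(&self) -> String;
}

impl Describe for Rectangle {
    fn describe(&self) -> String {
        format!("a {} x {} rectangle with area {}", self.width, self.height, self.area())
    }
}

fn main() {
    let rect = Rectangle { width: 3.0, height: 4.0 };
    println!("{}", rect.describe());
}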

Point is, all this funky strangeness that goes on in Rust makes it a very challenging language to learn. A challenge that I feel is worth persevering with, but a challenge nonetheless. Rust does have a number of very powerful features that make it worth the effort, in my opinion.

For example, it catches - at compile time - entire classes of critically nasty bugs that plague other low-level systems languages such as C and C++, like use-after-free and the really awful concurrency race conditions - which is incredible, if you ask me. Such bugs have been a serious bother to many high-profile software projects and have caused a number of security issues. Rust is a testament to what can be achieved when you start from scratch and design these issues out of the language.

For the curious, it does this via a complex system of variable lifetimes, ownership, moves, and borrows. I don't yet understand all the details, but the system enables the Rust compiler to trace the lifetime of a variable at compile time, so you get the benefit of having a garbage collector without any of the runtime overhead, since it's all been worked out at compile time and built into your program that way.
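
As a tiny, contrived sketch of the kind of thing it refuses to accept (not real code from my project, obviously):

fn main() {
    let original = String::from("hello, world");
    let new_owner = original;      // ownership of the String moves to new_owner here
    println!("{}", original);      // error[E0382]: borrow of moved value: `original`
}

The equivalent in C++ would compile just fine and then misbehave at runtime; rustc refuses to compile this at all, and points at exactly where the value was moved.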

This deep understanding of how data is passed around also yields performance and efficiency benefits too. C and C++ do not have such an understanding, so there are a number of performance optimisations the Rust compiler can make that would be considered far too dangerous for gcc to do. The net result of this is that sometimes code written in Rust will actually be faster than C and C++. This is a significant accomplishment, as the speed of C and C++ has been held as the gold standard for a long time (see exhibits A and B just for starters).

These are just some of the reasons that I'm persisting with learning Rust. So far, it seems like a "slow and steady wins the race" kinda deal - in that I'm taking it one concept at a time. There's a huge amount to take in, so I can't recommend that you try and do it all at once - time to consolidate what I've learnt so far is quite important I've found.

Rust is absolutely one of the hardest languages I've tried to learn, as it reinvents a lot of concepts which have been a staple of programming languages for a long time. However, it also comes with key benefits: ease-of-use (once learnt, compared to C and C++), performance, and program execution safety at runtime (it was originally invented by Mozilla specifically to make Firefox a safer and faster browser, IIRC). To this end, I'm going to try my best to keep learning the language - and report back here at some point with cool stuff I've created (at the moment it's still in a state of flux and I'm refactoring heavily at each successive stage) :D

Edit: I've just remembered. I do currently have 2 big issues with Rust: compilation time and disk space usage. When you install a dependency, it not only builds it from source, but also recursively builds all of its dependencies from source too. Not only does this take forever, but it also eats huge volumes of disk space for breakfast!

Found this interesting? Got some helpful advice or a question about Rust? Comment below!

Why the TICK stack probably isn't for me

Recently, I've been experimenting with upgrading my monitoring system. The TICK stack consists of a number of different elements:

  • Telegraf - the agent that runs on each machine and collects metrics
  • InfluxDB - the time-series database that those metrics get stored in
  • Chronograf - the web interface for graphing and exploring the stored metrics
  • Kapacitor - the data-processing and alerting engine

Together, these 4 programs provide everything you need to monitor your infrastructure, generate graphs, and send alerts via many different channels when things go wrong. This works reasonably well - and to give the developers credit, the level of integration present is pretty awesome. Telegraf seamlessly inserts metrics into InfluxDB, and Chronograf is designed to integrate with the metrics generated by Telegraf.

I haven't tried Kapacitor much yet, but it has an impressive list of integrations. For reference, I've been testing the TICK stack on an old Raspberry Pi 2 that I had lying around. I also tried Grafana too, which I'll talk about later.

The problems start when we talk about the system I've been using up until now (and am continuing to use). I've got a Collectd setup going - with Collectd Graph Panel (CGP) as a web interface, which is backed by RRD databases.

CGP, while it has its flaws, is pretty cool. Unlike Chronograf, it doesn't require manual configuration when you enable new metric types - it generates graphs automatically. For a small personal home network, I don't really want to be spending hours manually specifying what all the graphs should look like for all the metrics I'm collecting. It's seriously helpful to have it done automatically.

Grafana also stumbles here. Before I installed the CK part of the TICK stack, I tried Grafana. After some initial installation issues (the Raspberry Pi 2's CPU only supports up to ARMv6, while the Grafana build I tried uses ARMv7 instructions - an awkward and unfortunate mismatch that was somewhat difficult to track down), I found that while it has an incredible array of different graphs and visualisations you can configure, like Chronograf it doesn't generate any of these graphs for you automatically.

Both solutions do have an import / export system for dashboards, which allows you to share prebuilt dashboards - but this isn't the same as automatic graph generation.

The other issue with the TICK stack is how heavy it is. Spoiler: it's very heavy indeed - especially InfluxDB. It managed to max out my poor Raspberry Pi 2's CPU - and ate all my RAM too! It took quite a bit of tuning to configure it such that it didn't eat all of my RAM for breakfast and knock my SSH session offline.

I'm sure that in a business setting you'd have heaps of resources just waiting to be dedicated to monitoring everything from your mission-critical servers to your cat's lunch - but in a home setting it takes up more resources passively when it isn't even doing anything than everything else I'm monitoring..... combined!

It's for these reasons that I'm probably not going to end up using the TICK (or TIG, for that matter) stack. For the reasons I've explained above, while it's great - it's just not for me. What I'm going to use instead though, I'm not sure. Development on CGP ceased in 2017 (or probably before that) - and I've got a growing list of features I'd like to add to it - including (but not limited to) fixing the SMART metrics display, reconfiguring the length of time metrics are stored for, and fixing a super annoying bug that makes the graphs go nuts when you scroll on them on a touchpad with precise scrolling enabled.

Got a suggestion for another different system I could try? Comment below!

Quick File Management with Gossa

Recently a family member needed to access some documents at a remote location that didn't support USB flash drives. Awkward to be sure, but I did some searching around and found a nice little solution that I thought I'd blog about here.

At first, I thought about setting up Filestash - but I discovered that only installation through Docker is officially supported (if it's written in Go, then shouldn't it end up as a single binary? What's Docker needed for?).

Docker might be great, but for a quick solution to an awkward issue I didn't really want to go to the trouble of installing Docker and figuring out all the awkward plumbing problems for the first time. It definitely appeared to me that it's better suited to a setup where you're already using Docker.

Anyway, I then discovered Gossa. It's also written in Go, and is basically a web interface that lets you upload, download, and rename files (click on a file or directory's icon to rename).

A screenshot of Gossa listing the contents of my CrossCode music folder. CrossCode is awesome, and you should totally go and play it - after finishing reading this post of course :P

Is it basic? Yep.

Do the icons look like something from 1995? Sure.

(Is that Times New Roman I spy? I hope not)

Does it do the job? Absolutely.

For what it is, it's solved my problem fabulously - and it's so easy to set up! First, I downloaded the binary from the latest release for my CPU architecture, and put it somewhere on disk:

curl -o gossa -L https://github.com/pldubouilh/gossa/releases/download/v0.0.8/gossa-linux-arm

chmod +x gossa
sudo chown root: gossa
sudo mv gossa /usr/local/bin/gossa;

Then, I created a systemd service file to launch Gossa with the right options:

[Unit]
Description=Gossa File Manager (syncthing)
After=syslog.target rsyslog.service network.target

[Service]
Type=simple
User=gossa
Group=gossa
WorkingDirectory=/path/to/dir
ExecStart=/usr/local/bin/gossa -h [::1] -p 5700 -prefix /gossa/ /path/to/directory/to/serve
Restart=always

StandardOutput=syslog
StandardError=syslog
SyslogIdentifier=gossa


[Install]
WantedBy=multi-user.target

(Top tip! Use systemctl cat service_name to quickly see the service file definition for any given service)

Here I start Gossa listening on the IPv6 local loopback address on port 5700, set the prefix to /gossa/ (I'm going to be reverse-proxying it later on using a subdirectory of a pre-existing subdomain), and send the standard output & error to syslog. Speaking of which, we should tell syslog what to do with the logs we send it. I put this in /etc/rsyslog.d/gossa.conf:

if $programname == 'gossa' then /var/log/gossa/gossa.log
if $programname == 'gossa' then stop

After that, I configured logrotate by putting this into /etc/logrotate.d/gossa:

/var/log/gossa/*.log {
    daily
    missingok
    rotate 14
    compress
    delaycompress
    notifempty
    create 0640 root adm
    postrotate
        invoke-rc.d rsyslog rotate >/dev/null
    endscript
}

Very similar to the configuration I used for RhinoReminds, which I blogged about here.

Lastly, I configured Nginx on the machine I'm running this on to reverse-proxy to Gossa:

server {

    # ....

    location /gossa {
        proxy_pass http://[::1]:5700;
    }

    # ....

}

I've configured authentication elsewhere in my Nginx server block to protect my installation against unauthorised access (and you probably should too). All that's left to do is start Gossa and reload Nginx:

sudo systemctl daemon-reload
sudo systemctl start gossa
# Check that Gossa is running
sudo systemctl status gossa

# Test the Nginx configuration file changes before reloading it
sudo nginx -t
sudo systemctl reload nginx

Note that reloading Nginx is more efficient than restarting it, since it doesn't kill the process - it only reloads the configuration from disk. It doesn't matter much here, but in a production environment that receives a high volume of traffic it's a great way to make configuration changes without dropping client connections.

In your web browser, you should see something like the image at the top of this post.

Found this interesting? Got another quick solution to an otherwise awkward issue? Comment below!

Orange Pi 3 in review

An Orange Pi 3, along with its logo. Of course, I'm not affiliated with the manufacturers in any way. In fact, they are probably not aware that this post even exists.

I recently bought an Orange Pi 3 (based on the Allwinner H6 chipset) to perform a graphics-based task, and I've had an interesting enough time with it that I thought I'd share my experiences in a sort of review post here.

The first problem when it arrived was to find an operating system that supports it. My initial thought was to use Devuan, but I quickly realised that practically the only operating system that supports it at the moment is Armbian.

Not to be deterred, after a few false starts I got Armbian based on Ubuntu 18.04 Bionic Beaver installed. The next order of business was to install the software I wanted to use.

For the most part, I didn't have too much trouble with this - though it was definitely obvious that the arm64 (specifically sunxi64) architecture isn't a build target that's often supported by apt repository owners. This wasn't helped by the fact that apt has a habit of throwing really weird error messages when you try to install something that exists in an apt repository, but only for a different architecture.

After I got Kodi installed, the next order of business was to get it to display on the screen. I ended up managing this (eventually) with the help of a lot of tutorials and troubleshooting, but the experience was really rather unpleasant. I kept getting odd errors, like failed to load driver sun4i-drm when trying to start Kodi via an X11 server and other strangeness.

The trick in the end was to force X11 to use the fbdev driver, but I'm not entirely sure what that means or why it fixed the issue.
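
For anyone else fighting the same battle: the generic way of forcing the fbdev driver is a snippet along these lines, dropped into a file under /etc/X11/xorg.conf.d/ (this is the standard X.Org approach - I can't promise it's precisely what Armbian expects):

Section "Device"
    Identifier "Allwinner framebuffer"
    Driver "fbdev"
    Option "fbdev" "/dev/fb0"
EndSection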

Moving on, I then started to explore the other capabilities of the device. Here, too, I discovered a number of shortcomings in the software support provided by the Linux kernel, such as a lack of support for audio via HDMI and Bluetooth. I found the status matrix of the SunXI project, which is the community working to add support for the Allwinner H6 chipset to the Linux kernel.

They do note that support for the H6 chipset is currently under development and is incomplete at the moment - and I wish I'd checked on software support before choosing a device to purchase.

The other big problem I encountered was a lack of kernel headers provided by Armbian. Normally, you can install the headers for your kernel by installing the linux-headers-XXXXXX package with your favourite package manager, where XXXXXX is the same as the string present in the linux-image-XXXXXX package you've got installed that contains the kernel itself.

This is actually kind of a problem, because it means that you can't compile any software that calls kernel functions yourself without the associated header files, preventing you from installing various dkms-based kernel modules that auto-recompile against the kernel you've got installed.

I ended up finding this forum thread, but the response from someone who I assume is an Armbian developer was less than stellar - they basically said that if you want kernel headers, you need to compile the kernel yourself! That's a significant undertaking, for those not in the know, and certainly not something that should be undertaken lightly.

While I've encountered a number of awkward issues that I haven't seen before, the device does have some good things worth noting. For one, it actually packs a pretty significant punch: it's much more powerful than a Raspberry Pi 3B+ (of which I have one; I bought this device before the Raspberry Pi 4 was released). This makes it an ideal choice for more demanding workloads, which a Raspberry Pi wouldn't quite be suitable for.

In conclusion, while it's a nice device, I can't recommend it to people just yet. Software support is definitely only half-baked at this point with some glaring holes (HDMI audio is one of them, which doesn't look like it's coming any time soon).

I think part of the problem is that Xunlong (the company that makes the device and others in its family) don't appear to be interested in supporting the community at all, choosing instead to dump custom low-quality firmware for people to use as blobs of binary code (which apparently doesn't work) - which causes the SunXI community a lot of extra work reverse-engineering it all and figuring out how it works before they can start implementing support in the Linux kernel.

If you're interested in buying a similar embedded board, I can recommend instead using HackerBoards to find one that suits your needs. Don't forget to check for operating system support!

Found this interesting? Thinking of buying a board yourself? Had a different experience? Comment below!

Solo hardware security key review

Sometime last year (I forget exactly when), I backed a Kickstarter that promised the first open-source hardware security key to support FIDO2. Since the people behind the Kickstarter had done this before for an older standard, I decided to back it.

Last week they finally arrived, and the wait was totally worth it! I got 1 with a USB type C connector (in yellow below), and 1 with a regular type A connector that also supports NFC (in red, for using with my phone).

Before I get into why they are so awesome, it's probably a good idea if we take a small step back and look at what a hardware security key does and why it does it.

My Solos!

In short, a hardware security key has a unique secret key baked into it that you can't extract. If I understand it correctly, this is sometimes known as a physically unclonable function (correct me in a comment if I'm wrong). It makes use of this secret key for authentication purposes by way of a chain of protocols, which are collectively known as FIDO2.

A diagram showing the different FIDO2 protocols. It's basically WebAuthn between browser and OS, and CTAP2 between OS and hardware security key

There are 2 important protocols here: WebAuthn that the browser provides to web pages to interact with hardware devices, and CTAP2 - which allows the browser to interface with the hardware security key through a channel that the operating system provides (be that over USB, NFC, Bluetooth, or some other means).

FIDO2 is new. Like, very very new. To this end, browsers and websites don't yet have full support for it. Where support does exist, it isn't always enabled by default (in Firefox you've got to set security.webauth.u2f, security.webauth.webauthn, and security.webauth.webauthn_enable_usbtoken to true, though I think these will be set by default in a coming update), or support is incorrectly 'detected' by sniffing the user-agent string (cough I'm looking at you, GitHub and Facebook cough).

Despite this, when it is supported it works fabulously. Solo goes a long way to making the process as painless as possible - supporting both CTAP (for the older U2F protocol) and CTAP2 (which is part of the FIDO2 protocol suite). It's designed well (though the cases on the NFC-enabled version, called the Solo Tap, are a bit on the snug side), and since it's open source you can both inspect and contribute to the firmware to improve the Solo and add new features for everyone to enjoy.

Extra features like direct access to the onboard TRNG (true random number generator) are really nice to have - and the promise of more features to come makes it even better. I'm excited to see what new capabilities my Solo will gain with future updates!

In the future I want to take a deeper dive into WebAuthn and implement support in applications I've written (e.g. Pepperminty Wiki). It looks like it might be quite complicated, but I'll post here when I've figured it out.

ASP.NET: First Impressions

Admittedly, I haven't really got too far into ASP.NET (core). I've only gone through the first few tutorials or so, and based on what I've found so far, I've decided that it warrants a full first impressions blog post.

ASP.NET is fascinating, because it takes .NET's design goals centred around developer efficiency and combines them with ideas from the likes of PHP to provide a framework with which one can write a web server. Such a combination makes for a promising start - providing developers with everything they need to rapidly create a web-based application that's backed by any one of a number of different types of database.

Part-and-parcel with the ASP.NET library comes Entity Framework. Its purpose is to provide an easy mechanism by which developers can both create and query a database. I haven't really explored it much, but it appears to perform this task well.

If I were to criticise it, I'd probably say that the existing tutorials on how to use it are far too Windows and Visual Studio-oriented. Being a Linux user, I found it somewhat of a challenge to wade through the large amount of Visual Studio-specific parts of the tutorial and piece together how it actually works - independently of the automatic code generators built into Visual Studio.

This criticism, I've found, is a running theme throughout ASP.NET and ASP.NET Core. Even the official tutorials (which, although they say you can use Visual Studio Code on macOS and Linux, don't actually make any accommodations for users of anything other than Visual Studio) lean heavily on the inbuilt code and template generators - choosing to instruct you on how to make the absolute minimum number of changes to the provided templates in order to achieve the goal of the tutorial.

This, unfortunately, leaves the reader wondering precisely how ASP.NET core works under the hood. For example, what does services.AddDefaultIdentity<IdentityUser>().AddEntityFrameworkStores<ApplicationDbContext>(); do? Or what's an IdentityUser, and how do I customise it? Why isn't ASP.NET just a NuGet package I can import? None of these things are explained.

Being the kind of person who works from the ground up, I'm increasingly finding the "all that matters is that it works" approach taken by ASP.NET (to, ironically enough, ease the experience for developers new to the library) rather frustrating. For me, it's hard to work with something if I don't understand what it does and how it works - so I find a tutorial that leans heavily on templates and scaffolding (don't even get me started on that) confusing and unhelpful.

To an extent, I can compare my experience starting out with ASP.NET with my experience starting out with Android development in Java. Both experiences were rather painful, and both experiences were unpleasant because of the large amount of pre-generated template code.

Having said this, in Java's case there was the additional pain of learning a new language (even if it is similar to C♯), and the irritation of keeping a constant balance between compilation errors from not catching an exception, and being unable to find a bug because it's actually an exception that's been eaten somewhere that I can't see.

Although ASP.NET doesn't have terrible exception handling rules, it does have its fair share of issues. Its equivalent, I guess, would be the number of undocumented and difficult-to-search bugs and issues one encounters when setting it up for the first time - both on Windows (with Microsoft's own Visual Studio!) and on Linux (though, to be fair, it's only .NET Core that has issues here). The complexity of the system and the lack of decent tutorials and documentation result in a confusing and irritating experience trying to get it to work (especially on Windows).

In conclusion, I'm finding ASP.NET to be a failed attempt at bringing the amazing developer efficiency from .NET to web development, but I suspect that this is largely down to me being inexperienced with it. Hampered by unhelpful tutorials, opaque black-boxed frameworks with blurred lines between library and template (despite the fact it's apparently open-source), and heavy tie-ins with Visual Studio, I think I'll be using other technologies such as Node.js to develop web-based projects in the future.

Redis in Review: First Impressions

The red Redis logo backed by a yellow voronoi diagram

(Above: The Redis logo backed by a voronoi diagram generated by my specially-upgraded voronoi diagram generator :D)

I've known about Redis for a while now - but I've never gotten around to looking into it until now. A bit of background: I'm building a user management panel thingy for this project (yes, again. I'll be blogging about it for a bit more yet before it's ready). Since I want it to be a separate, and optional, component that you don't have to run in order to use the main program, I decided to make it a separate program in a different language (Node.js is rather good at this sort of thing, I've found).

Anyway, whilst writing the login system, I found that I needed a place to store session tokens such that they persist. I considered a few different options here:

  1. A JSON file on disk - This wouldn't be particularly scalable if I found myself with lots of users. It also means I've got to read in and decode a bunch of JSON when my program starts up, which can get mighty fiddly. Furthermore, persisting it back to disk would also be rather awkward and inefficient.
  2. An SQL database of some description - Better, but still not great. The setup for a database is complicated, and requires a complex and resource-hungry process running at all times. Even if I used SQLite, I wouldn't be able to escape the complexities associated with database schemas - which I'd likely end up spending more time fiddling with than actually writing the program itself!

Then I remembered Redis. At first, I thought it was an in-memory data storage solution. While I was right, I discovered that there's a lot more to Redis than I first thought. For one, it does actually allow you to persist things to disk - and is very transparent about it too, allowing a great deal of control over how data is persisted to disk (also this post is a great read on the subject, even if it's a bit old now - it goes into a great deal of depth about how the operating system handles and caches writes to disk).

When it comes to actually storing your data, Redis provides just a handful of delightfully simple data structures for you to model your data with. Unlike a traditional SQL-based database, storing data in Redis requires a different way of thinking. Rather than considering how data relates to one another and trying to represent it as a series of interconnected tables, in Redis you store things in a manner that's much closer to that which you might find in the program that you're writing.

The core of Redis is a key-value store - rather like localStorage in the browser. Unlike localStorage though, the following data types are provided:

  • A simple string value
  • A hash-table
  • A list of values (which can be also be treated as a queue)
  • A set of unique strings (which can be optionally sorted)
  • Binary blobs (called bit arrays)
  • A HyperLogLog (apparently a probabilistic data structure of some kind)

The official Redis tutorial on the subject does a much better job of explaining this than I could (I have only been playing with it for about a week after all!), so I'd recommend you read it if you're curious.

All this has some interesting effects. Firstly, the data storage system can be more closely representative of the data structures it is storing, allowing for clever data-specific optimisations to be made in ways that simply aren't possible with a traditional SQL-based system. To this end, a custom index can be created in Redis via simple values to accelerate the process of finding a particular hash-table given a particular piece of identifying information.

Secondly, it makes Redis very versatile. You could use it as a caching layer to store the results of queries made against a traditional SQL-based database. In that scenario, no persistence to disk would be required to make it as fast as possible. You could use it for storing session tokens and short-lived chunks of data, utilising the inbuilt EXPIRE command to tell Redis to automatically delete certain keys after a specified amount of time. You could use it to store all your data in and do away with an SQL server altogether. You could even do all of these things at the same time!
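
To make that concrete, here's a rough sketch of the session token case (written with the Rust redis crate purely for illustration - every client library exposes the same underlying commands, so the Node.js equivalent looks much the same):

use redis::Commands;

fn main() -> redis::RedisResult<()> {
    // Connect to a Redis server running locally on the default port
    let client = redis::Client::open("redis://127.0.0.1/")?;
    let mut con = client.get_connection()?;

    // Store a session token against a user id, and have Redis expire it after an hour
    let _: () = con.set_ex("session:abc123", "user:42", 3600)?;

    // Later on: look the token up again - once it has expired, this comes back as None
    let user: Option<String> = con.get("session:abc123")?;
    println!("{:?}", user);

    Ok(())
}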

When storing data in Redis, it's important to make sure you've got the right mindset. I've found that despite learning about the different commands and data structures Redis has to offer, it's taken me a few goes to figure out how I'm going to store my program's data in a way that both makes sense and makes it easy to get back out again. I'm certain that it'll take quite a bit of practice (and trial and error!) to figure out what works and what doesn't.

It's not the kind of thing that would make you want to drop your database system like a hot potato and move everything to Redis before the week is out. But it is a tool to keep in mind that can be used to solve a class of problems that SQL-based database servers have traditionally been very bad at handling - in particular high-throughput systems that need to store lots of mainly short-lived data (e.g. session tokens, etc.).

Found this interesting? Got your own thoughts on Redis? Post a comment below!

#movingtogitlab: What's up, Thoughts, and First Impressions

I'm moving some of my repositories to GitLab! Read on to find out why.

You've probably heard by now that GitHub has been bought by Microsoft. It was certainly huge news at the time! While I did tweet at the time (also here too), I've been waiting until I've gotten all of the repositories I've decided to move over to GitLab settled in before blogging about the experience and sharing my thoughts.

While I've got some of them settled in (you can tell which ones I've done because they are public on GitLab and I've deleted the GitHub repository in most cases), I've still got a fair few to go - there should be 13 in total on GitLab once I'm finished.

Since it's taking me longer than I anticipated to properly update them after the transfer (which in and of itself was actually really quick and painless - thanks GitLab!), I thought I'd blog about the experience so far - as it's probably going to be a little while before I've got all my repositories sorted :P

What's up?

Firstly, Microsoft. Buying GitHub. Who'd have thought it? I know that I was certainly surprised. I even had to check multiple websites to triple-check that I wasn't seeing things.... GitHub have certainly done an excellent job of hiding the fact that they were looking for a buyer. Upon further inspection, it appears that the issues GitHub have been facing are twofold. Firstly, they've been without a boss of the company for quite a while, and from the way things are looking, they are having a little bit of trouble finding a new one. Secondly, there's been talk that they haven't been making enough money.

Let's back up a second. How does GitHub actually make money? Good question. The answer is surprisingly simple: GitHub Enterprise - which is a version of GitHub for businesses that they can host on their own servers for a significant fee. From what I can tell, it's targeted at medium-to-large companies.

If they aren't making enough money to sustain themselves, then something obviously needs to be done about that, or the github.com service that open-source developers the world over rely on could suddenly vanish overnight O.o! This presents GitHub with a very awkward problem. In this case, they chose to seek a buyer, who could then perhaps help them to sell GitHub Enterprise better.

Looking at the alternative companies, I think that GitHub could have done a lot worse than Microsoft. Alternatives include Apple (who operate a gigantic walled garden called macOS and iOS), IBM (International Business Machines, who sell big and powerful mainframes to other big and powerful businesses. Most of the time, we - or I at least - don't hear from them, unless they are showing off some research or other that they've just completed), and Facebook (who don't have a particularly great reputation right now). The problem is that they need a big company who have lots of money, because they are also a big company (so they'll be worth a lot of money, like it or not. Also, in order to sort out the money making issue they'll need cash in the meantime).

Microsoft have been quite active in the open-source scene as of late, and seem to really have changed their tune in recent years, with their open-sourcing of large parts of the .NET framework - and their code editor Visual Studio Code. While they aren't the best at naming things (I'm looking at you, .NET Core / .NET Standard / .....), they do appear to be more in-line with GitHub's goals and ethics than the alternative companies discussed above.

What's the problem?

So why do I have a problem with this deal? Clearly, Microsoft are the best of the bunch to have bought GitHub. It could be a lot worse. Microsoft have even announced that GitHub can continue to operate as a separate entity. The new boss of GitHub did an Ask Me Anything on reddit, and he seems like a really cool guy, and genuinely wants the best for GitHub.

Well, it's complicated. For me, it all boils down to independence. GitHub is (or was) an independent company. To that end, they can make their own decisions, and aren't being told what to do - or at risk of being told what to do - by someone else. They aren't being influenced by someone else. That's not to say that GitHub will now be influenced by Microsoft (though they probably will be) - and it's not to say that it's a bad thing.

Coupled with the some 85 million open-source repositories, 28 million developers using the service, and 1.8 million businesses utilising github.com, it's a huge responsibility. In order to effectively serve the community that has grown around github.com, I feel that GitHub has to remain impartial. It's absolutely essential. The number of different workflows, tools, programs, operating systems, and more that those 28 million developers use will be staggering - and I feel that only a completely independent GitHub will truly be able to meet the needs of that community. Microsoft is great for some things, but taking on GitHub is asking them to actively support users of one of their services who use their competitors' software, such as Linux - or devices running macOS.

What's the alternative then?

That's a huge ask, and I guess only time will tell whether they are able to pull it off. I find myself asking though: What's the alternative? If they are losing money as the rumours say, then what could they do about it?

Obviously, I don't and can't know everything that's going on inside GitHub - I can only read marketing-y web articles, theorise, and make guesses. I'm sure they've tried it, but I don't see why they couldn't enter a partnership with Microsoft instead. How about a partnership in which Microsoft helps sell GitHub Enterprise, and gets a cut of the profits in return? Microsoft have a much bigger reach among enterprisey companies, so they'd be much better placed to do that part of things, and GitHub would be able to retain its independence.

Where does GitLab fit into this puzzle?

All of this brings me to GitLab. Similar to GitHub, GitLab offers free code hosting on gitlab.com, along with free continuous integration. Unlike GitHub though, GitLab's source code is open - and you can download and run your own instance if you like! They sell support packages to businesses - thereby making enough money to support the continued development of GitLab. I think they sell a few additional features with GitLab Enterprise too - but they aren't anything that the average user would want, only businesses. They also do a free package for students and open-source developers - all whilst staying an independent company. Very cool.

As of today, I'm moving a total of 13 of my smaller repositories to GitLab. My reasoning here is that I really don't want to keep all my eggs in one basket. By moving some repositories to GitLab, I can ensure that if one or the other goes in a direction I seriously object to, I've got an alternative that I'm familiar with that I can move all my repositories to in a hurry.

I've used GitLab before. Last academic year (2016 / 2017) I interned at a local company, at which I helped them move over to Git as their primary version-control system. To that end, they chose GitLab as the server they'd use for the job - and so I got quite a bit of experience with it!

I'd use it for my personal git server but unfortunately it uses far too many resources - which I can't currently dedicate to it full-time - so I use Gitea, a lighter-weight alternative.

How does GitLab compare?

Back to the matter at hand. The experience with gitlab.com and my open-source repositories has been a hugely positive one so far! The migration process itself was completely painless - it just took a while, since there were thousands of other developers doing the exact same thing I was :P

Updating my repositories and adjusting all the links & references has been a tedious process so far - but that's nothing that GitLab can really help me with - I've gotta do it myself :-) GitLab's documentation has been really helpful for those times when I've needed some assistance to figure something out, with plenty of GitLab CI examples available to get me started.

If there's one thing I'd change, it's the releases (or tags) system. The system itself is fine, but the way the tags are presented is horrible. The text doesn't even wrap properly! A /latest shortcut URL would be welcome too.

You can't attach release binaries to tags either, which is also rather annoying. There is a workaround though - you can link directly to the GitLab CI (Continuous-Integration) artifacts instead, which seems to work just fine for my purposes so far. If it doesn't work so well in the future, I'm probably going to have to find an alternative solution for hosting release binaries - at least until they implement the feature (if at all).

Other than that, the interface, while very different to GitHub's, does feel appropriately polished and suitably easy to use. If anything, I feel as though it's rather difficult to explore any of the existing repositories on GitLab - I hadn't realised until using GitLab just how useful the new repository tags GitHub have implemented are in exploring others' repositories. I'm sure that there's a 3rd-party website that enables one to explore the repositories on gitlab.com much more effectively - I just haven't found it yet.

The sidebar when viewing a repository is quite handy - but a little unwieldy at times. Similarly, the 2-3 layers of navigation directly below the repository tagline are also rather unwieldy, and difficult to tell their differing functions apart. Perhaps a rethink here would bring these 2 different parts of the user interface together in a more cohesive and usable fashion?

All in all, gitlab.com is a great service for hosting my open-source repositories. The continuous integration services provided are superb - and completely unparalleled in my (admittedly limited) experience. The interface is functional, for the most part, and gets the job done - there are just a few rough edges in places that could do with a slight rethink.

Enjoyed this? Have your own opinion about what's happened to GitHub? Found a better service? Comment below!

Markdown editors compared

Parts of the 3 markdown editors I'll be comparing in this post.

If you didn't know already, I write all my blog posts here in markdown. I've used several markdown editors over the years (wow it's strange to write that), and I thought I'd talk a little bit about the ones I've used, what I liked about them (and the things I didn't), and what my current preference is.

Firstly though, why would you want one? Couldn't you just use a regular text editor like Notepad++? Well, yes - but a dedicated editor has several benefits: Proper spell-checking for one, a live-preview for another, and other nice features that make the experience just that little bit better (I'm even writing one of my reports for University in Markdown, and I have to say that the experience is much more pleasurable than using Microsoft Word :P).

I like Markdown itself rather a lot too. First invented by John Gruber over on daringfireball.net, Markdown is a simple markup language that's inspired by the things that people already do in instant messaging and other text-based mediums. It's designed to be both easy to read and understand on its own, and easy to write in - such that it doesn't break your flow as a writer by requiring you to look up how to apply that particular bit of formatting (I find myself having to do that with LaTeX and others a lot).


(Above: A Screenshot of StackEdit.)

The first contender up is StackEdit. It's an in-browser offering, which saves its data to your local machine (or the cloud). It comes with a number of nice features - apart from not having to install it, of course - such as synchronised scrolling in the live-preview, and a 'publish' button to send your document to a number of different sources automatically.

Since I used it last (which was quite a while ago, actually), it appears to have received a sizeable update, updating the user-interface to be more polished and aesthetically pleasing, and adding a toggleable folder structure to the left-hand-side, amongst other things.

If you can't install anything or run portable programs from a flash drive, StackEdit would be my recommendation.


(Above: A Screenshot of Classeur.)

Next up on my list is Classeur. It's another browser-based offering, with many of the same features, just with a different UI. When I discovered it I was using StackEdit, and at the time the interface of Classeur was vastly superior.

The main thing I don't like about it is that it's 'freemium' -- meaning that you get to keep about 100 documents on it, and then you either have to delete something or pay. While Markdown is just text documents I can keep on my computer, if I'm going to use a browser-based solution I would prefer to keep them all in the same place (though I never did hit this limit :P).

A screenshot of me writing this post in ghostwriter. Meta!

More recently, now that I've got a travel laptop running Linux (and not Chrome OS - as nice as that was, I ended up out-growing it), I've been using ghostwriter. It's a desktop application for both Windows and Linux. While it doesn't have synchronised scrolling for the live-preview as StackEdit does, it allows you to save your text files to your local disk (or other mounted partition!), and open them as you would a spreadsheet or any other file - in a way that you can't with a browser-based tool.

The interface is also highly customisable - if you don't like the built-in themes, you can write your own. You can also write your own stylesheet for exported documents too. In addition, it automatically detects multiple different markdown renderers that you may or may not have installed, allowing you to switch between them (and the inbuilt sundown processor) at will to get the exported document (e.g. HTML, PDF, EPUB, etc.) looking just the way you want it to.

For me, you can't beat the feeling of a native desktop application, so currently ghostwriter is my markdown editor of choice. If I can't use ghostwriter, I'll probably use StackEdit, with Classeur coming at the bottom of the pile.

If you're thinking of doing some writing, I'd highly suggest considering using a proper markdown editor such as the ones I've mentioned here. If you're not familiar with markdown, fear not! It's easy to learn, and all 3 of the editors featured here include a quick-reference guide sidebar (or floating window) that you can enable to help you along.

Found this useful? Got a different editor of choice? Comment below!

Java: First Impressions

The logos of a few of the tools and language I've been using recently.

(Above: The Android, Android Studio, and Java logos. I don't own any of these - nor is this post endorsed by any of the entities represented here - they are just for illustrative purposes.)

I've been using Java pretty extensively recently, as I've been doing a module on Android development at University. It's a pretty interesting language, so I thought I'd share my first impressions here. Later on in a separate post, I'll also talk a little bit about Kotlin, Google's new language they are championing for development on the Android platform.

Firstly, Android Studio has made it really easy to get started. The code hinting / autocompletion is fairly intelligent, and provides enough support that it's not too much of a bother programming in a new environment that you've never seen before - lessening the burden of learning a new language.

It seems to me that the whole build process for Java applications has been greatly overcomplicated though. It's slow, and keeps throwing random errors - especially when I've only just opened Android Studio. This non-determinism proves especially challenging for beginners, such as myself - as sometimes there's no real way to know what's gone wrong (the error messages are not particularly helpful - I've seen several languages with much more helpful ones).

There seem to be a bunch of assumptions that the developers have made too about the user's setup and programming style - leading to confusing situations in which it doesn't work - but there's no real way to know why, as there aren't any obvious error messages.

Despite this, Java as a language has some interesting features. As a whole, I can definitely see where Microsoft got their inspiration for C♯ from, as it's very similar - just without a lot of the syntactical sugar I'm used to in C♯ that makes expressing complex data structures and algorithms much easier, such as getters and setters.

Particularly of note is the exception system. In Java, if you want to throw an exception, you have to add throws ExceptionName to the method signature. Since your main activity in Android contains overridden methods at the top level, this means that you have to use lots of try..catch blocks to trap exceptions and deal with them before they bubble up to higher levels - otherwise it's a compilation error!

While this can be helpful, I've found that it can lead to awkward bugs in which an exception is eaten higher up, and the default value that's returned by the method that eats the exception causes strange things to happen that aren't immediately obvious - and it's only when you check the log that you realise what happened.....

The other bothersome thing I've found is the deeply-nested folder structure that a Java project appears to generate for even the simplest of projects. This makes it a rather difficult and involved process to find any code outside of the IDE - which I often do because Android Studio is far too slow and bulky just to check on or reference something quickly.

Finally, the last issue that concerns me are the licensing issues that have plagued Java in recent years. If you haven't heard, Google and Oracle (the company that owns Java) have been in disagreement over licensing fees which Oracle claims Google should pay them because they used Java in the making of Android (which is an open-source project). If Oracle are going after Google over licensing fees for just using a language, then what does that say about any projects I do? It's not exactly confidence inspiring, that's for sure. I for one will be keeping as much of my code library out of the Java ecosystem as possible.

Java seems to be the kind of language with a lot of history. While some of this has led to innovations that have ultimately improved the language, I feel that as a language it's being bogged down by lots of bloat and unnecessary garbage that it could really do without. C♯ has done a brilliant job of cutting through this clutter and rubbish, creating a language that both works with you and is easy to understand (except .NET Standard and .NET Core, but that's a story for another time :P).
