
Question: How do you recover a deleted file that's been overwritten?

Answer: With the greatest of difficulty.

The blog post following this one in a few days' time is, ironically, about backing things up. However, I actually ended up losing the entire post during the upload process to my server (it replaced both the source and destination files with an empty file!). I'd already saved it to disk, but I still almost lost it anyway...

Recovery of deleted files is awkward at best. It relies on the fact that when you 'delete' something, it isn't erased from disk at all - the sectors it was taking up are just deallocated and returned to the free pool of space, which is then re-used at will.

The most important thing to remember when you've just lost something is to not touch anything. Shut down your computer, and, if you're not confident enough yourself, call someone who knows what they're doing to help you out.

The best way to recover a file is to boot into a live CD. This is a CD (or flash drive) that holds an entire operating system (or several!). This way, no additional writes are made to the disk containing the deleted file - writes that could potentially corrupt it.

After fiddling about with this (I had to update my bootable flash drive, as Ubuntu 15.10 is out of support and I couldn't download the extundelete tool, which I'll mention shortly), I found that I'd hit a dead end.

I was using the extundelete tool (sudo apt install extundelete on apt-based systems), and it claimed that it couldn't restore the file because it had been reallocated. Here's the command I used:

sudo extundelete --restore-file /absolute/path/to/file /dev/sda7

I suspect that it was getting confused because I had a file by that name on disk that was now empty.

Anyway, after doing something else for a while, I had an idea. Since my blog posts are just text files on disk, shouldn't it be on my disk somewhere? Could I locate it at all?

As it turns out, the answer is yes. Remembering a short sentence from the post I'd just written, I started a brute-force search of my disk:

sudo dd if=/dev/sda7 | strings | grep -i "AWS S3"

This has several components to it. Explain Shell is great at providing an explanation of each bit in turn, but here's a short summary:

  • dd - This reads in the entire contents of a partition and pushes it into the following command. Find the partition name with lsblk.
  • strings - This extracts all runs of printable characters from the input stream.
  • grep - This searches (case-insensitively, thanks to -i) for the specified string in the input.

I started to get results - a whole line from the blog post that had supposedly been deleted and overwritten! This wasn't really enough though. Taking a longer snippet to reduce the noise in the output, I tried again:

sudo dd if=/dev/sda7 | strings | grep -i -C100 "To start, we'll need an AWS S3 bucket"

This time, I added -C100. This tells grep that I want to see 100 lines before and after any lines that contain the specified search string.
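
One thing worth doing if you try this yourself: redirect the output to a file on a different disk, since writing to the disk you're recovering from risks overwriting the very data you're looking for. Something along these lines should work (the snippet and output path here are made up - adjust them to suit):

sudo dd if=/dev/sda7 | strings | grep -i -C100 "a memorable snippet" >/media/other-disk/recovered.txt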

With this, I managed to recover enough of the blog post to quickly re-edit and upload it. It did appear to remove blank lines and the back-ticks at the end of a code block, but they are easy to replace.

Note to self: Always copy first when crossing file system boundaries, and delete later. Don't move all in one go!

How to set up a shared PDF printer on your local network

I've recently ended up setting up a PDF printer on my local network in an effort to transfer some pictures out of a ridiculous i-device (I tell you, Apple's iOS is the worst for being a walled garden). Since the process for doing so wasn't entirely obvious, I'm documenting it in this blog post to remind myself for later. If you find it useful, please let me know in the comments below!

Firstly, you'll need a machine running Linux. Any distribution will do, but I'll be using an apt-based distribution, so you may need to alter some of the commands here to suit your system.

Next, we need to install the cups (which stands for the Common Unix Printing System) PDF printer driver. It comes with a lot of junk if you're not careful, so here I use --no-install-recommends to avoid installing any unnecessary packages.

sudo apt install printer-driver-cups-pdf --no-install-recommends

If you've got a firewall running (which you really should - see this post of mine for more information on that), then you'll need to open port 631 for TCP traffic to allow people to print. If you're using ufw, then this should do the trick:

sudo ufw allow cups

If not, then you may need to specify the port number explicitly:

sudo ufw allow 631/tcp

With the printer installed, we next need to open it to the world. Before that though, we should make some changes to the configuration file, which is located at /etc/cups-pdf.conf. Firstly, I wanted to put the resulting PDFs into my file server's shared folder. This is achieved by editing the Out and AnonDirName settings. They should already be present in the configuration file - it's just a matter of changing their values:

Out         /absolute/path/to/output/dir
AnonDirName /absolute/path/to/output/dir

I also wanted to customise the user account and permissions that it saves the PDFs with. I did this through the AnonUser and AnonUMask settings - which should also be present by default:

AnonUser    username
AnonUMask   0007

The umask is basically an inverted permission octal. I found a good umask calculator online to do it for me :P (Don't forget the preceding 0 - it's important!)
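
If you'd like to check the maths yourself, you can do it in bash - this is just an illustration of how the mask strips permission bits, not part of the cups-pdf setup:

# umask 0007 strips all of the "other" permission bits:
# the default file mode 0666 becomes 0660 (rw-rw----), and directories go from 0777 to 0770 (rwxrwx---)
printf "files: %o\ndirs: %o\n" "$((0666 & ~0007))" "$((0777 & ~0007))"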

Finally, I experienced an issue whereby cups kept overwriting the same file again and again because the iPad wasn't smart enough to send the photos to print with their actual filenames - instead opting to send them all as Photo.pdf. Thankfully though, cups-pdf has the Label option (also specified by default) that ensures that output filenames don't clash. Setting it to 1 instead of 0 solved the problem for me:

Label       1

Note that some of these properties may be prefixed with a hash (#). You'll need to remove this in order for it to take effect.

With the new PDF printer configured, it's time to open it up to our local network. Here's how to do that:

sudo cupsctl --share-printers
sudo lpadmin -p pdf -o printer-is-shared=true

Note that if you want to open it up to more than your local subnet you'll need to do some additional configuration - such as configuring authentication, for instance. Such things are beyond the scope of this blog post, but if there's the demand (comment below!) I can certainly investigate writing something up.

Found this useful? Got a better / different solution? Comment below!

Job Scheduling on Linux

Scheduling jobs to happen at a later time on a Linux based machine can be somewhat confusing. Confused by 5 4 8-10/4 6/4 *? Baffled by 5 */4 * * *? All will be revealed!

cron

Scheduling jobs on a Linux machine can be done in several ways. Let's start with cron - the primary program that orchestrates the whole proceeding. Its name comes from the Greek word Chronos, which means time. By filling in a crontab (read cron-table), you can tell it what to do when. It's essentially a time-table of jobs you'd like it to run.

Your Linux machine should come with cron installed already. You can check whether it's running by entering this command into your terminal:

if [[ "$(pgrep -c cron)" -gt 0 ]]; then echo "Cron is installed :D"; else echo "Cron is not installed :-("; fi

If it isn't installed or running, then you'll have to investigate why. The most common cause is that it isn't installed. It's normally in the official repositories of most distributions - on Debian-based systems, sudo apt install cron should suffice. Arch-based users may need to check that the system service is enabled, and enable it manually if it isn't - for example, as shown below.
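
On a standard systemd-based Arch install (where the cron package is usually called cronie - check your distribution if you're unsure), something like this should do it:

# Enable and start the cron daemon in one go
sudo systemctl enable --now cronie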

With cron set up and ready to go, we can start adding jobs to it. This is done by way of a crontab, as explained above. Each user has their own crontab, so that they can each configure their own individual set of jobs. To edit it, type this:

crontab -e

This will open your favourite editor with your crontab ready for editing (if you'd like to change your editor, do sudo update-alternatives --config editor or change the EDITOR environment variable). You should see a bunch of lines like this:

# Edit this file to introduce tasks to be run by cron.
# 
# Each task to run has to be defined through a single line
# indicating with different fields when the task will be run
# and what command to run for the task
# 
# To define the time you can provide concrete values for
# minute (m), hour (h), day of month (dom), month (mon),
# and day of week (dow) or use '*' in these fields (for 'any').
# 
# Notice that tasks will be started based on the cron's system
# daemon's notion of time and timezones.
# 
# Output of the crontab jobs (including errors) is sent through
# email to the user the crontab file belongs to (unless redirected).
# 
# For example, you can run a backup of all your user accounts
# at 5 a.m every week with:
# 0 5 * * 1 tar -zcf /var/backups/home.tgz /home/
# 
# For more information see the manual pages of crontab(5) and cron(8)
# 
# m h  dom mon dow   command

I'd advise you to keep this for future reference - just in case you find yourself in a pinch later - so scroll down to the bottom and start adding your jobs there.

Let's look at the syntax for telling cron about a job next. This is best done by example:

0 1 * * 7   cd /root && /root/run-backup

This job, as you might have guessed, runs a custom backup script. It's one I wrote myself, but that's a story for another time (comment below if you'd like me to post about that). What we're interested in is the bit at the beginning: 0 1 * * 7. Scheduling a cron job is done by specifying 5 space-separated values. In the case of the above, the job will run at 1am every Sunday morning. The order is as follows:

  • Minute
  • Hour
  • Day of the Month
  • Month
  • Day of the week

For each of these values, a number of different specifiers can be used. For example, specifying an asterisk (*) will cause the job to run at every interval of that column - e.g. every minute or every hour. If you want to run something on every minute of the day (such as a logging or monitoring script), use * * * * *. Be aware of the system resources you can use up by doing that though!

Specifying a number will restrict the job to that specific point in the interval. For example, 10 * * * * will run the job at 10 minutes past every hour, and 22 3 * * * will run a job at 03:22 in the morning every day (I find such times great for maintenance jobs).

Sometimes, every hour or every minute is too often. Cron can handle this too! For example, 3 */2 * * * will run a job at 3 minutes past every second hour. You can alter this at your leisure: the value after the forward slash (/) decides the interval (e.g. */3 would be every third, */15 would be every 15th, and so on).

The last column, the day of the week, is an alternative to the day of the month column. It lets you specify, as you may assume, the day of the week a job should run on. This can be specified in 2 ways: with the numbers 0-7 (both 0 and 7 mean Sunday), or with 3-letter short codes such as MON or SAT. For example, 6 20 * * WED runs at 6 minutes past 8 in the evening on Wednesday, and 0 */4 * * 0 runs every 4th hour on a Sunday.
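
You can also specify ranges with a dash (-) and lists with commas, and combine them with the step syntax from above - that's what the 8-10/4 in the teaser at the top is doing. A few illustrative examples of my own (the commands are placeholders):

# At half past every hour from 9am to 5pm, Monday to Friday:
30 9-17 * * 1-5   /path/to/job
# At midnight on the 1st and 15th of each month:
0 0 1,15 * *   /path/to/job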

The combinations are endless! Since it can be a bit confusing combining all the options to get what you want, crontab.guru is great for piecing cron-job specifications together. It describes your cron-job spec in plain English for you as you type!

crontab.guru showing a random cronjob spec.

(Above: crontab.guru displaying a random cronjob spec)

What if I turn my computer off?

Ok, so cron is all very well, but what if you turn your machine off? Well, if cron isn't running at the time a job should be run, then it won't get executed. For those of us who don't leave their laptops on all the time, all is not lost! It's time to introduce the second piece of software at our disposal.

Enter stage left: anacron. Built to be a complement to cron, anacron sets up 3 folders:

  • /etc/cron.daily
  • /etc/cron.weekly
  • /etc/cron.monthly

Any executable scripts in these folders will be run at daily, weekly, and monthly intervals respectively by anacron, and it respects the hash-bang (that #! line at the beginning of a script) too!

Most server systems do not come with anacron pre-installed, though it should be present in your distribution's official repositories. Once you've installed it, edit root's crontab (with sudo crontab -e if you can't remember how) and add a job that executes anacron every hour like so:

# Run anacron every hour
5 * * * *   /usr/sbin/anacron

This is important, as anacron does not in itself run all the time like cron does (a program that runs constantly in the background like that is called a daemon in the Linux world) - it needs a helping hand to get it to run.

If you've got more specific requirements, then anacron also has its own configuration file you can edit. It's found at /etc/anacrontab, and has a different syntax. In the anacron table, jobs follow this pattern:

  • period - The interval, in days, that the job should run
  • delay - The offset, in minutes, that the job should run at
  • job identifier - A textual identifier (without spaces, of course) that identifies the job
  • command - The command that should be executed

You'll notice that there are 3 jobs specified already - one for each of the 3 folders mentioned above. You can specify your own jobs too. Here's an example:

# Do the weekly backup
7   20  run-backup  cd /root/data-shape-backup && ./do-backup;

The above job runs every 7 days, with an offset of 20 minutes. Note that I've included a comment (the line starting with a hash #) to remind myself what the job does - I'd recommend you always include such a comment for your own reference, whether you're using cron, anacron, or otherwise.

I'd also recommend that you test your anacron configuration file after editing it to ensure it's valid. This is done like so:

anacron -T

I'm not an administrator, can I still use this?

Sure you can! If you've got anacron installed (you could even compile it from source locally if you haven't) and want to specify some jobs for your local account, then that's easily done too. Just create an anacrontab file anywhere you please, and then in your regular crontab (crontab -e), tell anacron where you put it like this:

# Run anacron every hour
5 * * * *   /usr/sbin/anacron -t "path/to/anacrontab"

What about one-off jobs?

Good point. cron and anacron are great for repeating jobs, but what if you want to set up a one-off job - say, one that automatically disables your firewall a few minutes after you enable it, just in case you accidentally lock yourself out? Thankfully, there's an answer for this use-case too: atd.

atd is similar to cron in that it runs a daemon in the background, but instead of executing jobs specified in a crontab, you tell it when you want it to execute a series of commands, and then enter the commands themselves. For example:

$ at now + 10 minutes
warning: commands will be executed using /bin/sh
at> echo -e "Testing"  
at> uptime
at> <EOT>
job 4 at Thu Jul 12 14:36:00 2018

In the above, I tell it to run the job 10 minutes from now, and enter a pair of commands. To end the command list, I hit CTRL + D on an empty line. The output of the job will be emailed to me automatically if I've got that set up (cron and anacron also do this).
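
You don't have to type the commands in interactively, either - at reads them from its standard input, so you can pipe a job straight in. Returning to the firewall example from earlier (a sketch - adapt the command and the delay to your own setup):

# Schedule the firewall to be disabled in 10 minutes, in case we lock ourselves out
echo "ufw disable" | sudo at now + 10 minutes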

Specifying a time can be somewhat fiddly, but it's also quite flexible:

  • at tomorrow
  • at now + 5 hours
  • at 16:06
  • at next month
  • at 2018 09 25

....and so on. Listing the current scheduled jobs is also just as easy:

atq

This will output a list of scheduled jobs that haven't been run yet. You can't see any jobs that aren't created by you unless you're root (use sudo), though. You can use the job ids listed here to cancel a job too:

# Remove job id 4:
atrm 4

Conclusion

That just about concludes this whirlwind tour of job scheduling on Linux systems. We've looked at how to schedule jobs with cron, and how to ensure our jobs get run with anacron - even when the target machine isn't turned on all the time. We've also looked at one-time jobs with atd, and how to manage the job queue.

As usual, this is a starting point - not an ending point! Job scheduling is just the beginning. From here, you can look at setting up automated backups. You could investigate setting up an email server, and how that integrates with cron. You can utilise cron to perform maintenance for your next great web (or other!) application. The possibilities are endless!

Found this useful? Still confused? Comment below!

Read / Write Disk Performance Testing in Bash

Recently I needed to quickly (and non-destructively) test the read / write performance of a flash drive of mine. Naturally, I turned my attention to my terminal. This post is me documenting what I did so that I can remember for next time :P

Firstly, to test the speed of a disk, we need some data to test with. Since lots of small files will inevitably cause slowdowns due to the overhead of writing each file's metadata and inode information, it makes the most sense to use one gigantic file rather than tons of small ones. Here's what I did to generate a 1 Gigabyte file filled with zeroes:

dd if=/dev/zero of=/tmp/testfile.bin bs=1M count=1024

Cool. Next, we need to copy it to the target disk and measure the time it took. Then, since we know the size of the file (1073741824 bytes, to be exact), we can calculate the speed at which the copy took place. Here's my first attempt:

time dd if=/tmp/testfile.bin >testfile.bin

If you run this, you might find that it doesn't take very long at all, and you get a speed of something like ~250MiB / sec! While impressive, I seriously doubt that my flash drive can write that fast - and I'm pretty sure it can't read that fast either. So what's going on?

Well, it turns out that Linux is caching the disk write operations in a buffer, and then doing them in the background for us. Whilst fine for ordinary operation, this doesn't give us an accurate representation of how fast it's actually writing to the disk. Thankfully, there's something we can do about this: Use the sync command. sync will flush all cached write operations to disk for us, giving us the actual time it took to write the 1 GiB file to disk. Here's the altered command:

sync;
time sh -c 'dd if=/tmp/testfile.bin >testfile.bin; sync'

Very cool! Now, we can just take the time it took and do some simple maths to calculate the write speed of our disk. What about the read speed though? Well, to test that, we'll first need to clear out the page cache - another one of Linux's (many) caches that holds portions of files that have recently been accessed for faster retrieval - because as before, we're not interested in the speed of the cache! Here's how to do that:

echo 1 | sudo tee /proc/sys/vm/drop_caches

With the correct cache cleared, we can test the read speed accurately. Here's how I did it:

time dd if=testfile.bin of=/dev/null

Fairly simple, right? At a later date I might figure out a way of automating this, but for the occasional use now and again this works just fine :)
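
In the meantime, here's a rough sketch of the kind of thing I have in mind - it assumes the 1 GiB test file from earlier, and that bc is installed:

#!/usr/bin/env bash
# Time the write (including the sync), then divide the file size by the time taken
sync;
start="$(date +%s.%N)";
sh -c 'dd if=/tmp/testfile.bin of=testfile.bin; sync';
end="$(date +%s.%N)";
echo "Write speed: $(echo "scale=2; 1024 / ($end - $start)" | bc) MiB/s";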

Found this useful? Got a better way of doing it? Want to say hi? Post in the comments below!

Rendering LaTeX documents to PDF: Attempt #2

It was all going rather well, actually - until I discovered that pandoc doesn't support regular bibliographies / references. Upon discovering this, I ended up with a bit of a problem. Thankfully, the answer lay in pdflatex - but getting to the point where I could use it without having it crash on me (which, by the way, it can't even do properly - it gives an exit code of 0 when crashing! O.o) was not a trivial journey.

This blog post is a follow up to my first post on rendering LaTeX documents with pandoc, and is my attempt to document what I did to get it to work. To start with, I installed texlive properly. Here's how to do that on apt-based systems:

sudo apt install texlive-latex-extra --no-install-recommends

The --no-install-recommends flag is useful here to avoid ~450MiB of useless documentation (in PDF form, apparently) being dumped to your hard drive. I've also got an Arch-based system (it's actually Artix Linux, which I've blogged about) that I've done this on, so here's the install command for that kind of system:

sudo pacman -S texlive-latexextra

Once that's installed, we can use it to render our LaTeX document to PDF. Upon discussing my issues with my Lecturer at University, I discovered that you actually have to run 3 commands in succession in order to render a single PDF. Here they are:

bibtex filename
pdflatex --output-directory=. filename.tex
pdflatex --output-directory=. filename.tex

The first one compiles the bibliography using BiBTeX. If it isn't installed already, you might need to search your distribution's repositories and install it. Next, we run the LaTeX file through pdflatex from TeXLive not once but twice - as it apparently needs to resolve the references on the first pass (why it can't do them all in one pass I have no idea :P).

It's also worth noting that the bibtex command doesn't like you to append the filename extension - it does it automatically, apparently.
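
To save typing all three commands every time, you could wrap them up in a function in something like .bash_aliases. Here's a minimal sketch - the function name is my own invention, and it assumes your bibliography shares the .tex file's base name:

# Hypothetical helper: render a LaTeX document (with bibliography) to PDF
# Usage: latexbuild filename   (without the .tex extension)
function latexbuild() {
    bibtex "$1" &&
    pdflatex --output-directory=. "$1.tex" &&
    pdflatex --output-directory=. "$1.tex";
}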

That's about everything I've got on the process so far. If you've got anything else to add, please let me know in the comments below (I'm rather new to this whole LaTeX thing....)


Rendering LaTeX documents to PDF on Linux (and maybe Windows too)

I'm starting to write another report for University, and unlike other reports, this one apparently has to be a rather specific format. To that end, I've got two choices, apparently: Use the provided Word / LibreOffice template, or use a LaTeX template instead. After the trouble and frustration I had with LibreOffice for my previous report, I've naturally decided that using the LaTeX template might be a good idea.

After downloading it, I ended up doing some research and troubleshooting to get it to render properly to a PDF. Now that I've figured it out, I thought I'd share it here for anyone else who ends up experiencing difficulties or is unsure on how it's done.

The way I'm going to be using it is with a tool called Pandoc. First, install it like so:

sudo apt install pandoc texlive-fonts-recommended

Adjust as necessary for your distribution - Windows users will need to read the download instructions. The texlive-fonts-recommended package is ~66MiB(!), but it contains a bunch of fonts that are needed when you're rendering LaTeX documents, apparently.

With the dependencies installed, here's the command to convert a LaTeX document to a PDF:

pandoc -s input.tex -o output.pdf

Replace input.tex with the path to your input file, and output.pdf with the desired path to the output file. I haven't figured out how to set the font to sans-serif yet, but I'll probably make another post about it when I do.

Found this helpful? Still having issues? Let me know below! I don't have analytics on here, so that's the only way I'll know if anyone reads this :-)


Jump around a filesystem with a bit of bash

Your shell in the middle of a teleporter.

(Banner remixed from images found on openclipart)

I've seen things like jump, which allow you to bookmark places on your system so that you can return to them faster. The trouble is, I keep forgetting to use it. I open the terminal and realise that I need to be in a specific directory, and forget to bookmark it once I cd to it - or I forget that I bookmarked it and cd my way there anyway :P

To solve the problem, I thought I'd try implementing my own simplified system, under the names teleport, telepeek, and telepick. Obviously, we'll have to put these scripts in something like .bash_aliases as functions - otherwise they won't be able to change the directory of the terminal itself. Let's start with teleport:

function teleport() {
    cd "$(find . -type d | grep -iP "$@" | head -n1)";
}

Not bad for a first attempt! Basically, it does a find to list all the subdirectories in the current directory, filters the results with the specified regex, and changes directory to the first result returned. Here's an example of how it's used:

~ $ teleport 'pep.*mint'
~/Documents/code/some/path/pepperminty-wiki/ $ 

We can certainly improve it though. Let's start by removing that head call:

function teleport() {
    cd "$(find . -type d | grep -m1 -iP "$@")";
}

What about all those Permission denied messages that pop up when you're jumping around in places that you might not have permission to access? Let's suppress those too:

function teleport() {
    cd "$(find . -type d 2>/dev/null | grep -m1 -iP "$@")";
}

Much better. With a teleport command in hand, it might be nice to inspect the list of directories the find + grep combo finds. For that, let's invent a telepeek variant:

function telepeek() {
    find . -type d 2>/dev/null | grep -iP "$@" | less
}

Very cool. It doesn't have line numbers though, and they're useful. Let's fix that:

function telepeek() {
    find . -type d 2>/dev/null | grep -iP "$@" | less -N
}

Better, but I'd prefer them to be highlighted so that I can tell them apart from the directory paths. For that, we've got to change our approach to the problem:

function telepeek() {
    find . -type d 2>/dev/null | grep -iP "$@" | cat -n | sed 's/^[ 0-9]*[0-9]/\o033[34m&\o033[0m/' | less -R
}

By using a clever combination of cat -n to add the line numbers and a strange sed recipe (which I found in a comment on this Stack Overflow answer) to highlight the numbers themselves, we can get the result we want.

This telepeek command has given me an idea. Why not ask for an index to jump to after going to the trouble of displaying line numbers and jump to that directory? Let's cook up a telepick command!

function telepick() {
    telepeek "$@";
    read -p "jump to index: " line_number;
    cd "$(find . -type d 2>/dev/null | grep -iP "$@" | sed "${line_number}q;d")";
}

That wasn't too hard. By using a few different commands rather like lego bricks, we can very easily create something that does what we want with minimal effort. The read -p "jump to index: " line_number bit fetches the index that the user wants to jump to, and sed comes to the rescue again to pick out the line number we're interested in with sed "${line_number}q;d".
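
Here's what using it might look like (an illustrative session - the paths and indices are made up):

~ $ telepick 'pep.*mint'
jump to index: 2
~/Documents/code/some/path/pepperminty-wiki/ $ 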

Update April 20th 2018: I've updated the approach here to support spaces everywhere by adding additional quotes, and utilising $@ instead of $1.

Pepperminty Wiki CLI

The Pepperminty Wiki CLI in a terminal window, with a peppermint overlaid in the top left of the image.

I've got a plan. Since I'm taking the Mobile Development module next semester, I'd like to create an Android app for Pepperminty Wiki that will let me edit one or more instances of Pepperminty Wiki while I'm, say, on a bus.

To this end, I'll need to make sure that Pepperminty Wiki itself is all ready to go - which primarily entails making sure that its REST API is suitably machine-friendly, so that I can pull down all the information I need in the app I build.

Testing this, however, is a bit of a challenge - since I haven't actually started the module yet. My solution, as you might have guessed by the title of this blog post, is to build a command-line interface (CLI) instead. I've been writing a few bash scripts recently, so I tried my hand at creating something that's slightly more polished. Here's a list of the features supported at the time of posting:

  • Listing all pages
  • Viewing a specific page
  • Listing all revisions of a page
  • Viewing a specific revision of a page

Support for searching is on the cards, but it's currently waiting on support for grabbing search results as json / plain text from Pepperminty Wiki itself.

I'll be updating it with other things too as I think of them, but if you'd like to give it a try now, then here's the source:

It should update dynamically as I update the script. Simply save it to a file called peppermint - and then you can run ./peppermint to get an overview of the commands it supports. To get detailed help on a specific command, simply run ./peppermint {command_name} to get additional help about that specific command - and additional help for that command will be displayed if it supports any further arguments (it will be executed directly if not).

Sound interesting? Any particular aspect of this script you'd like explaining in more detail? Want to help out? Leave a comment below!

The Great Migration of Manjaro

The Artix Linux logo. Its relevance will become clear by the end! :P

It was just before lunch in the library, and I was checking my university emails on my travelling laptop that runs Manjaro OpenRC. While that was going on, I noticed a few updates it had notified me about - and I started to install them with yaourt -Syua. First mistake.

During the installation, it decided to upgrade OpenRC to the version in the AUR (Arch User Repository), but I didn't think anything of it particularly - I knew that Manjaro OpenRC was being deprecated. Second mistake.

Once the updates were complete, I shut it down and went on my way - or at least I tried to - it wouldn't shut down, instead proceeding to log out and leave it at that. I resolved to investigate the problem when I got home. Third mistake.

By the time I came to use it again, I was greeted with an ominous message:

[Firmware Bug]: TSC_DEADLINE disabled due to Errata; please update microcode to version: 0x52 (or later)
Failed to execute /init (error -2)
Kernel panic - not syncing: No working init found. Try passing init= option to kernel. See Linux Documentation/admin-guide/init.rst
CPU: 0 PID: 1 Comm: swapper/0 Not tainted 4.13.2-1-MANJARO #1
Hardware name: Entroware Apollo/Apollo, BIOS 1.05.05 04/27/2017

Hrm. That's odd. Maybe something went wrong in the update? Linux has what are called kernel parameters that tell it how to boot. They specify things like "here's the root partition of the system", and "please let me edit files on the system after booting". To understand how this fits into the next part of the story, it's first necessary to take a quick detour and look at how, precisely, the Linux kernel goes about booting a system. This is best explained with a diagram:

The linux kernel boot process.

(Rendered with Ascidia. Textual diagram source available here)

  1. BIOS / UEFI POST - The starting point of the boot process. The BIOS / UEFI turns on all the devices, runs some basic hardware checks, and (usually) gives the user a choice of what they want to boot from.
  2. rEFInd - grub may be used instead of rEFInd, but the basic principle is the same: it asks the user how they want to boot from the hard drive. Kernel parameters are decided on here.
  3. Initialisation: The Linux kernel is executed by the bootloader, and it proceeds to initialise itself and the connected devices.
  4. Mount initial RAM disk: The Linux kernel encounters a chicken-and-egg problem rather early on: how can it start talking to the connected devices if it doesn't know how to talk to them? The initial RAM disk solves that problem: it contains a bunch of drivers and other such components that the kernel needs to initialise all the connected devices. It's like a cut-down root file system, in a sense.
  5. Load drivers: The Linux kernel loads the drivers from the initial RAM disk (aka initrd) and starts initialising all the connected devices.
  6. Mount root (read-only): The main root file system is mounted next, but only in read-only mode while the boot process finishes.
  7. Execute init: It is at this point that the very first process is executed. It usually resides at /sbin/init, but this can be changed through the init kernel parameter.
  8. Mount root (read-write): The init process (under SysVinit at least) then remounts the root filesystem such that it is writeable.
  9. Mount other partitions: The next job is the mounting of the other partitions in /etc/fstab. This is also done by SysVinit if I recall correctly.
  10. Reach runlevels: The main runlevels are now executed in order by the service manager (e.g. OpenRC).

Phew, that took more explaining than I thought! And to think it all happens in the span of about 10 seconds....! With that out of the way, let's continue with the story.

Let's try specifying the init kernel parameter - maybe the update cleared it for some random reason....? I had no idea what I was getting myself into :P
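
For reference, kernel parameters are appended to the kernel line in the bootloader's editor (in grub, press e at the boot menu). A hypothetical example line might look something like this:

linux /boot/vmlinuz-4.13-x86_64 root=/dev/sda7 rw init=/sbin/init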

Unexpectedly, specifying init=/sbin/init didn't work. Neither did specifying init=/bin/sh. At this point, I suspected that there was something seriously wrong. I (correctly) guessed that it was the update I performed that morning that was to blame. After a bunch of backing and forthing, I managed to get hold of a previous copy of the openrc package that was replaced by the 0.27 version from the AUR. After doing a full backup, I tried installing it and removing the new openrc-sysvinit package that was also installed.

Before we continue further, I should probably explain how I managed to install the previous package version. Didn't I just explain that my system wasn't bootable? Well, yes. But I also had the original manjaro-architect installation media that I used to build the system in the first place. With that in hand, I could use rEFInd to boot from that (my UEFI firmware makes it a bit of a pain otherwise!), and then mount the root partition of the broken system and chroot into it. This process allows me to pretend that the system is actually booted, while piggybacking off the live installation media for the boot process itself. It works a bit like this:

lsblk # Find the root partition
mkdir /mnt/os;
mount /dev/sdZY /mnt/os # Mount the root partition
mount /dev/sdAB /mnt/os/boot/efi # Mount the EFI partition
manjaro-chroot /mnt/os bash # Enter the chroot and execute bash

Back to the story. Sadly, valiant though my effort to replace the openrc and openrc-sysvinit packages was, it did not solve the problem. Eventually, I ended up having to perform a blind migration to Artix Linux, the spiritual successor to both Manjaro OpenRC and Arch OpenRC (apparently the developers of both came together to create Artix Linux).

Eventually, I ended up with a successful migration that I performed inside the chroot, and the system was bootable again! Next time, I'll always run pacman -Syu before yaourt -Syua. I'll also set up a temporary backup solution for my system files (I've already got one in place for my personal files) while I figure out a more permanent one that backs up across the network.


Run a program on your dedicated AMD graphics card on Linux

I've recently figured out how to run a program on my dedicated AMD R7 M445 graphics card in Ubuntu 17.04, and since it's taken me far too long to get around to figuring it out, I thought I'd note it down here for future reference - if it helps you too, let me know in the comments below!

It's actually really simple. First, check that your dedicated AMD graphics card shows up with lspci:

lspci

If it's anything like my setup, you'll get a pair of rows like this (though they might not be next to each other):

00:02.0 VGA compatible controller: Intel Corporation HD Graphics 620 (rev 02)
01:00.0 Display controller: Advanced Micro Devices, Inc. [AMD/ATI] Topaz XT [Radeon R7 M260/M265 / M340/M360 / M440/M445] (rev c3)

Thankfully, my dedicated AMD card is showing up (better than it did in previous versions of Ubuntu, too, which thought it was an M225!). Next, we need to check that the amdgpu kernel module is loaded with a quick lsmod:

lsmod | grep -i amd

On my laptop, I get this:

amdkfd                139264  1
amd_iommu_v2           20480  1 amdkfd
amdgpu               1564672  1
i2c_algo_bit           16384  2 amdgpu,i915
ttm                    98304  1 amdgpu
drm_kms_helper        151552  2 amdgpu,i915
drm                   352256  9 amdgpu,i915,ttm,drm_kms_helper

Yay! It's loaded. Now to do a test to see if we can run anything on it:

glxinfo | grep "OpenGL renderer"
DRI_PRIME=1 glxinfo | grep "OpenGL renderer"

The above runs glxinfo twice: once on the integrated graphics card, and once on the dedicated graphics card. The key here is the DRI_PRIME=1 environment variable - this tells the amdgpu driver that this process should run on the dedicated graphics card and not the integrated one. On my machine, I get this output:

OpenGL renderer string: Mesa DRI Intel(R) HD Graphics 620 (Kabylake GT2) 
OpenGL renderer string: Gallium 0.4 on AMD ICELAND (DRM 3.9.0 / 4.10.0-33-generic, LLVM 4.0.0)

As you can see, the latter invocation of the command ran on the dedicated AMD graphics card, and the former on the integrated graphics. So simple!

Now that we've verified that it works, we can do it with any program:

DRI_PRIME=1 inkscape
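
If you find yourself launching a particular program this way a lot, you could give it a permanent alias in something like .bash_aliases (the alias name here is my own invention):

# Hypothetical alias: always run inkscape on the dedicated graphics card
alias dinkscape='DRI_PRIME=1 inkscape'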

Did you find this helpful? Did it work (or not)? Let me know in the comments!
