
Job Scheduling on Linux

Scheduling jobs to happen at a later time on a Linux based machine can be somewhat confusing. Confused by 5 4 8-10/4 6/4 *? Baffled by 5 */4 * * *? All will be revealed!

cron

Scheduling jobs on a Linux machine can be done in several ways. Let's start with cron - the primary program that orchestrates the whole process. Its name comes from the Greek word Chronos, which means time. By filling in a crontab (read: cron-table), you can tell it what to do and when. It's essentially a timetable of jobs you'd like it to run.

Your Linux machine should come with cron installed already. You can check if cron is installed and running by entering this command into your terminal:

if [[ "$(pgrep -c cron)" -gt 0 ]]; then echo "Cron is installed :D"; else echo "Cron is not installed :-("; fi

If it isn't installed or running, then you'll have to investigate why. The most common reason is that it simply isn't installed. It's normally in the official repositories of most distributions - on Debian-based systems, sudo apt install cron should suffice. Arch-based users may need to enable and start the system service manually.

With cron set up and ready to go, we can start adding jobs to it. This is done by way of a crontab, as explained above. Each user has their own crontab, so that each can configure their own individual set of jobs. To edit it, type this:

crontab -e

This will open your favourite editor with your crontab ready for editing (if you'd like to change your editor, run sudo update-alternatives --config editor or change the EDITOR environment variable). You should see a bunch of lines like this:

# Edit this file to introduce tasks to be run by cron.
# 
# Each task to run has to be defined through a single line
# indicating with different fields when the task will be run
# and what command to run for the task
# 
# To define the time you can provide concrete values for
# minute (m), hour (h), day of month (dom), month (mon),
# and day of week (dow) or use '*' in these fields (for 'any').
# 
# Notice that tasks will be started based on the cron's system
# daemon's notion of time and timezones.
# 
# Output of the crontab jobs (including errors) is sent through
# email to the user the crontab file belongs to (unless redirected).
# 
# For example, you can run a backup of all your user accounts
# at 5 a.m every week with:
# 0 5 * * 1 tar -zcf /var/backups/home.tgz /home/
# 
# For more information see the manual pages of crontab(5) and cron(8)
# 
# m h  dom mon dow   command

I'd advise you to keep this for future reference - just in case you find yourself in a pinch later - so scroll down to the bottom and start adding your jobs there.

Let's look at the syntax for telling cron about a job next. This is best done by example:

0 1 * * 7   cd /root && /root/run-backup

This job, as you might have guessed, runs a custom backup script. It's one I wrote myself, but that's a story for another time (comment below if you'd like me to post about that). What we're interested in is the bit at the beginning: 0 1 * * 7. Scheduling a cron job is done by specifying 5 space-separated values. In the case of the above, the job will run at 1am every Sunday morning. The order is as follows:

  • Minute
  • Hour
  • Day of the Month
  • Month
  • Day of the week
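
Here's the example job from above again, with each of the five fields labelled:

# m h  dom mon dow   command
0 1 * * 7   cd /root && /root/run-backup
#
# minute        0  - at minute zero (on the hour)
# hour          1  - at 1am
# day of month  *  - every day of the month
# month         *  - every month
# day of week   7  - but only on Sundays (0 and 7 both mean Sunday)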

For each of these values, a number of different specifiers can be used. For example, specifying an asterisk (*) will cause the job to run at every interval of that column - e.g. every minute or every hour. If you want to run something every minute of the day (such as a logging or monitoring script), use * * * * *. Be aware of the system resources you can use up by doing that, though!

Specifying a number will restrict it to a specific time in an interval. For example, 10 * * * * will run the job at 10 minutes past every hour, and 22 3 * * * will run a job at 03:22 every morning (I find such times great for maintenance jobs).

Sometimes, every hour or every minute is too often. Cron can handle this too! For example, 3 */2 * * * will run a job at 3 minutes past every second hour. You can alter this at your leisure: the value after the forward slash (/) decides the interval (i.e. */3 would be every third, */15 would be every 15th, etc.).

The last column, the day of the week, is an alternative to the day of the month column. It lets you specify, as you may have guessed, the day of the week a job should run on. This can be specified in 2 ways: with the numbers 0-6, or with 3-letter short codes such as MON or SAT. For example, 6 20 * * WED runs at 6 minutes past 8 in the evening on Wednesdays, and 0 */4 * * 0 runs every 4th hour on Sundays.
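
Here are a few more illustrative schedules combining these specifiers (the commands and paths are hypothetical):

# Every 15 minutes
*/15 * * * *        /path/to/check-status
# At 06:30 on the 1st of every month
30 6 1 * *          /path/to/monthly-report
# On the hour, from 9am to 5pm, Monday (1) to Friday (5)
0 9-17 * * 1-5      /path/to/office-hours-job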

The combinations are endless! Since it can be a bit confusing combining all the options to get what you want, crontab.guru is great for piecing cron-job specifications together. It describes your cron-job spec in plain English for you as you type!

(Above: crontab.guru displaying a random cronjob spec.)

What if I turn my computer off?

Ok, so cron is all very well, but what if you turn your machine off? Well, if cron isn't running at the time a job is scheduled, then the job won't get executed. For those of us who don't leave our laptops on all the time, all is not lost! It's time to introduce the second piece of software at our disposal.

Enter stage left: anacron. Built to be a complement to cron, anacron sets up 3 folders:

  • /etc/cron.daily
  • /etc/cron.weekly
  • /etc/cron.monthly

Any executable scripts in these folders will be run at daily, weekly, and monthly intervals respectively by anacron, and it respects the hash-bang (that #! line at the beginning of the script) too!
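
For example, a simple daily job might look like this (the filename and the path it cleans up are hypothetical; note also that on Debian-based systems the filename must not contain a dot, or run-parts will skip it):

#!/usr/bin/env bash
# /etc/cron.daily/tidy-tmp-cache: a hypothetical daily clean-up job.
# Don't forget to make it executable: sudo chmod +x /etc/cron.daily/tidy-tmp-cache
find /tmp/my-app-cache -type f -mtime +7 -delete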

Most server systems do not come with anacron pre-installed, though it should be present in your distribution's official repositories. Once you've installed it, edit root's crontab (with sudo crontab -e if you can't remember how) and add a job that executes anacron every hour, like so:

# Run anacron every hour
5 * * * *   /usr/sbin/anacron

This is important, as anacron does not run continuously in the background like cron does (a program that does this is called a daemon in the Linux world) - it needs a helping hand to get it to run.

If you've got more specific requirements, then anacron also has its own configuration file you can edit. It's found at /etc/anacrontab, and has a different syntax. In the anacron table, jobs follow this pattern:

  • period - The interval, in days, that the job should run
  • delay - The offset, in minutes, that the job should run at
  • job identifier - A textual identifier (without spaces, of course) that identifies the job
  • command - The command that should be executed

You'll notice that there are 3 jobs specified already - one for each of the 3 folders mentioned above. You can specify your own jobs too. Here's an example:

# Do the weekly backup
7   20  run-backup  cd /root/data-shape-backup && ./do-backup;

The above job runs every 7 days, with an offset of 20 minutes. Note that I've included a comment (the line starting with a hash #) to remind myself what the job does - I'd recommend you always include such a comment for your own reference, whether you're using cron, anacron, or otherwise.

I'd also recommend that you test your anacron configuration file after editing it to ensure it's valid. This is done like so:

anacron -T

I'm not an administrator, can I still use this?

Sure you can! If you've got anacron installed (you could even compile it from source locally if you haven't) and want to specify some jobs for your local account, then that's easily done too. Just create an anacrontab file anywhere you please, and then in your regular crontab (crontab -e), tell anacron where you put it like this:

# Run anacron every hour
5 * * * *   /usr/sbin/anacron -t "path/to/anacrontab"
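
One caveat with this: anacron keeps its timestamp files in /var/spool/anacron by default, which regular users can't write to. The -S flag lets you point it at a directory you own instead - something like this (the paths are just examples):

# Run anacron every hour against a user-owned anacrontab and spool directory
5 * * * *   /usr/sbin/anacron -t "$HOME/.anacron/anacrontab" -S "$HOME/.anacron/spool"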

What about one-off jobs?

Good point. cron and anacron are great for repeating jobs, but what if you want a one-off job - say, scheduling your firewall to auto-disable itself in a few minutes before you enable it, just in case you accidentally lock yourself out? Thankfully, there's an answer for this use-case too: atd.

atd is similar to cron in that it runs a daemon in the background, but instead of executing jobs specified in a crontab, you tell it when you want it to execute a series of commands, and then enter the commands themselves. For example:

$ at now + 10 minutes
warning: commands will be executed using /bin/sh
at> echo -e "Testing"  
at> uptime
at> <EOT>
job 4 at Thu Jul 12 14:36:00 2018

In the above, I tell it to run the job 10 minutes from now, and enter a pair of commands. To end the command list, I hit CTRL + D on an empty line. The output of the job will be emailed to me automatically if I've got that set up (cron and anacron also do this).
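
You don't have to use the interactive prompt, either: at reads its commands from standard input, so you can pipe them straight in. For instance, the firewall trick from earlier might look like this (assuming a ufw-based firewall):

# Schedule the firewall to disable itself in 10 minutes, in case we get locked out
echo "/usr/sbin/ufw disable" | at now + 10 minutes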

Specifying a time can be somewhat fiddly, but it's also quite flexible:

  • at tomorrow
  • at now + 5 hours
  • at 16:06
  • at next month
  • at 2018 09 25

...and so on. Listing the currently scheduled jobs is also just as easy:

atq

This will output a list of scheduled jobs that haven't been run yet. You can't see jobs created by other users unless you're root (use sudo), though. You can use the job ids listed here to cancel a job too:

# Remove job id 4:
atrm 4

Conclusion

That just about concludes this whirlwind tour of job scheduling on Linux systems. We've looked at how to schedule jobs with cron, and how to ensure our jobs get run with anacron - even when the target machine isn't turned on all the time. We've also looked at one-off jobs with atd, and how to manage the job queue.

As usual, this is a starting point - not an ending point! Job scheduling is just the beginning. From here, you can look at setting up automated backups. You could investigate setting up an email server, and how that integrates with cron. You can utilise cron to perform maintenance for your next great web (or other!) application. The possibilities are endless!

Found this useful? Still confused? Comment below!

The HTTPS version of my website is insecure? Nonsense!

A Chrome privacy warning.

I'm still rather ill, but I wanted to post about an issue I've just had with my website. Upon visiting my website in the latest version of Chrome Beta (57 as of the time of typing), I discovered that Chrome had decided that the connection was 'insecure'. It didn't tell me precisely what the problem was, even in the developer tools (why would I possibly need to know that? :-) - only that it considered the connection insecure.

After googling around a bit, I didn't find any specific articles on the subject - I suspect Google's recent move to start considering regular http connections insecure is swamping all the relevant articles in the search results.

The big clue came when I discovered that one of my subdomains, which uses a Let's Encrypt certificate, works as expected. You see, the main website actually used a StartSSL certificate. My running theory is that even though my certificate was an SHA2 certificate, Chrome decided that it was not trustworthy because there was an SHA1 certificate in the trust chain somewhere.

The fix: replace all my existing StartSSL certificates with Let's Encrypt ones. It seems to have sorted the issue for now. I also discovered that Let's Encrypt certificates can be used in mail servers (i.e. SMTP and IMAP) too - so I don't have to go and fiddle about with finding an alternative certificate provider.

In future, it would certainly be helpful if Google actually told people precisely what they were going to do before they do it...!

Was this useful? Could it be improved? Would you like a Let's Encrypt tutorial? Let me know in the comments below!

How to update your linux kernel version on a KimSufi server

(Or why PHP throws random errors in the latest update)

Hello again!

Since I had a bit of a time trying to find some clear information on the subject, I'm writing this blog post so that it might help others. Basically, last night I updated the packages on my server (the one that runs this website!). There was a PHP update, but I didn't think much of it.

This morning, I tried to access my ownCloud instance, only to discover that it was throwing random errors and refusing to load. I'm running PHP version 7.0.16-1+deb.sury.org~xenial+2, and it was spewing errors like this one:

PHP message: PHP Fatal error:  Uncaught Exception: Could not gather sufficient random data in /srv/owncloud/lib/private/Security/SecureRandom.php:80
Stack trace:
#0 /srv/owncloud/lib/private/Security/SecureRandom.php(80): random_int(0, 63)
#1 /srv/owncloud/lib/private/AppFramework/Http/Request.php(484): OC\Security\SecureRandom->generate(20)
#2 /srv/owncloud/lib/private/Log/Owncloud.php(90): OC\AppFramework\Http\Request->getId()
#3 [internal function]: OC\Log\Owncloud::write('PHP', 'Uncaught Except...', 3)
#4 /srv/owncloud/lib/private/Log.php(298): call_user_func(Array, 'PHP', 'Uncaught Except...', 3)
#5 /srv/owncloud/lib/private/Log.php(156): OC\Log->log(3, 'Uncaught Except...', Array)
#6 /srv/owncloud/lib/private/Log/ErrorHandler.php(67): OC\Log->critical('Uncaught Except...', Array)
#7 [internal function]: OC\Log\ErrorHandler::onShutdown()
#8 {main}
  thrown in /srv/owncloud/lib/private/Security/SecureRandom.php on line 80" while reading response header from upstream, client: x.y.z.w, server: ownc

That's odd. After googling around a bit, I found this page on the Arch Linux bug tracker. I'm not using Arch (Ubuntu 16.04.2 LTS, actually), but it turned out that this comment shed some much-needed light on the problem.

Basically, PHP has changed the way it asks the Linux kernel for random bytes. It now uses the getrandom() kernel function instead of reading /dev/urandom as it did before. The trouble is that getrandom() was introduced in Linux 3.17, and I was running OVH's custom 3.14.32-xxxx-grs-ipv6-64 kernel.
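
If you'd like to check whether you're affected before touching anything, comparing your kernel version against 3.17 is a quick first step (the second command assumes you have the PHP CLI installed):

# getrandom() needs Linux 3.17 or newer - print the running kernel version
uname -r
# On an affected kernel, this throws the same "Could not gather sufficient
# random data" exception seen in the log above
php -r 'var_dump(random_int(0, 63));'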

Thankfully, after a bit more digging, I found this article. It suggests installing the kernel you want and moving one of the grub config file generators to another directory, but I found that simply changing the permissions did the trick.

Basically, I did the following:


apt update
apt full-upgrade
apt install linux-image-generic
chmod -x /etc/grub.d/06_OVHkernel
update-grub
reboot

The above first updates everything on the system. Then it installs the linux-image-generic package - the pseudo-package that always depends on the latest stable kernel image available.

Next, I removed execute privileges on the file /etc/grub.d/06_OVHkernel. This is the file that gives the default OVH kernel priority over any other installed kernels, so it's important to exclude it from the grub configuration process.

Lastly, I updated my grub configuration with update-grub and then rebooted. Make sure that you update your grub configuration file - if you don't, it'll still boot into the old OVH kernel!

With that all done, I'm now running 4.4.0-62-generic according to uname -a. If you follow these steps yourself, make sure you have a backup! While I am happy to try and help you out in the comments below, I'm not responsible for any consequences that may arise as a result of following this guide :-)

New Sharing Buttons! (and how to get your own)

The other day I was asked by someone to add some share buttons to my blog. After taking a little look into it, I found that it really wasn't that difficult to do. Now you'll find three share buttons at the bottom of each post. To start with I picked Twitter, Facebook and Evernote, but if you would like to see any other services just leave a comment down below.

The new sharing buttons are surprisingly simple. All they are is an image wrapped in a specially constructed hyperlink:

https://twitter.com/share?url=<url>&text=<text>&via=<via>
https://facebook.com/sharer/sharer.php?u=<url>
http://www.addtoany.com/add_to/evernote?linkurl=<url>

Simply replace <url> with your url, <text> with your text, and <via> with your twitter handle (without the @ sign). Don't forget to run everything through rawurlencode() though, otherwise some special characters might sneak through and break the link.
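
As a concrete sketch, here's how one of these buttons might be built in PHP - note that the URL, text, handle, and image path below are all made-up examples:

<?php
// Hypothetical values - substitute your own post details here
$url  = 'https://example.com/blog/my-post';
$text = 'Check out this post!';
$via  = 'YourHandle'; // twitter handle, without the @ sign

// rawurlencode() stops special characters sneaking through and breaking the link
$share = 'https://twitter.com/share?url=' . rawurlencode($url)
       . '&text=' . rawurlencode($text)
       . '&via=' . rawurlencode($via);

// The button itself: an image wrapped in the constructed hyperlink
echo '<a href="' . htmlspecialchars($share) . '">'
   . '<img src="/images/twitter-share.png" alt="Share on Twitter" />'
   . '</a>';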

PSA: Post Order

Recently the order of the posts on this blog seemed to go all strange - I have spent some time attempting to fix it. Hopefully things are back to normal now. Because of this, this week's ES6: Features post will be released tomorrow.

I now have a new tool that re-orders the posts on this blog - problems like this one shouldn't happen (as often!) in the future.

Public Service Announcement: Web Server Switch

Hello again - Today's post is a public service announcement instead of the usual ES6 post. Hopefully that will be coming out on Thursday.

This website is now powered by a new piece of web server software: Nginx (pronounced engine-x). Ever since I started this website, I have been using lighttpd. While lighttpd has been my favourite web server software for ages (mainly because of the flexible configuration file syntax, the light footprint, and the speed), it seems that development of lighttpd's core codebase has been moving too slowly for me. Lighttpd, while fast and light, has been missing several features that I would rather like to have - and no release date for the next version has been announced yet either.

Nginx, on the other hand, is much more feature-complete. It will support HTTP/2 by the end of 2015, and has a slew of other features to play around with. While its configuration files are kind of a pain (it only matches against one location block per request), I feel that Nginx is a better solution for this website in the long run. If development resumes on lighttpd, perhaps I will move back to it - but only if I am sure that development will actually continue.

So the switch has been made! Please notify me if you notice any issues and/or problems with the new setup and I will fix them as soon as I can.

Upgrading to Ubuntu 15.04 Vivid Vervet

Hello!

Yesterday you probably noticed some downtime. This is because I was upgrading this server's operating system from Ubuntu 14.10 to Ubuntu 15.04! Since I noticed a few things that you should watch out for when upgrading, I thought that I would make a post about it.

For the most part the upgrade went smoothly, but I did hit a few snags. Firstly, if you have got any custom PPAs set up for apt-get, you will want to make a list of them (they are located in /etc/apt/sources.list.d) because the upgrade will disable them all :( It's not too much trouble to fix, but it is annoying.

Secondly, there are two new mime types that have been added to /etc/mime.types. If you have made any customisations to this file (I have added text/x-markdown), then you will want to make a note of them and re-add them afterwards. Don't forget to restart your http servers after changing it!
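
For reference, entries in /etc/mime.types are simply a mime type followed by the file extensions that map to it, so re-adding a customisation is a one-liner (the extensions here are a guess - check them against your own notes):

text/x-markdown      md markdown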

There are some changes that require the ssh daemon to be stopped, so make sure you don't do the upgrade over ssh.

You will get asked which interfaces DHCPv6 should listen on / send requests to. If you use your linux box as a router and have it handing out IP addresses, then you will need to take note of which interfaces you have on your box and which one is which.

By far the biggest problem for me though was the switch from upstart to systemd. This server is hosted by OVH under one of their VPS hosting plans (which are great, by the way!), which means that it is virtualised using OpenVZ. It also means that I can't choose my kernel :( I suspect that this is the reason that I can't use systemd, though if anyone has any other ideas, I would love to hear them - leave them in the comments below. When the upgrade had finished, the server couldn't reboot, instead telling me that it couldn't find an alternative telinit implementation to spawn. The solution to this is simple though (don't forget to run as root):

apt-get install upstart
apt-get remove systemd
apt-get install upstart-sysv

The last package in the above (upstart-sysv) should be installed automatically, but you should make sure that it is - it is the package that prevents the system from automatically trying to switch you back to systemd at the next available opportunity.

I hope this post is useful! If you do find it helpful, please leave a comment. If people seem to like it I might start posting full upgrade guides.

Atom Feed Optimisations (and bugfixes)

Hello again!

I have made a few tweaks to this blog's feed that I think you might want to know about.

Firstly, I fixed the bug with named html entities breaking the feed. Apparently XML doesn't like named html entities much. This has involved a number of tweaks to atom.gen.php - you will see that the script is now ~30kb(!) - this is due to the inclusion of a named-to-numeric html entity conversion table. You will now also see (?) if an unknown named html entity couldn't be converted successfully.

Secondly, I have improved the performance of the feed by only showing the last 15 posts in it. I didn't notice at first, but this has been slowing the server down for some time, because it had been adding every post I have made so far (62 for those of you who are counting) to the feed every time someone requested it.

Hopefully these changes don't have any unintended side-effects, but if they do please let me know in the comments below.

Stardust Update: Email Notifications

Welcome to another blog update.

Yesterday I added email notifications to the blog comments. You may have noticed that there is an extra optional field for you to put your email address in when commenting. If you do so, anybody who replies to your comment will trigger an email notification which will go to the email address that you specified.

Currently the only method I have for you to unsubscribe from these emails is to email emails at starbeamrainbowlabs dot com, and I will process your request manually. Make sure that you include a copy of the email that you don't want anymore, since it contains information about the comment that you posted that I will have to edit. In the future I hope to have an automated system that you will be able to use.

If you notice any issues with this, please leave a comment below.

The other thing that I added is email notifications for myself. Now, every time somebody comments on my blog, I will receive an email notification telling me about it. This is mainly because I kept forgetting to check for new comments - you should receive a reply to your comment much faster now :)

Tag Cloud Update

Today (after lots of code tweaking in preparation), the tag clouds that recently appeared on the blog have gained links on each tag, so you can see which posts have that particular tag. Currently, the Next Page and Previous Page links are a little bit buggy, but that problem should be fixed sooner or later - I just need to make a few more changes to the code to make it more intelligent.

Edit: The Next Page and Previous Page buttons have been fixed! I have also added First Page and Last Page buttons too - the Last Page button was the reason it took so long to refactor the code :)
