Cluster, Part 5: Staying current | Automating apt updates and using apt-cacher-ng
Hey there! Welcome to another cluster blog post. In the last post, we looked at setting up a WireGuard mesh VPN as a trusted private network for management interfaces and inter-cluster communication. As a refresher, here's a list of all the parts in this series so far:
- Cluster, Part 1: Answers only lead to more questions
- Cluster, Part 2: Grand Designs
- Cluster, Part 3: Laying groundwork with Unbound as a DNS server
- Cluster, Part 4: Weaving Wormholes | Peer-to-Peer VPN with WireGuard
Before we get to the Hashicorp stack though (next week, I promise!), there's an important housekeeping matter we need to attend to: managing updates.
In Debian-based Linux distributions such as Raspbian (that I'm using on my Raspberry Pis), updates are installed through apt - and this covers everything from the kernel to the programs we have installed - hence the desire to package everything we're going to be using to enable easy installation and updating.
There are a number of related command-line tools, but the ones we're interested in are apt (the easy-to-use front-end CLI) and apt-get (the original tool for installing updates).
There are 2 key issues we need to take care of:
- Automating the installation of package updates
- Caching the latest packages locally to avoid needless refetching
Caching package updates locally with apt-cacher-ng
Issue #2 here is slightly easier to tackle, surprisingly enough, so we'll do that first. We want to cache the latest packages locally, because if we have lots of machines in our cluster (I have 5, all running Raspbian), then when they update they all have to download the package lists and the packages themselves from the remote sources each time. Not only is this bandwidth-inefficient, but it takes longer and puts more strain on the remote servers too.
For apt, this can be achieved through the use of apt-cacher-ng. Other distributions require different tools - in particular, I'll be researching and tackling Alpine Linux's apk package manager in a future post in this series, since I intend to use Alpine Linux as my primary base image for my Docker containers (I also intend to build my own Docker containers from scratch, so that will be an interesting experience that will likely result in a few posts too).
Anyway, installation is pretty simple:
sudo apt install apt-cacher-ng
Once done, there's a little bit of tuning we need to attend to. By default, apt-cacher-ng listens for HTTP requests on TCP port 3142 and has an administrative interface at /acng-report.html. This admin interface is not secured by default, so we should fix that before opening a hole in the firewall.
This can be done by editing the /etc/apt-cacher-ng/security.conf
configuration file. It should read something like this:
# This file contains confidential data and should be protected with file
# permissions from being read by untrusted users.
#
# NOTE: permissions are fixated with dpkg-statoverride on Debian systems.
# Read its manual page for details.
# Basic authentication with username and password, required to
# visit pages with administrative functionality. Format: username:password
AdminAuth: username:password
....you may need to use sudo to view and edit it. Replace username and password with your own username and a long, unguessable password that's not similar to any of your existing passwords (especially since it's stored in plain text!).
Then we can (re)start apt-cacher-ng:
sudo systemctl enable apt-cacher-ng
sudo systemctl restart apt-cacher-ng
The last thing we need to do here is punch a hole through the firewall, if required. As I explained in the previous post, I'm using a WireGuard mesh VPN and allowing all traffic on that interface (for reasons that will eventually become clear), so I don't need to open a separate hole in my firewall unless I want other devices on my network to use the cache too (which wouldn't be a bad idea, all things considered).
Anyway, ufw can be configured like so:
sudo ufw allow 3142/tcp comment 'apt-cacher-ng'
With the apt-cacher-ng server installed and configured, you can now get apt to use it:
echo 'Acquire::http { Proxy "http://X.Y.Z.W:3142"; }' | sudo tee -a /etc/apt/apt.conf.d/proxy
....replacing X.Y.Z.W with the IP address (or hostname!) of your apt-cacher-ng server. Note that apt-cacher-ng will get upset if you use https anywhere in your apt sources, so you'll have to inspect /etc/apt/sources.list and all the files in /etc/apt/sources.list.d/ manually and update them.
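To see which entries still need changing, here's a quick sketch. It runs against a scratch copy with made-up contents, since your real sources will differ - review each change before applying anything to the actual files under /etc/apt:

```shell
# Work on a scratch copy of a sources list (contents are illustrative).
cat > /tmp/sources.list.demo <<'EOF'
deb https://deb.debian.org/debian buster main
deb http://archive.raspbian.org/raspbian/ buster main
EOF

# List the lines that still use https...
grep -n 'https://' /tmp/sources.list.demo

# ...and rewrite them to plain http so they go through the cache.
sed -i 's|https://|http://|g' /tmp/sources.list.demo
cat /tmp/sources.list.demo
```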
Automatic updates with unattended-upgrades
Next on the list is installing updates automatically. This is useful because we don't want to have to manually install updates every day on every node in the cluster. There are positives and negatives to installing updates automatically - I recommend giving the top of this article a read.
First, we need to install unattended-upgrades:
sudo apt install unattended-upgrades
Then, we need to edit the /etc/apt/apt.conf.d/50unattended-upgrades file - don't forget to sudo.
Unfortunately, I haven't yet automated this process (or developed a proper replacement configuration file that a script can automatically place on a target system), so for now we'll have to do this manually (the mssh command might come in handy).
First, find the line that starts with Unattended-Upgrade::Origins-Pattern, and uncomment the lines that end in -updates, label=Debian, and label=Debian-Security. Raspberry Pi users should also add the following lines to that block:
"origin=Raspbian,codename=${distro_codename},label=Raspbian";
// Additionally, for those running Raspbian on a Raspberry Pi,
// match packages from the Raspberry Pi Foundation as well.
"origin=Raspberry Pi Foundation,codename=${distro_codename},label=Raspberry Pi Foundation";
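For reference, after those edits my Origins-Pattern block ends up looking roughly like this. The exact pattern lines vary between Debian releases, so compare against your own 50unattended-upgrades rather than pasting this in wholesale:

```
Unattended-Upgrade::Origins-Pattern {
        "origin=Debian,codename=${distro_codename}-updates";
        "origin=Debian,codename=${distro_codename},label=Debian";
        "origin=Debian,codename=${distro_codename},label=Debian-Security";
        "origin=Raspbian,codename=${distro_codename},label=Raspbian";
        "origin=Raspberry Pi Foundation,codename=${distro_codename},label=Raspberry Pi Foundation";
};
```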
unattended-upgrades will only install packages that are matched by this list of origins. Unfortunately, the way you specify which updates to install is a total mess, and it's not obvious how to configure it. I did find an Ask Ubuntu answer that explains how to get unattended-upgrades to install updates. If anyone knows of a cleaner way of doing this, I'd love to know.
The other decision to make here is whether you'd like your hosts to automatically reboot. This could be disruptive, so only enable it if you're sure that it won't interrupt any long-running tasks.
To enable it, find the line that starts with Unattended-Upgrade::Automatic-Reboot and set it to true (uncommenting it if necessary). Then find the Unattended-Upgrade::Automatic-Reboot-Time setting and set it to a time of day you're okay with rebooting at - e.g. 03:00 for 3am - but take care to stagger the times so that all your servers don't reboot at once, as that might cause issues later.
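One way to stagger the reboots is to derive each node's reboot minute from its hostname, so the same provisioning script yields a different (but stable) time on every machine. This is just a sketch - the hostname is hard-coded for illustration; on a real node you'd substitute "$(hostname)":

```shell
# Derive a stable minute offset (0-59) from the hostname via its checksum,
# then emit the matching Automatic-Reboot-Time setting.
host="piglet"                                    # illustrative hostname
offset="$(printf '%s' "$host" | cksum | cut -d' ' -f1)"
minute=$(( offset % 60 ))
printf 'Unattended-Upgrade::Automatic-Reboot-Time "03:%02d";\n' "$minute"
```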
A few other settings need to be updated too. Here they are, with their correct values:
APT::Periodic::Update-Package-Lists "1";
APT::Periodic::Download-Upgradeable-Packages "1";
APT::Periodic::Unattended-Upgrade "1";
APT::Periodic::AutocleanInterval "7";
Make sure you find the existing settings and update them, because if you just paste these in, they may get overridden. In short, these settings:
- Enable automatic updates of the package metadata indexes
- Download upgradeable packages
- Install downloaded updates
- Automatically clean the cache up every 7 days
Once done, save and close that file. Finally, we need to enable and start the unattended-upgrades service:
sudo systemctl enable unattended-upgrades
sudo systemctl restart unattended-upgrades
To learn more about automatic upgrades, these guides might help shed some additional light on the subject:
- Configuring unattended-upgrades on Raspbian
- Configure Unattended Upgrades on Raspberry Pi
- UnattendedUpgrades - Debian Wiki
Conclusion
In this post, we've taken a look at apt package caching and unattended-upgrades. In the future, I'm probably going to have to sit down and either find an alternative to unattended-upgrades that's easier to configure, or rewrite the entire configuration file and create my own version. Comments and suggestions on this are welcome in the comments.
In the next post, we'll finally be getting to the Hashicorp stack by installing and configuring Consul. Hold on to your hats - next week's post is significantly more complicated.
Edit 2020-05-09: Added missing instructions on how to get apt to actually use an apt-cacher-ng server