I've got a new laptop, and in it is an Nvidia graphics card. The initial operating system installation went ok (Ubuntu 20.10 Groovy Gorilla is my Linux distribution of choice here), but after I'd done a bunch of configuration and initial setup tasks, I inevitably reached the point where I wanted to run an application on my dedicated Nvidia graphics card.
Doing so is actually really quite easy - it's figuring out the how that was the hard bit! To that end, this is a quick post to document how to do this so I don't forget next time.
Before you continue, I'll assume here that you're using the Nvidia proprietary drivers. If not, consult your distribution's documentation on how to install and enable them.
With Nvidia cards, how you go about this depends greatly on what you want to run. For CUDA applications, you need to install the nvidia-cuda-toolkit package:
sudo apt install nvidia-cuda-toolkit
...and then there will be an application-specific menu you'll need to navigate to tell it which device to use.
For traditional graphical programs, the process is different.
In the NVIDIA X Server Settings, go to PRIME Profiles, and then ensure NVIDIA On-Demand mode is selected. If not, select it and then reboot.
Then, to launch a program on your Nvidia dedicated graphics card, do this:
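__NV_PRIME_RENDER_OFFLOAD=1 __GLX_VENDOR_LIBRARY_NAME=nvidia program_name

For example, to verify that it's working (glxinfo comes from the mesa-utils package):

__NV_PRIME_RENDER_OFFLOAD=1 __GLX_VENDOR_LIBRARY_NAME=nvidia glxinfo | grep "OpenGL renderer"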
In short, the __NV_PRIME_RENDER_OFFLOAD environment variable must be set to 1, and the __GLX_VENDOR_LIBRARY_NAME environment variable must be set to nvidia. If you forget the latter, or try using DRI_PRIME=1 instead of both of these (as I advised in my previous post about AMD dedicated graphics cards), you'll get an error like this:
libGL error: failed to create dri screen
libGL error: failed to load driver: nouveau
The solution is to set the above environment variables instead of DRI_PRIME=1. Thanks to this Ask Ubuntu post for the answer!
Given that I know I'll be wanting to use this regularly, I've created a script in my bin folder for it. It looks like this:
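It boils down to something like this (a minimal sketch - the script name is up to you):

#!/usr/bin/env bash
# nvidia-run: launch the given command on the dedicated Nvidia graphics card
export __NV_PRIME_RENDER_OFFLOAD=1
export __GLX_VENDOR_LIBRARY_NAME=nvidia
exec "$@"

With that in my PATH, launching something on the dedicated card is as simple as nvidia-run program_name.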
Way back in 2015, I posted a language review about Lua. In that post, I ported an even older 2D maze generator that I'd originally implemented in Python on a Raspberry Pi when I was in secondary school (one of my first experiences with the Raspberry Pi). I talked about how Lua was easy to get started with, but difficult to do anything serious with because everything starts from 1, not 0 - and how its immutable strings are awkward.
Since then, I've gained lots more experience with the language. As an aside, I discovered a nice paradigm for building strings: because Lua strings are immutable, repeatedly appending with the .. operator allocates a brand new string every time, so it's much cheaper to collect the parts in a table and concatenate them all in one go at the end:
local function string_example()
local parts = {} -- Create a table
table.insert(parts, "This is ") -- Add some strings
table.insert(parts, "a ")
table.insert(parts, "string")
return table.concat(parts, "") -- Concatenate them all at once and return
end
Anyway, before I get too distracted, I think the best way to continue this post is with a picture:
Fair warning: This blog post is pretty media heavy. If you are viewing on your mobile device with a limited data connection, you might want to continue reading on another device later.
Pretty cool, right? Perhaps I should explain a little about how I got here. A month or two ago, I rediscovered the above blog post and the Lua port of my Python 2D maze generator. It outputs mazes like this:
(I can't believe I didn't include example output in my previous blog post!)
My first thought was that I could upgrade it to support 3D mazes as well. One thing led to another, and I ended up with a 3D maze generator that output something like this:
Each block of hash (#) symbols is a layer of the maze. It's a bit hard to visualise though, so I decided to do something about it. For my masters project, I used OpenSCAD to design a housing for an Internet of Things project. Since OpenSCAD is essentially a programming language for expressing 3D models, I realised it would be perfect for representing my 3D mazes. And since the mazes use a grid, I can simply generate an OpenSCAD file containing a cube for every location with a hash symbol in the output (the maze itself is stored as a set of nested tables, which I then process).
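To illustrate the idea, here's a sketch of the kind of Lua that does the conversion. It's not my exact implementation (that lives in the repository linked at the end of this post), and it assumes the maze is a nested table of layers, rows, and single-character cells:

local scad = {}
for z, layer in ipairs(maze) do
	for y, row in ipairs(layer) do
		for x, cell in ipairs(row) do
			if cell == "#" then
				-- A solid wall cell becomes a single unit cube
				table.insert(scad, string.format(
					"translate([%d, %d, %d]) cube([1, 1, 1]);",
					x, y, z
				))
			end
		end
	end
end
io.write(table.concat(scad, "\n")) -- The string-building paradigm from earlier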
This is much better. We can clearly see the maze now and navigate around it. OpenSCAD's preview controls are really quite easy to pick up. What you see in the above screenshot is an 'inverted' version of the maze - i.e. instead of carving out a solid block, the algorithm walks around an empty space inside a defined region.
The algorithm that generates the maze itself is pretty much the same as the original algorithm I devised myself in Python (which I've now lost, sadly - as I didn't use Git back then).
It starts in the top-left corner, and then does a random walk around the defined area. It keeps track of where it has been in a node list (basically a list of coordinates), and every time it takes a step forwards, there's a chance it will jump back to a previous position in the node list. Once it can't go anywhere from a position, that position is considered complete and is removed from the node list. Once the node list is empty, the maze is considered complete and the generator returns the output.
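Here's the essence of that algorithm in simplified Lua - a sketch rather than the real code (unvisited_neighbours and carve are hypothetical helpers standing in for the actual logic):

local nodes = { { x = 1, y = 1 } } -- Start in the top-left corner
while #nodes > 0 do
	-- Sometimes this picks the newest node, sometimes it jumps back to an older one
	local index = math.random(#nodes)
	local node = nodes[index]
	local directions = unvisited_neighbours(node) -- Hypothetical helper
	if #directions == 0 then
		table.remove(nodes, index) -- Nowhere left to go from here: node complete
	else
		-- Carve a passage in a random available direction (hypothetical helper)
		local next_node = carve(node, directions[math.random(#directions)])
		table.insert(nodes, next_node)
	end
end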
As soon as I saw the STL export function though, I knew I could do better. I've used Blender a little bit before - it's a production-grade free and open-source 3D modelling and rendering program. You can model things in it and apply textures to them, and then render the result. Programs like this are behind many a CGI picture (and film!).
Crucially for my case, I found the STL import function. With that, I could import the STL I exported from OpenSCAD, and then have some fun playing around with the settings to get some cool renders of some mazes:
(Above: Some renders of some of the outputs of the maze generator. See the full size image [3 MiB])
The sizes of the above are as follows, in grid squares as generated by the Lua 3D maze generator:
Blue: 15 x 15 x 15
Orange: 7 x 7 x 7
Purple: 17 x 15 x 11, with a path length of 4 (i.e. the generator jumps forwards by 4 spaces instead of 2 during the random walk)
Green: 21 x 21 x 7
Somehow it's quite satisfying to watch it render, with the little squares gradually spiralling their way out from the centre in a Hilbert curve - so I looked into how to create a glass texture, and how to set up volumetric rendering. It wasn't actually too difficult to do (the most challenging part was getting the lights in the right place with the right strength). Here's a trio of renders that show the iterative process of getting to the final image you see at the top of this post:
(Above: Some renders of the blue 15x15x15 maze from the previous image, this time with a glass texture. See the full size image [3.4 MiB])
From left to right:
My initial attempt using clear glass
Frosting the glass made it look better
Adding volumetric lighting makes it look way cooler!
I guess that you could give the same treatment to any STL file you like.
Anyway, the code for my maze generator can be found here on my private git server: sbrl/multimaze
The repository README contains instructions on how to use it. I won't duplicate that here, because it will probably change over time, and then this blog post would be out of date.
Before I go, I'll leave you with some animations of some mazes rotating. This whole experience of generating and rendering mazes has been really fun - it's quite far outside what I've been doing recently. I think I'd like to do some more of this in the future!
Update: I've re-rendered a new version at a lower quality. This should help mobile devices! The high-quality version can still be accessed via the links below.
As I promised recently, this post is about a script I implemented a while back that automatically organises and optimises the photos and videos I take. Since I've been using it for a while now and it seems stable, I thought I'd share it here in the hopes that it might be useful to someone else too.
I take quite a few photos and the odd video or two with my phone. These are automatically uploaded with FolderSync (yes, it has ads, but it's the best I could find that does the job) to a Raspberry Pi 3B+ that's acting as a file server on my home network. Once they're uploaded to a folder, I wanted a script that would automatically sort the images and videos into folders by year and month according to their date taken.
To do this, I implemented a script that uses exiftool (sudo apt install libimage-exiftool-perl I believe) to pull out the date taken from JPEGs and sort based on that. For other formats that don't support EXIF data, I take the last modified time with the date command and use that instead.
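The date extraction itself boils down to something like this (a sketch; photo.jpeg and video.mp4 are stand-in filenames):

# JPEGs: pull the date taken out of the EXIF data
exiftool -b -DateTimeOriginal photo.jpeg    # e.g. 2020:06:07 14:51:32
# Anything without EXIF support: fall back to the last modified time
date -r video.mp4 +"%Y %m"                  # e.g. 2020 06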
Before I do this though, I run my images through a few preprocessing tools:
PNGs are optimised with optipng (sudo apt install optipng)
JPEGs are optimised with jpegoptim (sudo apt install jpegoptim)
JPEGs are additionally automatically reoriented with mogrify -auto-orient from ImageMagick, as many cameras will set an EXIF tag for the rotation of an image without bothering to physically rotate the image itself
It's worth noting here that these preprocessing optimisation steps are lossless. In other words, no quality is lost by performing these actions - they simply encode the images more efficiently such that they use less disk space.
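Put together, the preprocessing looks roughly like this (a sketch of the relevant commands - adjust the globs to match your filenames):

optipng -quiet *.png;
jpegoptim --strip-none *.jpeg;    # --strip-none keeps the EXIF data we need later
mogrify -auto-orient *.jpeg;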
Once all these steps are complete, images and videos are sorted according to their date taken / last modified time as described above. That should end up looking a bit like this:
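Something along these lines, with one folder per year and a subfolder per month inside it (the exact names here are illustrative):

2019/11/
2019/12/
2020/01/
2020/02/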
Now that I've explained how it works, I can show you the script itself:
(Can't see the above? Check out the script directly on GitLab here: organise-photos)
The script operates on the current working directory. All images directly in the working directory will be sorted as described above. Once you've put it in a directory that is in your PATH, simply call it like this:
organise-photos
The script can be divided up into 3 distinct sections:
The setup and initialisation
The function that sorts individual files themselves into the right directory (handle_file - it's about half-way down)
The preprocessing steps and the driver code that calls the above function.
So far, I've found that it's been working really rather well. During development and testing I did encounter a number of issues with the sorting system in handle_file that caused it to sort files into the wrong directory - which took me a while to finally squash.
I'm always tweaking and improving it though, and to that end I have several plans for it.
Firstly, I want to optimise videos too. I'd like to store them in a standard format if possible. It's not that simple though, because some videos don't take well to being transcoded into a different format - indeed they can even take up more space than they did previously! In those cases it's probably worth discarding the attempt at transcoding the video to a more efficient format if it's larger than the original file.
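That logic would go roughly like this - a sketch only, where the codec and quality settings are illustrative placeholders rather than tested choices:

# Attempt a transcode, then keep it only if it's actually smaller
ffmpeg -y -i input.mp4 -c:v libx264 -crf 23 -preset slower transcoded.mp4;
if [ "$(stat -c%s transcoded.mp4)" -lt "$(stat -c%s input.mp4)" ]; then
	mv transcoded.mp4 input.mp4;
else
	rm transcoded.mp4;    # Larger than the original: discard the attempt
fi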
I'd also like to reverse-geocode (see also the usage policy) the (latitude, longitude) geotags in my images to the name of the place that I took them, and append this data to the comment EXIF tag. This will make it easier to search for images based on location, rather than having to remember when I took them.
Finally, I'd also like to experiment with some form of AI object recognition with a similar goal as reverse-geocoding. By detecting the objects in my images and appending them to the comment EXIF tag, I can do things like search for "cat", and return all the images of cats I've taken so far.
I haven't started to look into AI much yet, but initial search results indicate that I might have an interesting time locating an AI that can identify a large number of different objects.
Anyway, my organise-photos script is available on GitLab in my personal bin folder that I commit to git if you'd like to take a closer look - suggestions and merge requests are welcome if you've got an idea that would make it even better :D
I recently discovered that Animated PNGs are now supported by most major browsers:
I stumbled across the concept of an animated PNG a number of years ago (on caniuse.com actually if I remember right!), but at the time browser support was very bad (read: non-existent :P) - so I moved on to other things.
I ended up re-discovering it a few weeks ago (also through caniuse.com!), and since browser support is so much better now I decided that I just had to play around with it.
It hasn't disappointed. Traditional animated GIFs (Graphics Interchange Format for the curious) are limited to 256 colours, have limited transparency support (a pixel is either transparent, or it isn't - there's no translucency support), and don't compress terribly well.
Animated PNGs (Portable Network Graphics), on the other hand, let you enjoy all the features of a regular PNG (as many colours as you want, translucent pixels, etc.) while also supporting animation - and they compress better as well! Best of all, if a browser doesn't support the animated PNG standard, it will just see and render the first frame as a regular PNG and silently ignore the rest.
Let's take it out for a spin. For my test, I took an image and created a 'panning' animation from one side of it to the other. Here's the image I've used:
(Credit: The background on the download page for Mozilla's Firefox Nightly builds. It isn't available on the original source website anymore (and I've lost the link), but can still be found on various wallpaper websites.)
The first task is to generate the frames from the original image. I wrote a quick shell script for this purpose. Firstly, I defined a bunch of variables:
#!/usr/bin/env bash
set -e; # Crash if we hit an error
input_file="Input.png";
output_file="Output.apng";
# The maximum number of frames
max_frame=32;
end_x=960; end_y=450;
start_x=0; start_y=450;
crop_size_x=960; crop_size_y=540;
mkdir -p ./frames;
input_file: The input file to generate frames from
output_file: The file to write the animated PNG to
max_frame: The highest frame number - frames are numbered from 0, so max_frame + 1 frames are generated in total
start_x: The starting position to pan from on the x axis
start_y: The starting position to pan from on the y axis
end_x: The ending position to pan to on the x axis
end_y: The ending position to pan to on the y axis
crop_size_x: The width of the cropped frames
crop_size_y: The height of the cropped frames
It's worth noting that it would probably be a good idea to give this script a proper CLI at some point, but since it's currently only a one-off script I didn't bother. Perhaps in the future I'll come back and improve it if I find myself needing it again for some other purpose.
With the parameters set up (and a temporary directory created - note that you should use mktemp -d instead of the approach I've taken here!), we can then use a for loop to repeatedly call ImageMagick to crop the input image to generate our frames. This won't run in parallel unfortunately, but since it's only a few frames it shouldn't take too long to render. This is only a quick shell script after all :P
for ((frame=0; frame <= "${max_frame}"; frame++)); do
this_x="$(calc -p "${start_x}+((${end_x}-${start_x})*(${frame}/${max_frame}))")";
this_y="$(calc -p "${start_y}+((${end_y}-${start_y})*(${frame}/${max_frame}))")";
percent="$(calc -p "round((${frame}/${max_frame})*100, 2)")";
convert "${input_file}" -crop "${crop_size_x}x${crop_size_y}+${this_x}+${this_y}" "frames/Frame-$(printf "%02d" "${frame}").jpeg";
echo -ne "${frame} / ${max_frame} (${percent}%) \r";
done
echo ""; # Move to the line after the progress indicator
This looks complicated, but it really isn't. The for loop iterates over each of the frame numbers. We do some maths to calculate the (x, y) co-ordinates of the frame we want to extract, and then get ImageMagick's convert command to do the cropping for us. Finally we write a quick progress indicator out to stdout (\r resets the cursor to the beginning of the line, but doesn't go down to the next one).
The maths there is probably better represented like this:
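$$
x = x_{start} + \left( x_{end} - x_{start} \right) \times \frac{frame}{frame_{max}}
$$

...and the same again for $y$.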
Much better :-) In short, we convert the current frame number into a percentage (between 0 and 1) of how far through the animation we are and use that as a multiplier to find the distance between the starting and ending points.
I use the calc command-line tool (in the apcalc package on Ubuntu) here to do the calculations, because the bash built-in result=$((4 + 5)) only supports integer-based maths, and I wanted floating-point support here.
With the frames generated, we only need to stitch them together to make the final animated PNG. Unfortunately, an easy-to-use tool for APNGs does not yet exist (like gifsicle for GIFs) - so we'll have to make do with ffmpeg (which, while very powerful, has a rather confusing CLI!). Here's how I stitched the frames together:
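ffmpeg -framerate 10 -i frames/Frame-%02d.jpeg -plays 0 "${output_file}";

Breaking down the arguments (the framerate here is just a value I picked to taste):

-framerate 10 - The number of frames per second
-i frames/Frame-%02d.jpeg - The input frames, using the same printf-style pattern we generated them with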
-plays - The number of times to loop (0 = infinite; defaults to no looping if omitted)
"${output_file}" - The output file
Here's the final result:
(Filesize: ~2.98MiB)
Of course, it'd be cool to compare it to a traditional animated gif. Let's do that too! First, we'll need to convert the frames to gif (gifsicle, our tool of choice, doesn't support anything other than GIFs as far as I'm aware):
mogrify -format gif frames/*.jpeg
Easy-peasy! mogrify is another tool from ImageMagick that makes such in-place conversions trivial. Note that the frames themselves are stored as JPEGs because I was experiencing an issue whereby each of the frames apparently had a slightly different colour palette, and ffmpeg wasn't smart enough to correct for this - choosing instead to crash.
With the frames converted, we can make our animated GIF like so:
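# --delay is in hundredths of a second (10 = 10fps, matching the APNG above),
# and --loopcount with no value loops forever
gifsicle --optimize=3 --delay 10 --loopcount frames/*.gif -o Output.gif;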
Lastly, we probably want to delete the intermediate frames:
rm -r ./frames
Here's the animated GIF version:
(Filesize: ~3.28MiB)
Woah! That's much bigger.
(Generated (and then extracted & edited with the Firefox developer tools) from Meta-Chart)
It's ~9.7% bigger in fact! That's not a crazy amount, but smaller resulting files are always good. I suspect that this effect will stack up the more frames you have. Others have tested this too, finding pretty similar results to mine - though it does of course depend on your specific scenario.
With that observation, I'll end this blog post. The next time you think about inserting an animation into a web page or chat window, consider making it an Animated PNG.
Found this interesting? Found a cool CLI tool for manipulating APNGs? Having trouble? Comment below!
(Above: An example output with debugging turned on from my placeholder generation service)
For quite a considerable amount of time now, I've been running my own placeholder image generation service here at starbeamrainbowlabs.com - complete with etag generation and custom colour selection. Although it's somewhat of an Easter egg, it's not actually that hard to find if you know what you're looking for (hint: I use it on my home page, but you may need to view source to find it).
I decided to post about it now because I've just finished fixing the angle GET parameter - and it was interesting (and hard) enough to warrant a post to remind myself how I did it for future reference. Comment below if you knew about it before this blog post came out!
The script itself is split into 3 loose parts:
The initial settings / argument parsing
The polyfills and utility functions
The image generation.
My aim here was to keep the script contained within a single file - so that it fits well in a gist (hence why it currently looks a bit messy!). It's probably best if I show the code in full:
(Having trouble viewing code above? Visit it directly here: placeholder.php)
If you append ?help to the url, it will send back a plain text file detailing the different supported parameters and what they do:
Aside from implementing random values for fg_colour and bg_colour, the angle parameter has been a huge pain. I use GD - a graphics drawing library that's bundled with practically every PHP installation ever - to draw the placeholder image, and when you ask it to draw some rotated text, it decides that it's going to pivot it around the bottom-left corner instead of the centre.
Naturally, this creates some complications. Though some people on the PHP manual page for said method (imagettftext) have attempted to correct for this (exhibits a and b), neither of their solutions worked for me. I don't know if it's because their code is just really old (13 and 5 years respectively as of the time of typing) or what.
Anyway, I finally decided recently that enough was enough - and set about fixing it myself. Basing my code on the latter of the 2 pre-existing solutions, I tried fixing it up - but ended up deleting most of it and starting again. It did give me an idea as to how to solve the problem though: all I needed to do was find the centre of where the text would be drawn both when it isn't rotated and when it is, and correct for the difference (these are represented by the green and blue crosses respectively on the image at the top of this post).
PHP does provide a method for calculating the bounding box of some prospective text that you're thinking of drawing via imagettfbbox. Finding the centre of the box though sounded like a horrible maths-y problem that would take me ages to work through on a whiteboard first, but thankfully (after some searching around) Wikipedia had a really neat solution for finding the central point of any set of points.
Wikipedia calls it the centroid, and the geometric centre of a set of points turns out to be simply the average of all the points involved. It just so happens that the geometric centre is precisely what I'm after:
$$
C = \frac{a + b + c + d + \dots}{N}
$$
...Where $C$ is the geometric centre of the shape as an $\binom{x}{y}$ vector, $a$, $b$, $c$, $d$, etc. are the input points as vectors, and $N$ is the number of points in question. This was nice and easy to program:
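In PHP, that comes out something like this - a simplified sketch of what's in placeholder.php ($font_size, $angle, $font_path, and $text here are whatever you're drawing with):

// Bounding boxes of the text, unrotated and rotated
$bbox = imagettfbbox($font_size, 0, $font_path, $text);
$bbox_rot = imagettfbbox($font_size, $angle, $font_path, $text);

// The centroid of each box: the average of its 4 corner points
$orig_centre = [
	($bbox[0] + $bbox[2] + $bbox[4] + $bbox[6]) / 4,
	($bbox[1] + $bbox[3] + $bbox[5] + $bbox[7]) / 4
];
$rot_centre = [
	($bbox_rot[0] + $bbox_rot[2] + $bbox_rot[4] + $bbox_rot[6]) / 4,
	($bbox_rot[1] + $bbox_rot[3] + $bbox_rot[5] + $bbox_rot[7]) / 4
];

// The original (unrotated) size of the text, for a corrective translation later
$text_width = $bbox[2] - $bbox[0];
$text_height = $bbox[1] - $bbox[7];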
The odd and even indexes of $bbox there refer to the $x$ and $y$ co-ordinates of the 4 corners of the bounding boxes - imagettfbbox outputs the co-ordinates in a single flat array for some reason. I also calculate the original width and height of the text, in order to perform an additional corrective translation later.
With these in hand, I could calculate the actual position I need to draw it (indicated by the yellow cross in the image at the top of this post):
In short, I calculate the distance between the 2 centre points calculated above, and then find the pivot point by taking this distance plus half the size of the text from the original central position we wanted to draw the text at. Here's that in PHP:
// Calculate the x and y shifting needed
$dx = $rot_centre[0] - $orig_centre[0];
$dy = $rot_centre[1] - $orig_centre[1];
// New pivot point
$px = $x - ($dx + $text_width/2);
$py = $y - $dy + $text_height/2;
I generated a video of it in action (totally not because I thought it would look cool :P) with a combination of curl and ffmpeg:
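The commands went something like this (a sketch - the URL and dimensions are illustrative stand-ins for the real endpoint):

seq 0 359 | xargs -P3 -I{} curl -sS -o "frame_{}.png" "https://example.com/placeholder.php?width=640&height=360&angle={}";
ffmpeg -framerate 30 -i frame_%d.png -q:v 3 angle-demo.webm;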
The first command downloads all the required frames (3 at a time), and the second stitches them together. The -q:v 3 bit of the ffmpeg command is of note - by default webm videos apparently have a really low quality - this corrects that. Lower is better, apparently - it goes up to about 40 I seem to remember reading somewhere. 3 to 5 is supposed to be the right range to get it to look ok without using too much disk space.
That's about everything I wanted to talk about to remind myself about what I did. Let me know if there's anything you're confused about in the comments below, and I'll explain it in more detail.
I've recently figured out how to run a program on my dedicated AMD R7 M445 graphics card in Ubuntu 17.04, and since it took me far too long to figure out, I thought I'd note it down here for future reference - if it helps you too, let me know in the comments below!
Update October 2020: If you have an Nvidia graphics card, then you probably want to read my newer post here.
It's actually really simple. First, check that your dedicated AMD graphics card shows up with lspci:
lspci
If it's anything like my setup, you'll get a pair of rows like this (though they might not be next to each other):
Thankfully, my dedicated AMD card is showing (and better than it did in previous versions of Ubuntu, too, which thought it was an M225!). Next, we need to check that the amdgpu kernel module is loaded with a quick lsmod:
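lsmod | grep amdgpu

If that outputs a row or two, the driver is loaded. With that confirmed, we can run a program on each card in turn and compare what they report (glxinfo is in the mesa-utils package):

glxinfo | grep "OpenGL renderer"
DRI_PRIME=1 glxinfo | grep "OpenGL renderer"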
The above runs glxinfo twice: Once on the integrated graphics card, and once on the dedicated graphics card. The key here is the DRI_PRIME=1 environment variable - this tells the amdgpu driver that this process should run on the dedicated graphics and not the integrated graphics card. On my machine, I get this output:
Since the demonstration for my 3D work is tomorrow and I keep forgetting the details of the OpenGL graphics pipeline, I thought I'd write a blog post about it in the hopes that I'll remember it.
In case you didn't know, OpenGL uses a pipeline system to render graphics. Basically, your vertices and other data go in one end, and a video stream comes out at the other. This pipeline is made up of a number of stages. Each stage has its own shader, too:
There are rather a lot of stages here, so I've made this table that lists all the different shaders along with what they do:
Vertex Shader (programmable) - Raw vertex manipulation.
Hull Shader (fixed function) - Aka the Tessellation Control Shader. Determines control points for the tessellator. Although it's fixed function, it's highly configurable.
Tessellator (fixed function) - Subdivides surfaces and adds vertices using the control points specified in the hull shader.
Domain Shader (programmable) - Aka the Tessellation Evaluation Shader. Adds details to vertices. Example uses include simplifying models that are far away from the camera. Has access to the control points outputted by the hull shader.
Geometry Shader (programmable) - Superseded by the tessellator (see above). Very slow.
Rasterisation (fixed function) - Converts the models etc. into fragments ready for the fragment shader.
Fragment Shader (programmable) - Insanely flexible. This is the shader that is used to add most, if not all, special effects. Lighting and shadows are done here too. Oddly enough, Microsoft decided to call it the "Pixel Shader" in DirectX rather than the fragment shader.
Compute Shader (programmable) - Not part of the graphics pipeline. Lets you utilise the power of the graphics card (essentially a giant matrix calculator) to do arbitrary calculations.
The tessellator is very interesting. It replaces the geometry shader (which, although you can technically still use, you really shouldn't), and allows you to add details to your model on the GPU, thereby reducing the number of vertices you need to send to the graphics card. It also allows you to customise your models before they hit rasterisation and the fragment shader, so you could simplify models that are further away, for instance.
As an example in our lecture, we were shown the Unigine Heaven Benchmark. Our lecturer turned the tessellator on and off to show us what it actually does. Since you can't see what I saw, here's an animation I made showing you the difference:
The other pipeline to be aware of is the coordinate pipeline. This pipeline specifies how coordinates are transformed from one space to another. Here's another diagram:
Again, this looks complicated, but it isn't really. A similar process would be followed for 2D graphics as well as 3D ones. If you take it one step at a time, it doesn't seem so bad.
Model Space - This refers to coordinates relative to any given model. Each model will have the coordinates of each of its vertices stored relative to its central point.
World Space - Multiplying all of a model's coordinates by the model matrix brings it into World Space. World space is relative to the centre of your scene.
View Space - Multiplying all the coordinates in a world by the view matrix brings everything into View Space. View Space is relative to the camera. It's for this reason that people say you don't really move the camera around the scene - you cheat and move the entire scene around the camera instead.
Normalised Device Space - Multiplying everything in view space by the projection matrix brings it into Normalised Device Coordinates. Graphics cards these days apparently like to consider points between $(-1, -1, -1)$ and $(1, 1, 1)$ (if you're OpenGL, that is. DirectX is different - it prefers $(-1, -1, 0)$ to $(1, 1, 1)$ instead). Points in this space are called Normalised Device Coordinates and anything outside of the aforementioned ranges will be cut off. No exceptions.
Image Space - When your scene has been through the entirety of the Graphics pipeline described above, it will find itself in Image Space. Image space is 2D (most of the time) and references the actual pixels in the resulting image.
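Putting the whole chain together for a single vertex $v$, with $M$, $V$, and $P$ as the model, view, and projection matrices:

$$
v_{device} = P \times V \times M \times v_{model}
$$

(The vertex shader below writes the multiplication the other way around - GLSL treats vec * mat as a row-vector multiply, so which order you use depends on how your matrices are laid out in memory.)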
Converting between all these different coordinate spaces is best left up to the vertex shader - it's much easier to shove a bunch of transformation matrices at it and get it to do all the calculations for you. It's so easy, you can do it in just 11 lines of vertex shader code:
#version 330
uniform mat4 uModel; // The model matrix
uniform mat4 uView; // The view matrix
uniform mat4 uProjection; // The projection matrix
in vec3 vPosition; // The position of the current vertex
void main()
{
gl_Position = vec4(vPosition, 1) * uModel * uView * uProjection;
}
If you made it this far, congratulations! That concludes our (rather long) journey through the graphics pipeline and its associated coordinate spaces. We looked at each of the various shaders and what they do, and learnt about each of the different coordinate spaces involved and why they are important.
I hope that someone besides myself found it both useful and educational! If you did, or you have any questions, please post a comment below. If you have spotted a mistake - please correct me in the comments below too! I try to make sure that posts like this one can be used by both myself and others as a reference in the future.
This is a follow up post to the vector.js post that I made last week, and depends on the vector class I released then. Please take a look at that post first.
Last week, I released a simple Ecmascript 6 vector class that I wrote. It's mildly interesting on its own, but this post is the real reason I wrote that other one. Using that vector class, I then went ahead and wrote my own bezier curve class, that supports an arbitrary number of control points. Before I continue, here's the code:
I was surprised by how easy the bezier curve algorithm was to implement. Basically, you loop over all your control points, finding the point that lies at a specific percentage of the distance between the current point and the next one. You then repeat this process on the new points until you have just a single point remaining. This results in a smooth curve that is skewed towards all of the given control points.
In my implementation, I did it recursively, with all the magic happening in the interpolate() function. It performs the above algorithm given a time between 0 and 1, and spits out the interpolated value. I called it a time here because bezier curves are often used to smooth out animations, and I travel along the line in the curve() function when applying it to the given canvas rendering context.
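Stripped of my vector class, the essence of interpolate() looks something like this - a sketch of the idea rather than the exact code (see the full class above for that):

function lerp(a, b, time) {
	// The point `time` (0 - 1) of the way from a to b
	return { x: a.x + (b.x - a.x) * time, y: a.y + (b.y - a.y) * time };
}

function interpolate(points, time) {
	if (points.length === 1) return points[0]; // One point left: that's our answer
	let next = [];
	for (let i = 0; i < points.length - 1; i++)
		next.push(lerp(points[i], points[i + 1], time));
	return interpolate(next, time); // Recurse on the reduced set of points
}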
To use it, you first create an instance like so:
var bz = new BezierCurve([
new Vector(38, 41),
new Vector(96, 302),
new Vector(807, 12)
]);
Then, in your rendering function, you can ask your new bezier curve to add itself to any given canvas rendering context like this:
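Something like this (where context is whatever CanvasRenderingContext2D you're drawing to):

bz.curve(context, 64);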
The second argument in the curve() call above is the number of segments to use when rendering. This tells the bezier curve class how many different points along the line it should calculate. More segments make it smoother, but will consume more processing power when first calculated. Pick just enough that you can't tell that the bezier curve is made up of lines - this is different for every curve.
I added a caching function to it too, so if you call the curve() function with the same number of segments more than once, it uses the interpolated values it calculated previously instead of calculating them all over again. This isn't the case if you call it with a different number of segments each time, however. If you want to do this, I suggest that you create a new instance for each different segment number that you need.
That concludes this post on my own bezier curve class. It's a little bit buggy in places (see the gif above), but I haven't managed to track down the issue. It should work fine for most purposes, though. If you do manage to spot the mistake, please let me know!
Update 24th January 2016: I've replaced the original code with an embedded version of the gist in order to keep it up to date, since I've revised it slightly since writing this blog post.
I have a different sort of post for you today: a post about a pair of websites I've found that are really quite useful. Quite often when I'm creating a website or webpage for some reason or other (today I needed one for bloworm), I find that it looks a bit plain. The websites I've found today help to partially solve that problem. They are full of simple tileable textures that look great as backgrounds in many different contexts.
The first is called subtlepatterns.com. It contains a bunch of free to use textures that go well in the background of something without standing out.
The second is called transparenttextures.com. It is similar to the website above - but the textures it provides are transparent so that you can overlay them on top of something else. These are good when you have a colour in mind, but want to add a little something to it to make it look a little less plain.
A few months ago I was asked how I create animated gifs on Linux, and I said that I use Silentcast. I also said that I'd write a blog post on it. I've been very busy since then, but I've finally found a moment when I both remembered and wasn't exhausted - so here it is.
Silentcast is a very versatile screen recording application that outputs either a set of png images, an animated gif, or 2 different types of video. It uses png files to store frames, so it isn't suitable for recording at a high fps or for very long - but it's still brilliant for recording short clips for your blog, or to accompany a bug report.
Silentcast's dialogs stay in front of everything else that you have open, so you don't need to worry about losing the window somewhere along the line. It integrates nicely with the Unity desktop (I haven't tried others yet), which makes it feel intuitive and easy to use. It also allows you to modify the intermediate png files before the final product is stitched together, allowing for precise edits to make the resulting gif loop perfectly.
It's written in bash, which makes it perfectly suited for use on both Mac and Linux systems, but I don't think that Windows is supported as of the time of posting. The other issue is that it took me a little while to work out how to record a custom area - this is done via the "Transparent Window Interior" option under "Area to be recorded". I also find it to be a little unpolished around the edges (the icon especially needs some work), but overall it's an excellent piece of software that makes recording an animated gif on Linux a breeze - it's streets ahead of any competing projects.
The animated gif above was taken and modified from Silentcast's GitHub project page.