A few months ago I was asked how I create animated gifs on Linux, and I said that I use Silentcast. I also said that I'd write a blog post on it. I've been very busy since then, but I've finally found some time to post about it.
Silentcast is a very versatile screen recording application that outputs either a set of png images, an animated gif, or two different types of video. It uses png files to store frames, so it isn't suitable for recording at a high fps or for very long, but it is still brilliant for recording short clips for your blog or to accompany a bug report.
Silentcast's dialogs stay in front of everything else that you have open, so you don't need to worry about losing the window somewhere along the line. It integrates nicely with the Unity desktop (I haven't tried others yet), which makes it feel more intuitive and easier to use. It also allows you to modify the intermediate png files before the final product is stitched together, allowing for precise edits to make the resulting gif loop perfectly.
It is written in bash, which makes it perfectly suited for use on both Mac and Linux systems, but I don't think that Windows is supported as of the time of posting. The other issue is that it took me a little while to work out how to record a custom area - this is done via the "Transparent Window Interior" option under "Area to be recorded". I also find it to be a little bit unpolished around the edges (the icon especially needs some work), but overall it is an excellent piece of software that makes recording an animated gif on Linux a breeze - it's streets ahead of any other competing projects.
It has been a while since I have implemented a sorting algorithm - I should probably have implemented these a little bit earlier than I have done :)
Today I bring you the C# version of the selection sort I posted earlier. To mix it up a bit though, this implementation is in reverse.
1. Find the largest number in the sequence.
2. If it is larger than the number at the end of the array, swap them.
3. Find the next largest number.
4. If it is larger than the next number along from the end of the array, swap them.
5. Repeat steps 3 and 4 until all the numbers have been sorted.
Here is the code:
/// <summary>Performs a selection sort on an array of ints.</summary>
/// <param name="array">The array to sort.</param>
static void selection_sort(int[] array)
{
	int limit = array.Length - 1;
	while(limit > 0)
	{
		//find the index with the maximum value
		int max_index = 0; //set the max to the first element in the array
		//don't search the first element in the array, we have already done that on the line above
		for(int i = 1; i <= limit; i++)
			if(array[i] > array[max_index])
				max_index = i;
		if(array[max_index] > array[limit])
			//we have found an index with a higher value than the current limit
			swap_places(array, max_index, limit);
		//the largest remaining value is now at the end, so shrink the unsorted region
		limit--;
	}
}

static void swap_places(int[] array, int index_a, int index_b)
{
	int temp = array[index_a];
	array[index_a] = array[index_b];
	array[index_b] = temp;
}
It will be mostly automated, once I rewrite the blog posting system. Until then I will post manually.
I plan to have the blog automatically tweet about new blog posts, but I will have to build some kind of system that can tell the difference between new and updated posts first, since I usually make several corrections to each of my blog posts once I have posted them.
Recently I have found that PHP sometimes doesn't follow redirects (e.g. the get_headers() function). So I wrote this quick function to follow a url's redirects to a certain depth:
/**
 * @summary Follows a chain of redirects and returns the last url in the sequence.
 * @param $url - The url to start at.
 * @param $maxdepth - The maximum depth to which to travel following redirects.
 * @returns The url at the end of the redirect chain.
 */
function follow_redirects($url, $maxdepth = 10, $depth = 0)
{
	//return the current url if we have hit the maximum depth
	if($depth >= $maxdepth)
		return $url;
	//download the headers from the url and make all the keys lowercase
	$headers = get_headers($url, true);
	$headers = array_change_key_case($headers);
	//we have a redirect if the `location` header is set
	if(isset($headers["location"]))
		return follow_redirects($headers["location"], $maxdepth, $depth + 1);
	//no redirect, so we have reached the end of the chain
	return $url;
}
Very soon I will also release a few snippets of code I am writing for Blow Worm.
In other news, I have been adding to Pepperminty Wiki. You can now have administrator accounts, which can move and delete pages. Further plans can be found on the project's readme.
If there is anything you want to see here, post a comment below.
Recently I needed to generate random hex strings to act as session tokens for Blow Worm. Using session tokens means that you send the login credentials once, and then the server hands out a session token that is used instead of the password for the rest of that session. In theory this is more secure than sending the password to the server every time.
The problem with generating random session tokens is that you need a cryptographically secure random number generator, so that attackers can't guess the random numbers and hence guess the session tokens (that would be bad).
The way I did it (please leave a comment below if this is insecure!) is as follows:
Generate ~128 bits of randomness using the OpenSSL function openssl_random_pseudo_bytes(). This randomness generator is apparently better than rand() and mt_rand().
Hash that resulting randomness with SHA256 to ensure a constant session key length.
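I did the above in PHP, but the same two steps can be sketched in Python; the function name generate_session_token is mine, and Python's secrets module stands in for openssl_random_pseudo_bytes here:

```python
import hashlib
import secrets

def generate_session_token():
    """Generate a session token from ~128 bits of CSPRNG output."""
    # 16 bytes = 128 bits of cryptographically secure randomness
    random_bytes = secrets.token_bytes(16)
    # hash it to guarantee a constant token length (64 hex characters)
    return hashlib.sha256(random_bytes).hexdigest()
```

The hash doesn't add any randomness of its own - it just normalises whatever the generator produces into a fixed-length hex string.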
This is a quick post about sending POST requests via the command line. I have been using curl for a while now - and I find it to be a good alternative to wget. I have just found out how to use it to send a POST request from the command line, and I thought that I would share my findings here.
Sending a POST request is really quite simple:
curl -X POST --data-binary "@$input_filename" -o "$output_filename" "$url"
$input_filename is the name of the file that contains the data that you want to send.
@- can be used to specify stdin, allowing you to pipe the output of the previous command into curl.
You can also drop the @ symbol and hard-wire the data you want to send into the command itself.
$output_filename is the name of the file you want to save the response to. You can drop -o $output_filename if you want curl to output the result to the standard output for further processing.
If you find that you get some kind of error message from the above, you may need to add --header "Expect: 100-continue" just before the $url - some servers require this.
The --data-binary is rather important, as if you use --data on its own, line breaks are not preserved for some bizarre reason.
I am finding that curl is much more powerful than I first expected, as it understands just about any protocol you care to name. I need to experiment with it further.
Curl also has a modular structure to its command line arguments, so you can tack an extra setting on the end and it will work exactly as you would expect it to (most of the time!).
Today (after lots of code tweaking in preparation), the tag clouds that recently appeared on the blog gained links on each tag, so you can see which posts have that particular tag. Currently, the Next Page and Previous Page links are a little bit buggy, but that problem should be fixed sooner or later - I just need to make a few more changes to the code to make it more intelligent.
Edit: The Next Page and Previous Page buttons have been fixed! I have also added First Page and Last Page buttons too - the last page button was the reason it took so long to refactor the code :)
While writing Pepperminty Wiki, I started seeing a rather nasty increase in page load times. After looking into it, I drew the conclusion that the file system accesses must have been causing the problem. At the time, I had multiple calls to PHP's glob function to find all the wiki pages in the current directory, and I was checking to see if a wiki page existed before reading it into memory.
The solution: a page index. To cut down on the number of reads from the file system, I created a json file that contained information about every page on the wiki. This way, it only needs to check the existence of and read in a single file before it can start rendering any one page. If the page index doesn't exist, it is automatically rebuilt using the glob function to find all the wiki pages in the current directory.
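Pepperminty Wiki itself is written in PHP, but the idea behind the page index can be sketched in Python; the pageindex.json filename, the .md page extension, and the metadata fields here are assumptions for illustration:

```python
import glob
import json
import os

INDEX_FILE = "pageindex.json"  # hypothetical filename for the index

def rebuild_index():
    """Scan the directory once and cache metadata about every page."""
    index = {}
    for filename in glob.glob("*.md"):
        page_name = os.path.splitext(filename)[0]
        index[page_name] = {
            "filename": filename,
            "size": os.path.getsize(filename),
            "lastmodified": os.path.getmtime(filename),
        }
    with open(INDEX_FILE, "w") as handle:
        json.dump(index, handle)
    return index

def load_index():
    """A single file read answers 'does this page exist?' for every request."""
    if not os.path.exists(INDEX_FILE):
        return rebuild_index()
    with open(INDEX_FILE) as handle:
        return json.load(handle)
```

Checking whether a page exists then becomes a dictionary lookup against load_index(), rather than another trip to the file system.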
In short: to increase the performance of your PHP application, try to reduce the number of reads (and writes!) to the file system to an absolute minimum.
I still need to update the code to allow users to delete pages via the GUI though, because at present you have to have access to the server files to delete a page and then remove it from the page index manually.