
Learn your terminal (or command line)

Enter stage left: the terminal (or command line, on Windows). That window with strange white text on a black background. You might not see it, but every operating system has one - humming away in the background, just waiting to be used. Epic arcane skills are needed to navigate this bizarre and perhaps dated window into your computer... or so it seems.

When you think of your computer, you will probably think of a GUI (a.k.a. goo-ey), with windows, a cursor, and perhaps a few buttons. GUIs make it easy for newcomers to find their way around a computer by referencing things that exist in the real world (e.g. folders and files, a floppy disk on the save button, etc.), but they can be inherently slower to use - especially for long series of repetitive tasks that stay essentially the same.

A terminal (Linux and friends) or a command line (Windows) is another view into your computer. It's a way of controlling your computer with text - text that follows a particular set of rules, and that can be saved and repeated at will through the use of scripts. It's built on commands, each of which does one thing and does it well. On their own they're mildly useful, but together they form a powerful framework that can perform almost any task. It's certainly different (and there's a bit of a learning curve, to be sure), but not as hard or arcane as you might currently think.
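
To give a flavour of how commands compose, here's a small sketch (assuming a Unix-like shell; the access.log filename is made up):


# Find the 10 most common IP addresses in a web server log.
# cut pulls out the first field of each line, sort groups identical
# addresses together, uniq -c counts each group, sort -rn orders the
# counts largest-first, and head keeps just the top 10.
cut -d' ' -f1 access.log | sort | uniq -c | sort -rn | head -n10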

Knowledge of the terminal or command line on your computer can be rather useful - especially so for those involved in computer science or technical support. How long would it take you to flatten a large set of deeply nested folders with a GUI? Or convert and recompress a few folders' worth of videos? Or even renew all of the SSL certificates on your web server? All of these things can be automated through the use of a terminal or command line, as in the sketch below.
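
Here's what the first of those tasks might look like (a sketch assuming a Unix-like system; the nested_folder and flat_folder names are made up):


# Move every file from a deeply nested tree into a single flat folder.
# -type f matches only files, and mv -i prompts before overwriting
# any files that happen to share a name.
find nested_folder/ -type f -exec mv -i {} flat_folder/ \;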

Even if you're just a casual computer user who's not into programming, it's still worth at least looking into. Perhaps it'll save you some time! Perhaps it'll save you from asking your friend where something is on their computer when you can't find it. Maybe it'll even save you if your computer suddenly decides it doesn't want to boot up properly. And you'll look cool doing it too :P (What better reason is there?)

If I've somehow managed to convince you to dive in and take up the challenge of learning, then I'll end this somewhat different post with a collection of places you can go to get started.

Pushing Git Commits to an SVN Repository

Yesterday I found myself in the awkward position of needing to push some commits in a git repository to an existing SVN repository. This came about because, as soon as my 3D module was finished at University, I copied my code over to a private git repository and started working from there instead. As luck would have it, I ended up running into a few nasty issues and needed to push my extra commits back to the original SVN repository.

Since it took me a while to work out how to accomplish this, I thought I'd write a blog post on it. Credit for this method goes to Pete Goodliffe for figuring out how to do it.

The first thing you need to do is to install git svn, if you haven't already. If you're on Windows, then you probably already have it if you've got git (which you'll need too). If you're not sure, simply type git svn into a command line (or terminal), and if you don't get a 'svn' is not a git command message, you've got it installed.

Next, you need to use git svn to clone your SVN repository. Even if you've got a clone sitting somewhere already, you should create a fresh one, just in case. This process may take a long time, depending on how many commits you've made to the SVN repository. Use the following command:


git svn clone {svnRepositoryUrl}

Replace {svnRepositoryUrl} with the URL of your remote SVN repository. Now that you've done that, cd into the created directory.
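
As a concrete example (with an entirely made-up URL and directory name):


git svn clone https://svn.example.com/repos/myproject
cd myproject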

Next, we need to add the git repository as a new remote. To do that, type something like this:


git remote add -f remote-repo {gitRepoUrl}

Replace {gitRepoUrl} with the URL of your remote git repository. Now that we've set everything up, we can replay the remote git repository's commits on top of the commits in the SVN repository:


git rebase --onto remotes/git-svn --root remote-repo/master

Again, this may take a while. Once this command has completed, all you have to do is a quick git svn dcommit to push the new commits up to the SVN repository.
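
For reference, here's the whole sequence in one place (the {placeholders} need filling in as described above):


git svn clone {svnRepositoryUrl}
cd {clonedDirectory}
git remote add -f remote-repo {gitRepoUrl}
git rebase --onto remotes/git-svn --root remote-repo/master
git svn dcommit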

Alternatively you can push the SVN commits to the remote git repository by entering the following commands instead of the rebase command above:


git pull remote-repo master # Make sure that the local workspace is in sync with the remote git repo
git push remote-repo # Push the SVN commits to the remote git repo

That's about all I wanted to include in this post. If you found this post useful (and even if you didn't!), please leave a comment down below.

3D Worley Noise with noisebox

Worley Noise

Recently, I've been writing a command line noise generation tool in C♯, which I've called noisebox. Initially I found a few noise generation algorithms that other people had already implemented, so all I had to do was write an extensible interface on top of the code I'd found. When I came to Worley noise, though, I couldn't find an implementation that I could understand (I haven't taken a proper look at delegates yet), so I decided to write my own. It's still got some bugs in it, but I've decided to release the Worley noise implementation on its own first, and then I will tidy up a few loose ends and release the full code on GitHub (look out for a blog post soon!).

Here's a link to a gist of the Worley noise generator: Worley.cs

The inbuilt documentation comments (XML documentation comments, I think they're called) should give you enough information to use it, but if you get stuck, post a comment below and I will try and help you out.

The code will be released under the Mozilla Public License 2.0.

This post's title includes the word "3D" - but the image at the top is very much in 2D. To demonstrate the 3D-ness of the algorithm, I added the --frames and --offset options to noisebox, rendered 1000 frames of noise, and then stitched them together with ffmpeg. I've uploaded the result to YouTube.
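
The stitching step looked something like this (the filenames and frame rate here are illustrative, not the exact command I ran):


# Combine frame_0001.png through frame_1000.png into a video.
# -framerate sets the input frame rate; libx264 with yuv420p keeps
# the output playable just about everywhere.
ffmpeg -framerate 30 -i frame_%04d.png -c:v libx264 -pix_fmt yuv420p worley.mp4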

Splitting your C♯ Code into Multiple Files 2: DLLs

I have found out about compiling and linking to DLLs. I think this is called dynamic linking, but again, I could be wrong.

We will be using the example files from last time:

filea.cs:

using System;

class ClassA
{
    public static void Main()
    {
        Console.WriteLine("This is a test from file A");
        Someplace.ClassB.PrintHello();
    }
}

fileb.cs:

using System;

namespace Someplace
{
    public class ClassB
    {
        public static void PrintHello()
        {
            Console.WriteLine("Another hello from file B!");
        }
    }
}

This is a 2 step process. First we need to compile the DLL, then we need to compile the main exe and link to the DLL.

To compile the DLL, you type something like this:

csc /target:library fileb.cs

The important bit here is /target:library. This tells the C♯ compiler to compile your code to a DLL and not an exe.

To compile the main exe, you need to type something like this:

csc /reference:fileb.dll filea.cs

This tells the C♯ compiler to compile the code in filea.cs into an exe, and link it to fileb.dll.
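
Putting the two steps together, the whole build looks like this, and running the resulting exe should print the messages from both files:


csc /target:library fileb.cs
csc /reference:fileb.dll filea.cs
filea.exe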

Taken from this MSDN page.

Splitting your C♯ Code into Multiple Files

I have just started to work out how to split my C♯ code into multiple files, and thought that I would share it with you. This post will be about what I believe to be static linking, but I could be wrong. Anyway, it is actually quite simple:

Here is the contents of filea.cs:

using System;

class ClassA
{
    public static void Main()
    {
        Console.WriteLine("This is a test from file A");
        Someplace.ClassB.PrintHello();
    }
}

and here is the contents of fileb.cs:

using System;

namespace Someplace
{
    class ClassB
    {
        public static void PrintHello()
        {
            Console.WriteLine("Another hello from file B!");
        }
    }
}

Then when you compile, you should do something like this:

csc filea.cs fileb.cs

This will tell the C♯ compiler to grab both filea.cs and fileb.cs, and to output a single filea.exe.
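
Running filea.exe should then print the messages from both files:


This is a test from file A
Another hello from file B!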

Next I will try to figure out how to create a .dll file and include that - then I can build my own libraries.

Sending POST Requests with curl

This is a quick post about sending POST requests via the command line. I have been using curl for a while now - and I find it to be a good alternative to wget. I have just found out how to use it to send a POST request from the command line, and I thought that I would share my findings here.

Sending a POST request is really quite simple:

curl -X POST --data-binary "@$input_filename" -o $output_filename $url
  • $input_filename is the name of the file that contains the data that you want to send.
    • @- can be used to specify stdin, allowing you to pipe the output of a previous command into curl.
    • You can also drop the @ symbol and hard-wire the data you want to send into the command itself.
  • $output_filename is the name of the file you want to save the response to. You can drop -o $output_filename if you want curl to output the result to the standard output for further processing.

If you find that you get some kind of error message from the above, you may need to add --header "Expect: 100-continue" just before the $url - some servers require this.
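
Here's a concrete sketch with made-up filenames and URL, including that header:


curl -X POST --header "Expect: 100-continue" --data-binary "@data.json" -o response.json https://api.example.com/submit

# Or pipe data in from another command via stdin:
gzip -c bigfile.xml | curl -X POST --data-binary "@-" -o response.json https://api.example.com/submit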

The --data-binary bit is rather important, as if you use --data on its own, line breaks are not preserved for some bizarre reason.

I am finding that curl is much more powerful than I first expected, as it understands just about any protocol you care to name... I need to experiment with it further.

Curl also has a modular structure to its command line arguments, so you can tack an extra setting on the end and it will work exactly as you would expect it to (most of the time!).
