
Virtual Reality: A Review

A considerable number of different 3D glasses scattered around. (Above: A considerable number of stereo 3D glasses technologies. Can you name all the techniques shown here? Comment below!)

Yesterday I spent a morning experimenting with my University's latest stereo equipment as part of the Virtual Environments module I've been taking this semester. With all that I've seen, I wanted to write something about my experiences on here.

Virtual reality and 3D are things that I haven't really had the chance to experience very often. In fact, the last time I was truly able to experience 3D was also through my University - probably through the open day (I can't remember). I've also never had the experience of using a controller before - which I'll talk about later.

With this in mind, it was all a rather new experience for me. The first tech we looked at was a stereo projector with active NVIDIA shutter glasses. They work by using LCD panels to block out each eye while the image for the other eye is being shown. To this end, they need to sync with the PC - hence their active nature - and it's the reason cinemas usually use passive circular polarising filters instead (especially since the screen must run at a minimum of 120Hz to avoid sickness and provide a reasonable experience).

Even so, the experience was quite amazing - even after seeing it once or twice before. With the additional knowledge about the way stereoscopic images are put together (using techniques such as parallax and concepts such as depth cues and depth budget), I found that I could appreciate what was going on much more than I could previously.

The head tracking that was paired with the shutter glasses was absolutely fascinating. If you were sitting in the seats in front of the stage, you got a bunch of window violations and a pair of hurting eyes - but when you were on the stage with the tracked glasses, it was a whole different story. It was literally like a window into another world - made all the more real by the projection onto the floor!

We also took a look at the CAVE, as it's colloquially known - a variant on the stereo screen made up of 4 faces of a cube, with pairs of projectors back-projecting onto each face - using the same infrared-based head tracking technology. This, too, was similarly cool - it has the ability to make you feel unsteady when looking down from the crow's nest of a large naval ship....

Though this is probably old news to most readers of this post, I found that the idea of using an Xbox controller to move the user around was quite a clever solution to the awkward issue that you can't walk around yourself much unless you like walking into invisible boxes wireframed in black. It certainly felt more natural than using a keyboard - which would have felt bulky and out-of-place. I'll be paying more attention to both controllers and other forms of alternative input when designing applications in future - as I've seen first-hand what a difference the appropriate form of input can make to the overall experience.

Until today, I've also been rather skeptical of Microsoft's HoloLens. Sorting through all the Microsoft-speak and buzzwords is somewhat challenging - but the lectures we've had over the last 5 weeks helped with that :D The headset itself is actually plenty comfortable (especially compared to the Oculus Rift), and the head-tracking is astonishing - especially considering that it's all inside-out (as opposed to outside-in). The holograms really look like they're hovering in the environment around you - apart from the fact that they're clearly computer-generated, of course - and the gestures are actually pretty intuitive, given how different the experience is to anything else I've tried before.

The biggest problem though, as you're probably aware, is the small field-of-view. It's offset slightly by the fact that you can see around the hologram-enabled area, but it still causes frequent window-violations and only covers a fraction of your effective vision - which they don't appear to take any notice of in their marketing material (see the image below - the pair of people in the image can probably only see the very centre quarter of that thundercloud). If they can fix that - then I think that they may have something truly world-changing. It could be used for all sorts of applications - especially in engineering I think.

An image of a pair of people standing altogether far too close to a holographic thundercloud diagram.

The sound system built into it was cool too - I didn't manage to check, but I'm pretty sure only I could hear it, though it certainly didn't sound like it! In the tutorial it really sounded like there was a voice coming from all around me - which leads me to think it might be programmable such that it appears to come from anywhere in the room. You might even be able to have a conversation with a holographic projection of someone standing on the table in front of you (like Microsoft's holoportation demo).

Finally, we took a look at some of the things that the department has been doing with the Oculus Rift. VR is an experience on a whole 'nother level - and best experienced for one's self. (It's really important to clean the lenses in the headset thoroughly and spend some time aligning them precisely to your eyes, I found - otherwise everything will be blurry.) I found the latter half of the (rather extensive) setup tutorial I went through later that day to test my ACW particularly immersive - to the point where you had to consciously remember where you were in the real world. Personally, I kept my leg just touching the edge of my chair to remind me! Though the audio wasn't as good as the HoloLens (see above), it was still adequate for the task at hand.

While I was running through the first-use setup tutorial, it was evident that it's quite clearly a Facebook product: you have to create an account (or sign in with Facebook), set privacy settings, and do a few other things it hinted at during the setup (I was interested in testing the code I'd written, so I didn't explore the consumer side of the device). If you're concerned about privacy, then the Oculus Rift is certainly not for you. Thankfully there are lots of other virtual reality headsets around to investigate instead :-)

The controllers made for an interesting experience too - they were a clever solution to the awkward problem that they couldn't track your hand as well as they'd need to in order to display it fully in virtual reality (Microsoft had it easy with the gestures for their HoloLens, apparently) - and they didn't end up breaking immersion too badly in the tutorial by roughly simulating your hand position based on which buttons and triggers you had pressed down. Definitely much better than a keyboard in this instance, since you couldn't even feel where the keyboard was in virtual reality - let alone find the keys on the keyboard to press, and that's not even mentioning the loss of movement and rotation you'd experience.

In conclusion, my whole view on stereo 3D, VR, and input methods has been changed in a single day - which I think is pretty good going! Stereo 3D and virtual reality are never going to go away - the potential behind them is just far too tempting not to play around with. Designing applications for VR is going to be a challenge for many developers, I think - since an understanding of depth cues and immersion is essential to designing effective experiences that don't make you feel sick. We can't leave the real world behind with VR yet (walking into a chair or table is an unpleasant experience), but what we've got right now is absolutely astonishing.

The Graphics Pipeline

Since the demonstration for my 3D work is tomorrow and I keep forgetting the details of the OpenGL graphics pipeline, I thought I'd write a blog post about it in the hopes that I'll remember it.

In case you didn't know, OpenGL uses a pipeline system to render graphics. Basically, your vertices and other stuff go in one end, and a video stream gets displayed at the other. This pipeline is made up of a number of stages. Each stage has its own shader, too:

The OpenGL pipeline.

There are rather a lot of stages here, so I've made this table that lists all the different shaders along with what they do:

Stage Programmable? Function
Vertex Shader Yes Raw vertex manipulation.
Hull Shader Yes Aka the Tessellation Control Shader. Determines the control points and tessellation levels that the tessellator uses.
Tessellator No Subdivides surfaces and adds vertices using the control points specified in the hull shader.
Domain Shader Yes Aka the Tessellation Evaluation Shader. Adds details to vertices. Example uses include simplifying models that are far away from the camera. Has access to the control points outputted by the hull shader.
Geometry Shader Yes Can add or remove whole primitives, but it's very slow - for adding detail it has largely been superseded by the tessellator (see above).
Rasterisation No Fixed function. Converts the models etc. into fragments ready for the fragment shader.
Fragment Shader Yes Insanely flexible. This is the shader that is used to add most, if not all, special effects. Lighting and shadows are done here too. Oddly enough, Microsoft decided that they would call it the "Pixel Shader" in DirectX and not the fragment shader.
Compute Shader Yes Not part of the graphics pipeline. Lets you utilise the massively parallel number-crunching power of the graphics card to do arbitrary calculations.

The tessellator is very interesting. It replaces the geometry shader (which, although you can technically still use, you really shouldn't), and allows you to add detail to your model on the GPU, thereby reducing the number of vertices you send to the graphics card. It also allows you to customise your models before they hit rasterisation and the fragment shader, so you could simplify those models that are further away, for instance.
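
To illustrate the distance-based level-of-detail idea in plain JavaScript - a hypothetical sketch only, since the real decision is made per-patch by the hull shader on the GPU, and the function name and numbers here are made up:

```javascript
// Hypothetical sketch: pick a tessellation level from camera distance,
// the same trade-off the hull shader makes per-patch on the GPU.
function tessellationLevel(distanceToCamera, maxLevel = 64) {
    // Nearby patches get many subdivisions, distant ones very few.
    const level = Math.round(maxLevel / Math.max(distanceToCamera, 1));
    return Math.min(Math.max(level, 1), maxLevel);
}

console.log(tessellationLevel(1));   // 64 - right next to the camera
console.log(tessellationLevel(8));   // 8
console.log(tessellationLevel(100)); // 1 - far away: barely subdivided
```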

As an example in our lecture, we were shown the Haven Benchmark. Our lecturer turned the tessellator on and off to show us what it actually does. Since you can't see what I saw, here's an animation I made showing you the difference:

The other pipeline to be aware of is the coordinate pipeline. This pipeline specifies how coordinates are transformed from one space to another. Here's another diagram:

The coordinate pipeline.

Again, this looks complicated, but it isn't really. A similar process would be followed for 2D graphics as well as 3D ones. If you take it one step at a time, it doesn't seem so bad.

  • Model Space - This refers to coordinates relative to any given model. Each model will have the coordinates of each of its vertices stored relative to its central point.
  • World Space - Multiplying all of a model's coordinates by the model matrix brings it into World Space. World space is relative to the centre of your scene.
  • View Space - Multiplying all the coordinates in a world by the view matrix brings everything into View Space. View Space is relative to the camera. It is for this reason that people say that you cheat and move the whole world around the camera - instead of moving the camera around the scene.
  • Normalised Device Space - Multiplying everything in view space by the projection matrix brings it into Normalised Device Coordinates. Graphics cards these days apparently like to consider points between $(-1, -1, -1)$ and $(1, 1, 1)$ (if you're using OpenGL, that is. DirectX is different - it prefers $(-1, -1, 0)$ to $(1, 1, 1)$ instead). Points in this space are called Normalised Device Coordinates, and anything outside of the aforementioned ranges will be cut off. No exceptions.
  • Image Space - When your scene has been through the entirety of the Graphics pipeline described above, it will find itself in Image Space. Image space is 2D (most of the time) and references the actual pixels in the resulting image.
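
As a rough sketch of the first few steps of that chain in plain JavaScript (the helper names are hypothetical, and for simplicity both matrices here are plain translations - a real application would use a proper maths library):

```javascript
// Minimal 4x4 matrix helpers (row-major, column vectors) to illustrate
// the coordinate pipeline. Illustrative only - not production code.
function translation(tx, ty, tz) {
    return [
        [1, 0, 0, tx],
        [0, 1, 0, ty],
        [0, 0, 1, tz],
        [0, 0, 0, 1],
    ];
}

function transform(matrix, [x, y, z, w]) {
    // Multiply a 4x4 matrix by the column vector (x, y, z, w).
    return matrix.map(row => row[0] * x + row[1] * y + row[2] * z + row[3] * w);
}

// A vertex in model space, relative to the model's own centre...
const modelSpace = [1, 0, 0, 1];

// ...moved into world space by the model matrix (here the model
// sits at (5, 0, -2) in the world)...
const modelMatrix = translation(5, 0, -2);
const worldSpace = transform(modelMatrix, modelSpace);

// ...and into view space by the view matrix, which moves the world
// around the camera (camera at (0, 0, 10), so translate by its inverse).
const viewMatrix = translation(0, 0, -10);
const viewSpace = transform(viewMatrix, worldSpace);

console.log(worldSpace); // [ 6, 0, -2, 1 ]
console.log(viewSpace);  // [ 6, 0, -12, 1 ]
```

The projection step works the same way, just with a perspective matrix instead of a translation.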

Converting between all these different coordinate spaces is best left up to the vertex shader - it's much easier to shove a bunch of transformation matrices at it and get it to do all the calculations for you. It's so easy, you can do it in just 11 lines of vertex shader code:

#version 330
uniform mat4 uModel;      // The model matrix
uniform mat4 uView;       // The view matrix
uniform mat4 uProjection; // The projection matrix

in vec3 vPosition; // The position of the current vertex

void main()
{
    gl_Position = uProjection * uView * uModel * vec4(vPosition, 1.0);
}

If you made it this far, congratulations! That concludes our (rather long) journey through the graphics pipeline and its associated coordinate spaces. We looked at each of the various shaders and what they do, and learnt about each of the different coordinate spaces involved and why they are important.

I hope that someone besides myself found it both useful and educational! If you did, or you have any questions, please post a comment below. If you have spotted a mistake - please correct me in the comments below too! I try to make sure that posts like this one can be used by both myself and others as a reference in the future.


Learning Three.js four: Maze Birthday Card

A maze for a birthday card

Hooray for the fourth 3 dimensional experiment!

This one is a birthday card for someone I know and is also much more complex than the others I have done so far. I ported a maze generation algorithm that I originally wrote in Python 3 to Javascript (I also have ported it to Lua, keep a look out for a post about Lua soon!) and then set about rendering it in 3D.

The aim is to get to the centre of the maze, where some 3D(!) spinning text is waiting for you. The maze generation algorithm is not designed to have the goal in the centre of the maze, so you will find that there are multiple paths that you can take from the centre that lead to different parts of the maze. I will make a separate post about the maze generation algorithm I have written later.
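
I haven't described the algorithm itself yet, but for the curious, one common maze generation approach looks something like this depth-first "recursive backtracker" sketch in JavaScript (illustrative only - not necessarily the algorithm my maze actually uses):

```javascript
// A minimal depth-first ("recursive backtracker") maze generator - one
// common approach, and not necessarily the exact algorithm this post's
// maze uses. In the grid, true = passage and false = wall.
function generateMaze(width, height) {
    const grid = Array.from({ length: height }, () => Array(width).fill(false));
    const stack = [[1, 1]]; // start carving from cell (1, 1)
    grid[1][1] = true;
    while (stack.length > 0) {
        const [x, y] = stack[stack.length - 1];
        // Unvisited cells two steps away, in random order.
        const next = [[2, 0], [-2, 0], [0, 2], [0, -2]]
            .map(([dx, dy]) => [x + dx, y + dy])
            .filter(([nx, ny]) =>
                nx > 0 && nx < width && ny > 0 && ny < height && !grid[ny][nx])
            .sort(() => Math.random() - 0.5);
        if (next.length === 0) {
            stack.pop(); // dead end: backtrack
            continue;
        }
        const [nx, ny] = next[0];
        grid[(y + ny) / 2][(x + nx) / 2] = true; // knock through the wall between
        grid[ny][nx] = true;
        stack.push([nx, ny]);
    }
    return grid;
}

// Print a small maze: '#' for walls, spaces for passages.
const maze = generateMaze(11, 7);
console.log(maze.map(row => row.map(c => (c ? " " : "#")).join("")).join("\n"));
```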

The collision detection is tile based and rather complicated (for me, anyway). I am surprised that nobody has written a library for this sort of thing.....

You can play with it here: Learning Three.js four

Learning Three.js three: Texturing

Three.js three

This week I have been busy, but I still found time to create a new Three.js experiment, looking into simple texturing. The cube in the middle has a simple soil texture, and I gave the skybox a bunch of cloud tiles. It took a whole lot of fiddling to get it to display the way it does now - I couldn't figure out the correct rotation for the tiles :D

This experiment also uses orbitcontrols.js (from here, I think?). Click and drag to move around, and scroll up / down to zoom.

I have attempted to optimise it by only rendering a frame when the camera is actually moved or a texture is loaded too.

You can find it here: three

Learning Three.js 2: Catch the Sphere

Catch the sphere in action.

The second thing that I have made in my endeavours to learn three.js is a simple catch-the-sphere game. Every time you get close to the orange sphere, it will move to another random place in the room, add one to your score, and play a sound.

I cheated a little bit on the collision detection... I really ought to look into writing a simple 3D collision detection library that uses boxes and spheres.

The collision detection for the sphere is really done in 2D - I check only on the x and z axes to see if you are within a certain distance of the sphere since your y position doesn't change. For the bounding box, I simply check to make sure that your x and z co-ordinates aren't too big or too small.
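
Those two checks can be sketched like this (the names and numbers are illustrative, not the experiment's actual code):

```javascript
// Sketch of the checks described above: sphere "catch" detection on the
// x/z plane only, plus clamping the player inside the room. The constants
// here are made up for illustration.
const ROOM_HALF_SIZE = 20;  // the room spans -20..20 on both x and z
const CATCH_DISTANCE = 2;   // how close counts as catching the sphere

function caughtSphere(player, sphere) {
    // 2D distance check: the player's y never changes, so ignore it.
    const dx = player.x - sphere.x;
    const dz = player.z - sphere.z;
    return Math.sqrt(dx * dx + dz * dz) < CATCH_DISTANCE;
}

function clampToRoom(player) {
    // Keep x and z from getting too big or too small.
    player.x = Math.min(Math.max(player.x, -ROOM_HALF_SIZE), ROOM_HALF_SIZE);
    player.z = Math.min(Math.max(player.z, -ROOM_HALF_SIZE), ROOM_HALF_SIZE);
    return player;
}

console.log(caughtSphere({ x: 0, z: 0 }, { x: 1, z: 1 })); // true (~1.41 away)
console.log(caughtSphere({ x: 0, z: 0 }, { x: 5, z: 0 })); // false
console.log(clampToRoom({ x: 99, z: -3 }));                // { x: 20, z: -3 }
```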

You can find it here: two

Learning Three.js: Spinning Cube

A Spinning Cube

TL; DR: I am going to learn Three.js. I made this.

3D programming is much harder than 2D programming. You have to think about renderers and scenes and cameras and that extra dimension when specifying co-ordinates. I have wanted to break into the 3D world with my programming for some time, but every time I tried, I got confused.

Enter Three.js.

Three.js is a Javascript library that abstracts away all the complications of 3D programming that come with WebGL and makes 3D programming much easier. This series will (hopefully!) document the things that I learn about Three.js and programming in 3D.

This post is about my first attempt: A spinning cube. I found this tutorial to start me off. Although it is a little bit outdated, it works fine for my purposes.

The first thing I needed to wrap my head around was co-ordinates. In Three.js they work as follows:

Three.js Co-ordinates visualisation

(Image credit: Originally made by Keistutis, tweaked for use here by me)

If you imagine your physical laptop screen as the 3D space that your stuff lives in, then the x co-ordinate is from side to side (left: negative, right: positive), the y co-ordinate goes up (positive) and down (negative), and the z co-ordinate goes in front and behind (coming out of your screen: positive, going into your screen: negative) of your screen.

I am noting down the co-ordinate system here for my own reference... :D

You can see the code in action that I have written here: one - spinning cube.

If you can't see it, please check that your browser supports WebGL. Some (older?) Chromebooks also have a buggy WebGL implementation, if you use one of those.

Art by Mythdael