
The Graphics Pipeline

Since the demonstration for my 3D work is tomorrow and I keep forgetting the details of the OpenGL graphics pipeline, I thought I'd write a blog post about it in the hopes that I'll remember it.

In case you didn't know, OpenGL uses a pipeline system to render graphics. Basically, your vertices and other stuff go in one end, and a video stream comes out of the other. This pipeline is made up of a number of stages. Each stage has its own shader, too:

The OpenGL pipeline.

There are rather a lot of stages here, so I've made this table that lists all the different shaders along with what they do:

Stage | Programmable? | Function
Vertex Shader | Yes | Raw vertex manipulation.
Hull Shader | Yes | Aka the Tessellation Control Shader. Determines the control points and tessellation levels for the tessellator.
Tessellator | No | Fixed function, though highly configurable. Subdivides surfaces and adds vertices using the control points specified in the hull shader.
Domain Shader | Yes | Aka the Tessellation Evaluation Shader. Positions the vertices generated by the tessellator. Example uses include simplifying models that are far away from the camera. Has access to the control points outputted by the hull shader.
Geometry Shader | Yes | Largely superseded by the tessellator (see above). Very slow.
Rasterisation | No | Fixed function. Converts the models etc. into fragments ready for the fragment shader.
Fragment Shader | Yes | Insanely flexible. This is the shader that is used to add most, if not all, special effects. Lighting and shadows are done here too. Oddly enough, Microsoft decided to call it the "Pixel Shader" in DirectX rather than the fragment shader.
Compute Shader | Yes | Not part of the graphics pipeline. Lets you utilise the graphics card - essentially a massively parallel matrix calculator - to do arbitrary calculations.
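
Of all these stages, the vertex and fragment shaders are the two you're most likely to write yourself. To give you an idea of how they slot together, here's a minimal sketch using three.js' ShaderMaterial (the colour and geometry here are arbitrary choices of mine, not anything in particular):

// Assumes three.js is loaded. ShaderMaterial supplies projectionMatrix,
// modelViewMatrix and position to the vertex shader automatically.
const material = new THREE.ShaderMaterial({
    vertexShader: `
        void main() {
            // Vertex shader: transform each vertex into clip space
            gl_Position = projectionMatrix * modelViewMatrix * vec4(position, 1.0);
        }
    `,
    fragmentShader: `
        void main() {
            // Fragment shader: colour every fragment orange
            gl_FragColor = vec4(1.0, 0.5, 0.0, 1.0);
        }
    `
});
const mesh = new THREE.Mesh(new THREE.BoxGeometry(1, 1, 1), material);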

The tessellator is very interesting. It replaces the geometry shader (which, although you can technically still use, you really shouldn't), and allows you to add detail to your models on the GPU, thereby reducing the number of vertices you have to send to the graphics card. It also allows you to customise your models before they hit rasterisation and the fragment shader, so you could simplify models that are further away from the camera, for instance.

As an example in our lecture, we were shown the Heaven Benchmark. Our lecturer turned the tessellator on and off to show us what it actually does. Since you can't see what I saw, here's an animation I made showing the difference:

The other pipeline to be aware of is the coordinate pipeline. This pipeline specifies how coordinates are transformed from one space to another. Here's another diagram:

The coordinate pipeline.

Again, this looks complicated, but it isn't really. A similar process is followed for 2D graphics as well as 3D ones. If you take it one step at a time, it doesn't seem so bad - and there's a short code sketch after the list that walks a single point through every space.

  • Model Space - This refers to coordinates relative to any given model. Each model will have the coordinates of each of its vertices stored relative to its central point.
  • World Space - Multiplying all of a model's coordinates by the model matrix brings it into World Space. World space is relative to the centre of your scene.
  • View Space - Multiplying all the coordinates in the world by the view matrix brings everything into View Space, which is relative to the camera. This is why people say that you never really move the camera around the scene - you cheat and move the entire scene around the camera instead.
  • Normalised Device Space - Multiplying everything in view space by the projection matrix brings it into Normalised Device Coordinates. Graphics cards like to consider only the points between $(-1, -1, -1)$ and $(1, 1, 1)$ (if you're OpenGL, that is. DirectX is different - it prefers $(-1, -1, 0)$ to $(1, 1, 1)$ instead). Anything outside these ranges is clipped. No exceptions.
  • Image Space - When your scene has been through the entirety of the Graphics pipeline described above, it will find itself in Image Space. Image space is 2D (most of the time) and references the actual pixels in the resulting image.
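
To see this in action, here's a quick sketch using three.js' maths classes that pushes a single point through each of these spaces in turn (all the numbers here are made up for illustration):

// A model sitting 2 units to the right of the scene's centre
const modelMatrix = new THREE.Matrix4().makeTranslation(2, 0, 0);

// A camera 5 units back from the centre, looking down the negative z axis
const camera = new THREE.PerspectiveCamera(75, 16 / 9, 0.1, 100);
camera.position.set(0, 0, 5);
camera.updateMatrixWorld(); // refreshes matrixWorldInverse (the view matrix) too

const modelSpace = new THREE.Vector3(1, 0, 0); // relative to the model's centre
const worldSpace = modelSpace.clone().applyMatrix4(modelMatrix);
const viewSpace = worldSpace.clone().applyMatrix4(camera.matrixWorldInverse);
// applyMatrix4 also does the divide by w here, giving normalised device coordinates
const ndc = viewSpace.clone().applyMatrix4(camera.projectionMatrix);

console.log(modelSpace, worldSpace, viewSpace, ndc);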

Converting between all these different coordinate spaces is best left up to the vertex shader - it's much easier to shove a bunch of transformation matrices at it and get it to do all the calculations for you. It's so easy, you can do it in only a dozen lines of vertex shader code:

#version 330
uniform mat4 uModel; // The model matrix
uniform mat4 uView; // The view matrix
uniform mat4 uProjection; // The projection matrix

in vec3 vPosition; // The position of the current vertex, in model space

void main()
{
    // Column-major matrices apply right-to-left: model first, then view, then projection
    gl_Position = uProjection * uView * uModel * vec4(vPosition, 1.0);
}
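
As an aside, if you're driving the GPU directly rather than through a library, those matrices reach the shader as uniforms - something like the following sketch, which assumes you already have a compiled and linked shaderProgram and three 16-element Float32Array matrices. (WebGL wants GLSL ES rather than #version 330, but the plumbing is the same idea.)

// gl is a WebGL rendering context; shaderProgram uses the vertex shader above
gl.useProgram(shaderProgram);
gl.uniformMatrix4fv(gl.getUniformLocation(shaderProgram, "uModel"), false, modelMatrix);
gl.uniformMatrix4fv(gl.getUniformLocation(shaderProgram, "uView"), false, viewMatrix);
gl.uniformMatrix4fv(gl.getUniformLocation(shaderProgram, "uProjection"), false, projectionMatrix);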

If you made it this far, congratulations! That concludes our (rather long) journey through the graphics pipeline and its associated coordinate spaces. We looked at each of the various shaders and what they do, and learnt about each of the different coordinate spaces involved and why they are important.

I hope that someone besides myself found it both useful and educational! If you did, or you have any questions, please post a comment below. If you have spotted a mistake - please correct me in the comments below too! I try to make sure that posts like this one can be used by both myself and others as a reference in the future.


Learning Three.js four: Maze Birthday Card

A maze for a birthday card

Hooray for the fourth 3 dimensional experiment!

This one is a birthday card for someone I know, and is also much more complex than the others I have done so far. I ported a maze generation algorithm that I originally wrote in Python 3 to Javascript (I've also ported it to Lua - keep an eye out for a post about Lua soon!) and then set about rendering it in 3D.

The aim is to get to the centre of the maze, where some 3D(!) spinning text is waiting for you. The maze generation algorithm isn't designed to have the goal in the centre of the maze, so you will find that there are multiple paths you can take from the centre that lead to different parts of the maze. I will write a separate post about the maze generation algorithm later.
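
In the meantime, here's a minimal sketch of one classic approach - a recursive backtracker that carves passages out of a solid grid of walls. It's not necessarily the exact algorithm I used:

// width and height should be odd; true = wall, false = passage
function generateMaze(width, height) {
    const maze = Array.from({ length: height }, () => new Array(width).fill(true));
    function carve(x, y) {
        maze[y][x] = false;
        // Look at the 4 cells two steps away, in a random order (quick & dirty shuffle)
        const directions = [[0, -2], [0, 2], [-2, 0], [2, 0]].sort(() => Math.random() - 0.5);
        for (const [dx, dy] of directions) {
            const nx = x + dx, ny = y + dy;
            if (nx > 0 && nx < width - 1 && ny > 0 && ny < height - 1 && maze[ny][nx]) {
                maze[y + dy / 2][x + dx / 2] = false; // knock through the wall in between
                carve(nx, ny);
            }
        }
    }
    carve(1, 1);
    return maze;
}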

The collision detection is tile based and rather complicated (for me, anyway). I'm surprised that nobody has written a library for this sort of thing...
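
At its core though, tile based collision detection boils down to mapping a world space position back to a tile in the grid and refusing to move into a wall. A rough sketch (tileSize, player, dx and dz are stand-in names):

function canMoveTo(maze, tileSize, x, z) {
    // Convert a world space position into maze grid co-ordinates
    const tileX = Math.floor(x / tileSize), tileZ = Math.floor(z / tileSize);
    // Anything outside the grid counts as solid
    if (tileZ < 0 || tileZ >= maze.length || tileX < 0 || tileX >= maze[0].length)
        return false;
    return !maze[tileZ][tileX]; // true in the grid means wall
}

// Only apply a movement if the destination tile is clear
if (canMoveTo(maze, 10, player.position.x + dx, player.position.z + dz)) {
    player.position.x += dx;
    player.position.z += dz;
}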

You can play with it here: Learning Three.js four

Learning Three.js three: Texturing

Three.js three

This week I have been busy, but I have still had time to create a new Three.js experiment. This week, I looked into simple texturing. The cube in the middle has a simple soil texture, and I gave the skybox a bunch of clouds from opengameart.org. It took a whole lot of fiddling to get it to display the way it does now - I couldn't figure out the correct rotation for the tiles :D
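
For reference, loading and applying a texture in three.js looks roughly like this (the texture path and sizes here are placeholders):

// Load a texture asynchronously and wrap it around a cube
new THREE.TextureLoader().load("textures/soil.jpg", function (texture) {
    texture.wrapS = texture.wrapT = THREE.RepeatWrapping; // tile instead of stretching
    const cube = new THREE.Mesh(
        new THREE.BoxGeometry(10, 10, 10),
        new THREE.MeshLambertMaterial({ map: texture }) // needs a light in the scene
    );
    scene.add(cube);
});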

This experiment also uses OrbitControls.js (from here, I think?). Click and drag to move around, and scroll up / down to zoom.

I've also attempted to optimise it by only rendering a frame when the camera actually moves or a texture finishes loading.
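
The trick is to drop the usual requestAnimationFrame loop and only redraw when something changes - OrbitControls fires a "change" event whenever the camera moves:

function render() {
    renderer.render(scene, camera);
}
controls.addEventListener("change", render); // the camera moved
// ...and call render() from any texture load callbacks too
render(); // draw the initial frame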

You can find it here: three

Learning Three.js 2: Catch the Sphere

Catch the sphere in action.

The second thing that I have made in my endeavours to learn three.js is a simple catch-the-sphere game. Every time you get close to the orange sphere, it jumps to another random place in the room, your score goes up by one, and a sound plays.

I cheated a little bit on the collision detection... I really ought to look into writing a simple 3D collision detection library that uses boxes and spheres.

The collision detection for the sphere is really done in 2D - I check only on the x and z axes to see if you are within a certain distance of the sphere since your y position doesn't change. For the bounding box, I simply check to make sure that your x and z co-ordinates aren't too big or too small.
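
In code, both checks are only a few lines (catchDistance, roomSize and respawnSphere are stand-ins for whatever fits your scene):

// 2D distance check: only x and z matter, since the player's y never changes
const dx = player.position.x - sphere.position.x,
      dz = player.position.z - sphere.position.z;
if (Math.sqrt(dx * dx + dz * dz) < catchDistance) {
    score++;
    respawnSphere(); // move the sphere somewhere else at random
}

// Bounding box: clamp x and z so the player stays inside the room
player.position.x = Math.max(-roomSize, Math.min(roomSize, player.position.x));
player.position.z = Math.max(-roomSize, Math.min(roomSize, player.position.z));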

You can find it here: two

Learning Three.js: Spinning Cube

A Spinning Cube

TL;DR: I am going to learn Three.js. I made this.

3D programming is much harder than 2D programming. You have to think about renderers and scenes and cameras and that extra dimension when specifying co-ordinates. I have wanted to break into the 3D world with my programming for some time, but every time I tried, I got confused.

Enter Three.js.

Three.js is a Javascript library that abstracts away all the complications that come with raw WebGL, and makes 3D programming much easier. This series will (hopefully!) document the things that I learn about Three.js and programming in 3D.

This post is about my first attempt: a spinning cube. I found this tutorial to start me off. Although it's a little bit outdated, it works fine for my purposes.
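
The whole thing boils down to surprisingly little code. Here's a minimal sketch of the same idea in more recent three.js (not the tutorial's exact code):

// A scene, a camera, and a renderer attached to the page
const scene = new THREE.Scene();
const camera = new THREE.PerspectiveCamera(75, window.innerWidth / window.innerHeight, 0.1, 1000);
camera.position.z = 5;
const renderer = new THREE.WebGLRenderer();
renderer.setSize(window.innerWidth, window.innerHeight);
document.body.appendChild(renderer.domElement);

// The cube itself
const cube = new THREE.Mesh(
    new THREE.BoxGeometry(1, 1, 1),
    new THREE.MeshBasicMaterial({ color: 0x00ff00 })
);
scene.add(cube);

// Spin a little every frame
function animate() {
    requestAnimationFrame(animate);
    cube.rotation.x += 0.01;
    cube.rotation.y += 0.01;
    renderer.render(scene, camera);
}
animate();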

The first thing I needed to wrap my head around was co-ordinates. In Three.js, they work as follows:

Three.js Co-ordinates visualisation

(Image credit: Originally made by Keistutis on openclipart.org, tweaked for use here by me)

If you imagine your physical laptop screen as the 3D space that your stuff lives in, then the x co-ordinate runs from side to side (left: negative, right: positive), the y co-ordinate goes up (positive) and down (negative), and the z co-ordinate comes out of your screen (positive) and goes into it (negative).

I am noting down the co-ordinate system here for my own reference... :D

You can see the code I have written in action here: one - spinning cube.

If you can't see it, please check get.webgl.org - it will tell you whether your browser supports WebGL or not. Some (older?) Chromebooks also have a buggy WebGL implementation.
