## The Graphics Pipeline

Since the demonstration for my 3D work is tomorrow and I keep forgetting the details of the OpenGL graphics pipeline, I thought I'd write a blog post about it in the hopes that I'll remember it.

In case you didn't know, OpenGL uses a *pipeline* system to render graphics. Basically, your vertices and other data go in one end, and a video stream comes out of the other. This pipeline is made up of a number of stages, and most of those stages have their own shader.

There are rather a lot of stages, so I've made a table that lists all the different shaders along with what they do:

Stage | Programmable? | Function |
---|---|---|
Vertex Shader | Yes | Raw vertex manipulation. |
Hull Shader | Yes | Aka the Tessellation Control Shader. Determines the tessellation levels and control points for the tessellator - for example, requesting fewer subdivisions for models that are far from the camera. |
Tessellator | No | Fixed function. Subdivides surfaces, generating new vertices according to the tessellation levels and control points specified in the hull shader. |
Domain Shader | Yes | Aka the Tessellation Evaluation Shader. Computes the final position of each vertex the tessellator generates. Has access to the control points outputted by the hull shader. |
Geometry Shader | Yes | Operates on whole primitives. Largely superseded by the tessellator (see above). Very slow. |
Rasterisation | No | Fixed function. Converts primitives into fragments ready for the fragment shader. |
Fragment Shader | Yes | Insanely flexible. This is the shader that is used to add most, if not all, special effects. Lighting and shadows are done here too. Oddly enough, Microsoft decided that they would call it the "Pixel Shader" in DirectX and not the fragment shader. |
Compute Shader | Yes | Not part of the graphics pipeline. Lets you utilise the power of the GPU for general-purpose computation. |

The tessellator is very interesting. It replaces the geometry shader (which, although you *can* technically use, you really shouldn't) and allows you to add detail to your models on the GPU, thereby reducing the number of vertices you send to the graphics card. It also allows you to customise your models before they hit rasterisation and the fragment shader, so you could simplify models that are further away, for instance.
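To give a feel for what the tessellator actually does, here's a tiny conceptual sketch in Python (my own illustration, not real GPU code - actual tessellation happens in fixed-function hardware, driven by the levels set in the hull shader). It performs one level of subdivision by splitting each edge of a triangle at its midpoint, turning one triangle into four:

```python
# Conceptual CPU-side sketch of one level of triangle subdivision.
# Real tessellation runs in fixed-function GPU hardware.

def midpoint(a, b):
    """Return the midpoint of two 3D points."""
    return tuple((a[i] + b[i]) / 2 for i in range(3))

def subdivide(triangle):
    """Split one triangle into four by bisecting each edge."""
    a, b, c = triangle
    ab, bc, ca = midpoint(a, b), midpoint(b, c), midpoint(c, a)
    return [
        (a, ab, ca),
        (ab, b, bc),
        (ca, bc, c),
        (ab, bc, ca),  # the middle triangle
    ]

triangle = ((0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 1.0, 0.0))
tris = subdivide(triangle)
print(len(tris))  # 4 triangles from 1
```

Run this again on each of the four output triangles and you get sixteen, which is why even modest tessellation levels add a lot of detail.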

As an example in our lecture, we were shown the Haven Benchmark. Our lecturer turned the tessellator on and off to show us what it actually does. Since you can't see what I saw, here's an animation I made showing you the difference:

The other pipeline to be aware of is the *coordinate pipeline*. This pipeline specifies how coordinates are transformed from one space to another. Here's another diagram:

Again, this looks complicated, but it isn't really. A similar process would be followed for 2D graphics as well as 3D ones. If you take it one step at a time, it doesn't seem so bad.

- Model Space - This refers to coordinates relative to any given model. Each model stores the coordinates of its vertices relative to its own central point.
- World Space - Multiplying all of a model's coordinates by the model matrix brings it into *World Space*, which is relative to the centre of your scene.
- View Space - Multiplying all the coordinates in world space by the view matrix brings everything into *View Space*, which is relative to the camera. It is for this reason that people say you cheat and move everything around the camera, instead of moving the camera around the scene.
- Normalised Device Space - Multiplying everything in view space by the projection matrix brings it into *Normalised Device Coordinates*. Graphics cards only consider points between $(-1, -1, -1)$ and $(1, 1, 1)$ (if you're OpenGL, that is - DirectX is different, preferring $(-1, -1, 0)$ to $(1, 1, 1)$ instead), and anything outside those ranges is clipped. No exceptions.
- Image Space - When your scene has been through the entirety of the graphics pipeline described above, it finds itself in *Image Space*. Image space is 2D (most of the time) and references the actual pixels of the resulting image.
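The chain above is just a series of matrix multiplications, which you can sketch on the CPU with plain 4x4 matrix maths. Here's a Python illustration (the matrices are made up for the example - a translation for the model and view steps, and a simple orthographic scale standing in for a real projection matrix):

```python
# Walking a single vertex through the coordinate pipeline:
# model space -> world space -> view space -> normalised device coordinates.

def mat_mul(A, B):
    """Multiply two 4x4 matrices (row-major lists of lists)."""
    return [[sum(A[i][k] * B[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def transform(M, v):
    """Apply a 4x4 matrix to a point given as (x, y, z, w)."""
    return tuple(sum(M[i][k] * v[k] for k in range(4)) for i in range(4))

def translation(tx, ty, tz):
    """A matrix that moves points by (tx, ty, tz)."""
    return [[1, 0, 0, tx], [0, 1, 0, ty], [0, 0, 1, tz], [0, 0, 0, 1]]

def orthographic(s):
    """Scale a cube of half-size s down to the -1..1 NDC cube."""
    return [[1/s, 0, 0, 0], [0, 1/s, 0, 0], [0, 0, 1/s, 0], [0, 0, 0, 1]]

model = translation(2, 0, 0)    # model space -> world space
view = translation(0, 0, -5)    # world space -> view space
projection = orthographic(8)    # view space  -> NDC

# Combine into one matrix, applied right-to-left: model, then view, then projection.
mvp = mat_mul(projection, mat_mul(view, model))
ndc = transform(mvp, (1, 1, 1, 1))  # a vertex at (1, 1, 1) in model space
print(ndc)  # (0.375, 0.125, -0.5, 1) -- inside the -1..1 cube, so not clipped
```

Notice that the three matrices collapse into a single `mvp` matrix, which is exactly why engines often pre-multiply them before handing them to the shader.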

Converting between all these different coordinate spaces is best left up to the vertex shader - it's much easier to shove a bunch of transformation matrices at it and get it to do all the calculations for you. It's so easy, you can do it in just a few lines of vertex shader code:

```
#version 330

uniform mat4 uModel;      // The model matrix
uniform mat4 uView;       // The view matrix
uniform mat4 uProjection; // The projection matrix

in vec3 vPosition; // The position of the current vertex

void main()
{
	// GLSL treats vertices as column vectors, so the matrices apply
	// right-to-left: model first, then view, then projection.
	gl_Position = uProjection * uView * uModel * vec4(vPosition, 1.0);
}
```
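One thing to watch out for: the order of those multiplications matters, because matrix multiplication isn't commutative. A quick Python sketch (with made-up scale and translation matrices) shows that applying the same two transformations in different orders lands the vertex in different places:

```python
# Matrix multiplication is not commutative: translate-then-scale and
# scale-then-translate produce different results for the same vertex.

def mat_mul(A, B):
    """Multiply two 4x4 matrices (row-major lists of lists)."""
    return [[sum(A[i][k] * B[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def transform(M, v):
    """Apply a 4x4 matrix to a point given as (x, y, z, w)."""
    return tuple(sum(M[i][k] * v[k] for k in range(4)) for i in range(4))

scale2 = [[2, 0, 0, 0], [0, 2, 0, 0], [0, 0, 2, 0], [0, 0, 0, 1]]  # scale by 2
move1x = [[1, 0, 0, 1], [0, 1, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1]]  # move +1 in x

v = (1, 0, 0, 1)
a = transform(mat_mul(scale2, move1x), v)  # move first, then scale
b = transform(mat_mul(move1x, scale2), v)  # scale first, then move
print(a, b)  # (4, 0, 0, 1) (3, 0, 0, 1)
```

This is why getting `uProjection * uView * uModel` the wrong way round produces a blank screen or wildly misplaced geometry rather than a subtle error.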

If you made it this far, congratulations! That concludes our (rather long) journey through the graphics pipeline and its associated coordinate spaces. We looked at each of the various shaders and what they do, and learnt about each of the different coordinate spaces involved and why they are important.

I hope that someone besides myself found it both useful and educational! If you did, or you have any questions, please post a comment below. If you have spotted a mistake - please correct me in the comments below too! I try to make sure that posts like this one can be used by both myself and others as a reference in the future.

### Sources

- Tessellator Overview - Windows Dev Center
- Tessellation - A Visible Difference - EXTREME Overclocking Forums
- World, View and Projection Transformation Matrices - Coding Labs
- Rendering Pipeline Overview - OpenGL.org
- Compute Shader - OpenGL.org
- Learning Modern OpenGL by Bartlomiej Filipek - CodeProject