On Wednesday, I had a day off and I spent most of it trying to figure out 3D graphics. I’m talking about the theory and, to a lesser extent, the mathematics involved. Today I’m going to share a little bit about it and the progress I’ve made. But first let me start with some of the concepts that you really need to get your head around.

An immediate context: performs rendering onto a buffer.

A device: creates resources and renders them onto a buffer.

A resource: anything that you want to draw, like a texture, but in most cases a resource is represented as simply a chunk of memory.

Resource view: an interpretation of the resource’s chunk of memory.

Swap chain: a thing composed of two buffers, a front buffer and a back buffer. The front buffer (and thus whatever the device and the immediate context have rendered to it) is shown to the user, and the back buffer is the staging area for that data before it is shown to the user. So you render to the back buffer and then you swap it so it becomes the front buffer, and vice versa.

Render Target buffer: this is the buffer you set up and designate to be the back buffer.

Render Target view: this is the interpretation of the back buffer that you can render to – this is what you tell the device about, and what it renders to.

If you put all this together you get a very crude diagram from me here:

IMG_0038

In most cases in DirectX, you create the swap chain object, you create a buffer that you designate as the back buffer, and then you hook the two up so the swap chain knows about the buffer. Finally, you associate the device with the swap chain.
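In Direct3D 11 that hookup looks roughly like the sketch below. This is illustrative, not a complete program: I’m assuming you already have a window handle (`hwnd`), and all the `HRESULT` error checking is omitted.

```cpp
// Describe the swap chain we want (this is mostly just filling in a struct).
DXGI_SWAP_CHAIN_DESC scd = {};
scd.BufferCount = 1;                                 // one back buffer
scd.BufferDesc.Format = DXGI_FORMAT_R8G8B8A8_UNORM;  // 32-bit colour
scd.BufferUsage = DXGI_USAGE_RENDER_TARGET_OUTPUT;   // we'll render into it
scd.OutputWindow = hwnd;                             // assumed: your window
scd.SampleDesc.Count = 1;                            // no multisampling
scd.Windowed = TRUE;

// Create the device, the immediate context and the swap chain in one go.
ID3D11Device* device = nullptr;
ID3D11DeviceContext* context = nullptr;  // the immediate context
IDXGISwapChain* swapChain = nullptr;
D3D11CreateDeviceAndSwapChain(nullptr, D3D_DRIVER_TYPE_HARDWARE, nullptr, 0,
                              nullptr, 0, D3D11_SDK_VERSION, &scd,
                              &swapChain, &device, nullptr, &context);

// Grab the swap chain's back buffer and make a render target view of it.
ID3D11Texture2D* backBuffer = nullptr;
swapChain->GetBuffer(0, __uuidof(ID3D11Texture2D), (void**)&backBuffer);
ID3D11RenderTargetView* rtv = nullptr;
device->CreateRenderTargetView(backBuffer, nullptr, &rtv);
backBuffer->Release();

// Tell the immediate context to render into that view from now on.
context->OMSetRenderTargets(1, &rtv, nullptr);
```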

Some more terms worth knowing about:

Vertex – a point in 3D space, defined by (x, y, z), such that you need three vertices, say, to define a triangle’s corners.

x, y axes: the same axes you learned about at school.

z axis: the axis that defines ‘depth’. A vertex can still sit at a 2D point (x, y), but the z value says how deep that (x, y) point is. It’s the 3rd dimension in “3D”! The x and y alone are the 2D points. So normally you could have a cube like this:

IMG_0041

And that dot you see is at a point which might be x=1, y=0.5, but how far “in” to that cube (i.e. the depth) is it? Well, that would be defined by z. So if it was at x=1, y=0.5 and it was a little bit inside the cube, z might be 1, or if it was a little further than that, it might be z=1.5. If z=0 then it wouldn’t be ‘in’ the cube, it would be on the front of it. And by the way, that point is a vertex.

To render a triangle you send 3 vertices to the GPU.

IMG_0046

So at some point you’ll need to send that vertex information to the GPU. In DirectX you put all that information in one big chunk of memory and tell DirectX to go and read that bit of memory. You also have to tell DirectX how the vertex information you stored in the vertex buffer (that chunk of memory) is laid out. In most cases the vertex buffer will be the result of casting an array of C structures that hold vertex information over that memory, and then you need to define a layout that describes the members of that structure within the vertex buffer.

Why go to so much hassle? Well, you can store anything alongside the essential x, y, z that represent the vertex: some colour, some texture mapping coordinates, etc. This buffer is totally defined by you, and as such you need to map the things you define as making up your vertex information to the stuff DirectX and the GPU need/want.

So you create a buffer of memory somewhere, called the vertex buffer – the list of the vertices you want to send to the GPU, alongside any data associated with each vertex. You do this by defining a structure (one structure represents one vertex’s info), overlaying an array of those structures onto the vertex buffer, and then creating a definition of that structure’s layout. This definition that describes your structure on the vertex buffer is called the “layout”.

IMG_0047

So some new definitions:

Vertex buffer: raw piece of memory that will hold your raw vertex data, such as each vertex’s x, y, z coordinates.

Vertex buffer structure: the C structure that will be cast onto that memory (the vertex buffer).

Layout: a description of your vertex buffer’s layout, and thus of the structure you cast onto the buffer.

So at this point you have stored a whole bunch of vertices in a chunk of memory, you’ve described that chunk of memory, and now the GPU can fetch stuff from it and render it. The GPU has a pipeline – the graphics pipeline – which is a series of stages in which the fetched data is used/transformed, i.e. rendered.

Vertex shaders are the first to get the data (the vertex information) stored in the vertex buffer. The vertex shader does what it needs to do with it and then sends it on to the next stage, the pixel shader. Pixel shaders do their business and move it on to the next stage, and so on and so forth.

Vertex shader: a piece of code that gets one of your vertex structures at a time and does something to the data (the vertex) – in most cases it changes the coordinates of your vertices on the fly (why do vertex shaders do that? see below).

Pixel shader: a piece of code that gets the output from the vertex shaders and colours in the pixels on the render target that will represent your vertices.

I think that’s about as much as I’m going to do right now; I’ll probably add more as and when I want to.