I've spent most of the last couple of weeks tinkering with my forensics report, which is due in September. I think I've managed to do enough to demonstrate a decent ability to construct a forensic report and undertake a correct forensic investigation, though I'm never surprised now when my expectations don't match reality when it comes to marking. I spent most of Saturday working on it. I'm somewhat relieved that it's complete; however, I'm feeling that familiar false sense of security which inevitably gets followed by a surprise.
I've gone to bed quite late recently, in part due to doing this report and in part due to wanting to do anything other than this report now that it's finished (and hence doing whatever that is at the expense of sleeping!).
I went to the library today and took with me my new-found gem, '3D Game Programming Using DirectX10 and OpenGL'. This is exactly the book I've been looking for: it's technical, detailed and covers almost all the parts of DirectX 10. I've found in my previous study that resources tend to only explain the parts they are referring to, and not enough about the context of the system they are describing. I spent about an hour reading through various descriptions of the graphics pipeline.
From this, I've been able to abstract from the detail (something I like doing a lot) the fundamentals of the pipeline. As a side note, I sometimes find it difficult to learn the other way around, i.e. from the fundamentals to the details (even though it sounds bizarre, there are times when having details is like having options: the more you have, the more you can choose what or what not to use in your mental processing of the subject matter). Anyway, the first interesting insight I gained was about the frame buffer: essentially a location in memory that will ultimately contain the end result of the graphics pipeline, and which will be shown on a screen.
The resolution of the framebuffer is the number of rows x columns of pixels it holds. The framebuffer describes each pixel using a fixed number of bits, and that number is its colour depth; representing a pixel is more than its location, it includes its colour. So if 24 bits are used to represent one pixel in the frame buffer, the frame buffer is said to have a 24-bit colour depth. The resolution was already mentioned, so the total size of a 24-bit frame buffer is cols x rows (of pixels) x 24 bits, which is a lot of bits, and of course a picture is therefore comprised of a lot of bits.
The other interesting insight I gained was a greater, more contextualised understanding of the DirectX graphics pipeline, namely how and what the stages actually do. The pipeline looks like this, with the output of one stage feeding into the input of the next:
- Input-Assembler stage
- Vertex Shader stage
- Geometry Shader stage
- Rasterizer stage
- Pixel Shader stage
- Output-Merger stage
The key aspect I learned about the input-assembler stage is that it 'assembles' vertices from the raw vertex data as described by the vertex buffer's layout. That is, it takes your raw vertices, which include the positions themselves and other additional information about each vertex, like its colour, and it feeds each vertex it finds in the vertex buffer (along with that additional vertex info) as parameters to the vertex shader, which can then act on that information in a programmable way through a shader program.
As part of this, the input-layout object is the description of the vertex buffer, and it binds the vertex buffer to the memory (input slots) in the input-assembler stage, so that vertex data can stream (actually be referenced by pointers) from the vertex buffer to the input-assembler stage, where it is read and used as described previously. It sort of parameterises the vertex buffer data as input to a function, which is the shader program. Quite interesting really.
Beyond binding the vertex buffer to the input-assembler's memory, the other interesting aspect of this stage is the creation of the input-layout object itself, which describes the vertex data and is then used by the input-assembler to work out how to bind that data to its memory. This, I think, is why I enjoyed reading this book: it explains everything, the how and the why.
Next, once the vertex shader has transformed the vertices it receives (model transformations) from local coordinate space through world space, view space and screen space, the vertices are sent off in defined groups, such as 4 vertices to represent a quad or 3 to represent a triangle. And this was interesting, because it is why the 'topology' is specified when setting up the pipeline: to indicate that a stream of consistent sets of vertices should be output to represent, say, a triangle, line or quad. This is done so that the next stage, the geometry shader, can construct real, full primitives, i.e. actual triangles rather than just the dots that represent triangles (which is the input it gets initially); the result appears to be a metamorphosis from dots (vertices) to shapes (primitives).
The geometry shader thus deals with the primitives, which it sends on to the rasteriser, which in turn breaks each primitive down into pixels. It reminds me of the phrase 'to build it up, to tear it down', because effectively we've used vertices to build up primitive shapes, only to tear them down into the pixels that represent those shapes.
Now that a primitive, say a triangle, has been reduced to pixels, the pixel shader can colour those pixels in, discard some of them, and send them out as the end result of the whole process, ultimately appearing in the framebuffer.
The other thing I found quite revealing is how the DX API's prefixes give clues as to how you bind the resources you provide to specific stages of the pipeline:
g_id3dDevice->OMSetRenderTargets(...); /* indicates that the Output-Merger stage will output to your provided render target */
g_id3dDevice->RSSetViewports(...); /* indicates that you'll bind the viewport object to the Rasterizer stage */
/* The following are used to bind to the Input-Assembler stage: */
g_id3dDevice->IASetVertexBuffers(...);
g_id3dDevice->IASetInputLayout(...);
g_id3dDevice->IASetPrimitiveTopology(...); // which, as I mentioned earlier, informs the input-assembler stage that a certain grouping of vertices will need to be output.
So all in all, a good session which I think will stand me in good stead for my upcoming computer games technology classes.
I can't help but be mildly excited.