Game Engineering II – Assignment 3
This assignment was focused on making the main graphics system completely platform independent. Both the graphics.gl.cpp and graphics.d3d.cpp files were combined into a single platform-independent graphics.cpp file.
The process had already started in the previous assignment, which created a mesh and an effect for handling geometry and shader data; this assignment continues where that one left off, hiding the remaining platform-specific code behind interfaces.
Here is the executable MyGame_
It was easy enough to implement an interface for buffer controls, which dictates how the color and depth buffers are cleared every frame and how the front and back buffers are swapped, since both OpenGL and Direct3D have relatively independent ways of doing so. The challenge was creating an interface for a Direct3D view (a target for rendering): a view has to be created manually, while the closest OpenGL equivalent, the default framebuffer, is created automatically along with the OpenGL context (although I could define a custom framebuffer object whose implementation might then be similar to a view?).
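The buffer-control interface described above can be sketched roughly as follows. This is a minimal illustration of the pattern, not the assignment's actual API: the class and method names are my own, and the stub implementation only records calls where the real OpenGL/Direct3D versions would issue clear and swap commands.

```cpp
#include <cassert>

// Hypothetical platform-independent buffer-control interface.
class iBufferControl {
public:
    virtual ~iBufferControl() = default;
    // Clear the color and depth buffers at the start of a frame
    virtual void ClearColorAndDepthBuffers(float r, float g, float b, float a) = 0;
    // Swap the front and back buffers at the end of a frame
    virtual void SwapFrontAndBack() = 0;
};

// Stub implementation standing in for the platform-specific versions;
// it only counts calls so the dispatch can be exercised.
class cBufferControl_stub : public iBufferControl {
public:
    void ClearColorAndDepthBuffers(float, float, float, float) override { ++m_clearCount; }
    void SwapFrontAndBack() override { ++m_swapCount; }
    unsigned int m_clearCount = 0;
    unsigned int m_swapCount = 0;
};
```

The platform-independent graphics.cpp would then drive a frame entirely through the `iBufferControl` pointer, never touching the API-specific calls directly.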
To get around that I created an interface class called cView, which I then implemented for both Direct3D and OpenGL (with the OpenGL implementation being empty). This effectively made my graphics.cpp file platform independent.
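A rough sketch of that cView arrangement is below. The method names are illustrative assumptions rather than the assignment's actual API, and the Direct3D body only simulates the work (in practice it would call `CreateRenderTargetView` / `CreateDepthStencilView`), while the OpenGL implementation is genuinely empty because the default framebuffer already exists with the context.

```cpp
#include <cassert>

// Hypothetical interface for a render target.
class cView {
public:
    virtual ~cView() = default;
    virtual bool Initialize() = 0;  // create the render target, if any
    virtual void CleanUp() = 0;     // release it
};

// Direct3D: the render-target and depth-stencil views must be created manually.
class cView_d3d : public cView {
public:
    bool Initialize() override {
        // device->CreateRenderTargetView(...) / CreateDepthStencilView(...)
        m_initialized = true;
        return m_initialized;
    }
    void CleanUp() override { m_initialized = false; }
    bool m_initialized = false;
};

// OpenGL: nothing to do; the default framebuffer comes with the context.
class cView_gl : public cView {
public:
    bool Initialize() override { return true; }
    void CleanUp() override {}
};
```

graphics.cpp can then call `Initialize()` and `CleanUp()` through a `cView` pointer without knowing which platform it is running on.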
One small aside was to have the image buffer take in a custom color to which it can then be cleared, as shown in the screenshot below.
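Since both `glClearColor` and `ID3D11DeviceContext::ClearRenderTargetView` ultimately take four normalized floats, a shared color type can feed either API. A minimal sketch (the struct and helper names here are my own, not the assignment's):

```cpp
#include <algorithm>
#include <array>
#include <cassert>

// Hypothetical shared color type for the custom clear color.
struct sColor { float r, g, b, a; };

// Clamp each channel to [0, 1] so out-of-range input can't
// produce an unexpected clear color on either API.
std::array<float, 4> ToClearColor(const sColor& i_color) {
    const auto clamp01 = [](float v) { return std::min(1.0f, std::max(0.0f, v)); };
    return { clamp01(i_color.r), clamp01(i_color.g),
             clamp01(i_color.b), clamp01(i_color.a) };
}
```

The resulting array can be passed straight to `glClearColor` (unpacked) or to `ClearRenderTargetView` (as the `ColorRGBA` pointer).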
And this image shows the commented code in action.
The next part of the assignment was focused on getting the effect and mesh data as input rather than having it be hard coded into the class itself.
For an effect it was simple enough, as the only input required from the user is the location of the shader files used to create the fragment and vertex shaders. Moreover, these paths do not need to be stored; the information is thrown away once the shaders have been loaded. As such, I do not have to store any additional data in the effect object. As for whether its size can be made smaller, I would say probably not, as both vertex and fragment shader information is required for rendering to the screen. It might be possible to do without a fragment shader if color fill information is not required for a shape (and just its edges are drawn), but that is probably an incorrect assumption.
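The shape of that effect initialization might look like the sketch below. This is an assumption of the interface, not the assignment's actual code; the point it illustrates is that the path strings are consumed during `Initialize()` and never become member data, so the effect object stays small.

```cpp
#include <cassert>
#include <string>

// Hypothetical effect class: takes shader file paths as input,
// stores nothing but what the draw call needs afterwards.
class cEffect {
public:
    bool Initialize(const std::string& i_vertexShaderPath,
                    const std::string& i_fragmentShaderPath) {
        // In the real code: load and compile each shader from disk
        // (platform-specific), then link/bind them. The paths are
        // thrown away once loading succeeds.
        m_loaded = !i_vertexShaderPath.empty() && !i_fragmentShaderPath.empty();
        return m_loaded;
    }
    // Only platform handles (e.g. program/shader IDs) would live here,
    // never the path strings themselves.
    bool m_loaded = false;
};
```
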
For initializing a mesh, the user is required to input two arrays: one holds the vertex data while the other holds the index data. From these, the only information I store inside the mesh object is the index array length. I don't need any more because, once the index buffer is created, only its size is required to use it to draw the primitives to the screen. Besides this, only the information needed to keep track of the index buffer was required (as described in the assignment).
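A sketch of that mesh initialization follows. The vertex layout and names are hypothetical stand-ins; the real code would upload both arrays to GPU buffers and keep the buffer handles, but the only array-derived value that survives is the index count.

```cpp
#include <cassert>
#include <cstdint>
#include <vector>

// Hypothetical vertex layout for illustration only.
struct sVertex { float x, y, z; };

class cMesh {
public:
    bool Initialize(const std::vector<sVertex>& i_vertices,
                    const std::vector<uint16_t>& i_indices) {
        // In the real code: upload i_vertices and i_indices to GPU
        // vertex/index buffers (platform-specific); the CPU-side
        // arrays can then be discarded.
        if (i_vertices.empty() || i_indices.empty())
            return false;
        m_indexCount = static_cast<unsigned int>(i_indices.size());
        return true;
    }
    // The draw call only needs this plus the buffer handles.
    unsigned int GetIndexCount() const { return m_indexCount; }
private:
    unsigned int m_indexCount = 0;
};
```
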
Later on I might also need to take as input what kind of primitive is being drawn to the screen (a line, a rectangle, a circle, etc.); however, I don't think any additional information will need to be stored in the object itself.
As for making the object smaller, it can be done: I don't really need to store the index size, since I could query it specifically during the draw call, but that would create a margin for error. Ultimately, it comes down to personal preference.
And finally, an image of the application running.