Saturday, 26 December 2009

The Renderer so far...

In the previous weeks (weeks 1 to 4) we were tasked with implementing matrix and vector classes. In the following week (week 5) we had to implement a camera class. What we had to do was implement all the properties of the camera class, which included the view and projection matrices as well as the screen matrix. These were required for rendering, in order to transform our objects from world space into camera space and finally into screen space. If I didn't mention this earlier, we are using MD2 models for this project. Implementing these was fairly straightforward, having read extensively about this particular topic in a couple of computer graphics books.
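Roughly speaking, the camera owns those three matrices and each vertex is pushed through them one after the other. The sketch below is just how I picture it, with made-up class and method names rather than the project's actual design, and with a simple perspective projection standing in for whatever we end up using:

```cpp
// A minimal sketch of the camera idea: view, projection and screen matrices,
// with a vertex going world space -> camera space -> screen space.
// Matrix4 and Vector4 stand in for the matrix/vector classes from weeks 1-4.
#include <cmath>

struct Vector4 { float x, y, z, w; };

struct Matrix4
{
    float m[4][4] = {};                        // row-major 4x4

    static Matrix4 Identity()
    {
        Matrix4 r;
        for (int i = 0; i < 4; ++i) r.m[i][i] = 1.0f;
        return r;
    }

    Vector4 Transform(const Vector4& v) const  // matrix * column vector
    {
        return {
            m[0][0]*v.x + m[0][1]*v.y + m[0][2]*v.z + m[0][3]*v.w,
            m[1][0]*v.x + m[1][1]*v.y + m[1][2]*v.z + m[1][3]*v.w,
            m[2][0]*v.x + m[2][1]*v.y + m[2][2]*v.z + m[2][3]*v.w,
            m[3][0]*v.x + m[3][1]*v.y + m[3][2]*v.z + m[3][3]*v.w };
    }
};

class Camera
{
public:
    // Simple perspective projection from field of view and aspect ratio.
    void BuildProjection(float fovRadians, float aspect, float nearZ, float farZ)
    {
        float f = 1.0f / std::tan(fovRadians * 0.5f);
        _projection = Matrix4::Identity();
        _projection.m[0][0] = f / aspect;
        _projection.m[1][1] = f;
        _projection.m[2][2] = farZ / (farZ - nearZ);
        _projection.m[2][3] = -nearZ * farZ / (farZ - nearZ);
        _projection.m[3][2] = 1.0f;            // copy view-space z into w for the divide
        _projection.m[3][3] = 0.0f;
    }

    // Maps the projected coordinates onto pixel positions.
    void BuildScreen(int width, int height)
    {
        _screen = Matrix4::Identity();
        _screen.m[0][0] =  width  * 0.5f;  _screen.m[0][3] = width  * 0.5f;
        _screen.m[1][1] = -height * 0.5f;  _screen.m[1][3] = height * 0.5f;
    }

    // World space -> camera space -> projection, then the perspective divide
    // and the screen matrix give a pixel position.
    Vector4 WorldToScreen(const Vector4& worldVertex) const
    {
        Vector4 v = _view.Transform(worldVertex);
        v = _projection.Transform(v);
        v = { v.x / v.w, v.y / v.w, v.z / v.w, 1.0f };   // perspective divide
        return _screen.Transform(v);
    }

private:
    Matrix4 _view = Matrix4::Identity();       // built from camera position/target in the real class
    Matrix4 _projection = Matrix4::Identity();
    Matrix4 _screen = Matrix4::Identity();
};
```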

The next task was to implement the rendering part of the program, thereby completing the rendering pipeline. To begin with I did have a few problems understanding the whole concept of a bitmap, pixels, colours and how all of this was connected. I read several computer graphics books on the topic but I still could not find one that explained it well enough for me to understand. So I continued reading books and searching online for better explanations of the topic. I later came across an explanation in one of the books I had which prompted me to think of a bitmap in terms of an array of data. This really made it much easier for me to understand what a bitmap was and how to use it. Having finally understood what I was meant to do, I continued implementing the necessary methods and properties required for rendering objects onto the screen.
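The thing that finally made it click was treating the bitmap as nothing more than a flat array of colour values, where the pixel at (x, y) lives at index y * width + x. Something along these lines (again, the names are just for illustration, not the actual classes in the project):

```cpp
// A small sketch of the "bitmap is just an array" idea: one colour value per
// pixel, stored row after row, and indexed with y * width + x.
#include <cstdint>
#include <vector>

struct Bitmap
{
    int width;
    int height;
    std::vector<uint32_t> pixels;              // one 0xAARRGGBB value per pixel

    Bitmap(int w, int h) : width(w), height(h), pixels(w * h, 0xFF000000) {}

    void SetPixel(int x, int y, uint32_t colour)
    {
        if (x < 0 || y < 0 || x >= width || y >= height) return;  // clip to the image
        pixels[y * width + x] = colour;        // row-major index into the array
    }

    uint32_t GetPixel(int x, int y) const
    {
        return pixels[y * width + x];
    }
};

int main()
{
    Bitmap image(640, 480);
    image.SetPixel(320, 240, 0xFFFF0000);      // a single red pixel in the centre
    return 0;
}
```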

To begin with we were going to use the GDI+ API for drawing our objects. The other topic that really made my head spin was vertices. At first it was stated that these were just points in space, fair enough. But then came the idea of shared vertices, and of indices stored in each polygon that index into the list of vertices (this was the bit that really got me confused). Again I decided to do as much reading as I could on this topic until I found something that explained it to me in a way I could understand. I found loads of information, most of which explained the topic very well, so it didn't really take long for me to get my head around it. Back to the program: we had to implement a rasteriser using GDI+ which would connect all the vertices for us using lines, enabling us to produce a wireframe image of our object. I set up all the necessary methods required for the rendering pipeline, which included building and setting up the camera from our camera class and loading the object using the MD2 loader class. Note that the rasteriser was a class of its own which possessed all the drawing methods for the renderer. Having done this, I was finally able to load up a basic cube and display its wireframe image on the screen. Below are some screenshots of the renderer displaying wireframe images:
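Before the screenshots, here is a rough sketch of how I ended up picturing the shared-vertex idea and the wireframe rasteriser together: the model keeps one list of vertices, each polygon just stores indices into that list (so a vertex shared by neighbouring polygons is stored only once), and the rasteriser draws a line between each pair of already-transformed screen-space vertices. The class and method names are my own illustration, and it assumes a Gdiplus::Graphics object handed in from the window's paint code; link with gdiplus.lib on Windows.

```cpp
// Sketch of indexed (shared) vertices plus a GDI+ wireframe pass.
#include <windows.h>
#include <gdiplus.h>
#include <vector>

struct ScreenVertex { float x, y; };           // vertex already transformed to screen space

struct Polygon3D
{
    int indices[3];                            // indices into the shared vertex list
};

class Rasteriser
{
public:
    // graphics comes from the GDI+ surface the window paints onto.
    void DrawWireFrame(Gdiplus::Graphics& graphics,
                       const std::vector<ScreenVertex>& vertices,
                       const std::vector<Polygon3D>& polygons)
    {
        Gdiplus::Pen pen(Gdiplus::Color(255, 255, 255, 255), 1.0f);

        for (const Polygon3D& poly : polygons)
        {
            // Connect the three corners of the polygon: 0 -> 1, 1 -> 2, 2 -> 0.
            for (int edge = 0; edge < 3; ++edge)
            {
                const ScreenVertex& a = vertices[poly.indices[edge]];
                const ScreenVertex& b = vertices[poly.indices[(edge + 1) % 3]];
                graphics.DrawLine(&pen, a.x, a.y, b.x, b.y);
            }
        }
    }
};
```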
