Project3F15

Project 3: Software Rasterizer

In this project you will need to write your own software rasterizer. We provide some code that will allow you to integrate your rasterizer with OpenGL.

The project is due on Friday, October 16th, 2015 at 2pm. You need to present your results in the CSE basement labs as usual; grading starts at 2pm and will continue until at least 3:30pm.

The homework discussion for this project will be on Monday, October 12th.

Getting started

We provide base code for you which displays a software frame buffer on the screen. Your task is to rasterize your scene into this frame buffer. In this assignment's rasterization part you are not allowed to use any OpenGL calls which aren't already in the base code. Instead, use your vector and matrix classes from assignment #1 and add your own rasterization routines.

1. Rasterize Vertices (20 Points)

In the first step of building your own rasterizer you need to project the vertices of the 3D models from assignment 2 to the correct locations in the output image (frame buffer).

You should use the same model, camera and projection matrices as in assignment 2, so that you can use the rendering results from it to verify that your software rasterizer works correctly. We recommend that you start with your code from assignment 2 and copy-paste code from rasterizer.cpp where you need it.

Add support for the 'e' key to switch between your rendering engines: OpenGL from the previous assignment and your new software rasterizer. When in software rendering mode, use the - (minus) and + (plus) keys to go back and forth between the different parts of this homework project, which all build upon one another.

Add a viewport matrix (D) to your code, implement a method that creates D based on the window size (window_width and window_height), and call it from the GLUT reshape function. Your program must correctly adjust the projection and viewport matrices, as well as the frame buffer size, when the user changes the window size, just like in your previous assignment.
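A minimal sketch of such a method, assuming a 4x4 Matrix4 class from assignment #1; identity() and set(row, col, value) are placeholders for whatever your own class provides:

  // Sketch: build the viewport matrix D for a w x h window.
  // Maps NDC x and y from [-1, 1] to pixel coordinates and z from [-1, 1] to [0, 1].
  Matrix4 D;
  void createViewportMatrix(int w, int h)
  {
      D.identity();
      D.set(0, 0, w / 2.0);  D.set(0, 3, w / 2.0);   // x: [-1, 1] -> [0, w]
      D.set(1, 1, h / 2.0);  D.set(1, 3, h / 2.0);   // y: [-1, 1] -> [0, h]
      D.set(2, 2, 0.5);      D.set(2, 3, 0.5);       // z: [-1, 1] -> [0, 1]
  }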

Then write a method rasterizeVertex which projects each vertex of the house to image coordinates. You can use drawPoint from the base code to set the values in the frame buffer. Call this method from the draw callback (the display function in the base code) for every vertex in the 3D model. At this stage, render every point in bright white.
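A rough sketch of such a method, assuming the matrix and vector classes from assignment #1 (P, C, M, D and the [] accessor are illustrative names; the exact signature of drawPoint may differ in the base code):

  // Sketch: project one model-space vertex into the frame buffer.
  // P = projection, C = world-to-camera matrix, M = model-to-world matrix,
  // D = viewport matrix; all names are illustrative.
  void rasterizeVertex(const Vector4& vertex)
  {
      Vector4 p = P * C * M * vertex;          // clip space
      p = p * (1.0 / p[3]);                    // perspective divide -> NDC
      p = D * p;                               // NDC -> window coordinates

      int x = (int)p[0];
      int y = (int)p[1];
      // Guard against drawing outside the frame buffer (avoids wrap-around artifacts).
      if (x >= 0 && x < window_width && y >= 0 && y < window_height)
          drawPoint(x, y, 1.0f, 1.0f, 1.0f);   // bright white
  }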

Add a frame rate display: show how many frames per second (fps) you are rendering in the text window. You can calculate this number in the display callback by subtracting the previous frame's glutGet(GLUT_ELAPSED_TIME) value from the current one: fps = 1000/(current_frame_time - previous_frame_time).
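For example (a sketch; the time bookkeeping lives in the display callback, and the printf is just one way to show the number in the text window):

  // Sketch: frame rate measurement using GLUT's millisecond timer.
  static int previous_frame_time = 0;
  int current_frame_time = glutGet(GLUT_ELAPSED_TIME);   // milliseconds since glutInit
  if (current_frame_time > previous_frame_time)
  {
      float fps = 1000.0f / (current_frame_time - previous_frame_time);
      printf("fps: %.1f\n", fps);
  }
  previous_frame_time = current_frame_time;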

Grading:

  • -5 if rasterized image doesn't resize the same way as in OpenGL mode
  • -10 if rasterized image doesn't respond to keyboard keys
  • -5 if not all 3 models load
  • -10 if vertex positions wrap around when off screen
  • -5 if bounding box is not fully traversed (leaving blank lines)
  • -5 if fps display is missing

2. Rasterize Triangles (40 Points)

Now you will need to render the objects' triangles instead of just the vertices. Use barycentric coordinates to determine whether a frame buffer pixel is inside or outside of the triangle you are rasterizing. To do this, first compute a screen-space bounding box around the triangle, no larger than the triangle's extent, then step through all pixels in the bounding box and test whether they lie within the triangle by computing their barycentric coordinates.
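One common way to compute the barycentric coordinates is via signed areas (2D cross products) in screen space; a sketch, written with plain floats so it is independent of your vector class:

  // Sketch: barycentric coordinates of pixel (px, py) with respect to the
  // screen-space triangle (ax,ay), (bx,by), (cx,cy). The pixel is inside the
  // triangle if alpha, beta and gamma are all non-negative (they sum to 1).
  bool insideTriangle(float px, float py,
                      float ax, float ay, float bx, float by, float cx, float cy,
                      float& alpha, float& beta, float& gamma)
  {
      // Twice the signed area of the whole triangle.
      float area = (bx - ax) * (cy - ay) - (cx - ax) * (by - ay);
      if (area == 0.0f) return false;   // degenerate triangle, skip it

      // Each coordinate is the signed area of a sub-triangle divided by the whole area.
      alpha = ((bx - px) * (cy - py) - (cx - px) * (by - py)) / area;
      beta  = ((cx - px) * (ay - py) - (ax - px) * (cy - py)) / area;
      gamma = 1.0f - alpha - beta;
      return alpha >= 0.0f && beta >= 0.0f && gamma >= 0.0f;
  }

The rasterizer then loops over every pixel of the bounding box, calls this test, and writes the triangle's color where it returns true; the same alpha, beta and gamma weights are reused for interpolation in parts 3 to 5.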

Pick a random color for each triangle and use it throughout the triangle. Make sure you seed the random number generator with the same value at the beginning of each frame so that each triangle will get the same random color every time it gets rendered, avoiding flickering triangles.
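For example, with the C standard library RNG (srand/rand from <cstdlib>); a sketch:

  // Sketch: the same seed at the start of every frame keeps the colors stable.
  srand(0);                              // call once per frame, before drawing the triangles
  // ... then, for each triangle:
  float r = rand() / (float)RAND_MAX;
  float g = rand() / (float)RAND_MAX;
  float b = rand() / (float)RAND_MAX;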

Add keyboard support to toggle a debug mode on and off with the 'd' key: when it is on, display the bounding boxes of all triangles you are rendering. Pick a color of your choosing for the bounding boxes.

Notice that because you are not depth sorting the triangles, some of the foreground triangles might get overwritten by background triangles - we will fix this in the next step.

Grading:

  • The same deductions apply as in part 1 if not already deducted there.
  • -10 if debug mode with bounding box display is not working/doesn't exist
  • -5 if bounding box is not of minimal size (i.e., not tight around the triangle)

3. Z-Buffer (25 Points)

So far the triangles are rendered in the order in which they are listed in the file. Triangles rendered later will overwrite those that were rendered earlier. This approach causes problems with occlusion.

The remedy is to implement the z-buffer algorithm. Linearly interpolate z/w for every pixel using the barycentric interpolation weights and scale it to the range 0 to 1 between the near and far planes. The result is the z-value, which you compare with the previously stored z-buffer value for this pixel; only draw the pixel (and store the new z-value) if it is smaller. Make sure that you clear the z-buffer (initialize it with 1) whenever you clear the frame buffer.
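A sketch of the test inside the per-pixel loop, assuming zbuffer is a float array with window_width * window_height entries that is refilled with 1.0 whenever the frame buffer is cleared (all names are illustrative):

  // Sketch: depth test for pixel (x, y) with barycentric weights alpha, beta, gamma.
  // za, zb, zc are the three vertices' z/w values mapped to the [0, 1] range.
  float z = alpha * za + beta * zb + gamma * zc;   // interpolated depth
  int   i = y * window_width + x;                  // z-buffer index for this pixel
  if (z < zbuffer[i])                              // closer than what is stored?
  {
      zbuffer[i] = z;                              // remember the new depth
      drawPoint(x, y, r, g, b);                    // and only then write the color
  }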

Grading:

  • -10 if z-buffer indices are incorrectly calculated
  • -5 if z-buffer is not initialized correctly for every frame
  • -5 if z-buffer does not resize correctly on window resizes

4. Per Pixel Colors (15 Points)

While triangle overlaps are now correctly resolved, the triangles look very boring. When rasterizing a triangle, replace the random color with a per-pixel shading algorithm: use barycentric interpolation to interpolate the vertices' normal vectors within the triangle. Then color each pixel by interpreting its interpolated normal as an RGB color: the x component determines the amount of red, y the amount of green, and z the amount of blue. Note that the x/y/z values range between -1 and +1, so to map them to colors you have to add one and divide by two, bringing them into the desired range of 0 to 1.
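A sketch of the per-pixel computation, reusing the barycentric weights from part 2 (na, nb, nc are the triangle's vertex normals; the [] accessor and normalize() are placeholders for your own Vector3 class):

  // Sketch: interpolate the vertex normals and remap each component from [-1, 1] to [0, 1].
  Vector3 n = na * alpha + nb * beta + nc * gamma;
  n.normalize();                                   // interpolation shortens the normal slightly
  float r = (n[0] + 1.0f) / 2.0f;
  float g = (n[1] + 1.0f) / 2.0f;
  float b = (n[2] + 1.0f) / 2.0f;
  drawPoint(x, y, r, g, b);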

Grading:

  • -10 if colors are not interpolated across triangles

5. Extra Credit: Per Pixel Phong Shading (10 Points)

Implement per pixel Phong shading. Define a light source position for a white point light. In contrast to part 4 where you used the normal values for the color interpolation within the triangles, now you need to calculate the color for each pixel based on the colors of the vertices of the triangle, light source position and viewer position. As explained in the lecture, Phong Shading interpolates the vertex normals across the triangle and calculates each pixel's color based on the interpolated normal.

To determine the colors for the vertices, use random colors, again seeded with the same value at the beginning of each frame.
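A sketch of the per-pixel lighting with a diffuse and a specular term for the white point light; lightPos, camPos, the interpolated world-space position pos, the interpolated normal n and the interpolated vertex color baseColor are all names for your own data, dot() and normalize() are placeholders for your Vector3 class, and std::max, std::min and powf come from <algorithm> and <cmath>:

  // Sketch: Phong shading for one pixel.
  Vector3 L = normalize(lightPos - pos);               // direction from surface to light
  Vector3 V = normalize(camPos  - pos);                // direction from surface to viewer
  Vector3 R = normalize(n * (2.0f * dot(n, L)) - L);   // reflection of L about the normal

  float diff = std::max(0.0f, dot(n, L));              // Lambertian term
  float spec = powf(std::max(0.0f, dot(R, V)), 32.0f); // specular term; shininess 32 is arbitrary

  Vector3 color = baseColor * diff + Vector3(1.0f, 1.0f, 1.0f) * spec;  // white highlight
  drawPoint(x, y, std::min(color[0], 1.0f),
                  std::min(color[1], 1.0f),
                  std::min(color[2], 1.0f));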