Project3F18

From Immersive Visualization Lab Wiki
 

Latest revision as of 16:02, 2 November 2018

Project 3: Textures, Scene Graphs and Culling

In this project you will need to implement a scene graph to render an army of animated robots with textures on them.

The total score for this project is 100 points. Additionally, you can obtain up to 10 points of extra credit.

We recommend the following approach for the timing of your work:

  1. You can start with part 1 immediately: We've already covered texturing in class, and will cover its implementation in discussion on Oct 24.
  2. Parts 2-4 require a scene graph, which will be covered in the lecture on Oct 23.
  3. Part 5 (Culling) will be covered in lecture on Oct 30 and also in discussion on Oct 31.

1. Textured Robot Torso (30 Points)

The first step of the project is to create a textured robot torso, which you will later use as part of your robot.

Start with your trackball code from the previous project and modify it so that trackball rotations control the camera's direction vector instead; the camera itself stays in place. (If you don't have trackball rotation code, keyboard controls will suffice.)

Thanks to our tutor Weichen and former tutor Yining, you have the following robot parts to choose from: head, body (torso), limb, eye, antenna. You will find the OBJ files in this ZIP file. Each vertex has not only a 3D coordinate and a normal associated with it, but also a texture coordinate. This allows you to map textures to the surfaces of the robot. Note that unlike the previous OBJ files in the course, each face has different indices for v/vt/vn, so you will need to update your parser accordingly when you add texture support. (You may want to check Wikipedia for more information about the *.obj format.) One way to deal with the different indices is to re-order (and duplicate) the v/vt/vn data when parsing so that their indices align. The following code might be helpful:

// Assume that you parse the v/vt/vn indices into separate std::vectors (vertex_indices_, normal_indices_, uv_indices_)
// input_vertices, input_normals, input_uvs are raw input data from *.obj files
// vertices_, normals_, uvs_ are aligned data

for (unsigned i = 0; i < vertex_indices_.size(); i++) {
  vertices_.push_back(input_vertices[vertex_indices_[i]]);
  normals_.push_back(input_normals[normal_indices_[i]]);
  uvs_.push_back(input_uvs[uv_indices_[i]]);
  indices_.push_back(i);
}

Load in the robot body with its texture coordinates. Then apply a texture to it. You can use any (non-offensive) image you find on the internet, or use a picture from your own collection. It is best to trim and resize the image to a size of 512x512 pixels.

Load the image into your C++ code. We provide sample code which loads a PPM image file and uses it as a texture for a quad. If you decide to use an image in a format other than PPM (e.g., JPEG), you need to convert it to PPM first. The free image processing tool IrfanView for Windows will do this for you. Alternatively, you can use a third party library such as SOIL to natively load JPEG images, or other image formats.
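If you would rather write your own loader than use the sample code, a minimal reader for binary (P6) PPM files might look like the sketch below. The names (PPMImage, load_ppm, next_token) are illustrative and not part of the provided sample code; the sketch skips '#' comment lines and assumes a maxval of 255.

```cpp
#include <cassert>
#include <istream>
#include <sstream>
#include <string>
#include <vector>

// Illustrative names; not part of the course's sample code.
struct PPMImage {
    int width = 0, height = 0;
    std::vector<unsigned char> pixels;  // RGB, 3 bytes per pixel
};

// Reads one whitespace-delimited header token, skipping '#' comment lines.
static std::string next_token(std::istream& in) {
    std::string tok;
    while (in >> tok) {
        if (tok[0] == '#') { std::string rest; std::getline(in, rest); continue; }
        return tok;
    }
    return "";
}

PPMImage load_ppm(std::istream& in) {
    PPMImage img;
    if (next_token(in) != "P6") return img;      // binary RGB only
    img.width  = std::stoi(next_token(in));
    img.height = std::stoi(next_token(in));
    int maxval = std::stoi(next_token(in));      // assumed to be 255 here
    (void)maxval;
    in.get();                                    // single whitespace before pixel data
    img.pixels.resize(img.width * img.height * 3);
    in.read(reinterpret_cast<char*>(img.pixels.data()), img.pixels.size());
    return img;
}
```

The resulting width, height and pixel array are what you would then hand to OpenGL when creating the texture.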

Grading

  • -3 points if the texture is not rendered entirely correctly, i.e., skewed or stretched

2. Scene Graph Engine (15 Points)

To create a robot with multiple moving body parts (head, torso, limbs, eyes, antennae), we need to first implement a simple scene graph structure for our rendering engine. This scene graph should consist of at least three node types (5 points each): Node, Transform and Geometry. You are free to add more scene graph node types as you see fit.

  • Class Node should be abstract and serve as the common base class. It should implement the following class methods:
    • an abstract draw method: virtual void draw(Matrix4 C)=0
    • an abstract virtual void update()=0 method to separate bounding sphere updates from rendering
  • Transform should be derived from Node and have the following features:
    • store a 4x4 transformation matrix M
    • store a list of pointers to child nodes (std::list<Node*>)
    • provide a class method to add a child node (addChild()) to the list
    • its draw method needs to traverse the list of children and call each child node's draw function
    • when draw(C) is called, multiply matrix M with matrix C.
  • Geometry should be derived from Node and have the following features:
    • set the modelview matrix to the current C matrix
    • an initialization method to load a 3D model (OBJ file) whose filename is passed to it (init(string filename)). Your OBJ loader from project 2 should work.
    • have a class method which draws the 3D model associated with this node.
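The class hierarchy above can be sketched as follows. To keep the sketch self-contained it uses a bare-bones, column-major Matrix4 with only the operations needed here (identity, translation, multiplication) in place of the course's math library, and a TestGeometry leaf that records the matrix it is drawn with rather than issuing OpenGL calls; all of these names are illustrative.

```cpp
#include <array>
#include <cassert>
#include <cmath>
#include <list>

// Bare-bones 4x4 matrix (column-major), standing in for the course math library.
struct Matrix4 {
    std::array<float, 16> m{};
    static Matrix4 identity() {
        Matrix4 r; for (int i = 0; i < 4; ++i) r.m[i * 4 + i] = 1.0f; return r;
    }
    static Matrix4 translate(float x, float y, float z) {
        Matrix4 r = identity(); r.m[12] = x; r.m[13] = y; r.m[14] = z; return r;
    }
    Matrix4 operator*(const Matrix4& b) const {
        Matrix4 r;
        for (int c = 0; c < 4; ++c)
            for (int row = 0; row < 4; ++row) {
                float s = 0.0f;
                for (int k = 0; k < 4; ++k) s += m[k * 4 + row] * b.m[c * 4 + k];
                r.m[c * 4 + row] = s;
            }
        return r;
    }
};

// Abstract base class for all scene graph nodes.
struct Node {
    virtual ~Node() = default;
    virtual void draw(Matrix4 C) = 0;
    virtual void update() = 0;   // e.g., for bounding sphere updates in part 5
};

// Inner node: stores a transform M and a list of child pointers.
struct Transform : Node {
    Matrix4 M;
    std::list<Node*> children;
    explicit Transform(Matrix4 m = Matrix4::identity()) : M(m) {}
    void addChild(Node* n) { children.push_back(n); }
    void draw(Matrix4 C) override {
        Matrix4 C_new = C * M;                 // accumulate the parent transform
        for (Node* c : children) c->draw(C_new);
    }
    void update() override { for (Node* c : children) c->update(); }
};

// Leaf node: in the real project, Geometry would set the modelview matrix
// to C and render its loaded OBJ model; this stand-in just records C.
struct TestGeometry : Node {
    Matrix4 last_draw_matrix;
    void draw(Matrix4 C) override { last_draw_matrix = C; }
    void update() override {}
};
```

Rendering then means calling draw() on the root Transform with the identity matrix, which recursively accumulates each node's M on the way down to the Geometry leaves.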

3. Walking Android Robot (20 Points)

Now that we have the scene graph classes, it is time to put them to work. Build your own robot using the addChild methods. Use at least 3 different types of parts for your robot (e.g., body, head and limb). In total, your robot needs to consist of at least 4 parts, 3 of which must move independently of one another and be connected to the 4th part. One of the robot parts needs to be the textured torso which you created in part 1 of the project.

Here is an example of a robot with two antennas, two eyeballs, one head, one torso and 4 limbs (2 legs and 2 arms), before applying a texture:

[Image: robot.png]

Use your creativity to build the most creative robot in class! The 5 most creative robots in class are going to get extra credit, after a vote on Piazza.

Once you've created your scene graph, you need to get your rendering engine ready to recursively traverse the scene graph for rendering by creating a root node of type Transform and calling its draw() function with the identity matrix as its parameter.

Animate the robot to make it look like it is walking, by changing the matrices in the Transform nodes. Walking in place is fine, the robot does not need to actually move forward.
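One simple way to drive the walk cycle, assuming each leg's Transform stores a rotation about the hip: recompute the joint angle each frame from a time counter, with opposite phases for the left and right leg so they alternate. The function name, amplitude and speed below are illustrative choices, not requirements.

```cpp
#include <cassert>
#include <cmath>

// Swing angle (radians) for one leg at time t (seconds).
// phase = 0 for the left leg, pi for the right leg, so the legs alternate.
float leg_angle(float t, float phase, float amplitude = 0.6f, float speed = 4.0f) {
    return amplitude * std::sin(speed * t + phase);
}
```

Each frame, rebuild the leg's Transform matrix as a rotation by leg_angle(t, phase) about the hip joint before drawing; the mirrored phases make the legs swing in opposition like a walk.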

4. Robot Army (15 Points)

Construct a scene which consists of a large number of robots, at least 100. The robots can all be identical clones.

  • Distribute the robots on a 2D grid (i.e., place them on a plane with uniform spacing). For 100 robots, use a 10x10 grid.
  • Enable the animation for all the robots so that they look like they are walking.
  • Use your camera manipulation technique from part 1 to allow rotating the grid of 3D objects and zoom in or out.

This image illustrates the grid layout of the robots (background image not required):

[Image: robot-army.png]
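Placing the N x N robots with uniform spacing, centered on the origin, can be sketched as below. The names and the spacing value are illustrative; pick a spacing larger than one robot's footprint so the robots do not overlap.

```cpp
#include <cassert>
#include <cmath>
#include <vector>

struct Vec3 { float x, y, z; };

// Positions for an n-by-n grid in the xz-plane, centered at the origin.
std::vector<Vec3> grid_positions(int n, float spacing) {
    std::vector<Vec3> pos;
    float offset = (n - 1) * spacing * 0.5f;  // shift so the grid is centered
    for (int i = 0; i < n; ++i)
        for (int j = 0; j < n; ++j)
            pos.push_back({i * spacing - offset, 0.0f, j * spacing - offset});
    return pos;
}
```

Each position becomes the translation matrix of a per-robot Transform above a shared robot subtree, so all 100 robots can reuse the same geometry nodes.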

5. Culling (20 Points)

Implement object level culling, to allow the existence of thousands of instances of your robot, without having to render them all at once.

  • Determine the parameters for a bounding sphere (Vector3 for its center point, and a radius) for each of your robots, which contains all parts of the robot. You do not need to find the tightest possible bounding spheres. Just make them as tight as you reasonably can. Estimating the bounding sphere size is fine. You don't need to have a separate bounding sphere for each animation step - one that encloses all steps is fine.
  • Add the optional ability to render the bounding spheres. Support a keyboard key to toggle the rendering of the bounding spheres on or off. You can render the spheres by using the sphere OBJ file from project 2. Other ways of rendering spheres are also acceptable. We recommend rendering a wireframe sphere by rendering OpenGL lines instead of triangles. (5 points)
  • Add view frustum culling, and support a keyboard key to turn it on or off. (10 points)
  • Display the number of robots that are visible (i.e., not culled) on standard out or in the window's title bar. Make sure that, with culling turned on, this number goes down when part of the robot army is off screen. (5 points)

Notes:

  • Lighthouse3D has an excellent description of how to calculate the view frustum plane equations.
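Once you have the six plane equations (with normals pointing toward the inside of the frustum, as in the Lighthouse3D convention), the per-robot test reduces to a signed distance check: if the sphere's center lies farther than its radius behind any plane, the robot is culled. A minimal sketch, with illustrative names:

```cpp
#include <cassert>
#include <cmath>

struct Vec3 { float x, y, z; };
// Points p on the plane satisfy dot(n, p) + d = 0; n is assumed unit length.
struct Plane { Vec3 n; float d; };
struct Sphere { Vec3 center; float radius; };

float signed_distance(const Plane& pl, const Vec3& p) {
    return pl.n.x * p.x + pl.n.y * p.y + pl.n.z * p.z + pl.d;
}

// True if the sphere is entirely outside the frustum (safe to cull).
// Plane normals are assumed to point toward the inside of the frustum.
bool cull_sphere(const Plane planes[6], const Sphere& s) {
    for (int i = 0; i < 6; ++i)
        if (signed_distance(planes[i], s.center) < -s.radius)
            return true;   // completely behind this plane
    return false;          // inside, or intersecting a plane: keep it
}
```

Note that a sphere intersecting a plane is kept, which is what you want: a robot straddling the edge of the screen must still be drawn.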

6. Extra Credit (Up to 10 Points)

  • Debug Mode: Implement a demo mode in which you zoom the camera out (increase the FOV by a few degrees) but do the culling with the original FOV, so that one can see when the robots get culled. Allow toggling between demo mode and normal mode with a keyboard key. (5 points)
  • Hierarchical Culling: Create a hierarchical culling algorithm by storing bounding sphere information at every level of the scene graph, so that you can cull an entire branch of the scene graph at once. Structure your scene graph so that you have multiple levels (for instance, by subdividing your army into four quarters and creating a node above each quarter). Show how many times you were able to cull based on the higher level bounding spheres by displaying the number in the text window. Use differently colored spheres for the different levels of your culling hierarchy. (5 points)

7. Creativity Contest (Up to 5 Points)

We're going to have a contest for the top 5 most creative robot creations. Submit a JPEG image or GIF animation of your robot by 11:59pm on Friday 11/2 by adding an entry to the respective discussion thread on Piazza. To create a GIF animation you can use this converter. To convert any image file format to a GIF image, we recommend IrfanView. The five robots with the most votes are going to get extra credit.

Voting starts once submissions are closed and ends on the following Tuesday at 9am.

The winner of the contest will get 5 points of extra credit. The second will get 4 points, third gets 3, fourth 2, fifth 1 point. This extra credit is on top of the regular extra credit so one can theoretically get 115 points for this homework project.