UNDER CONSTRUCTION - DO NOT START YET

Project 3: Hierarchical Modeling

In this project you will need to implement a scene graph to render an animated scene that consists of a number of objects arranged in a hierarchy.

You can pick one of the following types of things to animate:

  • a solar system
  • an amusement park ride
  • an animal or person moving in a coordinated way (no robots!)

The user of your program should be able to control a variety of options within your simulation using the keyboard. You may choose which keys control which functions, but make sure they are clearly documented in a text file named README.txt which you add to the main directory of your code.

Allow the user the following controls (a sketch of an example key handler follows this list):

  • start/stop the animation
  • reset the view of the simulation
  • move around within the simulation, i.e., move left/right/up/down, turn left/right
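For example, if your windowing code is based on GLFW (an assumption - adapt the idea to whatever framework your starter code uses), a key callback along the following lines can dispatch these controls. The key choices and the helper names (animationRunning, resetView(), moveCamera(), the MOVE_/TURN_ constants) are placeholders, not required names:

// Example GLFW key callback. Key bindings and helper names are placeholders;
// document your actual bindings in README.txt.
void keyCallback(GLFWwindow* window, int key, int scancode, int action, int mods)
{
    if (action != GLFW_PRESS && action != GLFW_REPEAT) return;

    switch (key) {
        case GLFW_KEY_SPACE: animationRunning = !animationRunning; break;  // start/stop the animation
        case GLFW_KEY_R:     resetView();                          break;  // reset the view
        case GLFW_KEY_A:     moveCamera(MOVE_LEFT);                break;  // move left
        case GLFW_KEY_D:     moveCamera(MOVE_RIGHT);               break;  // move right
        case GLFW_KEY_W:     moveCamera(MOVE_UP);                  break;  // move up
        case GLFW_KEY_S:     moveCamera(MOVE_DOWN);                break;  // move down
        case GLFW_KEY_Q:     moveCamera(TURN_LEFT);                break;  // turn left
        case GLFW_KEY_E:     moveCamera(TURN_RIGHT);               break;  // turn right
    }
}

Register the callback once at startup with glfwSetKeyCallback(window, keyCallback).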

The total score for this project is 100 points. Additionally, you can obtain up to 10 points of extra credit.

Start with your code from project 2 and remove the rendering of the 3D models. In this project you are going to render all new objects, using your own implementation of a scene graph.

Sky Box (Points)

Create a sky box for your scene. A sky box is a large, square box which is shown around your entire scene, to give it a nice background. The camera is located inside of this box. The inside walls of the box usually consist of pictures of a sky and a horizon, as well as the terrain below. Sky boxes are normally cubic, which means that they consist of six square textures for the six sides of a cube. Here is a great tutorial for sky boxes and how to implement them in modern OpenGL.

Here is a nice collection of textures for sky boxes, and here is an even bigger one. Make sure the width and height of your sky box textures are powers of two, such as 512x512 or 1024x1024. You are allowed to create your own sky box if you want.

Choose a sky box texture. Then create a cubic sky box and make it very large by giving it coordinates from -500 to +500. Check to see if the textures align correctly at the edges - if not you may need to rotate some of the textures. There should not be any visible seams.

Make sure single-sided rendering (backface culling) is enabled. If your sky box is defined with the triangles' normals facing outward (like the example in the discussion slides), use the following code lines to ensure that the user will never see the outside of the box:

glEnable(GL_CULL_FACE); 
glCullFace(GL_FRONT); 

If your normals point inward, you need to use GL_BACK instead of GL_FRONT in the above code.

Use the following settings for your texture after your first glBindTexture(GL_TEXTURE_CUBE_MAP, id) call to get correct filtering and edge behavior:


  // Make sure no bytes are padded:
  glPixelStorei(GL_UNPACK_ALIGNMENT, 1);

  // Use bilinear interpolation:
  glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
  glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_MAG_FILTER, GL_LINEAR);

  // Use clamp to edge to hide skybox edges:
  glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
  glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
  glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_WRAP_R, GL_CLAMP_TO_EDGE);
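For reference, creating the cube map texture itself might look roughly like the sketch below. loadImage() is a placeholder for whatever image loader you use (your image loader from earlier projects, stb_image, etc.), and the six faces are expected in the order +X, -X, +Y, -Y, +Z, -Z:

// Sketch of cube map creation, assuming <vector>, <string> and your OpenGL headers are included.
// loadImage() is a placeholder for your own image loader returning RGB pixel data.
GLuint loadCubeMap(const std::vector<std::string>& faceFiles)  // 6 file names: +X,-X,+Y,-Y,+Z,-Z
{
    GLuint id;
    glGenTextures(1, &id);
    glBindTexture(GL_TEXTURE_CUBE_MAP, id);

    for (unsigned int i = 0; i < 6; i++) {
        int width, height;
        unsigned char* data = loadImage(faceFiles[i], &width, &height);
        glTexImage2D(GL_TEXTURE_CUBE_MAP_POSITIVE_X + i, 0, GL_RGB,
                     width, height, 0, GL_RGB, GL_UNSIGNED_BYTE, data);
        // free/delete the pixel data here if your loader allocates it
    }

    // ...followed by the glPixelStorei/glTexParameteri calls shown above.
    return id;
}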

Grading:

  • -5 if sky box works but is not seamless
  • -3 if backface culling is not enabled, or it is enabled but the wrong faces are culled

Environment Mapping (20 Points)

Use the sphere code you used for the bounding spheres in project 3 to render a solid sphere. Make sure you are parsing for the vertex normals in addition to the vertices. Its surface should look like polished metal, so you need to use environment mapping on it.

The Cube map tutorial explains how environment mapping is done with a sky box as the environment. Feel free to follow it and use the code from the site to make it work in your code. The goal is to have the sky box reflect on the sphere, to give the impression of the sphere's surface being polished metal (i.e., act like a perfect mirror). In a fragment shader this amounts to sampling the cube map with the reflection direction R = I - 2(N·I)N, where I is the normalized direction from the camera to the surface point and N is the surface normal; this is exactly what GLSL's built-in reflect() function computes.

Use the camera controls from project 2, which allow the user to rotate the view from a fixed view point, as well as zoom in. It is important that your camera controls allow you to zoom in on the sphere, so that you can check to see if your environment mapping algorithm works correctly.

Grading:

  • -10 if reflection looks somewhat like the skybox but is off by a lot
  • -5 if reflection is upside down but code looks good

Scene Graph Engine (Points)

To create a robot with multiple moving body parts (head, torso, limbs, eyes, antennae), we first need to implement a simple scene graph structure for our rendering engine. This scene graph should consist of at least three node classes: Node (5 points), Transform (10 points) and Geometry (10 points). You are free to add more scene graph node types as you see fit. A minimal class sketch follows the list below.

  • Class Node should be abstract and serve as the common base class. It should implement the following class methods:
    • an abstract draw method: virtual void draw(Matrix4 C)=0
    • an abstract virtual void update()=0 method to separate bounding sphere updates from rendering
  • Transform should be derived from Node and have the following features:
    • store a 4x4 transformation matrix M
    • store a list of pointers to child nodes (std::list<Node*>)
    • provide a class method to add a child node (addChild()) to the list
    • its draw method needs to traverse the list of children and call each child node's draw function
    • when draw(C) is called, multiply matrix C with matrix M and pass the product (C * M) on to the children's draw calls
  • Geometry should be derived from Node and have the following features:
    • set the modelview matrix to the current C matrix
    • an initialization method to load a 3D model (OBJ file) whose filename is passed to it (init(string filename)). Your OBJ loader from project 2 should work.
    • have a class method which draws the 3D model associated with this node.
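A minimal sketch of these three classes could look like the following (using glm::mat4 in place of the Matrix4 type mentioned above; the OBJ loading and drawing details come from your project 2 code):

#include <list>
#include <string>
#include <glm/glm.hpp>

class Node {
public:
    virtual ~Node() {}
    virtual void draw(glm::mat4 C) = 0;   // C is the transform accumulated so far
    virtual void update() = 0;            // e.g., advance animations or bounding spheres
};

class Transform : public Node {
public:
    Transform(const glm::mat4& m) : M(m) {}
    void addChild(Node* child) { children.push_back(child); }
    void draw(glm::mat4 C) override {
        glm::mat4 C_new = C * M;                           // apply this node's transform
        for (Node* child : children) child->draw(C_new);   // pass the product down the tree
    }
    void update() override {
        for (Node* child : children) child->update();
    }
private:
    glm::mat4 M;
    std::list<Node*> children;
};

class Geometry : public Node {
public:
    void init(const std::string& filename) { /* parse the OBJ file into buffers (project 2 code) */ }
    void draw(glm::mat4 C) override {
        // Use C as the model matrix for this object's shader, then draw its VAO.
    }
    void update() override {}
};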

Robot (Points)

Now that we have the scene graph classes, it is time to put them to work. Build your own robot using the addChild methods. Use at least 3 different types of parts for your robot (e.g., body, head and limb). In total, your robot needs to consist of at least 4 parts. (15 points)

Start from your trackball code and modify it so that trackball rotations control the camera's viewing direction instead - the camera itself should stay in place. (If you don't have trackball rotation code, keyboard controls will suffice, but the camera still needs to be stationary and pivot about its location.) Also allow zooming in and out (i.e., changing the field of view). (5 points)

Thanks to our tutors Weichen and former tutor Yining, you have the following robot components to choose from: head, body (torso), limb, eye, antenna. You will find the OBJ files in this ZIP file. In these OBJ files, each vertex has not only a 3D coordinate and a normal associated with it, but also a texture coordinate - you can ignore the latter. Note that unlike the previous OBJ files in the course, each face has different indices for v/vt/vn. (You may want to check Wikipedia for more information about the *.obj format.) One way to deal with the different indices is to re-order (and duplicate) the v//vn data when parsing so that their indices align. The following code might be helpful for this:

// Assume that you parse indices of v//vn into different std::vector (vertex_indices_, normal_indices_)
// input_vertices and input_normals are raw input data from *.obj files
// vertices_ and normals_ are aligned data
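// Note: face indices in OBJ files are 1-based; subtract 1 when you store them in these vectors.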

for (unsigned i = 0; i < vertex_indices_.size(); i++) {
  vertices_.push_back(input_vertices[vertex_indices_[i]]);
  normals_.push_back(input_normals[normal_indices_[i]]);
  indices_.push_back(i);
}

Here is an example of a robot with two antennas, two eyeballs, one head, one torso and 4 limbs (2 legs and 2 arms):

Robot.png

Use your creativity to build the most creative robot in class! The 5 most creative robots in class are going to get extra credit, after a vote on Canvas.

Once you've created your scene graph, get your rendering engine ready to traverse it recursively for rendering: create a root node of type Transform and call its draw() function with the identity matrix as its parameter.
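Putting it together, building a (very small) robot subtree and kicking off the traversal might look like this sketch, which reuses the classes from the Scene Graph Engine section; the file names and the 1.2 unit head offset are just example values:

// Requires <glm/gtc/matrix_transform.hpp> for glm::translate.
Geometry* torsoModel = new Geometry();  torsoModel->init("body.obj");   // example file name
Geometry* headModel  = new Geometry();  headModel->init("head.obj");    // example file name

Transform* robot = new Transform(glm::mat4(1.0f));
Transform* torso = new Transform(glm::mat4(1.0f));
Transform* head  = new Transform(glm::translate(glm::mat4(1.0f), glm::vec3(0.0f, 1.2f, 0.0f)));

torso->addChild(torsoModel);
head->addChild(headModel);
robot->addChild(torso);
robot->addChild(head);

Transform* root = new Transform(glm::mat4(1.0f));
root->addChild(robot);

// In the render loop:
root->draw(glm::mat4(1.0f));   // start the traversal with the identity matrix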


Animated Robot (15 Points)

Animate the robot to make it look like it is walking, by changing the matrices in the Transform nodes. Walking in place is fine, the robot does not need to actually move forward.

In your robot, at least 3 of its parts need to move independently of one another, and they need to be connected to a 4th part.
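One way to drive such a motion is to rewrite a joint's Transform matrix every frame, as in this sketch (limbJoint, jointOffset and setMatrix() are placeholder names; time could come from glfwGetTime(), and sinf() needs <cmath>):

// Swing a limb back and forth by rewriting its Transform matrix every frame.
// limbJoint is the Transform node between the torso and the limb's Geometry node.
float angle = glm::radians(30.0f) * sinf((float)time);                     // oscillates between -30 and +30 degrees
glm::mat4 M  = glm::translate(glm::mat4(1.0f), jointOffset)                // move the pivot to the joint
             * glm::rotate(glm::mat4(1.0f), angle, glm::vec3(1, 0, 0));    // rotate about the joint's x axis
limbJoint->setMatrix(M);   // setMatrix() is a hypothetical setter on your Transform class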


Robot Party (Points)

  • Construct a scene that consists of a large number of robots, at least 100. The robots can all be identical clones. (5 points)
  • Distribute the robots on a 2D grid (i.e., place them on a plane with uniform spacing); a placement sketch follows this list. For 100 robots, use a 10x10 grid. (5 points)
  • Enable the animation for all the robots so that they look like they are dancing. (5 points)
  • Make sure your camera can be rotated and zoomed so that you can go from having all robots on screen to having none, and back.
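A straightforward way to set up the grid is a nested loop of Transform nodes, as in this sketch (which reuses the robot and root nodes from the earlier sketches; the spacing value is just an example):

// Place 100 copies of the robot on a 10x10 grid in the xz plane, centered on the origin.
Transform* army = new Transform(glm::mat4(1.0f));
float spacing = 3.0f;   // example spacing; pick one that keeps the robots from overlapping
for (int row = 0; row < 10; row++) {
    for (int col = 0; col < 10; col++) {
        glm::vec3 pos((col - 4.5f) * spacing, 0.0f, (row - 4.5f) * spacing);
        Transform* placement = new Transform(glm::translate(glm::mat4(1.0f), pos));
        placement->addChild(robot);   // identical clones can all share the same robot subtree
        army->addChild(placement);
    }
}
root->addChild(army);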

This image illustrates the grid layout of the robots (background image not required):

Robot-army.png

Culling (Points)

Implement object-level culling to allow for thousands of instances of your robot without having to render them all at once.

  • Determine the parameters for a bounding sphere (Vector3 for its center point, and a radius) for each of your robots, which contains all parts of the robot. You do not need to find the tightest possible bounding spheres. Just make them as tight as you reasonably can. Estimating the bounding sphere size is fine. You don't need to have a separate bounding sphere for each animation step - one that encloses all steps is fine.
  • Add the optional ability to render the bounding spheres. Support a keyboard key to toggle the rendering of the bounding spheres on or off. You can render the spheres by using the sphere OBJ file from project 2. Other ways of rendering spheres are also acceptable. We recommend rendering a wireframe sphere by rendering OpenGL lines instead of triangles. (5 points)
  • Add view frustum culling, and support a keyboard key to turn it on or off. (10 points)
  • Display the number of robots that are visible (i.e., not culled) on standard out or in the window's title bar. Make sure that, with culling turned on, this number goes down when part of the robot array is off screen. (5 points)
  • Debug Mode: Implement a demo mode in which you zoom the camera out (increase the FOV by a few degrees) but do the culling with the original FOV, so that one can see when the robots get culled. Allow toggling between demo mode and normal mode with a keyboard key. (5 points)

Notes:

  • Lighthouse3D has an excellent description of how to calculate the view frustum plane equations. Note that this tutorial and the discussion slides assume that the frustum plane normals point away from the view volume, whereas the lecture slides have them point into the view volume. Either way can work, you just need to be consistent.
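Once the six plane equations are known, the per-robot test reduces to a signed-distance check of the bounding sphere against each plane. A minimal sketch, assuming unit-length normals that point into the view volume (as in the lecture slides) and a sphere center expressed in the same coordinate space as the planes:

struct Plane {
    glm::vec3 normal;   // unit normal, pointing into the view volume
    float d;            // plane equation: dot(normal, x) + d = 0
};

// Returns true if the sphere lies entirely on the outside of at least one
// frustum plane and can be culled; spheres that straddle a plane are kept.
bool cullSphere(const Plane frustum[6], const glm::vec3& center, float radius)
{
    for (int i = 0; i < 6; i++) {
        float dist = glm::dot(frustum[i].normal, center) + frustum[i].d;
        if (dist < -radius) return true;   // completely outside this plane
    }
    return false;   // potentially visible: render this robot and count it
}

If you follow the Lighthouse3D convention instead (normals pointing away from the view volume), flip the test: the sphere is culled when the signed distance is greater than the radius.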

Extra Credit (Up to 10 Points)

  • Hierarchical Culling: Create a hierarchical culling algorithm by storing bounding sphere information at every level of the scene graph, so that you can cull an entire branch of the scene graph at once. Structure your scene graph so that you have multiple levels (for instance, by subdividing your robot array into four quarters and creating a separate node for each quarter). Show how many times you were able to cull based on the higher-level bounding spheres by displaying the number in the text window. Use differently colored spheres for the different levels of your culling hierarchy. (5 points)
  • Creativity Contest (up to 5 Points): We're going to have a contest for the top 5 most creative robot creations. Submit a JPEG image or GIF animation of your robot to Canvas by adding an entry to the respective discussion thread. To create a GIF animation you can use this converter. To convert any image file format to a GIF image, we recommend IrfanView. The five robots with the most votes are going to get extra credit. Voting starts Friday 11/1 at 11:59pm and ends on the following Wednesday 11/6 at 11:59pm. The winner of the contest will get 5 points of extra credit. Second place will get 4 points, third gets 3, fourth 2, fifth 1 point.