Project7F15

Final Project

The final homework assignment has to be done in teams of two or three people. Each team will need to implement an interactive software application which uses some of the advanced rendering effects or modeling techniques we discussed in class. We will evaluate your project based on technical and creative merits.

The final project has to be presented to the entire class during our final exam slot from 8-11am on Thursday, December 10th in room CSE 1202, the conference room on the first floor of the computer science building. It will be graded by the instructor, the TAs and the tutors. Late submissions will not be permitted. You are welcome to bring guests to the presentation.

Note: If you are not able to work as a member of a team, you can try to get permission from the instructor to do a solo project. This will only be permitted in special cases; to request it, please email the instructor directly.

Grading

Your final project score consists of three parts:

  • Blog (10 points)
  • Presentation (90 points)
  • Extra Credit (10 points)

Blog (10 Points)

You need to create a blog to report on the progress you're making on your project. You need to make at least three blog entries to get the full score. The first is due on Tuesday, November 24th at 11:59pm; the second on Tuesday, December 1st at 11:59pm; and the third on Tuesday, December 8th at 11:59pm. During the final week you need to upload a short video of your application to your blog. It is due by Thursday, December 10th at 8am.

The first blog entry needs to contain (at least) the following information:

  • The name of your project.
  • The names of your team members.
  • A short description of the theme of your project.
  • The technical features you are implementing.
  • What you plan to spend your creative efforts on.

In week 2 you need to write about the progress you made, and mention any changes you have made to the team, project title, or implementation plans. You also need to upload at least one screenshot of the current state of your application to the blog.

In week 3 you need to give another update like in week 2.

The video should be a one minute long (+/- 5 seconds) screen grab of your project. You don't need to use any editing software, but you do need screen recording software that captures your screen to a video file, such as Open Broadcaster Software, which is available free of charge for Windows and Mac. On a Mac you can also use QuickTime, which is available free of charge as well. There does not need to be an audio track, but you are welcome to talk over the video. Embed the video in your blog, or link to it from there.

You are free to create the blog on any web based blog site, such as Blogger or WordPress. You should use the same blog each time and just add blog entries. If you need to move to a different blog server, please move your entire blog over (copy-paste if necessary) and let us know its new URL.

The points are distributed like this:

  • Blog entry #1: 2 points
  • Blog entry #2: 3 points
  • Blog entry #3: 3 points
  • Video: 2 points

Once you have created your blog, please email the URL to TA Dylan. His email address and instructions for the subject line are on Piazza.

Presentation (90 Points)

80% of the score for the presentation is for the technical features, and 20% is for the creative quality of your demonstration. The grading will be based solely on your presentation! What you don't show us won't score points. Each team will have one minute per team member for the presentation.

To obtain the full score for the technical quality, each team must implement at least three skill points worth of technical features per team member. For example, a team of two must cover 2x3=6 skill points. Additional constraint: each team must implement at least one medium or hard feature for each member of the team. Example: a team of two has the following options to cover the required 6 skill points:

  • One easy (1 point), one medium (2 points), one hard (3 points) feature.
  • Two easy and two medium features.
  • Two hard features.
  • Three medium features.

If your team implements more than the required number of skill points, the additional features can make up for points you might lose for incomplete or insufficiently implemented features.

Your score for creativity is going to be determined by averaging the instructor's, the TAs', and the tutors' subjective scores. We will look for a cohesive theme and story, but also for things such as visually pleasing geometry, tasteful textures, a thoughtful choice of colors and materials, smart placement of camera and lights, effective user interaction, and fluid rendering.

Here is the list of technical features you can choose from:

Easy: (1 skill point)

  • Per-pixel illumination of texture-mapped polygons
  • Toon shading
  • Environment mapping
  • Bump mapping
  • Glow or halo effect
  • Particle effect
  • Sound effects

Medium: (2 skill points)

  • Collision detection with bounding boxes
  • Procedurally modeled city
  • Procedurally modeled buildings (no shape grammar)
  • Procedurally generated terrain
  • Procedurally generated plants with L-systems
  • Water effect with waves, reflection and refraction
  • Multipass rendering

Hard: (3 skill points)

  • Shadow mapping
  • Shadow volumes
  • Shape grammar for buildings or objects
  • Displacement mapping
  • Screen space post-processed lights
  • Screen space ambient occlusion (SSAO)
  • Screen space directional occlusion (SSDO) with color bounce
  • Collision detection with arbitrary geometry

For a full score, each of these technical features must fulfill the more detailed requirements listed at the bottom of this page.

All technical features have to have a toggle switch (a keyboard key) with which they can be enabled or disabled or, for procedural algorithms, with which the procedural objects can be recalculated with a different seed value for the random number generator.
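
As an illustration, here is a minimal sketch of how such a toggle could be wired up with a GLUT keyboard callback. The key bindings, the bumpMappingEnabled flag, and the reseeding logic are placeholders for your own features, not a required design:

  #include <cstdlib>
  #include <ctime>
  #include <GL/glut.h>

  // Example state; replace these with flags for your own features.
  static bool bumpMappingEnabled = true;    // hypothetical feature flag
  static unsigned int proceduralSeed = 0;   // seed for procedural content

  void keyboard(unsigned char key, int, int)
  {
      switch (key) {
      case 'b':  // toggle a rendering feature on or off
          bumpMappingEnabled = !bumpMappingEnabled;
          break;
      case 'r':  // recalculate procedural content with a new random seed
          proceduralSeed = static_cast<unsigned int>(time(0));
          srand(proceduralSeed);
          // ...regenerate your city/terrain/plants here...
          break;
      case 27:   // ESC quits the application
          exit(0);
      }
      glutPostRedisplay();  // redraw with the new state
  }

  // Registered once during setup with: glutKeyboardFunc(keyboard);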

Additional technical features may be approved by the course staff upon request.

Presentation Day

For the presentation, you need to bring a laptop or other computer with a VGA output (make sure you bring an adapter if necessary). If nobody on your team can bring a computer to the presentation and you can't borrow one from anyone, you can use the instructor's Windows laptop, but you will need to test out your application at least one day before the presentation (even if it is not the final version), ideally during the instructor's office hour on a Tuesday. The presentation should be done by two people: while one person speaks to the audience, the other operates the computer. All team members are expected to attend the final presentation.

Your application needs to run in full screen mode, with the graphics window maximized. It is not acceptable to run your application in a small window like you did in the other homework projects, because the judges have to grade you based on what they see on the projector screen.
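
For reference, here is a minimal sketch of a GLUT setup that switches the window to full screen at startup (the window title and callback body are placeholders):

  #include <GL/glut.h>

  void display()
  {
      glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
      // ...render your scene here...
      glutSwapBuffers();
  }

  int main(int argc, char** argv)
  {
      glutInit(&argc, argv);
      glutInitDisplayMode(GLUT_DOUBLE | GLUT_RGB | GLUT_DEPTH);
      glutCreateWindow("Final Project");
      glutFullScreen();           // maximize the graphics window to full screen
      glutDisplayFunc(display);
      glutMainLoop();
      return 0;
  }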

Implementation

If you want to have debugging support from the TAs and tutors, you need to implement this project in C++ using OpenGL and GLUT, just like the other homework projects. Otherwise, you are free to use a language of your choice; you can even write an app for a smartphone or tablet, as long as you can send its image to the projector through the VGA input for the presentation.

Third-party programming libraries are generally not acceptable unless cleared by the instructor. Exceptions are typically granted for libraries that load images and 3D models, support input devices, or provide audio. Pre-approved libraries are:

  • GLee to manage OpenGL extensions
  • SOIL to load a variety of image formats
  • SDL as a more versatile replacement of GLUT
  • OpenAL for audio support
  • Any XML parser, such as MiniXML or PugiXML
  • The math library GLM, as well as the enhanced version of it
  • FreeGLUT, a modern GLUT replacement
  • GLFW: similar to GLUT but slightly different approach with more control for the programmer
  • Physics engines (such as the Bullet Engine), as long as they are not used to obtain points for technical features.
  • For Java programmers: LWJGL

Extra Credit: Bounty Points (10 Points)

To motivate you to implement some of the hardest algorithms, you can receive up to 10 extra points for a flawless implementation of one of the following algorithms, along with adequate visual debugging aids. Note that some of these are not directly or sufficiently covered in class, so you will need to research them yourself.

For a full score of 10 points, teams of N people must implement N of these algorithms! In other words, each of the algorithms below gains the team 10/N extra credit points.

  • Water effect with reflection of 3D models
  • Screen space post-processed lights
  • Screen space ambient occlusion (SSAO)
  • Screen space directional occlusion (SSDO) with color bounce
  • Collision detection with arbitrary geometry

Any other technical feature (even from the regular feature lists) can also be eligible for bounty points, but requires permission by a course staff member.

Tips

  • If you use SketchUp to create OBJ models: SketchUp writes quads, whereas our OBJ reader expects triangles. You can convert the SketchUp file to one with triangles using Blender, a free 3D modeling tool: put the object into edit mode and select Mesh->Faces->Convert Quads to Triangles.
  • MeshLab is another excellent and free tool for 3D file format conversion.
  • Trimble 3D Warehouse and TurboSquid are great resources for ready-made 3D models, which you can export to OBJ files using the technique described above.

Technical Feature Implementation Requirements

Requirements and comments for each technical feature:

  • Per-pixel illumination of texture-mapped polygons: This effect can be hard to see without careful placement of the light source(s). You need to support a keyboard key to turn the effect on or off at runtime. The best type of light source to demonstrate the effect is a spot light with a small spot width, to illuminate only part of the texture.
  • Toon shading: Needs to consist of both discretized colors and silhouettes.
  • Environment mapping: To show the effect of the environment mapping there needs to be a way to turn on or off the environment mapping (e.g., by unbinding the shader) with a keyboard key.
  • Bump mapping: Needs to use either a height map to derive the normals from, or a normal map directly. The map needs to be shown alongside the rendering window, upon key press.
  • Glow or halo effect: Object glow must be obvious and clearly visible (not occluded). Implement functionality to turn off glow upon key press.
  • Particle effect: Generate a LOT of particles (at least 1000), all of which can move and die shortly after appearing. The application should not experience a significant slowdown, and instancing has to be done cleanly (no memory leaks should occur when the particles die). (See the sketch below this list.)
  • Collision detection with bounding boxes: The bounding boxes must be tight enough around the colliding objects to be within 50% of the size of the objects. There must be a debug option which, upon key press, displays the bounding boxes as wireframe boxes and uses different colors for boxes which are colliding (e.g., red) vs. those that aren't (e.g., white). An example of an algorithm to implement is the Sweep and Prune algorithm with Dimension Reduction, as discussed in class. (See the sketch below this list.)
  • Procedurally modeled city: Must include 3 types of buildings, a park (a flat rectangular piece of land is fine), and roads that are more complex than a regular square grid (even Manhattan is more complex than a grid!). Creativity points if you can input an actual city's data for the roads!
  • Procedurally modeled buildings: Must have 3 non-identical portions (top, middle, bottom OR 1st floor, 2nd floor, roof). Generate at least 3 different buildings with differently shaped windows.
  • Procedurally generated terrain: Ability to input a height map, either real or generated by a different application. Shader that adds at least 3 different types of terrain (grass, desert, snow). (See the sketch below this list.)
  • Procedurally generated plants with L-systems: At least 4 language variables (X, F, +, -) and parameters (length of F, theta of + and -). To make it 3D you can also add another axis of rotation (typical variables are & and ^). At least 3 trees that demonstrate different rules. Pseudorandom execution will make your trees look more varied and pretty. (See the sketch below this list.)
  • Water effect with waves, reflection and refraction: Add shaders to render water, reflecting and refracting the sky box textures as your cubic environment map. In addition, simulate wave animations for the water. To get the bounty points, the water must reflect/refract all 3D objects of your scene, not just the sky box (you need to place multiple 3D objects so that they reflect off the water into the camera).
  • Multipass rendering: Render the scene multiple times, each time with different light sources or texture sets, with the final scene being some (possibly weighted) combination of the multiple rendered scenes. There should be a keyboard command to change the number of passes. This is a simple way to deal with having more than 8 lights or with multitexturing, since the fixed function pipeline can be used in each pass, at the cost of losing realism when there are too many passes: the final color values will all be too high if they are not weighted properly when summed up. An example of this technique is shown in this video, where the object just appears all white after a high number of lights are added. (See the sketch below this list.)
  • Shadow mapping: Upon key press, the shadow map (as a greyscale image) must be shown alongside the rendering window.
  • Shadow volumes: Upon key press, the shadow volume must be rendered into the graphics window, either with solid polygons, or as a wireframe model.
  • Shape grammar for buildings or objects: At least 5 different shapes must be used, and there must be rules for how they can be combined. It is not acceptable to allow their combination in completely arbitrary ways.
  • Displacement mapping: Using a texture or height map, the position of points on the geometry should be modified. The map must optionally (upon key press) be shown alongside the actual graphics window to demonstrate that normal mapping or bump mapping was not used to achieve a displacement illusion.
  • Screen space post-processed lights: Screen space lighting effects using normal maps and depth maps, such as haze or area lights; screen space rendering effects such as motion blur or reflection (with the ability to turn them on/off); or volumetric light scattering or some kind of screen space ray tracing effect.
  • Screen space ambient occlusion (SSAO): Demonstrate functionality on an object with a large amount of crevices. Implement functionality to turn SSAO off upon key press. To qualify for bounty points, upon key press, the SSAO map (as a greyscale image) must be shown alongside the rendering window.
  • Screen space directional occlusion (SSDO) with color bounce: Demonstrate functionality on multiple objects which possess large amounts of crevices and widely varying colors. Implement functionality to turn SSDO with color bounce off upon key press. To qualify for bounty points, upon key press, the SSDO map (as a colored image) must be shown alongside the rendering window.
  • Collision detection with arbitrary geometry: Needs to test for the intersection of faces. Performance should be smooth enough to clearly show that your algorithm is working properly. Support a debug mode to highlight colliding faces (e.g., color the face red when colliding). (See the sketch below this list.)
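
For the particle effect, a common starting point is a fixed-size particle pool on the CPU that spawns, moves, and recycles particles in place, so no memory is allocated or freed while the effect runs. The following is a minimal sketch with illustrative names and constants, not a prescribed design:

  #include <cstddef>
  #include <cstdlib>
  #include <vector>

  struct Particle {
      float pos[3];
      float vel[3];
      float life;   // seconds remaining; <= 0 means dead
  };

  class ParticleSystem {
  public:
      explicit ParticleSystem(std::size_t count) : particles(count) {
          for (std::size_t i = 0; i < particles.size(); ++i) respawn(particles[i]);
      }

      // Advance all particles by dt seconds; dead particles are recycled
      // in place, so there are no per-particle allocations or leaks.
      void update(float dt) {
          for (std::size_t i = 0; i < particles.size(); ++i) {
              Particle& p = particles[i];
              p.life -= dt;
              if (p.life <= 0.0f) { respawn(p); continue; }
              for (int k = 0; k < 3; ++k) p.pos[k] += p.vel[k] * dt;
              p.vel[1] -= 9.81f * dt;   // simple gravity
          }
      }

      const std::vector<Particle>& data() const { return particles; }

  private:
      static float frand(float lo, float hi) {
          return lo + (hi - lo) * (rand() / (float)RAND_MAX);
      }
      static void respawn(Particle& p) {
          p.pos[0] = p.pos[1] = p.pos[2] = 0.0f;   // emit from the origin
          p.vel[0] = frand(-1.0f, 1.0f);
          p.vel[1] = frand(2.0f, 5.0f);
          p.vel[2] = frand(-1.0f, 1.0f);
          p.life   = frand(0.5f, 2.0f);            // die shortly after appearing
      }
      std::vector<Particle> particles;
  };

  // Example: ParticleSystem ps(1000); call ps.update(dt) once per frame and
  // draw ps.data() as GL_POINTS or textured quads.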
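
For collision detection with bounding boxes, the per-pair test is an axis-aligned box overlap check; Sweep and Prune only changes which pairs get tested. A minimal sketch (the structure name is illustrative):

  struct AABB {
      float min[3];
      float max[3];
  };

  // Two axis-aligned boxes intersect exactly when their extents overlap
  // on all three axes.
  bool intersects(const AABB& a, const AABB& b)
  {
      for (int axis = 0; axis < 3; ++axis) {
          if (a.max[axis] < b.min[axis] || b.max[axis] < a.min[axis])
              return false;   // separated on this axis, so no collision
      }
      return true;
  }

  // In the debug display, draw each box as a wireframe and color it red when
  // intersects() reports a collision with any other box, and white otherwise.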
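
For procedurally generated terrain, the usual first step is to turn a greyscale height map into a grid of vertex heights. A minimal sketch, assuming the image has already been loaded (for example with SOIL) into an 8-bit greyscale buffer:

  #include <cstddef>
  #include <vector>

  // Build a (width x height) grid of 3D vertices from an 8-bit greyscale
  // height map. 'scale' sets the horizontal spacing, 'maxHeight' the peak height.
  std::vector<float> buildTerrain(const unsigned char* heightMap,
                                  int width, int height,
                                  float scale, float maxHeight)
  {
      std::vector<float> vertices;
      vertices.reserve(static_cast<std::size_t>(width) * height * 3);
      for (int z = 0; z < height; ++z) {
          for (int x = 0; x < width; ++x) {
              float h = heightMap[z * width + x] / 255.0f;  // normalized 0..1
              vertices.push_back(x * scale);                // x position
              vertices.push_back(h * maxHeight);            // elevation from the map
              vertices.push_back(z * scale);                // z position
          }
      }
      return vertices;
  }

  // The grid is then triangulated (e.g., as triangle strips); in the shader the
  // vertex height can be used to blend between grass, desert, and snow textures.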
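
For the L-system plants, the heart of the technique is repeated string rewriting; the resulting string is then interpreted by a turtle to draw branches. A minimal sketch of the expansion step (the rule set shown is only an example grammar):

  #include <iostream>
  #include <map>
  #include <string>

  // Expand an L-system axiom by applying the production rules 'iterations' times.
  // Symbols without a rule (such as +, -, [ and ]) are copied unchanged.
  std::string expand(std::string axiom,
                     const std::map<char, std::string>& rules,
                     int iterations)
  {
      for (int i = 0; i < iterations; ++i) {
          std::string next;
          for (std::size_t j = 0; j < axiom.size(); ++j) {
              std::map<char, std::string>::const_iterator it = rules.find(axiom[j]);
              next += (it != rules.end()) ? it->second : std::string(1, axiom[j]);
          }
          axiom = next;
      }
      return axiom;
  }

  int main()
  {
      // Example rules: F draws a segment, + and - rotate by theta,
      // [ and ] push and pop the turtle state.
      std::map<char, std::string> rules;
      rules['X'] = "F[+X][-X]FX";
      rules['F'] = "FF";

      std::string plant = expand("X", rules, 4);
      std::cout << plant << std::endl;   // interpret with a turtle to draw the tree
      return 0;
  }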
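
For multipass rendering, one common approach (sketched below with assumed helper functions, not the only valid one) is to draw the scene once per group of lights and sum the passes with additive blending; if the passes are not weighted, the summed colors saturate toward white, which is the artifact mentioned above:

  #include <GL/glut.h>

  // Placeholder helpers: enable one group of (up to 8) fixed-function lights,
  // and draw all scene geometry.
  void setupLightGroup(int pass) { /* glEnable(GL_LIGHT0 + ...) for this pass */ }
  void drawScene()               { /* draw all scene geometry */ }

  void renderMultipass(int numPasses)
  {
      glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

      // First pass writes color and depth normally.
      setupLightGroup(0);
      drawScene();

      // Remaining passes add their lighting on top of the first pass.
      glEnable(GL_BLEND);
      glBlendFunc(GL_ONE, GL_ONE);   // additive: colors from each pass are summed
      glDepthFunc(GL_LEQUAL);        // let the same surfaces pass the depth test again
      glDepthMask(GL_FALSE);         // do not rewrite depth in the extra passes
      for (int pass = 1; pass < numPasses; ++pass) {
          setupLightGroup(pass);
          drawScene();
      }
      glDepthMask(GL_TRUE);
      glDepthFunc(GL_LESS);
      glDisable(GL_BLEND);
  }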
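
For collision detection with arbitrary geometry, the face-level test usually comes down to checking whether an edge of one triangle passes through another triangle. Below is a minimal sketch based on the well-known Moeller-Trumbore intersection test, restricted to a segment; the coplanar case is ignored here, and the small vector helpers are included only to keep the sketch self-contained:

  #include <cmath>

  struct Vec3 { float x, y, z; };

  static Vec3 sub(const Vec3& a, const Vec3& b) {
      Vec3 r = { a.x - b.x, a.y - b.y, a.z - b.z }; return r;
  }
  static Vec3 cross(const Vec3& a, const Vec3& b) {
      Vec3 r = { a.y*b.z - a.z*b.y, a.z*b.x - a.x*b.z, a.x*b.y - a.y*b.x }; return r;
  }
  static float dot(const Vec3& a, const Vec3& b) {
      return a.x*b.x + a.y*b.y + a.z*b.z;
  }

  // Returns true if the segment from 'a' to 'b' passes through triangle (v0, v1, v2).
  bool segmentHitsTriangle(const Vec3& a, const Vec3& b,
                           const Vec3& v0, const Vec3& v1, const Vec3& v2)
  {
      const float eps = 1e-7f;
      Vec3 dir = sub(b, a);
      Vec3 e1 = sub(v1, v0), e2 = sub(v2, v0);
      Vec3 p = cross(dir, e2);
      float det = dot(e1, p);
      if (std::fabs(det) < eps) return false;   // segment parallel to the triangle
      float inv = 1.0f / det;
      Vec3 s = sub(a, v0);
      float u = dot(s, p) * inv;
      if (u < 0.0f || u > 1.0f) return false;
      Vec3 q = cross(s, e1);
      float v = dot(dir, q) * inv;
      if (v < 0.0f || u + v > 1.0f) return false;
      float t = dot(e2, q) * inv;               // hit parameter along the segment
      return t >= 0.0f && t <= 1.0f;
  }

  // Two triangles collide if any edge of one passes through the other.
  // In the debug mode, color faces red for which this returns true.
  bool trianglesCollide(const Vec3 t0[3], const Vec3 t1[3])
  {
      for (int i = 0; i < 3; ++i) {
          if (segmentHitsTriangle(t0[i], t0[(i+1)%3], t1[0], t1[1], t1[2])) return true;
          if (segmentHitsTriangle(t1[i], t1[(i+1)%3], t0[0], t0[1], t0[2])) return true;
      }
      return false;
  }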