Project5S16

From Immersive Visualization Lab Wiki

Revision as of 12:35, 26 May 2016
Final Project

The final homework assignment has to be done in teams of two or three people. Each team needs to implement an interactive software application which uses some of the advanced rendering effects or modeling techniques that were discussed in class but not covered by previous homework projects. We will evaluate your project based on technical and creative merits.

The final project has to be presented to the entire class during our final exam slot from 3-6pm on Tuesday, June 7th, by means of a video and a live demonstration. It will be graded by the instructor, the TAs and the tutors. Late submissions will not be permitted. You are welcome to bring guests to the presentation. More detail about the location of the grading event will be released shortly.

Note: If you are not going to be able to work as a member of a team, you can ask the instructor for permission to do a solo project. This will only be permitted in special cases and has to be requested from the instructor, either during his office hours or by email.

Grading

Your final project score consists of three parts:

  • Blog (10 points)
  • Presentation (90 points)
  • Extra Credit (10 points)

Blog (10 Points)

You need to create a blog to report on the progress you are making on your project. You need to make at least three blog entries to get the full score. The first is due on Friday, May 27th at 11:59pm, the second is due on Friday, June 3rd at 11:59pm, the third is due on Monday, June 6th at 11:59pm.

Before the final presentations, you also need to create a video of your application and link to it on your blog. It is due by Tuesday, June 7th at 12 noon. This video will be shown to the entire class during the final presentation event, and is the primary basis for your grade.

The first blog entry needs to contain (at least) the following information:

  • The name of your project.
  • The names of your team members.
  • A short description of your project.
  • The technical features you are implementing.
  • What you are planning on spending your creative efforts on.
  • At least one screen shot of some part of your project.

In weeks 2 and 3 you need to update all of the above categories and upload at least one new screen shot.

The video should be about 1 minute long. You should use screen recording software, such as the Open Broadcaster Software, which is available free of charge for Windows and Mac. On a Mac you can use QuickTime, which is available free of charge as well. There does not need to be an audio track, but you are welcome to talk over the video or add sound. Embed the video in your blog, or link to it from there. The use of video editing software is optional.

You are free to create the blog on any web based blog site, such as Blogger or WordPress. You should use the same blog each time and just add blog entries. If you need to move to a different blog server, please move your entire blog over (copy-paste if necessary) and let us know its new URL.


Presentation (90 Points)

80% of the presentation score is for the technical features, 20% for the creative quality of your demonstration. The grading will be based solely on your presentation: what you don't show us won't score points.

To obtain the full score for the technical quality, each team must implement at least three skill points' worth of technical features per team member. For example, a team of two must cover 2x3 = 6 skill points. In addition, each team must implement at least one medium or hard feature for each member of the team. For example, a team of two has the following options to cover the required 6 skill points:

  • One easy (1 point), one medium (2 points), one hard (3 points) feature.
  • Two easy and two medium features.
  • Two hard features.
  • Three medium features.

If your team implements more than the required amount of skill points, the additional features can fill in for points which you might lose for incomplete or insufficiently implemented features.

Your score for creativity is going to be determined by averaging the instructor's, the TAs', and the tutors' subjective scores. We will look for a cohesive theme and story, but also for things such as visually pleasing geometry, tasteful textures, a thoughtful choice of colors and materials, smart placement of camera and lights, effective user interaction, and fluid rendering.

Here is the list of technical features you can choose from:

Easy: (1 skill point)

  • Per-pixel illumination of texture-mapped polygons
  • Toon shading
  • Bump mapping
  • Glow or halo effect
  • Particle effect
  • Sound effects

Medium: (2 skill points)

  • Collision detection with bounding boxes
  • Procedurally modeled city
  • Procedurally modeled buildings (no shape grammar)
  • Procedurally generated terrain
  • Procedurally generated plants with L-systems
  • Water effect with waves, reflection and refraction
  • Multipass rendering

Hard: (3 skill points)

  • Shadow mapping
  • Shape grammar for buildings or objects
  • Displacement mapping
  • Screen space post-processed lights
  • Screen space ambient occlusion (SSAO)
  • Screen space directional occlusion (SSDO) with color bounce
  • Collision detection with arbitrary geometry

For a full score, each of these technical features must fulfill the more detailed requirements listed at the bottom of this page.

All technical features have to have a toggle switch (a keyboard key) with which they can be enabled and disabled or, in the case of procedural algorithms, with which the procedural objects can be recalculated with a different seed value for the random number generator.
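The toggle requirement above amounts to a small amount of per-feature state that a key handler flips. Here is a minimal sketch in C++; the feature names are illustrative assumptions, and in a real project toggle() would be called from a GLFW key callback on a GLFW_PRESS event:

```cpp
#include <cassert>
#include <unordered_map>

// Illustrative feature identifiers; real code would map GLFW key codes
// (e.g. GLFW_KEY_1) to these inside the key callback.
enum Feature { BumpMapping, ToonShading, ParticleEffect };

struct FeatureToggles {
    std::unordered_map<int, bool> enabled;

    // Called once per key press; flips the feature and returns its new state.
    bool toggle(int feature) {
        bool state = !enabled[feature];   // defaults to false, so the first press enables
        enabled[feature] = state;
        return state;
    }

    bool isEnabled(int feature) const {
        auto it = enabled.find(feature);
        return it != enabled.end() && it->second;
    }
};
```

The render loop then guards each effect with isEnabled(...), so every feature can be demonstrated both on and off during grading.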

Additional technical features may be approved by the course staff upon request.

Presentation Day

We are working on getting a presentation space other than the classroom. The presentations will consist of showing your video to the class, and may include live demos as well. Stay tuned.


Implementation

If you want to have debugging support from the TAs and tutors, you need to implement this project in C++ using OpenGL and GLFW, just like the other homework projects. Otherwise, you are free to use a language of your choice; you can even write an app for a smartphone, a tablet, or a VR system.

Third party graphics libraries are generally not acceptable, unless cleared by the instructor. Exceptions are typically granted for libraries to load images and 3D models, libraries to support input devices, or support for audio. Pre-approved libraries are:

  • GLee to manage OpenGL extensions
  • Assimp for importing OBJs
  • SOIL to load a variety of image formats
  • SDL, to replace GLFW
  • OpenAL for audio support
  • XML parsers, such as MiniXML or PugiXML - useful for configuration files
  • Physics engines (such as the Bullet Engine), as long as they are not used to obtain points for technical features.

Extra Credit (10 Points)

Tips

  • If you use Sketchup to create OBJ models: Sketchup writes quads, whereas our OBJ reader expects triangles. You can convert the Sketchup file to one with triangles using Blender, a free 3D modeling tool: put the object into edit mode and select Mesh->Faces->Convert Quads to Triangles.
  • MeshLab is another excellent and free tool for 3D file format conversion.
  • Trimble 3D Warehouse and Turbosquid are great resources for ready-made 3D models, which you can export to OBJ files with the technique described above.

Technical Feature Implementation Requirements

Requirements and comments for each technical feature:
Per-pixel illumination of texture-mapped polygons: This effect can be hard to see without careful placement of the light source(s). You need to support a keyboard key to turn the effect on or off at runtime. The best type of light source to demonstrate the effect is a spot light with a small spot width, so that only part of the texture is illuminated.
Toon shading: Needs to consist of both discretized colors and silhouettes.
Bump mapping: Needs to use either a height map to derive the normals from, or a normal map directly. The map needs to be shown alongside the rendering window upon key press.
Glow or halo effect: The object's glow must be obvious and clearly visible (not occluded). Implement functionality to turn off the glow upon key press.
Particle effect: Generate a LOT of particles (at least 1000), all of which can move and die shortly after appearing. The application should not experience a significant slowdown, and instancing has to be done cleanly (no memory leaks should occur when particles die).
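One way to satisfy the "no memory leaks when particles die" requirement is a pre-allocated pool with swap-and-pop removal, so no per-particle allocation ever happens after startup. A minimal sketch, where the particle layout and update scheme are one possible design rather than a prescribed one:

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// One particle: position, velocity, and remaining lifetime in seconds.
struct Particle {
    float x, y, z;
    float vx, vy, vz;
    float life;
};

// A simple particle pool: dead particles are removed with swap-and-pop,
// so the vector never reallocates beyond its reserved capacity.
class ParticleSystem {
public:
    explicit ParticleSystem(std::size_t capacity) { particles_.reserve(capacity); }

    void spawn(const Particle& p) {
        if (particles_.size() < particles_.capacity()) particles_.push_back(p);
    }

    // Advance all particles by dt seconds; dead ones are swapped to the
    // back and popped, keeping the update loop O(n) with no holes.
    void update(float dt) {
        for (std::size_t i = 0; i < particles_.size(); ) {
            Particle& p = particles_[i];
            p.life -= dt;
            if (p.life <= 0.0f) {
                particles_[i] = particles_.back();
                particles_.pop_back();      // no allocation, no leak
            } else {
                p.x += p.vx * dt; p.y += p.vy * dt; p.z += p.vz * dt;
                ++i;
            }
        }
    }

    std::size_t alive() const { return particles_.size(); }

private:
    std::vector<Particle> particles_;
};
```

For drawing, the live particles can be uploaded each frame as instanced quads; the pool above only covers the simulation side.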
Collision detection with bounding boxes: The bounding boxes must be tight enough around the colliding objects to be within 50% of the size of the objects. There must be a debug option which, upon key press, displays the bounding boxes as wireframe boxes and uses different colors for boxes which are colliding (e.g., red) vs. those that are not (e.g., white). An example of an algorithm to implement is the Sweep and Prune algorithm with Dimension Reduction, as discussed in class.
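The AABB overlap test and the pruning idea behind Sweep and Prune can be sketched as follows. This is a simplified single-axis sweep for illustration; the variant discussed in class may differ in details:

```cpp
#include <algorithm>
#include <cassert>
#include <utility>
#include <vector>

// Axis-aligned bounding box.
struct AABB {
    float min[3];
    float max[3];
};

// Two AABBs overlap iff their intervals overlap on all three axes.
bool overlaps(const AABB& a, const AABB& b) {
    for (int axis = 0; axis < 3; ++axis)
        if (a.max[axis] < b.min[axis] || b.max[axis] < a.min[axis])
            return false;
    return true;
}

// Sweep and prune with dimension reduction: sort boxes by their minimum
// x coordinate, then only run the full test on pairs whose x intervals
// overlap; everything past the current box's right edge is pruned.
std::vector<std::pair<int, int>> collidingPairs(const std::vector<AABB>& boxes) {
    std::vector<int> order(boxes.size());
    for (std::size_t i = 0; i < boxes.size(); ++i) order[i] = (int)i;
    std::sort(order.begin(), order.end(), [&](int a, int b) {
        return boxes[a].min[0] < boxes[b].min[0];
    });

    std::vector<std::pair<int, int>> pairs;
    for (std::size_t i = 0; i < order.size(); ++i) {
        for (std::size_t j = i + 1; j < order.size(); ++j) {
            // Once box j starts past box i's right edge on x, no later
            // box can overlap i either, since the list is sorted by min x.
            if (boxes[order[j]].min[0] > boxes[order[i]].max[0]) break;
            if (overlaps(boxes[order[i]], boxes[order[j]]))
                pairs.emplace_back(std::min(order[i], order[j]),
                                   std::max(order[i], order[j]));
        }
    }
    return pairs;
}
```

The debug overlay then draws each AABB as a wireframe box, colored red if its index appears in the returned pair list and white otherwise.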
Procedurally modeled city: Must include 3 types of buildings, a park (a flat rectangular piece of land is fine), and roads that are more complex than a regular square grid (even Manhattan is more complex than a grid!). Creativity points if you can input an actual city's data for the roads!
Procedurally modeled buildings: Must have 3 non-identical portions (top, middle, bottom, OR 1st floor, 2nd floor, roof). Generate at least 3 different buildings with differently shaped windows.
Procedurally generated terrain: Ability to input a height map, either real or generated by a different application. A shader that adds at least 3 different types of terrain (grass, desert, snow).
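Instead of loading a height map from a file, one can also be generated from a seed, which fits the required "recalculate with a different seed" key nicely. A minimal value-noise sketch; the hash constants and function names are illustrative assumptions:

```cpp
#include <cassert>
#include <cmath>
#include <cstdint>
#include <vector>

// Hash a lattice point and a seed to a pseudorandom height in [0, 1].
// Any decent integer mixing function works; this one is an arbitrary choice.
float latticeHeight(int x, int z, std::uint32_t seed) {
    std::uint32_t h = seed;
    h ^= (std::uint32_t)x * 0x9E3779B9u;
    h ^= (std::uint32_t)z * 0x85EBCA6Bu;
    h ^= h >> 16; h *= 0x45D9F3Bu; h ^= h >> 16;
    return (h & 0xFFFFFFu) / (float)0xFFFFFF;
}

// Value noise: bilinearly interpolate lattice heights so that neighboring
// samples vary smoothly instead of jumping randomly.
float valueNoise(float x, float z, std::uint32_t seed) {
    int x0 = (int)std::floor(x), z0 = (int)std::floor(z);
    float fx = x - x0, fz = z - z0;
    float h00 = latticeHeight(x0, z0, seed),     h10 = latticeHeight(x0 + 1, z0, seed);
    float h01 = latticeHeight(x0, z0 + 1, seed), h11 = latticeHeight(x0 + 1, z0 + 1, seed);
    float a = h00 + (h10 - h00) * fx;
    float b = h01 + (h11 - h01) * fx;
    return a + (b - a) * fz;
}

// Build an n x n height map; 'cells' lattice cells across the grid control
// the roughness. The same seed always yields the same terrain.
std::vector<float> makeHeightmap(int n, float cells, std::uint32_t seed) {
    std::vector<float> heights(n * n);
    for (int z = 0; z < n; ++z)
        for (int x = 0; x < n; ++x)
            heights[z * n + x] = valueNoise(x * cells / n, z * cells / n, seed);
    return heights;
}
```

The resulting heights can be uploaded as a vertex grid, and a terrain shader can then pick grass, desert, or snow based on height and slope.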
Procedurally generated plants with L-systems: At least 4 language variables (X, F, +, -) and parameters (length of F, theta of + and -). To make it 3D, you can also add another axis of rotation (typical variables are & and ^). At least 3 trees that demonstrate different rules. Pseudorandom execution will make your trees look more varied and pretty.
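The string-rewriting core of an L-system is small; the work is in the turtle interpretation of the result (F draws a segment of the given length, + and - rotate by theta, brackets push and pop the turtle state), which is left out of this minimal sketch:

```cpp
#include <cassert>
#include <string>
#include <unordered_map>
#include <utility>

// Expand an L-system axiom by applying the production rules for a given
// number of iterations. Symbols without a rule (+, -, [, ]) copy through.
std::string expandLSystem(std::string axiom,
                          const std::unordered_map<char, std::string>& rules,
                          int iterations) {
    for (int i = 0; i < iterations; ++i) {
        std::string next;
        for (char c : axiom) {
            auto it = rules.find(c);
            if (it != rules.end()) next += it->second;
            else next += c;                 // constants pass through unchanged
        }
        axiom = std::move(next);
    }
    return axiom;
}
```

Using a different rule set per tree, plus a seeded random choice among alternative productions for the same symbol, gives the required three distinct, varied trees.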
Water effect with waves, reflection and refraction: Add shaders to render water, reflecting and refracting the sky box textures as your cubic environment map. In addition, simulate wave animations for the water. To get the bounty points, the water must reflect/refract all 3D objects of your scene, not just the sky box (you need to place multiple 3D objects so that they reflect off the water into the camera).
Multipass rendering: Also known as additional light passes: render the scene multiple times, each time with different light sources or effects. The final result should be some sort of (possibly weighted) combination of the lighting passes. You may not receive points for multipass rendering if a multipass lighting effect of higher skill point value is credited, such as SSAO, SSDO, or screen space post-processed lights.
Shadow mapping: Upon key press, the shadow map (as a greyscale image) must be shown alongside the rendering window.
Shape grammar for buildings or objects: At least 5 different shapes must be used, and there must be rules for how they can be combined. It is not acceptable to allow them to be combined in completely arbitrary ways.
Displacement mapping: Using a texture or height map, the positions of points on the geometry should be modified. The map must optionally (upon key press) be shown alongside the actual graphics window, to demonstrate that normal mapping or bump mapping was not used to fake the displacement.
Screen space post-processed lights: A screen space lighting effect that uses normal maps and depth maps, such as haze or area lights; a screen space rendering effect such as motion blur or reflection (with the ability to turn it on/off); or volumetric light scattering or some other kind of screen space ray tracing effect.
Screen space ambient occlusion (SSAO): Demonstrate the functionality on an object with a large number of crevices. Implement functionality to turn SSAO off upon key press. To qualify for bounty points, the SSAO map (as a greyscale image) must be shown alongside the rendering window upon key press.
Screen space directional occlusion (SSDO) with color bounce: Demonstrate the functionality on multiple objects which have large numbers of crevices and widely varying colors. Implement functionality to turn SSDO with color bounce off upon key press. To qualify for bounty points, the SSDO map (as a colored image) must be shown alongside the rendering window upon key press.
Collision detection with arbitrary geometry: Needs to test for the intersection of faces. Performance should be smooth enough to clearly show that your algorithm is working properly. Support a debug mode that highlights colliding faces (e.g., colors a face red while it is colliding).
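For the face-intersection test, a standard primitive is a segment-triangle test in the style of Möller and Trumbore: two non-coplanar triangles intersect exactly when an edge of one pierces the other, so testing the six edges against the two triangles gives a face-level collision check. A minimal sketch of the segment-triangle primitive:

```cpp
#include <cassert>
#include <cmath>

struct Vec3 { float x, y, z; };

Vec3  sub(Vec3 a, Vec3 b)   { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
Vec3  cross(Vec3 a, Vec3 b) { return {a.y * b.z - a.z * b.y,
                                      a.z * b.x - a.x * b.z,
                                      a.x * b.y - a.y * b.x}; }
float dot(Vec3 a, Vec3 b)   { return a.x * b.x + a.y * b.y + a.z * b.z; }

// Does the segment p -> q pierce triangle (a, b, c)?
// Solves for barycentric coordinates (u, v) and segment parameter t,
// then checks that the hit lies inside the triangle and the segment.
bool segmentHitsTriangle(Vec3 p, Vec3 q, Vec3 a, Vec3 b, Vec3 c) {
    const float eps = 1e-7f;
    Vec3 dir = sub(q, p);
    Vec3 e1 = sub(b, a), e2 = sub(c, a);
    Vec3 h = cross(dir, e2);
    float det = dot(e1, h);
    if (std::fabs(det) < eps) return false;     // segment parallel to the plane
    float inv = 1.0f / det;
    Vec3 s = sub(p, a);
    float u = inv * dot(s, h);
    if (u < 0.0f || u > 1.0f) return false;     // outside edge ab
    Vec3 qv = cross(s, e1);
    float v = inv * dot(dir, qv);
    if (v < 0.0f || u + v > 1.0f) return false; // outside the triangle
    float t = inv * dot(e2, qv);
    return t >= 0.0f && t <= 1.0f;              // hit lies within the segment
}
```

Running this brute-force over every face pair is far too slow for whole meshes, so in practice a broad phase (for instance the bounding-box sweep described above) selects candidate face pairs first.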