Final Project
Update: this year's video playlist (https://www.youtube.com/playlist?list=PLH1tSSGT2qPFIOYEx5pPZuSTryRvun7T0&jct=4j1dUEgwiaXxM0ezcqq3o09DQTZaaA) is now online.
The final homework assignment should be done in teams of two or three people. Each team will need to implement an interactive software application which uses some of the advanced rendering effects or modeling techniques we discussed in class, but that weren't covered in previous homework projects. We will evaluate your project based on technical and creative merits.
The final project has to be presented to the entire class during our final exam slot by means of a short video, as well as a demonstration to the course staff. Late submissions will not be permitted.
The video presentations will happen in CSE room 1242, the demos in the downstairs labs.
For inspiration, here is the link to the final project playlist from last year's CSE 167.
Grading
Your final project score consists of three parts, plus optional extra credit:
- Blog (10 points)
- Video (5 points)
- Graphics Application (85 points)
- Extra Credit (10 points)
Blog (10 Points)
You need to create a blog to report on the progress you are making on your project. You need to make three blog entries, each summarizing one week of your work. The due dates for the blog entries are:
- Blog #1 (4 points): Wednesday, Nov 27th at 11:59pm
- Blog #2 (3 points): Wednesday, Dec 4th at 11:59pm
- Blog #3 (3 points): Wednesday, Dec 11th at 11:59pm
Your first blog entry needs to contain (at a minimum) the following information:
- The name of your project (your choice)
- The names of your team members
- A short description of what your project is about (1 paragraph)
- The technical features you are implementing
- What you are spending your creative efforts on
- A picture: either a screen shot of the current state of your application, or a sketch of it (for the first blog a sketch of your planned application also works)
The remaining blog entries need to have:
- Any updates to the above information
- A description of what you worked on during that week
- A screen shot of the current state of your application (additional sketches are optional)
Once you have created your blog, please tell us its URL via a Google form, the link to which we will post on Canvas. You are free to create the blog on any web-based blog site, such as Google Sites, Blogger or WordPress. You should use the same blog URL each time and just add new blog entries. If you need to move to a different blog server, please move your entire blog content over and let the instructor know its new URL.
Video (5 Points)
By 3pm on Dec 12th you need to create a YouTube video of your application and add it to the final project playlist, the link to which will be posted on Canvas. Your video will be shown during the first hour of the final presentation event.
The video should be about 1 minute long (maximum: 1:20 min). You should use screen recording software, such as Open Broadcaster Software (OBS), which is available free of charge for Windows and Mac. On a Mac you can also use QuickTime, which is likewise free. There does not need to be an audio track, but we recommend that you talk over the video or add sound or music.
The use of video editing software is optional, but you need to have at a minimum a title slide with the name of your project and the team members' names (you can also record the title slide with OBS).
Graphics Application (85 Points)
The majority of your final project score will be for the graphics application you will be developing. We will look at both its technical and its creative quality.
We will base your grade on what we see in your video, as well as what you show us during your demo.
Technical Features (70 Points)
To obtain the full score for the technical quality, your team must implement at least three skill points' worth of technical features per team member. For example, a team of two must cover 2x3=6 skill points. In addition, each team must implement at least one medium or hard feature per team member. For example, a team of two has the following options to cover the required 6 skill points:
- One easy (1 point), one medium (2 points), one hard (3 points) feature.
- Two easy and two medium features.
- Three medium features.
- Two hard features.
If your team implements more than the required number of skill points, the additional features can fill in for points you might lose for incomplete or insufficiently implemented features.
Here is the list of technical features you can choose from:
Easy: (1 skill point)
- Toon shading - covered Nov 26
- Procedurally modeled buildings (no shape grammar) - covered Nov 26
- Particle effect - covered Dec 3
- Collision detection with bounding spheres or boxes - covered Dec 3
- Sound effects - not covered in class
- First person camera control with player movements - not covered in class
- Selection buffer for selection with the mouse - not covered in class
Medium: (2 skill points)
- Surface made with at least two C1 continuous Bezier patches (e.g., flag, water surface, etc.) - covered Nov 14
- Procedurally generated terrain - covered Nov 19
- Procedurally modeled city (no shape grammar) - covered Nov 26
- Procedurally generated plants with L-systems - covered Nov 26
- Procedurally modeled buildings with shape grammar - covered Nov 26
- Shadow mapping - covered Nov 26
- Glow, bloom or halo effect - covered Dec 5
- Bump mapping - covered Dec 5
- Water effect with reflection and refraction - not covered in class
- Procedurally generated and animated clouds - not covered in class
Hard: (3 skill points)
- Other deferred rendering effects, such as screen space post-processed lights - covered Dec 3
- Displacement mapping - covered Dec 5
- Collision detection with arbitrary geometry - not covered in class
- Shadow Volumes - not covered in class
More specific requirements for the technical features are listed in the table at the bottom of this page.
All technical features need to have a toggle switch (keyboard key) with which they can be enabled or disabled, or which changes their parameters (e.g., the seed value for a randomizer) to show that they work correctly. A sketch of such a toggle follows below.
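For example, such a toggle can be wired into the GLFW key callback. The sketch below is only an illustration; the flag name, the feature it controls, and the key bindings are hypothetical choices, not requirements.

```cpp
#include <GLFW/glfw3.h>
#include <cstdlib>

// Hypothetical feature flag, checked each frame in the render loop.
static bool shadowsEnabled = true;

static void keyCallback(GLFWwindow* window, int key, int scancode,
                        int action, int mods)
{
    if (action != GLFW_PRESS) return;
    if (key == GLFW_KEY_S)        // toggle the feature on/off
        shadowsEnabled = !shadowsEnabled;
    else if (key == GLFW_KEY_R)   // re-seed a randomized feature
        std::srand(static_cast<unsigned>(glfwGetTime() * 1000.0));
}

// Register once after creating the window:
// glfwSetKeyCallback(window, keyCallback);
```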
Additional technical features may be approved by the instructor upon request.
Creativity (15 Points)
We will look for a cohesive theme or story, but also things such as aesthetic visuals, accomplished by carefully selected textures, colors and materials, placement of camera and lights, correct normal vectors for all your surfaces, effective user interaction, and fluid rendering. The entire course staff will determine these scores very carefully during a grading meeting on the day following the presentations.
Presentation Day
There will be two parts to the presentation:
- During the first hour of the event we are going to show your YouTube videos to the entire class.
- During the following two hours you will need to present your projects to the graders science-fair style in the CSE basement labs, like for regular homework projects (sign up on the autograder). We are going to split the class into two groups, A and B. Group A will present during the first hour, group B during the second hour. The purpose of this is to allow as many of the graders as possible to see the projects, and also to give the class the opportunity to look at them as well.
Implementation
If you want to have debugging support from the TA and tutors, you need to implement this project in C++ using OpenGL and GLFW, just like the other homework projects (a minimal starting point is sketched below). Otherwise, you are free to use a language of your choice; you can even write an app for a smartphone or tablet, or a VR system, as long as you use OpenGL, DirectX, or Vulkan.
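For reference, a minimal GLFW render loop looks roughly like the sketch below. This is only a starting point, assuming a default OpenGL context; it omits shader and buffer setup entirely.

```cpp
#include <GLFW/glfw3.h>   // pulls in the system OpenGL header by default

int main()
{
    if (!glfwInit()) return -1;
    GLFWwindow* window = glfwCreateWindow(800, 600, "Final Project", nullptr, nullptr);
    if (!window) { glfwTerminate(); return -1; }
    glfwMakeContextCurrent(window);

    while (!glfwWindowShouldClose(window)) {
        glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
        // ... update the scene and issue draw calls here ...
        glfwSwapBuffers(window);
        glfwPollEvents();
    }
    glfwTerminate();
    return 0;
}
```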
Third-party graphics libraries are generally not acceptable unless cleared by the instructor. Exceptions are typically granted for libraries that load images or 3D models, libraries that support input devices, or audio support. Pre-approved libraries are:
- GLee to manage OpenGL extensions
- Assimp for importing OBJs
- SOIL to load a variety of image formats
- SDL, to replace GLFW with a more powerful windowing and multimedia library
- OpenAL for audio support
- XML parsers, such as MiniXML or PugiXML - useful for configuration files
- Physics engines (such as the Bullet Engine), as long as they are not used to obtain points for technical features.
Extra Credit (10 Points Max.)
a) Advanced Effects (10 Points)
You can receive up to 10 extra points for a flawless implementation of one of the following algorithms. It also needs to include adequate visual debugging aids (example: Oleg Utkin's CSE 167 final project video). None of these effects are covered in class, so you will need to research them yourself.
All of these effects also count as 3 skill points towards your technical score.
For a full score of 10 points, teams of N people must implement N of these algorithms. In other words, each of the algorithms below gains the team 10/N extra credit points.
- Water effect with reflection of 3D models (doesn't stack with regular water effect)
- Screen space ambient occlusion (SSAO) or Screen space directional occlusion (SSDO)
- Motion blur (tutorial: https://developer.nvidia.com/gpugems/GPUGems3/gpugems3_ch27.html)
- Depth of Field
Other advanced effects may also qualify for extra credit, but must be run by the professor first.
b) Virtual Reality (10 Points)
Write your graphics application for the Oculus Rift or HTC Vive. Your application needs to be interactive and support at least one 3D controller. You have to use the respective programming SDK and write your code in C++ rather than using Unity/Unreal/etc. We recommend using the starter code we use in CSE 190. If you want to use a different programming environment you need to get it cleared by the professor. If you choose this extra credit option, the professor will get you access to the VR lab, room B210, to do your project. Your demonstration on demo day will also happen in B210.
Tips
- If you use Sketchup to create OBJ models: Sketchup writes quads, whereas our OBJ reader expects triangles. You can convert the Sketchup file to one with triangles using Blender, a free 3D modeling tool: put the object into edit mode and select Mesh->Faces->Convert Quads to Triangles.
- MeshLab is another excellent and free tool for 3D file format conversion.
- Trimble 3D Warehouse and Turbosquid are great resources for ready-made 3D models you can export to OBJ files with the technique described above.
Technical Feature Implementation Requirements
Technical Feature | Requirements and Comments |
---|---|
Bezier Patches | Need to make a surface out of at least 2 connected Bezier patches with C1 continuity between them. The patches also need to have correct normals. (A patch-evaluation sketch appears after this table.) |
Sound Effects | Add a background music track that plays throughout the entire application, along with sound effects for events. The sound effects and background music should not cut each other (in other words, they should be able to play at the same time without forcing one another to pause or restart). You cannot get credit for this if audio sources are sparse, or if sound effects rarely ever play. |
Toon shading | Needs to consist of both discretized colors and silhouettes. The use of Toon Shading must make sense for the project. |
Bump mapping | Needs to use either a height map to derive the normals from, or a normal map directly. The map needs to be shown alongside the rendering window, upon key press. |
Glow or halo effect | Object glow must be obvious and clearly visible (not occluded). Implement functionality to turn off glow upon key press. |
Particle effect | Generate a lot of particles (at least 200), all of which can move and die shortly after appearing. The application should not experience a significant slowdown, and instancing has to be done cleanly (no memory leaks should occur when the particles die). (See the particle-pool sketch after this table.) |
Collision detection with bounding volumes | The bounding volumes (boxes or spheres) must be tight enough around the colliding objects to be within 50% of the size of the objects. There must be a debug option which upon key press displays the bounding volumes as wireframe, and uses different colors for volumes which are colliding (e.g., red) vs. those that aren't (e.g., white). An example of an algorithm to implement is the Sweep and Prune algorithm with dimension reduction as discussed in class. A brute-force bounding-sphere algorithm is also acceptable (see the sketch after this table). |
Procedurally modeled city | Must include 3 types of buildings, some large landmark features that span multiple blocks such as a park (a flat rectangular piece of land is fine) or lake or stadium. More than one building per block. Creativity points if you can input an actual city's data for the roads! |
Procedurally modeled buildings | Must have 3 non-identical portions (top, middle, bottom OR 1st floor, 2nd floor, roof). Generate at least 3 different buildings with differently shaped windows. |
Procedurally generated terrain | Procedural terrain requires the usage of an algorithm that does not simply read from a terrain map. You can use any algorithm mentioned in class or on the Internet, such as midpoint displacement, diamond-square, etc. The generated terrain must look like something potentially habitable, and we will ask you to explain how the terrain was generated. Your terrain also needs to have correct normals. (A diamond-square sketch appears after this table.) |
Procedurally generated plants with L-systems | To make the plants 3D you can add another axis of rotation (typical symbols are & and ^). Provide at least 3 trees that demonstrate different rules. Pseudorandom rule execution will make your trees look more varied and pretty. (A string-rewriting sketch appears after this table.) |
Water effect with waves, reflection and refraction | Add shaders to render water, reflecting and refracting the sky box textures as your cubic environment map. In addition, simulate wave animations for the water. To get the extra credit, the water must reflect/refract all 3D objects of your scene, not just the sky box (you need to place multiple 3D objects so that they reflect off the water into the camera). |
Shadow mapping | Upon key press, the shadow map (as a greyscale image) must be shown alongside the rendering window. You should be able to toggle shadows with another key. |
Shadow volumes | In addition to rendering the shadows, show the wireframe of the shadow volume that was created. Points cannot be stacked with shadow mapping. In other words, you will either get 3 points for having shadow volumes, or 2 points for shadow mapping, but not 5 points for both. |
Displacement mapping | Using a texture or height map, the position of points on the geometry should be modified. The map must optionally (upon key press) be shown alongside the actual graphics window to demonstrate that normal mapping or bump mapping was not used to achieve a displacement illusion. |
Depth of field | Similar to how cameras work, focus on an object in the scene, and apply a gradual decrease in sharpness to objects that are out of focus. You must be able to change the focal point at runtime, between at least 2 different points. Not to be confused with fog, and fog should not be used in the same project. Make sure you have objects at various distances to show this feature works properly. |
Motion Blur | Needs to make use of per-pixel velocities as per this tutorial (https://developer.nvidia.com/gpugems/GPUGems3/gpugems3_ch27.html) or a similar method. The use of an accumulation buffer alone will not be sufficient. |
Screen space effects | Screen-space lighting effects using normal and depth maps, such as haze or area lights; screen-space rendering effects such as motion blur or reflection (with the ability to turn them on/off); or volumetric light scattering or some other kind of screen-space ray tracing effect. |
Screen space ambient occlusion (SSAO) | Demonstrate functionality on an object with a large amount of crevices (note that the obj models given this quarter will not suffice; if you are unsure, just post several models you intend to work with on your blog for feedback). Implement functionality to turn SSAO off upon key press. To qualify for extra credit, upon key press, SSAO map (as a greyscale image) must be shown alongside the rendering window. |
Collision detection with arbitrary geometry | Needs to test for the intersection of geometric primitives. Performance should be smooth enough to undoubtedly show that your algorithm is working properly. Support a debug mode to highlight colliding faces (e.g. color the face red when colliding). |
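Bezier patches: one way to evaluate a bicubic patch is with the cubic Bernstein basis, as sketched below (assuming GLM for vectors, as in earlier homework projects). The normal at (u, v) is the normalized cross product of the two partial derivatives, which you can approximate with finite differences of this function.

```cpp
#include <glm/glm.hpp>

// Cubic Bernstein basis polynomial B_i(t), i = 0..3.
static float bernstein(int i, float t)
{
    float s = 1.0f - t;
    switch (i) {
        case 0:  return s * s * s;
        case 1:  return 3.0f * t * s * s;
        case 2:  return 3.0f * t * t * s;
        default: return t * t * t;
    }
}

// Evaluate a point on a bicubic Bezier patch with a 4x4 control net.
glm::vec3 evalPatch(const glm::vec3 P[4][4], float u, float v)
{
    glm::vec3 p(0.0f);
    for (int i = 0; i < 4; ++i)
        for (int j = 0; j < 4; ++j)
            p += bernstein(i, u) * bernstein(j, v) * P[i][j];
    return p;
}
```

C1 continuity between two patches holds when each control point on the shared edge is collinear with, and equidistant from, its two neighboring control points across the seam.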
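Particle effect: a fixed-size pool avoids per-frame allocation, which is one way to satisfy the no-leaks, no-slowdown requirement. The emitter behavior and constants below are arbitrary illustration values.

```cpp
#include <glm/glm.hpp>
#include <cstdlib>
#include <vector>

struct Particle {
    glm::vec3 position{0.0f}, velocity{0.0f};
    float life = 0.0f;   // seconds remaining; <= 0 means dead
};

// Dead particles are recycled in place instead of re-allocated.
class ParticleSystem {
    std::vector<Particle> pool;
public:
    explicit ParticleSystem(size_t count) : pool(count) {}

    void update(float dt, const glm::vec3& emitter) {
        for (auto& p : pool) {
            if (p.life <= 0.0f) {  // respawn with a random upward velocity
                p.position = emitter;
                p.velocity = glm::vec3(std::rand() % 100 - 50, 100,
                                       std::rand() % 100 - 50) * 0.01f;
                p.life = 2.0f;
            } else {
                p.velocity += glm::vec3(0.0f, -9.8f, 0.0f) * dt;  // gravity
                p.position += p.velocity * dt;
                p.life -= dt;
            }
        }
    }
};
```

Drawing all particles with a single instanced draw call (glDrawArraysInstanced) helps keep 200+ particles from slowing the application down.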
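Collision detection with bounding volumes: the brute-force bounding-sphere test mentioned in the table can be as simple as the sketch below; the colliding flag is what would drive the debug wireframe color (red vs. white).

```cpp
#include <glm/glm.hpp>
#include <vector>

struct BoundingSphere {
    glm::vec3 center{0.0f};
    float radius = 0.0f;
    bool colliding = false;   // debug color: red if true, white if not
};

// Brute-force O(n^2) pairwise sphere test; compares squared distances
// to avoid the square root.
void detectCollisions(std::vector<BoundingSphere>& spheres)
{
    for (auto& s : spheres) s.colliding = false;
    for (size_t i = 0; i < spheres.size(); ++i)
        for (size_t j = i + 1; j < spheres.size(); ++j) {
            glm::vec3 d = spheres[i].center - spheres[j].center;
            float r = spheres[i].radius + spheres[j].radius;
            if (glm::dot(d, d) <= r * r)
                spheres[i].colliding = spheres[j].colliding = true;
        }
}
```

Sweep and Prune replaces the inner loop with a sort of interval endpoints along one axis, so only objects whose intervals overlap are tested pairwise.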
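Procedurally generated terrain: the sketch below is one possible diamond-square implementation on a (2^n + 1)-sized grid. The corner heights are left at zero and the roughness falloff of 0.5 per level is an arbitrary choice.

```cpp
#include <cstdlib>
#include <vector>

// Diamond-square heightfield on a (2^n + 1) x (2^n + 1) grid.
std::vector<std::vector<float>> diamondSquare(int n, float roughness, unsigned seed)
{
    std::srand(seed);
    const int size = (1 << n) + 1;
    std::vector<std::vector<float>> h(size, std::vector<float>(size, 0.0f));
    auto rnd = [](float a) { return a * (2.0f * std::rand() / RAND_MAX - 1.0f); };

    float amp = roughness;
    for (int step = size - 1; step > 1; step /= 2, amp *= 0.5f) {
        int half = step / 2;
        // Diamond step: each square's center gets the average of its 4 corners.
        for (int y = half; y < size; y += step)
            for (int x = half; x < size; x += step)
                h[y][x] = (h[y - half][x - half] + h[y - half][x + half] +
                           h[y + half][x - half] + h[y + half][x + half]) / 4.0f + rnd(amp);
        // Square step: each edge midpoint gets the average of its (up to 4) neighbors.
        for (int y = 0; y < size; y += half)
            for (int x = ((y / half) % 2 == 0) ? half : 0; x < size; x += step) {
                float sum = 0.0f; int cnt = 0;
                if (y >= half)       { sum += h[y - half][x]; ++cnt; }
                if (y + half < size) { sum += h[y + half][x]; ++cnt; }
                if (x >= half)       { sum += h[y][x - half]; ++cnt; }
                if (x + half < size) { sum += h[y][x + half]; ++cnt; }
                h[y][x] = sum / cnt + rnd(amp);
            }
    }
    return h;
}
```

Per-vertex normals can then be computed from the cross product of the two grid tangents at each vertex, which satisfies the correct-normals requirement.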
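Procedurally generated plants with L-systems: the string-rewriting core is small; the rules shown in the comment are a common textbook example, not a requirement. The expanded string is then interpreted by a turtle, where + and - rotate around one axis and & and ^ rotate around another to make the plant 3D.

```cpp
#include <map>
#include <string>

// Expand an L-system axiom for a fixed number of iterations.
// Example (hypothetical) rules: F -> FF,  X -> F[+X]F[-X]+X
std::string expandLSystem(std::string axiom,
                          const std::map<char, std::string>& rules,
                          int iterations)
{
    for (int it = 0; it < iterations; ++it) {
        std::string next;
        for (char c : axiom) {
            auto r = rules.find(c);
            next += (r != rules.end()) ? r->second : std::string(1, c);
        }
        axiom = next;
    }
    return axiom;
}
```

Picking among several right-hand sides per symbol with a pseudorandom choice gives the varied trees the table asks for.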