Project5F17
Final Project
The final homework assignment has to be done in teams of two or three people. Each team will need to implement an interactive software application which uses some of the advanced rendering effects or modeling techniques that were discussed in class but not covered in previous homework projects. We will evaluate your project based on technical and creative merits.
The final project has to be presented to the entire class during our final exam slot by means of a one-minute-long video. It will be graded by the instructor, the TA and the tutors. Late submissions will not be permitted. You are welcome to bring guests to the presentation.
Note: If you are not going to be able to work as a member of a team, you can ask for permission from the instructor to do a solo project. This will only be permitted in special cases and will have to be requested of the instructor by visiting his office hours or by email.
Update: Here is the link to the playlist of all videos shown on final exam day.
Grading
Your final project score consists of three parts:
- Blog (10 points)
- Presentation (90 points)
- Extra Credit (10 points)
Blog (10 Points)
You need to create a blog to report on the progress you are making on your project. You need to make at least two blog entries to get the full score. The first is due on Monday, Dec 4th at 11:59pm; the second is due on Monday, Dec 11th at 11:59pm. By noon on Dec 13th you need to create a YouTube video of your application. This video will be shown during the first hour of the final presentation event and is the primary basis for your grade.
The first blog entry needs to contain (at least) the following information:
- The name of your project.
- The names of the team members.
- A short description of your project.
- The technical features you are implementing.
- What you are planning on spending your creative efforts on.
- At least one screenshot of your project.
In blog entry 2 you need to update all of the above categories and again upload at least one screenshot.
The video should be about 1 minute long (maximum: 1:20 min). You should use screen recording software, such as the Open Broadcaster Software, which is available free of charge for Windows and Mac. On a Mac you can also use QuickTime, which is free of charge as well. There does not need to be an audio track, but you are welcome to talk over the video or add sound or music. Upload the video to YouTube and link to it from your blog. The use of video editing software is optional, but you need to have a title slide with the name of your project and the team members' names. This can be done within YouTube itself with text embedding.
You are free to create the blog on any web based blog site, such as Blogger or WordPress. You should use the same blog URL each time and just add blog entries. If you need to move to a different blog server, please move your entire blog content over and let us know its new URL.
The points are distributed like this:
- Blog entry #1: 4 points
- Blog entry #2: 3 points
- Video: 3 points
Once you have created your blog, please tell us its URL in the way described on Piazza.
Presentation (90 Points)
80% of the score for the presentation is for the technical features, 20% for the creative quality of your demonstration. The grading will be based solely on your presentation. What you don't show us won't score points.
To obtain the full score for the technical quality, each team must implement at least three skill points' worth of technical features per team member. For example, a team of two must cover 2x3=6 skill points. Also, each team must implement at least one medium or hard feature for each member of the team. Example: a team of two has the following options to cover the required 6 skill points:
- One easy (1 point), one medium (2 points), one hard (3 points) feature.
- Two easy and two medium features.
- Two hard features.
- Three medium features.
If your team implements more than the required amount of skill points, the additional features can fill in for points which you might lose for incomplete or insufficiently implemented features.
Your score for creativity is going to be determined by averaging the instructor's, TA's and tutors' subjective scores. We will look for a cohesive theme and story, but also for things such as aesthetic visuals, accomplished by carefully selected textures, colors and materials, placement of camera and lights, effective user interaction, and fluid rendering.
Here is the list of technical features you can choose from:
Easy: (1 skill point)
- Per-pixel illumination of texture-mapped polygons
- Toon shading
- Bump mapping
- Glow, bloom or halo effect
- Particle effect
- Procedurally modeled buildings (no shape grammar)
- Terrain map loading
- Linear fog (see the shader sketch after this list)
- Rim Shading
- Sound effects
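For illustration, here is a minimal sketch of the linear fog feature: a GLSL fragment shader embedded in a C++ raw string that reproduces the deprecated glFog GL_LINEAR behavior. The uniform and varying names (fogStart, fogEnd, fogColor, viewPos, litColor) are illustrative choices, not part of the assignment.

```cpp
// Linear fog sketch: GLSL fragment shader in a C++ raw string.
// Names of uniforms and varyings are illustrative.
const char* fogFragmentShader = R"(
#version 330 core
in vec3 viewPos;        // fragment position in eye space
in vec4 litColor;       // color after lighting, from the vertex shader
uniform vec3 fogColor;  // usually matches the clear color
uniform float fogStart; // distance where fog begins
uniform float fogEnd;   // distance where fog fully covers the fragment
out vec4 fragColor;

void main() {
    float dist = length(viewPos);
    // Classic GL_LINEAR factor: 1 at fogStart, 0 at fogEnd.
    float f = clamp((fogEnd - dist) / (fogEnd - fogStart), 0.0, 1.0);
    fragColor = vec4(mix(fogColor, litColor.rgb, f), litColor.a);
}
)";
```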
Medium: (2 skill points)
- Surface made with Bezier patches (e.g., flag, water surface, etc.; see the evaluation sketch after this list)
- Collision detection with tight bounding boxes (only counts if objects aren't already round or rectangular)
- Procedurally modeled city
- Procedurally generated terrain (not just loading a terrain from a file)
- Procedurally generated plants with L-systems
- Procedurally modeled buildings with shape grammar
- Water effect with waves, reflection and refraction
- Shadow mapping
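For the Bezier patch feature, the following minimal sketch evaluates one bicubic patch from its 4x4 control grid with the Bernstein basis; Vec3 and the function names are illustrative stand-ins for whatever vector type you already use (e.g., glm::vec3). Evaluate the patch on a grid of (u, v) values to produce triangles.

```cpp
// Bicubic Bezier patch evaluation sketch. Vec3 is a stand-in for your
// vector type (e.g., glm::vec3).
struct Vec3 { float x, y, z; };
static Vec3 operator+(Vec3 a, Vec3 b) { return {a.x + b.x, a.y + b.y, a.z + b.z}; }
static Vec3 operator*(float s, Vec3 v) { return {s * v.x, s * v.y, s * v.z}; }

// Cubic Bernstein basis B0..B3 at parameter t in [0,1].
static void bernstein(float t, float b[4]) {
    float u = 1.0f - t;
    b[0] = u * u * u;
    b[1] = 3.0f * u * u * t;
    b[2] = 3.0f * u * t * t;
    b[3] = t * t * t;
}

// Point on the patch at (u, v), where P is the 4x4 control point grid.
Vec3 evalPatch(const Vec3 P[4][4], float u, float v) {
    float bu[4], bv[4];
    bernstein(u, bu);
    bernstein(v, bv);
    Vec3 p = {0.0f, 0.0f, 0.0f};
    for (int i = 0; i < 4; ++i)
        for (int j = 0; j < 4; ++j)
            p = p + (bu[i] * bv[j]) * P[i][j];
    return p;
}
```

For the required C1 continuity, neighboring patches must share their boundary row of control points, and each shared boundary point must be the midpoint of the two interior control points adjacent to it across the seam.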
Hard: (3 skill points)
- Displacement mapping (see the vertex-shader sketch after this list)
- Screen space post-processed lights
- Screen space ambient occlusion (SSAO)
- Collision detection with arbitrary geometry
- Procedurally modeled complex objects with shape grammar
- Motion blur (tutorial link)
- Depth of Field
- Shadow Volumes
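As a minimal sketch of the displacement mapping idea (in contrast to bump mapping, the geometry actually moves): the vertex shader below offsets each vertex along its normal by a height sampled from a greyscale map. All uniform names and the GLSL version are illustrative assumptions.

```cpp
// Displacement mapping sketch: GLSL vertex shader in a C++ raw string.
const char* displacementVertexShader = R"(
#version 330 core
layout(location = 0) in vec3 position;
layout(location = 1) in vec3 normal;
layout(location = 2) in vec2 texCoord;
uniform sampler2D heightMap;     // greyscale height map (also show it on screen)
uniform float displacementScale; // world-space units per unit of height
uniform mat4 model, view, projection;

void main() {
    float h = texture(heightMap, texCoord).r;  // height in 0..1
    vec3 displaced = position + normal * (h * displacementScale);
    gl_Position = projection * view * model * vec4(displaced, 1.0);
}
)";
```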
Extra Hard: (4 skill points)
- Screen space directional occlusion (SSDO) with color bounce
For a full score, each of these technical features must fulfill the more detailed requirements listed at the bottom of this page.
All technical features have to have a toggle switch (keyboard key) with which they can be enabled or disabled. For procedural algorithms, a key must also recalculate the procedural objects with a different seed value for the random number generator and update the geometry in real time.
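A minimal GLFW sketch of such key handling is shown below. The feature flag, the rebuild function, and the key choices are illustrative, not prescribed.

```cpp
// Sketch: toggling a feature and re-seeding procedural content via GLFW keys.
#include <cstdlib>   // srand
#include <ctime>     // time
#include <GLFW/glfw3.h>

static bool gFogEnabled = true;  // illustrative feature flag

void rebuildProceduralGeometry(unsigned seed) {
    srand(seed);  // required: seed the PRNG (see the Restrictions section)
    // ... regenerate terrain/city/plants and re-upload vertex buffers ...
}

void keyCallback(GLFWwindow* window, int key, int scancode, int action, int mods) {
    if (action != GLFW_PRESS) return;
    if (key == GLFW_KEY_F) gFogEnabled = !gFogEnabled;                         // toggle a feature
    if (key == GLFW_KEY_R) rebuildProceduralGeometry((unsigned)time(nullptr)); // new seed
}
// During initialization: glfwSetKeyCallback(window, keyCallback);
```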
Additional technical features may be approved by the course staff upon request.
Presentation Day
There will be two parts to the presentation:
- During the first hour of the event we are going to show the YouTube videos to the entire class. The graders will be taking detailed notes.
- During the following two hours you will need to present your projects to the graders science-fair style in the VR lab B210. We are going to split the class into two groups A and B. Group A will present during the first hour, group B during the second hour.
Implementation
If you want to have debugging support from the TA and tutors, you need to implement this project in C++ using OpenGL and GLFW, just like the other homework projects; a minimal skeleton is sketched after the library list below. Otherwise, you are free to use a language of your choice; you can even write an app for a smartphone or tablet, or for a VR system.
Third-party graphics libraries are generally not acceptable unless cleared by the instructor. Exceptions are typically granted for libraries that load images and 3D models, support input devices, or provide audio. Pre-approved libraries are:
- GLee to manage OpenGL extensions
- Assimp for importing OBJs
- SOIL to load a variety of image formats
- SDL, to replace GLFW
- OpenAL for audio support
- XML parsers, such as MiniXML or PugiXML - useful for configuration files
- Physics engines (such as the Bullet Engine), as long as they are not used to obtain points for technical features.
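For reference, a minimal C++/OpenGL/GLFW skeleton in the style of the earlier homework projects might look like the following; the window size and title are placeholders.

```cpp
// Minimal GLFW application skeleton (window setup and render loop).
#include <GLFW/glfw3.h>

int main() {
    if (!glfwInit()) return -1;
    GLFWwindow* window = glfwCreateWindow(800, 600, "Final Project", nullptr, nullptr);
    if (!window) { glfwTerminate(); return -1; }
    glfwMakeContextCurrent(window);
    // glfwSetKeyCallback(window, keyCallback);  // e.g., the toggle sketch above

    while (!glfwWindowShouldClose(window)) {
        glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
        // ... update and draw the scene ...
        glfwSwapBuffers(window);
        glfwPollEvents();
    }
    glfwTerminate();
    return 0;
}
```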
Extra Credit (10 Points Max.)
a) Bounty Points (10 Points)
To motivate you to choose some of the hardest algorithms, you can receive up to 10 extra points for a flawless implementation of one of the following algorithms, together with adequate visual debugging aids. Note that some of these are not directly or sufficiently covered in class, so you will need to research them yourself.
For a full score of 10 points, teams of N people must implement N of these algorithms. In other words, each of the algorithms below earns the team 10/N extra credit points.
- Water effect with reflection of 3D models
- Screen space post-processed lights
- Screen space ambient occlusion (SSAO)
- Screen space directional occlusion (SSDO) with color bounce
- Displacement mapping
- Motion blur (tutorial link)
- Depth of Field
- Shadow Volumes
Any other technical feature (even from the regular feature lists) can also be eligible for bounty points, but requires permission by a course staff member.
b) Virtual Reality (10 Points)
Port your application to the Oculus Rift or HTC Vive. Your application must be controlled with the HTC Vive controllers or the Oculus Touch controllers. You have to use the respective programming SDK and write your code in C++ rather than using Unity/Unreal/etc.
Notes:
- As an example of what we mean by visual debugging aids, have a look at Oleg Utkin's CSE 167 final project video.
Tips
- If you use SketchUp to create OBJ models: SketchUp writes quads, whereas our OBJ reader expects triangles. You can convert the SketchUp file to one with triangles using Blender, a free 3D modeling tool: put the object into edit mode and select Mesh->Faces->Convert Quads to Triangles.
- MeshLab is another excellent and free tool for 3D file format conversion.
- Trimble 3D Warehouse and TurboSquid are great resources for ready-made 3D models that you can export to OBJ files with the technique described above.
Technical Feature Implementation Requirements
Technical Feature | Requirements and Comments |
---|---|
Bezier Patches | Need to make a surface out of at least 4 connected Bezier patches with C1 continuity between all of them. |
Sound Effects | Add a background music track that plays throughout the entire application, along with sound effects for events. The sound effects and background music should not cut each other (in other words, they should be able to play at the same time without forcing one another to pause or restart). You cannot get credit for this if audio sources are sparse, or if sound effects rarely ever play. |
Per-pixel illumination of texture-mapped polygons | This effect can be hard to see without careful placement of the light source(s). You need to support a keyboard key to turn the effect on or off at runtime. The best type of light source to demonstrate the effect is a spot light with a small spot width, to illuminate only part of the texture. |
Toon shading | Needs to consist of both discretized colors and silhouettes. The use of Toon Shading must make sense for the project, and Rim shading must not be used in the same project. |
Rim shading | Add lighting along the rim of objects to further contrast it with the background. The use of Rim Shading must make sense for the project, and Toon shading must not be used in the same project. |
Bump mapping | Needs to use either a height map to derive the normals from, or a normal map directly. The map needs to be shown alongside the rendering window, upon key press. |
Glow or halo effect | Object glow must be obvious and clearly visible (not occluded). Implement functionality to turn off glow upon key press. |
Particle effect | Generate a lot of particles (at least 200), all of which can move and die shortly after appearing. Application should not experience a significant slowdown, and instancing has to be done cleanly (no memory leaks should occur when the particles die). |
Linear fog | Add a fog effect to your application, similar to the now deprecated glFog function, so that objects farther away are fogged more than those that are closer. The fog effect intensifies linearly with distance, so make sure you have objects at various distances to show that. |
Collision detection with bounding boxes | The bounding boxes must be tight enough around the colliding objects to be within 50% of the size of the objects. There must be a debug option which, upon key press, displays the bounding boxes as wireframes and uses different colors for boxes that are colliding (e.g., red) vs. those that are not (e.g., white). An example of an algorithm to implement is the Sweep and Prune algorithm with dimension reduction, as discussed in class. A sketch of the underlying overlap test follows this table. |
Procedurally modeled city | Must include 3 types of buildings; some large landmark features that span multiple streets, such as a park (a flat rectangular piece of land is fine), a lake, or a stadium; and roads that are more complex than a regular square grid (even Manhattan is more complex than a grid!). Creativity points if you can input an actual city's data for the roads! |
Procedurally modeled buildings | Must have 3 non-identical portions (top, middle, bottom OR 1st floor, 2nd floor, roof). Generate at least 3 different buildings with differently shaped windows. |
Procedurally generated terrain | Ability to input a height map: either real (1 point) or generated from different algorithms (2 points). Shader that adds at least 3 different types of terrain (for example: grassland, plains, desert, snow, tundra, sea, rocks, wasteland, etc). |
Procedurally generated plants with L-systems | At least 4 language variables (X, F, +, -) and parameters (length of F, theta of + and -). To make it 3D you can also add another axis of rotation (typical variables are & and ^). At least 3 trees that demonstrate different rules. Pseudorandom execution will make your trees look more varied and pretty. |
Water effect with waves, reflection and refraction | Add shaders to render water, reflecting and refracting the sky box textures as your cubic environment map. In addition, simulate wave animations for the water. To get the bounty points, the water must reflect/refract all 3D objects of your scene, not just the sky box (you need to place multiple 3D objects so that they reflect off the water into the camera). |
Shadow mapping | Upon key press, the shadow map (as a greyscale image) must be shown alongside the rendering window. You should be able to toggle shadows with another key. |
Shadow volumes | In addition to the shadow map, show the wireframe of the shadow volume that was created. Points cannot be stacked with shadow mapping. In other words, you will either get 3 points for having shadow volumes, or 2 points for shadow mapping, but not 5 points for both. |
Procedurally modeled complex objects with shape grammar | At least 5 different shapes must be used, and there must be rules for how they can be combined. It is not acceptable to allow their combination in completely arbitrary ways. You cannot stack this feature's skill points with the procedural modeling techniques listed in the Medium difficulty section. In other words, you will either get 3 points for having shape grammar, or 2 points for not having it. |
Displacement mapping | Using a texture or height map, the position of points on the geometry should be modified. The map must optionally (upon key press) be shown alongside the actual graphics window to demonstrate that normal mapping or bump mapping was not used to achieve a displacement illusion. |
Depth of field | Similar to how cameras work, focus on an object in the scene, and apply a gradual decrease in sharpness to objects that are out of focus. You must be able to change the focal point at runtime, between at least 2 different points. Not to be confused with fog, and fog should not be used in the same project. Make sure you have objects at various distances to show this feature works properly. |
Screen space effects | Screen space lighting effects that use normal and depth maps, such as haze or area lights; screen space rendering effects such as motion blur or reflection (with the ability to turn them on/off); or volumetric light scattering or some other kind of screen space ray tracing effect. |
Screen space ambient occlusion (SSAO) | Demonstrate functionality on an object with a large amount of crevices (note that the obj models given this quarter will not suffice; if you are unsure, just post several models you intend to work with on your blog for feedback). Implement functionality to turn SSAO off upon key press. To qualify for bounty points, upon key press, SSAO map (as a greyscale image) must be shown alongside the rendering window. |
Screen space directional occlusion (SSDO) with color bounce | Demonstrate functionality on multiple objects which possess large amounts of crevices and widely varying colors. Implement functionality to turn SSDO with color bounce off upon key press. To qualify for bounty points, upon key press, SSDO map (colored image) must be shown alongside the rendering window. |
Collision detection with arbitrary geometry | Needs to test for the intersection of faces. Performance should be smooth enough to undoubtedly show that your algorithm is working properly. Support a debug mode to highlight colliding faces (e.g. color the face red when colliding). |
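As referenced in the bounding-box row above, the core of box-based collision detection is the axis-aligned box overlap test, a minimal sketch of which follows; the AABB layout is illustrative.

```cpp
// Axis-aligned bounding box (AABB) overlap test sketch.
struct AABB {
    float min[3];  // smallest x, y, z of the box
    float max[3];  // largest x, y, z of the box
};

// Two AABBs overlap iff their intervals overlap on all three axes.
bool intersects(const AABB& a, const AABB& b) {
    for (int axis = 0; axis < 3; ++axis)
        if (a.max[axis] < b.min[axis] || b.max[axis] < a.min[axis])
            return false;  // separated on this axis: no collision
    return true;
}
// For the debug view, draw each box as a wireframe, e.g., red when
// intersects(...) returns true and white otherwise.
```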
Restrictions
A summary of restrictions listed in the above table, along with extra explanations:
- Toon Shading and Rim Shading cannot be used in the same project.
- Adding shape grammar to procedural modeling does not stack with the base requirement. It will only yield 1 extra point to the modeling requirement.
- Shadow volumes do not stack with shadow mapping. You will either get 2 points for mapping or 3 points for volumes, but not 5 points for both (since shadow volumes already have shadow mapping).
- Procedural terrain requires the usage of an algorithm that does not simply read from a terrain map. You can use any algorithm mentioned in class or on the Internet, such as midpoint displacement or diamond-square (a seeded diamond-square sketch follows this list). The generated terrain must look like something potentially habitable, and we will ask you to explain how the terrain was generated.
- The pseudorandom number generator used in the procedural modeling techniques must be seeded (use srand()). For all these requirements, you should be able to change the seed at least once during runtime, so we can see a dynamic reloading of the procedural modeling. We will also ask you to re-run the application to verify that your procedural modeling does not change when using the same seed.
- Do not add fog if depth of field is implemented. The scene you generate with depth of field should look similar to what you would see in a camera.
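Tying the terrain and seeding restrictions together, here is a minimal sketch of seeded diamond-square terrain generation, one of the algorithms named above. The grid size must be 2^n + 1; the roughness parameter and helper names are illustrative choices.

```cpp
// Seeded diamond-square heightfield sketch; size must be 2^n + 1.
#include <cstdlib>
#include <vector>

// Random offset in [-scale, scale], driven entirely by the srand() seed.
static float randOffset(float scale) {
    return scale * (2.0f * rand() / RAND_MAX - 1.0f);
}

std::vector<float> diamondSquare(int size, unsigned seed, float roughness) {
    srand(seed);  // same seed => same terrain, as required
    std::vector<float> h(size * size, 0.0f);
    auto at = [&](int x, int y) -> float& { return h[y * size + x]; };

    // Random corner heights to start from.
    at(0, 0) = randOffset(1.0f);
    at(size - 1, 0) = randOffset(1.0f);
    at(0, size - 1) = randOffset(1.0f);
    at(size - 1, size - 1) = randOffset(1.0f);

    float scale = 1.0f;
    for (int step = size - 1; step > 1; step /= 2, scale *= roughness) {
        int half = step / 2;
        // Diamond step: each square's center = average of its 4 corners + noise.
        for (int y = half; y < size; y += step)
            for (int x = half; x < size; x += step)
                at(x, y) = 0.25f * (at(x - half, y - half) + at(x + half, y - half) +
                                    at(x - half, y + half) + at(x + half, y + half)) +
                           randOffset(scale);
        // Square step: each edge midpoint = average of its 2-4 neighbors + noise.
        for (int y = 0; y < size; y += half)
            for (int x = ((y / half) % 2 == 0) ? half : 0; x < size; x += step) {
                float sum = 0.0f; int n = 0;
                if (x >= half)       { sum += at(x - half, y); ++n; }
                if (x + half < size) { sum += at(x + half, y); ++n; }
                if (y >= half)       { sum += at(x, y - half); ++n; }
                if (y + half < size) { sum += at(x, y + half); ++n; }
                at(x, y) = sum / n + randOffset(scale);
            }
    }
    return h;  // heights to triangulate into the terrain mesh
}
```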