The final homework assignment has to be done in teams of two or three people. Each team will implement an interactive software application that uses advanced rendering effects or modeling techniques which were discussed in class but not covered in previous homework projects. We will evaluate your project based on technical and creative merits.
The final project has to be presented to the entire class during our final exam slot by means of a short video, as well as a demonstration to the course staff. Late submissions will not be permitted.
Note: If you have a reason why you cannot work on this project as part of a team, you can ask the instructor for permission to do it alone.
For inspiration, here is the link to the final project playlist from last year's CSE 167.
Your final project score consists of three parts:
- Blog (10 points)
- Video (5 points)
- Graphics application (85 points)
- Extra Credit (10 points)
Blog (10 Points)
You need to create a blog to report on the progress you are making on your project. You need to make three blog entries, each summarizing one week of your work. The due dates for the blog entries are:
- Blog #1 (4 points): Tuesday, Nov 27th at 11:59pm
- Blog #2 (3 points): Tuesday, Dec 4th at 11:59pm
- Blog #3 (3 points): Tuesday, Dec 11th at 11:59pm
Your first blog entry needs to contain (at a minimum) the following information:
- The name of your project
- The names of the team members
- A short description of what your project is about
- The technical features you are implementing
- What you are spending your creative efforts on
Additionally, each blog entry needs to include:
- Any updates to the above information
- A description of what you worked on during that week
- At least one screenshot of your project
Once you have created your blog, please tell us its URL; we will post a note on Piazza with details on how. You are free to create the blog on any web-based blogging site, such as Blogger or WordPress. Use the same blog URL each time and simply add new entries. If you need to move to a different blog server, please move your entire blog content over and let us know its new URL.
Video (5 Points)
By 3pm on Dec 13th you need to create a YouTube video of your application and add it to the final project playlist. This video will be shown during the first hour of the final presentation event and is the primary basis for your project's grade.
The video should be about 1 minute long (maximum: 1:20 min). You should use screen recording software such as Open Broadcaster Software, which is available free of charge for Windows and Mac; on a Mac you can also use QuickTime, which is free as well. An audio track is not required, but we recommend that you narrate the video or add sound or music. Upload the video to YouTube and add it to the final project playlist we are going to post a link to on Piazza. You should also link to the video from your blog.
The use of video editing software is optional, but you need to have a title slide with the name of your project and the team members' names. This can be done within YouTube itself through its text embedding functionality.
Graphics Application (85 Points)
The majority of your final project score will be for the graphics application you will be developing. We will look at both its technical as well as creative quality.
- 80% of the score for your application will be for the technical features you are implementing
- 20% of the score will be for the creative quality of your application
We will base your grade on what we see in your video, as well as what you show us during your demo to the course staff.
To obtain the full score for technical quality, your team must implement at least three skill points' worth of technical features per team member. For example, a team of two must cover 2x3=6 skill points. In addition, each team must implement at least one medium or hard feature per team member. For example, a team of two has the following options to cover the required 6 skill points:
- One easy (1 point), one medium (2 points), one hard (3 points) feature.
- Two easy and two medium features.
- Three medium features.
- Two hard features.
If your team implements more than the required number of skill points, the additional features can make up for points you might lose for incomplete or insufficiently implemented features.
Your creativity score is determined by averaging the subjective scores given by the instructor, TA, and tutors. We will look for a cohesive theme and story, but also for aesthetic visuals accomplished through carefully selected textures, colors, and materials; placement of camera and lights; correct normals for all your surfaces; effective user interaction; and fluid rendering. We will determine these scores very carefully during a grading meeting on the day following the presentations.
Here is the list of technical features you can choose from:
Easy: (1 skill point)
- Per-pixel illumination of texture-mapped polygons
- Toon shading
- Glow, bloom or halo effect
- Particle effect
- Procedurally modeled buildings (no shape grammar)
- Sound effects
Medium: (2 skill points)
- Bump mapping
- Surface made with at least two C1 continuous Bezier patches (e.g., flag, water surface, etc.)
- Collision detection with tight bounding boxes (only counts if objects aren't already round or rectangular)
- Procedurally modeled city
- Procedurally generated terrain
- Procedurally generated plants with L-systems
- Procedurally modeled buildings with shape grammar
- Water effect with reflection and refraction
- Shadow mapping
Hard: (3 skill points)
- Displacement mapping
- Screen space post-processed lights
- Screen space ambient occlusion (SSAO)
- Collision detection with arbitrary geometry
- Shadow Volumes
For a full score, each of these technical features must fulfill the more detailed requirements listed at the bottom of this page.
All technical features need to have a toggle switch (keyboard key) with which they can be enabled or disabled, or which changes their parameters (e.g., the seed value for a randomizer), to show that they work correctly.
Additional technical features may be approved by the course staff upon request.
There will be two parts to the presentation:
- During the first hour of the event we are going to show the YouTube videos to the entire class. The graders will be taking detailed notes.
- During the following two hours you will need to present your projects to the graders science-fair style in the VR lab B210. We are going to split the class into two groups, A and B. Group A will present during the first hour, group B during the second hour.
If you want debugging support from the TA and tutors, you need to implement this project in C++ using OpenGL and GLFW, just like the other homework projects. Otherwise, you are free to use a language of your choice; you can even write an app for a smartphone, a tablet, or a VR system.
Third party graphics libraries are generally not acceptable, unless cleared by the instructor. Exceptions are typically granted for libraries to load images and 3D models, libraries to support input devices, or support for audio. Pre-approved libraries are:
- GLee to manage OpenGL extensions
- Assimp for importing OBJ files
- SOIL to load a variety of image formats
- SDL, to replace GLFW with a more fully featured windowing and multimedia library
- OpenAL for audio support
- XML parsers, such as MiniXML or PugiXML - useful for configuration files
- Physics engines (such as the Bullet Engine), as long as they are not used to obtain points for technical features.
Extra Credit (10 Points Max.)
a) Advanced Effects (10 Points)
You can receive up to 10 extra points for a flawless implementation of one of the following algorithms, which must include adequate visual debugging aids. None of these effects are covered in class, so you will need to research them yourself.
For a full score of 10 points, teams of N people must implement N of these algorithms. In other words, each of the algorithms below gains the team 10/N extra credit points.
- Water effect with reflection of 3D models
- Screen space post-processed lights
- Screen space ambient occlusion (SSAO)
- Displacement mapping
- Motion blur (tutorial link)
- Depth of Field
Any other technical feature (even from the regular feature lists) can also be eligible for extra credit points, but this requires permission from a course staff member.
b) Virtual Reality (10 Points)
Write your graphics application for the Oculus Rift or HTC Vive. Your application needs to be interactive and support at least one 3D controller. You have to use the respective programming SDK and write your code in C++ rather than using Unity/Unreal/etc. We recommend using the starter code we use in CSE 190. If you want to use a different programming environment you need to get it cleared by the professor. If you choose this extra credit option, the professor will grant you access to the VR lab, room B210, to do your project. Your demonstration on demo day will also happen in this lab.
- As an example for what we mean by visual debugging aids, have a look at Oleg Utkin's CSE 167 final project video.
- If you use Sketchup to create OBJ models: Sketchup writes quads, whereas our OBJ reader expects triangles. You can convert the Sketchup file to one with triangles using Blender, a free 3D modeling tool: put the object into edit mode and select Mesh->Faces->Convert Quads to Triangles.
- MeshLab is another excellent and free tool for 3D file format conversion.
- Trimble 3D Warehouse and Turbosquid are great resources for ready-made 3D models, which you can export to OBJ files using the technique described above.
Technical Feature Implementation Requirements
|Technical Feature|Requirements and Comments|
|---|---|
|Bezier patches|Need to make a surface out of at least 4 connected Bezier patches with C1 continuity between all of them.|
|Sound effects|Add a background music track that plays throughout the entire application, along with sound effects for events. The sound effects and background music must not cut each other off (in other words, they must be able to play at the same time without forcing one another to pause or restart). You cannot get credit for this if audio sources are sparse, or if sound effects rarely ever play.|
|Per-pixel illumination of texture-mapped polygons|This effect can be hard to see without careful placement of the light source(s). You need to support a keyboard key to turn the effect on or off at runtime. The best type of light source to demonstrate the effect is a spot light with a small spot width, so that only part of the texture is illuminated.|
|Toon shading|Needs to consist of both discretized colors and silhouettes. The use of toon shading must make sense for the project, and rim shading must not be used in the same project.|
|Rim shading|Add lighting along the rim of objects to further contrast them with the background. The use of rim shading must make sense for the project, and toon shading must not be used in the same project.|
|Bump mapping|Needs to use either a height map to derive the normals from, or a normal map directly. The map needs to be shown alongside the rendering window upon key press.|
|Glow or halo effect|Object glow must be obvious and clearly visible (not occluded). Implement functionality to turn off the glow upon key press.|
|Particle effect|Generate a large number of particles (at least 200), all of which can move and die shortly after appearing. The application should not experience a significant slowdown, and instancing has to be done cleanly (no memory leaks should occur when the particles die).|
|Linear fog|Add a fog effect to your application, similar to the now-deprecated glFog function, so that objects farther away are fogged more heavily than closer ones. The fog effect intensifies linearly with distance, so make sure you have objects at various distances to show that.|
|Collision detection with bounding boxes|The bounding boxes must be tight enough around the colliding objects to be within 50% of the size of the objects. There must be a debug option which, upon key press, displays the bounding boxes as wireframe boxes, using different colors for boxes which are colliding (e.g., red) vs. those that aren't (e.g., white). An example algorithm to implement is Sweep and Prune with dimension reduction, as discussed in class.|
|Procedurally modeled city|Must include 3 types of buildings; some large landmark features that span multiple streets, such as a park (a flat rectangular piece of land is fine), a lake, or a stadium; and roads that are more complex than a regular square grid (even Manhattan is more complex than a grid!). Creativity points if you can input an actual city's data for the roads!|
|Procedurally modeled buildings|Must have 3 non-identical portions (top, middle, bottom OR 1st floor, 2nd floor, roof). Generate at least 3 different buildings with differently shaped windows.|
|Procedurally generated terrain|Ability to input a height map: either real (1 point) or generated from different algorithms (2 points). Shader that adds at least 3 different types of terrain (for example: grassland, plains, desert, snow, tundra, sea, rocks, wasteland, etc.).|
|Procedurally generated plants with L-systems|At least 4 language variables (X, F, +, -) and parameters (length of F, theta of + and -). To make it 3D you can also add another axis of rotation (typical variables are & and ^). At least 3 trees that demonstrate different rules. Pseudorandom execution will make your trees look more varied and pretty.|
|Water effect with waves, reflection and refraction|Add shaders to render water, reflecting and refracting the sky box textures as your cubic environment map. In addition, simulate wave animations for the water. To get the extra credit, the water must reflect/refract all 3D objects of your scene, not just the sky box (place multiple 3D objects so that they reflect off the water into the camera).|
|Shadow mapping|Upon key press, the shadow map (as a greyscale image) must be shown alongside the rendering window. You should be able to toggle shadows with another key.|
|Shadow volumes|In addition to the shadow map, show the wireframe of the shadow volume that was created. Points cannot be stacked with shadow mapping: you will either get 3 points for shadow volumes or 2 points for shadow mapping, but not 5 points for both.|
|Procedurally modeled complex objects with shape grammar|At least 5 different shapes must be used, and there must be rules for how they can be combined; it is not acceptable to allow their combination in completely arbitrary ways. You cannot stack this feature's skill points with the procedural modeling techniques listed in the Medium difficulty section: you will either get 3 points with shape grammar or 2 points without it.|
|Displacement mapping|Using a texture or height map, the positions of points on the geometry must be modified. The map must optionally (upon key press) be shown alongside the actual graphics window, to demonstrate that normal mapping or bump mapping was not used to achieve a displacement illusion.|
|Depth of field|Similar to how cameras work: focus on an object in the scene and apply a gradual decrease in sharpness to objects that are out of focus. You must be able to change the focal point at runtime, between at least 2 different points. Not to be confused with fog; fog must not be used in the same project. Make sure you have objects at various distances to show that this feature works properly.|
|Screen space effects|Screen space lighting effects using normal maps and depth maps, such as haze or area lights. Screen space rendering effects such as motion blur or reflection (with the ability to turn them on/off). Volumetric light scattering or some kind of screen space ray tracing effect.|
|Screen space ambient occlusion (SSAO)|Demonstrate functionality on an object with a large number of crevices (note that the OBJ models given this quarter will not suffice; if you are unsure, post the models you intend to work with on your blog for feedback). Implement functionality to turn SSAO off upon key press. To qualify for extra credit, the SSAO map (as a greyscale image) must be shown alongside the rendering window upon key press.|
|Screen space directional occlusion (SSDO) with color bounce|Demonstrate functionality on multiple objects which possess large numbers of crevices and widely varying colors. Implement functionality to turn SSDO with color bounce off upon key press. To qualify for extra credit, there must be an option to show the SSDO map (colored image) alongside the rendering window.|
|Collision detection with arbitrary geometry|Needs to test for the intersection of faces. Performance should be smooth enough to clearly show that your algorithm is working properly. Support a debug mode to highlight colliding faces (e.g., color a face red when it is colliding).|
A summary of restrictions listed in the above table, along with extra explanations:
- Toon Shading and Rim Shading cannot be used in the same project.
- Adding shape grammar to procedural modeling does not stack with the base requirement; it only yields 1 extra point on top of the modeling requirement.
- Shadow volumes do not stack with shadow mapping. You will either get 2 points for mapping or 3 points for volumes, but not 5 points for both (since shadow volumes already have shadow mapping).
- Procedural terrain requires the usage of an algorithm that does not simply read from a terrain map. You can use any algorithm mentioned in class or on the Internet, such as midpoint displacement, diamond-square, etc. The generated terrain must look like something potentially habitable, and we will ask you to explain how the terrain was generated.
- The pseudorandom number generator used in the procedural modeling techniques must be seeded (use srand()). For all of these requirements, you should be able to change the seed at least once at runtime, so we can see the procedural model regenerate dynamically. We will also ask you to re-run the application to verify that your procedural output does not change when the same seed is used.
- Do not add fog if depth of field is implemented. The scene you generate with depth of field should look similar to what you would see in a camera.