=Project 3: Textures, Scene Graphs and Culling=
'''This page is currently under construction. It will be completed on Oct 20. Feel free to start working on it at any time.'''

In this project you will need to implement a scene graph to render an army of animated robots with textures on them.

The total score for this project is 100 points. Additionally, you can obtain up to 10 points of extra credit.
==1. Applying a Texture (25 Points)==
The first step of the project is to texture a robot component so that you can later use it as part of your robot.

Start with code that includes your trackball implementation, and modify it so that the trackball rotations control the camera instead. (If you didn't get the trackball to work, keyboard controls to rotate the camera will suffice.)
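If you are not sure how to hook the trackball up to the camera, here is one possible approach as a rough sketch (not the required solution): accumulate the trackball rotation in a matrix and use it to orbit the camera around its look-at point. The names below (<code>trackballRotation</code>, <code>rotateCamera</code>, <code>getViewMatrix</code>) are placeholders, and the sketch assumes your trackball code already computes a rotation axis and angle from the mouse movement.
<pre>
// Sketch only: orbit the camera with trackball rotations.
// Assumes #include <glm/glm.hpp> and <glm/gtc/matrix_transform.hpp>.
glm::mat4 trackballRotation(1.0f);   // accumulated rotation from all mouse drags

// Call this from your cursor callback with the axis/angle your trackball math produces:
void rotateCamera(const glm::vec3& axis, float angleRadians)
{
    trackballRotation = glm::rotate(glm::mat4(1.0f), angleRadians, axis) * trackballRotation;
}

// Build the view matrix by rotating the eye position (and up vector) around the look-at point:
glm::mat4 getViewMatrix(const glm::vec3& eye, const glm::vec3& center, const glm::vec3& up)
{
    glm::vec3 rotatedEye = glm::vec3(trackballRotation * glm::vec4(eye - center, 0.0f)) + center;
    glm::vec3 rotatedUp  = glm::vec3(trackballRotation * glm::vec4(up, 0.0f));
    return glm::lookAt(rotatedEye, center, rotatedUp);
}
</pre>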
Thanks to our tutor Weichen and former tutor Yining, you have the following robot parts to choose from: head, body, limb, eye, antenna. You will find the OBJ files in [[Media:Robot-parts-2018.zip |this ZIP file]]. Each vertex has not only a 3D coordinate and a normal associated with it, but also a texture coordinate. This allows you to map textures onto the surfaces of the robot. Note that unlike the previous OBJ files in this course, each face has different indices for v/vt/vn, so you will need to update your parser accordingly when you add texture support. One way to deal with the differing indices is to re-order (and duplicate) the v/vt/vn data while parsing so that the indices align. The following code fragment from "OpenGLInsights" might be useful:
<pre>
// For each triangle
for( unsigned int v=0; v<vertexIndices.size(); v+=3 )
{
    // For each vertex of the triangle
    for ( unsigned int i=0; i<3; i+=1 )
    {
        // OBJ indices are 1-based, hence the -1 when looking up the data
        unsigned int vertexIndex = vertexIndices[v+i];
        glm::vec3 vertex = temp_vertices[ vertexIndex-1 ];

        unsigned int uvIndex = uvIndices[v+i];
        glm::vec2 uv = temp_uvs[ uvIndex-1 ];

        unsigned int normalIndex = normalIndices[v+i];
        glm::vec3 normal = temp_normals[ normalIndex-1 ];

        // Store the data in aligned order so that position/uv/normal share one index
        out_vertices.push_back(vertex);
        out_uvs.push_back(uv);
        out_normals.push_back(normal);
    }
}
</pre>
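Once the three arrays are aligned, the texture coordinates also need to reach your shaders as an additional vertex attribute. Below is a minimal, non-authoritative sketch of how that could look with one VAO and three VBOs; the attribute locations (0, 1, 2) and variable names are assumptions and must match your own shader and rendering code.
<pre>
// Sketch: upload positions, normals and texture coordinates as separate VBOs.
// Assumes out_vertices, out_uvs and out_normals were filled as shown above.
GLuint vao, vbos[3];
glGenVertexArrays(1, &vao);
glGenBuffers(3, vbos);
glBindVertexArray(vao);

// Attribute 0: positions
glBindBuffer(GL_ARRAY_BUFFER, vbos[0]);
glBufferData(GL_ARRAY_BUFFER, out_vertices.size() * sizeof(glm::vec3), out_vertices.data(), GL_STATIC_DRAW);
glEnableVertexAttribArray(0);
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 0, (void*)0);

// Attribute 1: normals
glBindBuffer(GL_ARRAY_BUFFER, vbos[1]);
glBufferData(GL_ARRAY_BUFFER, out_normals.size() * sizeof(glm::vec3), out_normals.data(), GL_STATIC_DRAW);
glEnableVertexAttribArray(1);
glVertexAttribPointer(1, 3, GL_FLOAT, GL_FALSE, 0, (void*)0);

// Attribute 2: texture coordinates (new for this project)
glBindBuffer(GL_ARRAY_BUFFER, vbos[2]);
glBufferData(GL_ARRAY_BUFFER, out_uvs.size() * sizeof(glm::vec2), out_uvs.data(), GL_STATIC_DRAW);
glEnableVertexAttribArray(2);
glVertexAttribPointer(2, 2, GL_FLOAT, GL_FALSE, 0, (void*)0);

glBindVertexArray(0);
</pre>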
Choose '''one''' of the robot parts to apply a texture to. Best candidates are the body and the head.

Choose a texture image for the robot part. You can use any non-offensive image you find on the internet (through an image search, for example), or use a picture from your own collection. It is best to trim and resize it to roughly 512x512 pixels.

Load the image into your C++ code. We provide [[Media:project3s16-texture.cpp | sample code]] which loads a PPM image file and uses it as a texture for a quad. If you decide to use an image in a format other than PPM (e.g., JPEG), you need to convert it to PPM first. The free image processing tool [http://www.irfanview.com IrfanView] for Windows will do this for you. Alternatively, you can use a third-party library such as [http://lonesock.net/soil.html SOIL] to natively load JPEG images or other image formats.
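If you write your own loader instead of adapting the sample code, the following is a rough sketch of turning the loaded pixel data into an OpenGL texture. It assumes a hypothetical <code>loadPPM</code> helper that returns a tightly packed RGB buffer plus its width and height, and it uses the <code>GL_TEXTURE_2D</code> target as an assumption; if your code binds a different target (such as the cube map target used in the settings below), use that target consistently instead.
<pre>
// Sketch: create an OpenGL texture from RGB pixel data returned by a
// (hypothetical) PPM loader. Error handling is omitted.
int width = 0, height = 0;
unsigned char* pixels = loadPPM("robot_texture.ppm", width, height);  // placeholder helper and file name

GLuint textureID;
glGenTextures(1, &textureID);
glBindTexture(GL_TEXTURE_2D, textureID);

// Upload the image; PPM data is 3 bytes per pixel (RGB), hence GL_RGB.
glPixelStorei(GL_UNPACK_ALIGNMENT, 1);   // rows are tightly packed
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, width, height, 0,
             GL_RGB, GL_UNSIGNED_BYTE, pixels);

// ...then set the filtering/wrapping parameters shown below, and bind the
// texture before drawing the textured robot part.
</pre>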
Use the following settings for your texture after your first <code>glBindTexture(GL_TEXTURE_CUBE_MAP, id)</code> call to get correct lighting and filtering:
<pre>
// Use bilinear interpolation for higher image quality:
glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_MAG_FILTER, GL_LINEAR);

// Use clamp to edge to avoid repeating the texture:
glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
</pre>
==2. Scene Graph Engine (20 Points)==
Now that we have the scene graph classes, it is time to put them to work and build a robot with them.
Build your own robot using the <tt>addChild</tt> methods. Use at least 3 different types of parts for your robot (e.g., body, head and limb). In total, your robot needs to consist of at least 4 parts, 3 of which need to move independently of one another and be connected to the 4th part. At least one of the parts needs to have a texture on it. The texture can be any image you can find on the internet. If you need to resize your image, we recommend the free [https://www.irfanview.com IrfanView] if you are using Windows. (15 points)
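As a purely illustrative, non-authoritative sketch of what such an assembly could look like: the class and method names below (<code>Node</code>, <code>Transform</code>, <code>Geometry</code>, <code>addChild</code>, <code>draw</code>) as well as the file names and offsets are assumptions that need to be adapted to your own scene graph design from section 2.
<pre>
// Sketch only: a minimal scene graph and one possible robot assembly.
#include <vector>
#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>

class Node {
public:
    virtual ~Node() {}
    virtual void draw(const glm::mat4& C) = 0;   // C = accumulated parent transform
};

class Transform : public Node {
public:
    Transform(const glm::mat4& M) : M(M) {}
    void addChild(Node* child) { children.push_back(child); }
    void draw(const glm::mat4& C) override {
        glm::mat4 C_new = C * M;                 // multiply the parent's matrix on the left
        for (Node* child : children) child->draw(C_new);
    }
    glm::mat4 M;                                 // animate a part by changing this each frame
private:
    std::vector<Node*> children;
};

class Geometry : public Node {
public:
    Geometry(const char* objFile) { /* parse the OBJ, create VAO/VBOs, load the texture if any */ }
    void draw(const glm::mat4& C) override { /* send C as the model matrix, then issue the draw call */ }
};

// Assemble a robot from a body, a head and two limbs (file names are placeholders):
Transform* buildRobot()
{
    Geometry* body = new Geometry("body.obj");
    Geometry* head = new Geometry("head.obj");
    Geometry* limb = new Geometry("limb.obj");

    Transform* robot = new Transform(glm::mat4(1.0f));

    Transform* body2robot = new Transform(glm::mat4(1.0f));
    body2robot->addChild(body);
    robot->addChild(body2robot);

    // The offsets below are guesses; tune them to the actual part sizes.
    Transform* head2robot = new Transform(glm::translate(glm::mat4(1.0f), glm::vec3(0.0f, 1.2f, 0.0f)));
    head2robot->addChild(head);
    robot->addChild(head2robot);

    Transform* leftLimb2robot  = new Transform(glm::translate(glm::mat4(1.0f), glm::vec3(-1.0f, 0.0f, 0.0f)));
    Transform* rightLimb2robot = new Transform(glm::translate(glm::mat4(1.0f), glm::vec3( 1.0f, 0.0f, 0.0f)));
    leftLimb2robot->addChild(limb);              // geometry can be shared between transform nodes
    rightLimb2robot->addChild(limb);
    robot->addChild(leftLimb2robot);
    robot->addChild(rightLimb2robot);

    // Animate by updating head2robot->M, leftLimb2robot->M and rightLimb2robot->M every frame,
    // then render the whole robot with robot->draw(glm::mat4(1.0f)).
    return robot;
}
</pre>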