Discussion3S16

Overview

Week 3 discussion recap (04/11/16)

Slides: download here

Object Centering

We can't expect our objects to be centered when we first parse our .OBJ files. The bear .OBJ we parsed for Project 1 is a good example. What can we do if we want to center our objects? How might we go about this?

Well, the first job would be to find the center of our object, right? Let's go through how we might accomplish that.

The Koan of Finding Your Center

This is going to be a slight modification to our parsing code so that we can find where the minimum point and the maximum point lie in our .OBJ file.

  • Loop and parse every vertex in the .OBJ file.
  • Loop through the vertices to find the minX, minY, minZ and maxX, maxY, maxZ.
    • Hint: You would want to use infinity and negative infinity in some way.
  • Find the average of the min and max on each axis: avgX, avgY, avgZ
  • Loop through all vertices and subtract avgX, avgY, avgZ from all coordinates.

Try and draw a simple non-centered 2d rectangle to visualize what this would look like. Do you see how this would center our object?

Here we looped through the list of vertices three times to center our object. Can we do better and reduce the number of iterations?
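
Here's a minimal sketch of that idea in C++ (the vector of parsed vertices and the function name are assumptions, not necessarily the starter code's), folding the min/max search into a single pass over the vertices:

    #include <vector>
    #include <limits>
    #include <glm/glm.hpp>

    // Shift every vertex so the center of the bounding box sits at the origin.
    void centerVertices(std::vector<glm::vec3>& vertices)
    {
        glm::vec3 minV( std::numeric_limits<float>::infinity());
        glm::vec3 maxV(-std::numeric_limits<float>::infinity());

        // One pass to find the bounding box (component-wise min/max).
        for (const glm::vec3& v : vertices) {
            minV = glm::min(minV, v);
            maxV = glm::max(maxV, v);
        }

        // The center is the average of the extremes on each axis.
        glm::vec3 center = (minV + maxV) * 0.5f;

        for (glm::vec3& v : vertices)
            v -= center;
    }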

The Zen of Filling The Room

Once we've centered our object, we can scale it. Our goal is to fit the object into a standard cube (a 2x2x2 cube, with all vertices in the range [-1,1]) so that all objects start from the same size.

Here's an idea of how we might go about this.

  • Find the longest dimension among the x, y, and z axes.
  • Loop through every vertex and scale it (divide) by the longest dimension.

Again, try drawing a simple centered 2d non-unit rectangle to visualize what this would look like. Do you see the positions fitting in the range [-1,1]?
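
As a sketch (assuming the bounding box from the centering step is still available; the names are illustrative), the scaling step might look like this:

    #include <algorithm>

    // Scale the (already centered) vertices so the longest axis spans [-1, 1].
    void scaleToUnitCube(std::vector<glm::vec3>& vertices,
                         const glm::vec3& minV, const glm::vec3& maxV)
    {
        glm::vec3 size = maxV - minV;
        float longest  = std::max(size.x, std::max(size.y, size.z));
        float scale    = 2.0f / longest;   // half the longest extent maps to 1

        for (glm::vec3& v : vertices)
            v *= scale;
    }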

Reading Faces

In Project 1, we parsed vertices and normals. What other lines were there in .OBJ files? If you answered lines that start with f, or faces, you're correct! (The section title hopefully gave it away.) The face lines are formatted as described in Week 1's discussion, which shows how to parse and interpret them.

In this discussion, we wanted to cover why there are face lines to begin with. Let's see how we might specify 6 triangles to make the following hexagon:

Face objs.PNG

Note how, when we don't use faces and instead list each triangle by spelling out its three vertices, we end up duplicating many vertices and using up more lines. This makes our files larger and our parsing slower, since the same vertex data is read and stored multiple times. Face specifications help reduce this: each vertex is listed once and then referenced by index.
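
As a small illustration (not taken from the class files), here is a unit square built from two triangles; the two shared corners are referenced by index instead of being written out twice:

    # Indexed .OBJ style: four vertices, two faces (indices are 1-based)
    v 0.0 0.0 0.0
    v 1.0 0.0 0.0
    v 1.0 1.0 0.0
    v 0.0 1.0 0.0
    f 1 2 3
    f 1 3 4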

Modern OpenGL Pipeline

The demand that applications such as games put on graphics card hardware is enormous. As the amount of geometry and effects developers were loading up increased, graphics hardware manufacturers couldn't simply keep adding fixed functions to the architecture. This called for an entirely new way to organize graphics hardware. First, let's review what we've done so far in the old pipeline.

The Old Ways

Fixed function.PNG

All of Project 1 was done with the old style of OpenGL, in which we used calls such as glMultMatrix, gluLookAt, and gluPerspective. These functions are baked into the driver and hardware and simply do what they were designed to do, hence the name "Fixed-function".

The New and Improved OpenGL

Programmable pipeline.PNG

All of Project 2 will be done on the programmable pipeline. Instead of specifying vertices one by one inside a glBegin(GL_POINTS)/glEnd block, we now load them all into a Vertex Array Object through Vertex Buffer Objects. This addresses the geometry demand by reducing how often we shuttle data back and forth between the CPU and the GPU.

Instead of OpenGL providing us with functions to handle Model transformations (the toWorld matrix), View transformations (the C_inverse matrix), and Projection transformations (the P matrix), we are given nothing. In exchange, we now have near-infinite flexibility and can implement any camera or projection effect we like.
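
For example, we can build these three matrices ourselves with GLM (a sketch; the variable names and camera parameters below are illustrative, not necessarily the starter code's):

    #include <glm/glm.hpp>
    #include <glm/gtc/matrix_transform.hpp>

    glm::mat4 toWorld   = glm::mat4(1.0f);                    // model: identity for now
    glm::mat4 C_inverse = glm::lookAt(glm::vec3(0, 0, 20),    // camera position
                                      glm::vec3(0, 0, 0),     // point to look at
                                      glm::vec3(0, 1, 0));    // up vector
    glm::mat4 P = glm::perspective(glm::radians(45.0f),       // vertical field of view
                                   (float)width / height,     // aspect ratio (assumed window size variables)
                                   0.1f, 1000.0f);            // near and far planes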

We haven't dealt with lighting yet, but again, instead of relying on the few default lighting configurations OpenGL provides, we have the flexibility to implement all kinds of custom lighting effects, even ray tracing, on the hardware.

Below is another image showing just how much of OpenGL's fixed-function pipeline has become manual, but also programmable, in this brave new world.

Pipeline reduction.PNG

OpenGL Buffer Objects

So how do we use this new fancy geometry specification?

Well, first, let's look at the terminology.

Some Definitions

  • Vertex Array Object (VAO): The Vertex Array Object ties together all of the information about our VBO and EBO. It records which buffers are bound and how their data is formatted (the attribute layout), rather than holding the vertex data itself.
  • Vertex Buffer Object (VBO): A Vertex Buffer Object is where the vertex data is actually stored. The starter code stores the position information in this buffer object.
  • Element Buffer Object (EBO): The Element Buffer Object is technically no different from any other buffer object (such as the VBO), but we give it a special name because it is useful almost all the time. Remember the indices that we parsed in the Reading Faces section? The EBO will store those.
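
Here's a minimal sketch of how the three fit together (variable names are illustrative; the starter code's Cube::Cube constructor does something very similar):

    // Inside the object's constructor or setup code, with `vertices` and
    // `indices` assumed to be std::vectors filled by the .OBJ parser.
    GLuint VAO, VBO, EBO;
    glGenVertexArrays(1, &VAO);
    glGenBuffers(1, &VBO);
    glGenBuffers(1, &EBO);

    glBindVertexArray(VAO);

    glBindBuffer(GL_ARRAY_BUFFER, VBO);
    glBufferData(GL_ARRAY_BUFFER, vertices.size() * sizeof(glm::vec3),
                 vertices.data(), GL_STATIC_DRAW);

    glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, EBO);
    glBufferData(GL_ELEMENT_ARRAY_BUFFER, indices.size() * sizeof(unsigned int),
                 indices.data(), GL_STATIC_DRAW);

    // Attribute 0: three floats per vertex (the position).
    glEnableVertexAttribArray(0);
    glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, sizeof(glm::vec3), (GLvoid*)0);

    glBindVertexArray(0);   // unbind; rebind the VAO whenever we draw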

This is all good, but what are we missing? What else did we parse from our .OBJ files?

That's right: normals (and maybe colors)! How can we pass these into our OpenGL buffers?

Packing Many Attributes

A vertex doesn't have just a position. It has a normal! It has a color! It may even have texture coordinates! How do we pass all of these into the graphics card efficiently? We'll dive into a little bit of the art of memory management and packing for this.

Vertex attribute interleaved.png

In the figure above, see how each vertex is a nicely packed set of 8 attributes: X, Y, Z, R, G, B, S, T (the S and T are texture coordinates; you don't have to worry about them yet)? This lets us pass along each vertex in one fell swoop, and byte-aligned to boot! You can construct a C++ struct to format your vertices this way and pass it along to OpenGL as a VBO, much like how we do it in Cube.cpp's Cube::Cube constructor.
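
A sketch of what that struct and the matching attribute setup could look like (the struct layout and attribute locations here are assumptions for illustration, not the starter code's):

    #include <cstddef>    // offsetof
    #include <vector>
    #include <glm/glm.hpp>

    struct Vertex {
        glm::vec3 position;   // X, Y, Z
        glm::vec3 color;      // R, G, B (could also be a normal)
        glm::vec2 texCoord;   // S, T
    };

    // Inside the setup code, with `packed` assumed to be filled from the parsed .OBJ data.
    std::vector<Vertex> packed;
    GLuint VBO;
    glGenBuffers(1, &VBO);
    glBindBuffer(GL_ARRAY_BUFFER, VBO);
    glBufferData(GL_ARRAY_BUFFER, packed.size() * sizeof(Vertex),
                 packed.data(), GL_STATIC_DRAW);

    // Every attribute shares the same stride (sizeof(Vertex)) and gets its own offset.
    glEnableVertexAttribArray(0);
    glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, sizeof(Vertex),
                          (GLvoid*)offsetof(Vertex, position));
    glEnableVertexAttribArray(1);
    glVertexAttribPointer(1, 3, GL_FLOAT, GL_FALSE, sizeof(Vertex),
                          (GLvoid*)offsetof(Vertex, color));
    glEnableVertexAttribArray(2);
    glVertexAttribPointer(2, 2, GL_FLOAT, GL_FALSE, sizeof(Vertex),
                          (GLvoid*)offsetof(Vertex, texCoord));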

More information on how to do this can be found on the Learn OpenGL site (http://learnopengl.com/#!Model-Loading/Mesh).

GLSL Structure

We learned in the pipeline section that the vertex and fragment stages of the pipeline are programmable. How exactly do we achieve this? What language do we use? Well, in the case of OpenGL we use GLSL (the OpenGL Shading Language)! Let's see what each of these shaders does, along with some examples.

Vertex Shader

A vertex shader does the work that our application used to do in Project 1. Its main goal is to transform each vertex into a space that can be drawn from, by applying the Model-View-Projection transformation. In addition to this, color, lighting, and camera-effect calculations will often happen here. It also outputs and passes along any information that would be useful in the fragment shader.

Below is an example of how the vertex shader in the starter code for Project 2 functions.

Vertex shader.PNG
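
In text form, a minimal vertex shader in the same spirit might look like this (the uniform and attribute names are assumptions, not necessarily the starter code's):

    #version 330 core
    layout (location = 0) in vec3 position;

    uniform mat4 projection;   // P
    uniform mat4 view;         // C_inverse
    uniform mat4 model;        // toWorld

    void main()
    {
        // Transform the vertex into clip space with the Model-View-Projection matrices.
        gl_Position = projection * view * model * vec4(position, 1.0);
    }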

Fragment Shader

If the vertex shader manipulates vertices, the fragment shader manipulates fragments. Tautology aside, what is a fragment? Well, it's quite close to a pixel, but not quite.

Fragment vs pixel.png

As you can see above, a fragment is the piece of a pixel that a triangle covers. The pixel-merging stage that combines these fragments to determine a pixel's final color hasn't happened yet, so our fragments aren't finished, grid-aligned pixels yet.

So what operations can be done on these fragments? We can still change the final color of the pixel at this point, meaning we can still apply screen-space lighting effects (such as per-pixel lighting) or z-buffer manipulations. The ultimate job of the fragment shader is to output the final color of the fragment, as we see below in the starter code for Project 2.

Fragment shader.PNG
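
Again in text form, a minimal fragment shader sketch (the output variable name is an assumption):

    #version 330 core
    out vec4 fragColor;

    void main()
    {
        // Output a solid color for now; lighting calculations would go here later.
        fragColor = vec4(1.0, 0.5, 0.2, 1.0);
    }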

Some Coding Tips

Here are some coding tips to help you through the fun-filled journey that is Project 2.

Before we get into them, remember the Coding Tips from Week 2? Many of them still apply, so keep them in mind! Especially creating utility functions, using object construction correctly, and avoiding hairy if-statements.

Update Your Drivers

Now that we're jumping to modern OpenGL, make sure you have the latest drivers that support it. Any reasonably recent machine should support all of these functions, but make sure your drivers are up to date. For Windows machines, this means the graphics driver from your GPU vendor; for OS X machines, your OS version must be 10.9 or higher.

Segmentation Faults? Check Your Object Initializations!

For OpenGL to properly initialize an object, an OpenGL context needs to be created. This happens only after GLFW itself is initialized. This means any object initialization that uses OpenGL, such as the Cube from Cube.cpp, must be done in initialize_objects() at the earliest.

To summarize, take note of these three points:

  • Global pointer declaration is okay.
  • Global pointer initialization is NOT okay.
    • Perform all calls to new in initialize_objects()
  • Calling constructors of objects that perform gl*() function calls before GLFW is initialized will result in a segmentation fault.
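
A small sketch of the pattern these points describe (Cube and initialize_objects follow the starter code's naming; the rest is illustrative):

    Cube* cube;          // global pointer declaration: okay
    // Cube cube;        // global object: NOT okay; its constructor would run
                         // before GLFW and the OpenGL context exist -> segfault

    void initialize_objects()
    {
        // Called after GLFW is initialized and the context is current,
        // so the gl*() calls inside Cube's constructor are safe here.
        cube = new Cube();
    }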