Discussion 4 (S16)
Overview
Week 4 discussion recap (04/18/16)
Slides: download here
Materials
Ambient
Diffuse
Specular
Shininess
Fiat Lux
Some Notation
Directional Lights
Point Lights
Spot Lights
OpenGL Pipeline, Once More With Feeling
Why?
Remember how we discussed in week 3 that the modern OpenGL pipeline offers flexibility? Another thing it provides is the ability to skip the overhead of constant CPU-to-GPU communication.
Below is an illustration of the various functions and GLSL keywords that let us transfer data without jumping back and forth between the CPU and GPU.
Once we've set this pipeline up, after glDrawElements is called, we never have to return to the CPU until the GPU has finished blazing through drawing the frame! We'll take a look at each section in detail.
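To make that concrete, here is a minimal sketch of a per-frame draw, assuming a compiled shaderProgram, a VAO set up as in week 3, and a numIndices count (all placeholder names, not the exact skeleton code):

```cpp
// Per-frame draw: once glDrawElements is issued, the GPU runs the whole
// vertex -> fragment pipeline without any further CPU round trips.
glUseProgram(shaderProgram);   // bind our vertex + fragment shaders
// ... set uniforms here (covered in the sections below) ...
glBindVertexArray(VAO);        // bind the vertex data uploaded earlier
glDrawElements(GL_TRIANGLES, numIndices, GL_UNSIGNED_INT, 0);
glBindVertexArray(0);
```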
CPU to GLSL
We've already learned one way to transfer data from the CPU to GLSL by means of VAOs in week 3's discussion. However, that was useful for passing in tons of vertex data that differs per run of the vertex shader. What if all we want is a simpler piece of data that stays uniform throughout our execution of the shaders?
This is where the uniform keyword comes in. If you declare a variable as uniform in GLSL, it will be assigned an accessible location that we can store data to.
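For instance, a couple of uniform declarations in a vertex shader might look like this (the names are purely illustrative):

```glsl
// These values are set from the CPU and stay constant ("uniform")
// for every vertex and fragment in the draw call.
uniform mat4 MVP;       // model-view-projection matrix
uniform vec3 lightDir;  // e.g. a directional light's direction
```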
Where is my variable?
glGetUniformLocation(shaderProgram, variableName) returns a GLint, which is the location where the variable is stored (or -1 if no such active uniform exists). This location can be passed to the glUniform* functions to move data into the GPU.
Usage in Cube.cpp's draw method:
GLuint MatrixID = glGetUniformLocation(shaderProgram, "MVP");
How do I store things in them?
The suite of glUniform* functions helps us send data from the CPU to the GPU. Below are some commonly used examples.
- glUniform1i(variableLocation, value): Used for passing in a single int value. This is extremely simple to use; we can just give it the uniform location ID and the value.
- glUniform3fv(variableLocation, num, value): Used for passing in a vector of 3 floats. The num in the middle is how many vectors we want to pass in. This is usually 1, since we're passing in one vector at a time, unless we have an array of vectors we want to pass in.
- glUniformMatrix4fv(variableLocation, num, transpose, value): Used for passing in a whole 4 by 4 matrix. The only new argument is the transpose boolean. This will usually be GL_FALSE in our case, as glm already uses column-major storage, matching OpenGL's matrix storage scheme. See Cube.cpp's draw method for an example.
You can view the full list and function signatures in OpenGL's glUniform* spec.
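Putting the calls above together, here is a hedged sketch of setting a few uniforms from the CPU (the uniform names and CPU-side variables are assumptions for illustration, not the skeleton's exact code):

```cpp
// Look up locations once; the strings must match the GLSL declarations.
GLint useTexID = glGetUniformLocation(shaderProgram, "useTexture");
GLint colorID  = glGetUniformLocation(shaderProgram, "baseColor");
GLint mvpID    = glGetUniformLocation(shaderProgram, "MVP");

glUseProgram(shaderProgram);                        // glUniform* affects the bound program
glUniform1i(useTexID, 1);                           // a single int
glUniform3fv(colorID, 1, &baseColor[0]);            // one glm::vec3 (count = 1)
glUniformMatrix4fv(mvpID, 1, GL_FALSE, &MVP[0][0]); // one glm::mat4, already column major
```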
Vertex Shader to Fragment Shader
Attributes such as normals aren't uniform across a draw call; they vary from fragment to fragment. How do we pass such variables from the vertex shader to the fragment shader?
Again, the key is to not return to the CPU and incur overhead once the GPU has begun its work. The out and in keywords let us send data from the vertex shader down to the next shader that will run (usually the fragment shader).
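As a minimal sketch (fragNormal is just an illustrative name), the same variable is declared out in the vertex shader and in in the fragment shader:

```glsl
// Vertex shader
out vec3 fragNormal;   // written once per vertex ...

// Fragment shader
in vec3 fragNormal;    // ... and read here, interpolated across the triangle
```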
A Directional Light Example
Let's say we're passing a simplified directional light (it has two attributes, on and dir) from our application into the vertex shader, then on to the fragment shader. This is very similar to what you will end up doing in project 2 part 4.
We'd first pass our directional light attributes into the uniform variables set up in the vertex shader. How might we do this? From the CPU to GLSL section, you can see that there are two steps:
- Find out the variable location using glGetUniformLocation.
- Set the variable with the appropriate glUniform*.
You can see this implemented in the code snippet below.
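A sketch of those two steps might look like this, assuming hypothetical uniform names dirLightOn and dirLightDir, and a CPU-side light object with on and dir (a glm::vec3) fields:

```cpp
// Step 1: find the uniform locations declared in the vertex shader.
GLint onLoc  = glGetUniformLocation(shaderProgram, "dirLightOn");
GLint dirLoc = glGetUniformLocation(shaderProgram, "dirLightDir");

// Step 2: set them with the matching glUniform* calls.
glUseProgram(shaderProgram);
glUniform1i(onLoc, light.on ? 1 : 0);     // bools travel as ints
glUniform3fv(dirLoc, 1, &light.dir[0]);   // the light's direction vector
```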
Next, let's simply forward our directional light into the fragment shader, which will handle the actual per-pixel lighting calculations. This is done by declaring the same variable in both the vertex and fragment shaders, as out and in respectively.
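On the shader side, the forwarding might look roughly like the sketch below (again, the variable names are assumptions, not the project's exact code):

```glsl
// ---- Vertex shader ----
#version 330 core
layout(location = 0) in vec3 position;

uniform mat4 MVP;
uniform int  dirLightOn;      // set with glUniform1i
uniform vec3 dirLightDir;     // set with glUniform3fv

flat out int dirLightOnFS;    // "flat": integers are not interpolated
out vec3     dirLightDirFS;

void main() {
    dirLightOnFS  = dirLightOn;     // just forward the light ...
    dirLightDirFS = dirLightDir;
    gl_Position   = MVP * vec4(position, 1.0);
}

// ---- Fragment shader ----
#version 330 core
flat in int dirLightOnFS;     // ... and receive it here for the lighting math
in vec3     dirLightDirFS;
out vec4    fragColor;

void main() {
    // per-pixel lighting would use dirLightOnFS / dirLightDirFS here
    fragColor = vec4(1.0);
}
```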
Of course, since all the vertex shader does is forward the directional light information to the fragment shader, we could've set the fragment shader's uniform variables directly, but this example was meant to demonstrate going through the whole pipeline. Certain attributes may actually come into the vertex shader, get transformed somehow, and then get passed along to the fragment shader. Can you think of any examples?