Discussion4S16

From Immersive Visualization Lab Wiki
Revision as of 02:35, 19 April 2016 by Kyl063 (Talk | contribs)



Overview

Week 4 discussion recap (04/18/16)

Slides: download here

Materials

Lights have material properties that define what kind of light they emit. Objects have materials that define how they'll react to these lights.

You can think of these properties as approximating the properties of a physical light's wavelength model and a material's reflectance model. If you want to learn more about physically accurate rendering techniques, CSE168 is your friend.

Ambient

The ambient material property defines the "inherent color" of the object; this is denoted as ka.

The light's ambient coefficient—denoted as La—is how much ambient influence the light emits. If this value is 1, the object's "inherent color" will shine through perfectly.

Diffuse

A diffuse material reflects light rays in many directions (it "diffuses" the light); this is denoted as kd.

The light's diffuse coefficient—written as Ld—is how much of the color the light emits will be diffused.

Specular

A specular material simulates the shiny-object effect of a bright highlight ("spec") appearing where the reflection direction and the eye direction line up (as in metals); this is written as ks.

Similar to the other light properties, the specular property of a light (Ls) is how much specular influence the light emits.

Shininess

The shininess of an object controls how widely the specular highlight spreads over the object: the higher this value, the smaller and sharper the highlight "spec". It enters the lighting calculation as the exponent α.

There is no equivalent for lights.
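Putting the four material properties together gives the classic Phong reflection sum. Below is a minimal single-channel sketch using the symbols defined above; the function name and parameter names are our own, not from the slides. nDotL and rDotV stand for the dot products N·L (surface normal with light direction) and R·V (reflection direction with view direction) of unit vectors.

```cpp
#include <algorithm>  // std::max
#include <cmath>      // std::pow

// Phong-style shading for one color channel:
//   color = ka*La + kd*Ld*max(N·L, 0) + ks*Ls*max(R·V, 0)^alpha
float phong(float ka, float La,
            float kd, float Ld, float nDotL,
            float ks, float Ls, float rDotV, float alpha) {
    float ambient  = ka * La;                                          // inherent color
    float diffuse  = kd * Ld * std::max(nDotL, 0.0f);                  // scattered light
    float specular = ks * Ls * std::pow(std::max(rDotV, 0.0f), alpha); // shiny highlight
    return ambient + diffuse + specular;
}
```

Note how a larger α shrinks the highlight: any rDotV below 1 is driven toward zero faster as the exponent grows.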

Fiat Lux

Some Notation

Directional Lights

Point Lights

Spot Lights
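As a rough summary of the three light types above: a directional light has only a direction (its rays are parallel, as if from infinitely far away), a point light has a position and its intensity falls off with distance, and a spot light is a point light restricted to a cone. The sketch below illustrates the data each one carries; field names are hypothetical, and the attenuation formula is the classic fixed-function OpenGL one.

```cpp
// Hypothetical per-type light descriptions (field names are illustrative).
struct DirectionalLight {            // direction only; rays are parallel
    float dir[3];
};
struct PointLight {                  // position; intensity falls off with distance
    float pos[3];
    float constAtt, linAtt, quadAtt; // attenuation coefficients
};
struct SpotLight {                   // point light restricted to a cone
    float pos[3], dir[3];
    float cutoff;                    // cone half-angle
    float exponent;                  // falloff toward the cone's edge
};

// Classic fixed-function-style distance attenuation for point/spot lights:
//   att(d) = 1 / (kc + kl*d + kq*d^2)
float attenuation(const PointLight& l, float d) {
    return 1.0f / (l.constAtt + l.linAtt * d + l.quadAtt * d * d);
}
```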

OpenGL Pipeline, Once More With Feeling

Why?

Remember how we discussed in week 3 that the modern OpenGL pipeline offers flexibility? It also lets us skip the overhead of constant CPU-to-GPU communication.

Below is an illustration of the functions and language keywords that let us transfer data without jumping back and forth between the CPU and the GPU.

GPUDataPipeline.png

Once we've set this pipeline up and glDrawElements is called, we never have to return to the CPU until the GPU has finished drawing the frame! We'll take a look at each stage in detail.

CPU to GLSL

We've already learned one way to transfer data from the CPU to GLSL by means of VAOs in week 3's discussion. However, that was useful for passing in tons of vertex data that differs per run of the vertex shader. What if all we want is a simple piece of data that stays uniform throughout the execution of our shaders?

This is where the uniform keyword comes in. If you declare a variable uniform in GLSL, it is assigned a location that we can write data to from the CPU.
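For instance, the MVP matrix that Cube.cpp's draw method looks up would be declared in the vertex shader like this (a minimal sketch; the attribute layout is assumed):

```glsl
#version 330 core

layout (location = 0) in vec3 position;  // per-vertex data, from a VAO
uniform mat4 MVP;                        // per-draw data, set from the CPU

void main() {
    gl_Position = MVP * vec4(position, 1.0);
}
```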

Where is my variable?

glGetUniformLocation(shaderProgram, variableName) returns a GLint holding the location where the variable is stored (or -1 if the variable doesn't exist or was optimized away). This location can be passed to the glUniform* functions to move data into the GPU.

Usage in Cube.cpp's draw method:

GLuint MatrixID = glGetUniformLocation(shaderProgram, "MVP");

How do I store things in them?

The suite of glUniform* functions help us send data from the CPU to the GPU. Below are some typically used examples.

  • glUniform1i(variableLocation, value): Used for passing in a single int value. This is extremely simple to use and we can just give it the uniform location ID and value.
  • glUniform3fv(variableLocation, num, value): Used for passing in a vector of 3 floats. The num in the middle is how many vectors we want to pass in. This is usually 1, as we're passing in one vector at a time, unless we have an array of vectors we want to pass in.
  • glUniformMatrix4fv(variableLocation, num, transpose, value): Used for passing in a whole 4 by 4 matrix. The only new parameter is the transpose flag, which will usually be GL_FALSE in our case, as glm already stores matrices column-major, matching OpenGL's storage scheme. See Cube.cpp's draw method for an example.

You can view the full list of function signatures in OpenGL's glUniform* spec.

Vertex Shader to Fragment Shader

Attributes such as normals vary per vertex and per fragment, so they can't be uniforms. How do we pass such variables from the vertex shader to the fragment shader?

Again, the key is to avoid returning to the CPU and causing overhead once the GPU has begun its work. The out and in keywords let us send data from the vertex shader down to the next shader that will run (usually the fragment shader).

A Directional Light Example

Let's say we're passing in a simplified directional light (it has two attributes, on and dir) from our application into the vertex shader, then on to the fragment shader. This is very similar to what you will end up doing in project 2 part 4.

We'd first pass in our directional light attributes into the uniform variables setup in the vertex shader. How might we do this? From the CPU to GLSL section, you can see that there are two steps:

  • Find out the variable location using glGetUniformLocation.
  • Set the variable with the appropriate glUniform*.

You can see this implemented in the code snippet below.

Cpu to vert.png

Next, let's simply forward our directional light into the fragment shader, which will handle the actual per-pixel lighting calculations. This is done by declaring the same variable in both the vertex and fragment shaders, as out and in respectively.

Vert to frag.png
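In text form, such a forwarding pair might look roughly like this (variable names are hypothetical; note that booleans can't be interpolated across a triangle, so on is forwarded as a flat int):

```glsl
// --- vertex shader ---
#version 330 core
layout (location = 0) in vec3 position;

uniform mat4 MVP;
uniform int  dirLight_on;    // hypothetical names; must match the strings
uniform vec3 dirLight_dir;   // passed to glGetUniformLocation on the CPU side

out vec3 lightDir;           // "out" here pairs with ...
flat out int lightOn;        // flat: don't interpolate an integer

void main() {
    lightDir = dirLight_dir; // just forward the values
    lightOn  = dirLight_on;
    gl_Position = MVP * vec4(position, 1.0);
}

// --- fragment shader ---
#version 330 core
in vec3 lightDir;            // ... the matching "in" here (same name and type)
flat in int lightOn;

out vec4 fragColor;

void main() {
    // per-pixel lighting using lightDir/lightOn would go here
    fragColor = vec4(1.0);
}
```

The out and in declarations must agree in name and type for the two shaders to link into one program.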

Of course, since all the vertex shader does is forward the directional light information to the fragment shader, we could have set the fragment shader's uniform variables directly; this example was meant to demonstrate going through the whole pipeline. Certain attributes may actually come into the vertex shader, get transformed somehow, and then get passed along to the fragment shader. Can you think of any examples?
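One classic example is the surface normal: it enters the vertex shader as a per-vertex attribute, gets transformed (e.g. by the inverse-transpose of the model matrix, so non-uniform scaling doesn't skew it), and is then interpolated on its way to the fragment shader. A sketch of the vertex-shader side (attribute layout and matrix name are assumptions):

```glsl
#version 330 core
layout (location = 1) in vec3 normal;  // raw per-vertex normal

uniform mat4 model;                    // hypothetical model matrix

out vec3 fragNormal;                   // interpolated before reaching the fragment shader

void main() {
    // transform the normal, then pass it down the pipeline
    fragNormal = normalize(mat3(transpose(inverse(model))) * normal);
    // ... gl_Position setup omitted ...
}
```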