Discussion7S16

From Immersive Visualization Lab Wiki
Revision as of 02:20, 10 May 2016 by Kyl063 (Talk | contribs)



Overview

Week 7 Discussion recap (05/09/16)

Slides: Download here

Bezier Curves

Selection Buffers

Have you ever wondered how those light gun controllers for arcade or console games worked? In Nintendo's game Duck Hunt, for example, the player points the light gun at a duck and pulls the trigger, and the game somehow registers the hit! The hint is in the fact that the name "gun" is actually a red herring.

The moment the player pulls the trigger, the whole image is redrawn, faster than the human eye can catch. This special re-render draws all the shootable objects (e.g. the ducks in Duck Hunt) in white, but everything else in black. Then the light gun, equipped with a photodiode sensor, reads the specially redrawn image and senses whether the player pointed at a white region or not. In other words, the "gun" is actually more like a camera: it receives light instead of shooting it. (Read more here and here)

Borrowing Old Ideas

To do selection in 3D, we can use a similar idea. When the user clicks, we redraw the whole scene with a special render, coloring each selectable object with a unique color.

In order to do this, we first need to assign each object a unique ID so we can identify what was selected. Let's look at the following chess pieces and their assigned IDs:

Selectionbuffer00.PNG

Now after we render this, we want to be able to retrieve the ID of each object back from the render. How might we accomplish that?

Well, one way to do it is to directly use the ID as the color! So ID 0 will be the darkest and ID 3 the brightest object in this render. For example:

Selectionbuffer01.jpg

Great! Now when the user clicks object 3, for example, we can read that pixel, see what its brightness is, and find out which object's color it matches!

The grayscale image above is the Selection Buffer rendering. In that particular selection buffer, we've exaggerated the differences between the objects. In a more typical setting you might use the color vec4(id/255.0f, 0.0f, 0.0f, 1.0f), meaning that the objects will actually be quite close in color.
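To make this concrete, here is a small sketch (plain C++, no OpenGL required; the helper names are our own, not from the slides) of how an object ID round-trips through an 8-bit color channel under the vec4(id/255.0f, 0.0f, 0.0f, 1.0f) scheme:

```cpp
#include <cassert>
#include <cstdint>

// Encode a selectable object's ID into the red channel, exactly as
// vec4(id/255.0f, 0.0f, 0.0f, 1.0f) does in the selection shader.
inline float idToRed(std::uint8_t id) {
    return id / 255.0f;
}

// Simulate what the 8-bit framebuffer stores for that red value.
inline std::uint8_t quantizeRed(float red) {
    return static_cast<std::uint8_t>(red * 255.0f + 0.5f);
}

// Recover the ID from the byte glReadPixels hands back: with this
// encoding the byte *is* the ID, so no lookup table is needed.
inline std::uint8_t redToId(std::uint8_t readbackByte) {
    return readbackByte;
}
```

With only four chess pieces, IDs 0 through 3 map to nearly indistinguishable shades, which is why the illustration above exaggerates the differences.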

The Code

So how do we implement this in OpenGL? Well, we first need the subroutine to redraw the selection buffer. Let's implement a method selectionDraw that draws the object, but with the selection shader instead. Each selectable object we create remembers its own ID. This ID is then sent to our selection shader, which simply colors the whole object with a flat color based on that ID.

Selectionbuffercode00.PNG
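For reference, the selection fragment shader can be as small as the following. This is a sketch, not the exact code from the slides, and it assumes the C++ side sets a uniform named id to the object's ID before drawing:

```glsl
#version 330 core

// Set from C++ before drawing: the selectable object's ID (0-255).
uniform float id;

out vec4 fragColor;

void main() {
    // Flat-color the whole object; the red channel encodes the ID.
    fragColor = vec4(id / 255.0, 0.0, 0.0, 1.0);
}
```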

What about the vertex shader, you ask? Well, we're drawing the object the same way it's normally drawn in our display buffer. What should our vertex shader be, then?

Now let's handle the case when we actually call selectionDraw on every object. This happens on mouse click, so we'd place it in our Window class's mouse_button_callback. On a left-button click, we draw every selectable object with the selectionShader, and then call glReadPixels to get the pixel value at the point where the user clicked. If we clicked an actual selectable, we're in essence retrieving the color that the selection.frag shader colored our object with.

Now all we need is some basic type conversion and we're good!

Selectionbuffercode01.PNG
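Two small details in that callback are easy to get wrong, and both are pure arithmetic, so here is a hedged sketch (the helper names and the 1-based ID convention are ours): GLFW reports the cursor with a top-left origin while glReadPixels expects a bottom-left origin, and the byte read back must be turned into an object index.

```cpp
#include <cassert>
#include <cstdint>

// GLFW gives cursor coordinates with the origin at the top-left of the
// window; glReadPixels uses a bottom-left origin, so the y coordinate
// must be flipped before reading from the selection buffer.
inline int toGLReadY(int cursorY, int windowHeight) {
    return windowHeight - 1 - cursorY;
}

// Interpret the red byte returned by glReadPixels. If we clear the
// selection buffer to black and number objects starting at 1, a zero
// byte unambiguously means "clicked the background".
inline int decodeSelection(std::uint8_t redByte) {
    return redByte == 0 ? -1 : redByte - 1;  // -1 = nothing selected
}
```

Numbering objects from 1 is one simple way to keep ID 0 from colliding with the black clear color; any convention works as long as the encode and decode sides agree.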

So we've gotten the object that we've selected. How do we now move that selection? Well that will be left as an exercise for the reader. :)

You can read more on this in the Lighthouse3D Picking Tutorial

Raycast

Raycasting is an alternative method for selection, done by determining where in 3D space our mouse clicked. Because our mouse works along a 2D x-y plane, but our world is in 3D, we need to convert from a 2D coordinate system to 3D. Does this sound familiar at all?

Hopefully it does, as it's basically the rasterization equation from the beginning of the quarter, run in reverse! Now of course, we can't simply go from a point in 2D to a point in 3D here. Because we're going from a lower dimension to a higher dimension (2→3), we simply don't have enough information. So we decide that we're selecting everything along the Z-axis, or, put another way, we're shooting a ray along the Z-axis and hoping that it hits something.
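The first step of that 2D→3D conversion is just the viewport transform from rasterization run backwards. A minimal sketch (plain C++, names ours) mapping a cursor position to normalized device coordinates:

```cpp
#include <cassert>

struct Vec2 { float x, y; };

// Invert the viewport transform: window coordinates (origin top-left,
// as GLFW reports them) to normalized device coordinates in [-1, 1].
inline Vec2 windowToNDC(float cursorX, float cursorY,
                        float width, float height) {
    return {
        2.0f * cursorX / width - 1.0f,  // left edge -> -1, right -> +1
        1.0f - 2.0f * cursorY / height  // top edge -> +1, bottom -> -1
    };
}
```

From NDC, multiplying by the inverse projection and view matrices turns this 2D point into a ray direction in world space; that part is the extra math alluded to below.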

By the way, we're only going to give an overview of this method in the context of this class, as it involves a bit more math than we want to add on top of your Bezier curve calculations.

So raycasting goes like this:

  1. First shoot a ray from the camera towards the mouse pointer.
  2. Find the first object that intersects with that ray.
  3. That object is our selection.
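Step 2, finding the first object the ray hits, typically reduces to a per-object intersection test. Here is a minimal sketch, assuming each selectable is approximated by a bounding sphere; the names and the quadratic solution are standard ray-sphere math, not code from the slides:

```cpp
#include <cassert>
#include <cmath>

struct Vec3 { float x, y, z; };

inline Vec3  sub(Vec3 a, Vec3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
inline float dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

// Ray-sphere test: returns the distance t >= 0 along the normalized
// direction d from origin o to the nearest hit, or -1 if the ray misses.
// Derived by solving |o + t*d - center|^2 = radius^2 for t.
float intersectSphere(Vec3 o, Vec3 d, Vec3 center, float radius) {
    Vec3 oc = sub(o, center);
    float b = dot(oc, d);                  // half the linear coefficient
    float c = dot(oc, oc) - radius * radius;
    float disc = b * b - c;                // quadratic discriminant
    if (disc < 0.0f) return -1.0f;         // no real roots: miss
    float t = -b - std::sqrt(disc);        // nearer of the two roots
    return t >= 0.0f ? t : -1.0f;          // behind the camera: miss
}
```

To pick among several objects, run this test on each one and keep the smallest non-negative t; that nearest hit is the selection.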

It's really a beautifully simple way of determining object selection, and the architecture is a lot simpler than programming a selection buffer algorithm. Of course it involves a bit of intersection math (which usually becomes the bottleneck), but when you have 3D pointer devices (e.g. the Nintendo Wiimote or Sony Move) this is definitely a more natural way of selecting objects than a selection buffer. However, modern graphics cards are so heavily optimized for dealing with buffers, and much less so for intersection tests (though that is changing with hardware ray tracing libraries such as Nvidia OptiX), that sometimes the more complicated selection buffer architecture is worth it.

All in all, both selection methods have pros and cons that make them equally viable choices for selection.

If you want to learn more about ray casting, we recommend you read these excellently illustrated Ray Casting Notes, or take CSE168, where you'll learn raytracing and the math for rays is used everywhere.