
Levels of Immersion

In this project we are going to explore different levels of immersion we can create for the Oculus Rift. As in project 1, we will use the Oculus SDK, OpenGL and C++.

Starter Code

We recommend starting either with your project 1 code, or going back to the Minimal Example you started project 1 with. It is very compact, yet it uses most of the API functions you will need.

Here is the minimal example .cpp file with comments on where the hooks are for the various functions required for this homework project. Note that this file has a main() function, so it can be built as a console application that starts up with a console window into which debug messages can be printed. In Visual Studio, you need to enable console mode in the project's properties window under Configuration Properties -> Linker -> System -> SubSystem -> Console.

Project Description (100 Points)

You need to do the following things:

Modify the minimal example to render a single cube with this test image on all its faces. The image needs to be right side up on the vertical faces. The cube should be 20cm wide and its closest face should be about 30cm in front of the user. Here is source code to load the PPM file into memory so that you can apply it as a texture.
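
Here is a sketch of how the loaded PPM data could be turned into an OpenGL texture, assuming a loadPPM() helper along the lines of the provided loader (the exact signature and allocation scheme may differ):

<pre>
#include <GL/glew.h>
#include <cstdlib>

// Assumed helper, modeled after the provided loader: returns a malloc'd
// RGB byte array and writes the image dimensions to width/height.
unsigned char* loadPPM(const char* filename, int& width, int& height);

GLuint createTestTexture(const char* filename) {
    int width = 0, height = 0;
    unsigned char* pixels = loadPPM(filename, width, height);

    GLuint tex = 0;
    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_2D, tex);
    glPixelStorei(GL_UNPACK_ALIGNMENT, 1);   // PPM rows are tightly packed
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, width, height, 0,
                 GL_RGB, GL_UNSIGNED_BYTE, pixels);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    free(pixels);                            // assumes the loader uses malloc
    return tex;
}
</pre>

If the image shows up upside down, remember that OpenGL's texture origin is the lower left corner, so the texture coordinates on the vertical faces may need flipping.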

Cycle between 3D stereo, mono (both eyes see an identical view), left eye only, and right eye only with the 'A' button.
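
Here is a minimal sketch of how the mode cycling could be wired up, assuming the Oculus Touch controllers and SDK 1.x input polling; the enum and variable names are placeholders, only ovr_GetInputState and ovrButton_A come from the SDK:

<pre>
#include <OVR_CAPI.h>

enum RenderMode { STEREO, MONO, LEFT_ONLY, RIGHT_ONLY, NUM_RENDER_MODES };
RenderMode renderMode = STEREO;
bool aWasPressed = false;

// Poll once per frame; advances the mode on the press edge, not while held.
void pollAButton(ovrSession session) {
    ovrInputState state;
    if (OVR_SUCCESS(ovr_GetInputState(session, ovrControllerType_Touch, &state))) {
        bool aDown = (state.Buttons & ovrButton_A) != 0;
        if (aDown && !aWasPressed)
            renderMode = (RenderMode)((renderMode + 1) % NUM_RENDER_MODES);
        aWasPressed = aDown;
    }
}
</pre>

One way to realize mono is to render the same (e.g. left) camera view into both eye buffers; for the single-eye modes, clear the other eye's buffer to black.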

Cycle between head tracking modes with the 'B' button: no tracking, orientation only, position only, and both.
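
One way to implement the tracking modes, assuming you fetch the head pose yourself via ovr_GetTrackingState and feed it to ovr_CalcEyePoses, is to filter out the unwanted pose components before use (the names below are placeholders):

<pre>
enum TrackingMode { TRACK_NONE, TRACK_ORIENTATION, TRACK_POSITION, TRACK_BOTH };
TrackingMode trackingMode = TRACK_BOTH;

// Replace disabled pose components with fixed identity/origin values.
ovrPosef filterHeadPose(ovrPosef tracked) {
    ovrPosef out = tracked;
    if (trackingMode == TRACK_NONE || trackingMode == TRACK_POSITION) {
        out.Orientation.x = 0; out.Orientation.y = 0;   // identity quaternion:
        out.Orientation.z = 0; out.Orientation.w = 1;   // no rotation
    }
    if (trackingMode == TRACK_NONE || trackingMode == TRACK_ORIENTATION) {
        out.Position.x = 0; out.Position.y = 0; out.Position.z = 0;
    }
    return out;
}
</pre>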

Gradually vary the interocular distance (IOD) with the right thumb stick left/right. For this you have to learn how the Oculus SDK specifies the IOD. It doesn't use a single number for the separation distance; instead, each eye has a 3D offset from a central point. Find out what these offsets are in the default case, and modify only their horizontal components to change the IOD.
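
A sketch of how the offsets could be scaled, assuming SDK 1.x where the per-eye offset is found in ovrEyeRenderDesc (the field is named HmdToEyeOffset in most 1.x versions, HmdToEyeViewOffset in older ones):

<pre>
float iodScale = 1.0f;   // placeholder; driven by the right thumb stick

void computeEyePoses(ovrSession session, const ovrHmdDesc& hmdDesc,
                     ovrPosef headPose, ovrPosef outEyePoses[2]) {
    ovrVector3f offsets[2];
    for (int eye = 0; eye < 2; ++eye) {
        ovrEyeRenderDesc desc = ovr_GetRenderDesc(session, (ovrEyeType)eye,
                                                  hmdDesc.DefaultEyeFov[eye]);
        offsets[eye] = desc.HmdToEyeOffset;   // default x is roughly -/+ 0.032 m
        offsets[eye].x *= iodScale;           // scale only the horizontal part
    }
    ovr_CalcEyePoses(headPose, offsets, outEyePoses);
}
</pre>

Note that the default horizontal offsets are roughly plus/minus half the user's IPD; the y and z components are left untouched on purpose.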

Gradually vary the physical size of the cube with the left thumb stick left/right. This means changing the width of the cube from 20cm to smaller or larger values. Compare the effect with changing the IOD.
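
The thumb stick axes come from the same ovrInputState as the buttons; a sketch with placeholder names and hand-picked constants:

<pre>
#include <cmath>

float cubeWidth = 0.2f;   // meters; the 20cm calibration cube

// Call once per frame with the polled input state and the frame time in seconds.
void updateCubeSize(const ovrInputState& state, float dt) {
    float x = state.Thumbstick[ovrHand_Left].x;   // ranges from -1 to 1
    if (fabsf(x) > 0.1f)                          // small dead zone
        cubeWidth *= 1.0f + 0.5f * x * dt;        // smooth, frame-rate independent
    // Apply cubeWidth as a model-matrix scale about the cube's center so it
    // grows in place rather than sliding toward or away from the user.
}
</pre>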

Render this stereo panorama image around the user as a sky box. The trick here is to render a different texture on the sky box for each eye, which is necessary to achieve a 3D effect. Switch between the cube and the stereo panorama with the 'X' button. There are six cube map images in PPM format in the ZIP file. Each is 2k x 2k pixels in size. The files are named nx, ny and nz for the negative x, y and z axis images; the positive axis files are named px, py and pz (see the loading sketch after the labeled cube map image below). Here is a downsized picture of the panorama image:

[[Image:bear-thumb.jpg|512px]]

And this is how the cube map faces are labeled:

[[Image:Bear-left-cubemap-labeled.jpg|512px]]
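
Here is a sketch of how the six faces could be combined into an OpenGL cube map, reusing the assumed loadPPM() helper from above; the per-eye directory layout is an assumption, only the nx/px-style face names come from the ZIP file:

<pre>
#include <string>

GLuint loadCubeMap(const char* dir) {
    // Order matters: the GL face enums are consecutive, starting at POSITIVE_X.
    const char* names[6] = { "px.ppm", "nx.ppm", "py.ppm",
                             "ny.ppm", "pz.ppm", "nz.ppm" };
    GLuint tex = 0;
    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_CUBE_MAP, tex);
    for (int i = 0; i < 6; ++i) {
        int w = 0, h = 0;
        std::string path = std::string(dir) + "/" + names[i];
        unsigned char* pixels = loadPPM(path.c_str(), w, h);
        glTexImage2D(GL_TEXTURE_CUBE_MAP_POSITIVE_X + i, 0, GL_RGB,
                     w, h, 0, GL_RGB, GL_UNSIGNED_BYTE, pixels);
        free(pixels);
    }
    glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
    glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
    glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_WRAP_R, GL_CLAMP_TO_EDGE);
    return tex;
}

// Usage sketch: one cube map per eye, bound before drawing the sky box.
// GLuint skybox[2] = { loadCubeMap("left"), loadCubeMap("right") };
</pre>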

The panorama was shot with the camera lenses parallel to one another, so the resulting cube maps need to be separated by a human eye distance when rendered, i.e., their physical placement needs to be horizontally offset from each other for each eye. Support all of the functions you implemented for the calibration cube for the sky box as well (A, B and X buttons, both thumb sticks).
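
One possible way to realize the horizontal offset, assuming GLM for the matrix math (the function and parameter names are placeholders):

<pre>
#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>

// Shift each eye's sky box sideways by half the eye distance, so the
// left-eye cube map is centered on the left eye and vice versa.
glm::mat4 skyboxModelMatrix(int eye /* 0 = left, 1 = right */, float iod) {
    float x = (eye == 0 ? -0.5f : 0.5f) * iod;   // e.g. iod = 0.064f meters
    return glm::translate(glm::mat4(1.0f), glm::vec3(x, 0.0f, 0.0f));
}
</pre>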

[check back often - more details to follow, including extra credit options]