Project2S18
Levels of Immersion
In this project we are going to explore different levels of immersion with the Oculus Rift. Like in project 1, we're going to use the Oculus SDK, OpenGL and C++.
Starter Code
We recommend starting either with your project 1 code or going back to the starter code. It is very compact, yet uses most of the API functions you will need.
Here is the minimal example .cpp file with comments on where the hooks are for the various functions required for this homework project. Note that this file has a main() function, so it can be built as a console application and will start up with a console window into which debug messages can be printed.
Project Description (100 Points)
You need to do the following things:
- Modify the starter code (which renders many cubes) to render just two cubes, one behind the other along the user's line of sight.
- Use texture mapping to display this calibration image on all faces of both cubes. The image needs to be right side up on the vertical faces and must not be mirrored. Each cube should be 20 cm wide, and the closest cube face should be about 30 cm in front of the user. Here is source code to load the PPM file into memory so that you can apply it as a texture; a sketch of turning the loaded pixels into an OpenGL texture follows this list. Sample code for texture loading can also be found in the OpenVR OpenGL example's function SetupTexturemaps().
- Render this stereo panorama image around the user as a skybox. The trick here is to render a different texture on the skybox for each eye, which is necessary to achieve a 3D effect (see the per-eye skybox sketch after this list). Use the 'X' button to cycle between the calibration cubes, the stereo panorama, and both displayed simultaneously.
- Cycle between the following four display modes with the 'A' button: 3D stereo, mono (the same image rendered for both eyes, from a viewpoint halfway between the eyes), left eye only (right eye black), and right eye only (left eye black). Cycling means that repeatedly pressing the 'A' button changes the viewing mode from 3D stereo to mono to left only to right only and back to 3D stereo (see the controller-polling sketch after this list). Regardless of which display mode is active, head tracking should keep working according to the tracking mode described below.
- Cycle between different head tracking modes with the 'B' button: no tracking (position and orientation frozen to their values just before the button was pressed), orientation only (position frozen to its value when the mode was selected), position only (orientation frozen likewise), and both (the normal tracking mode). A sketch of applying a frozen pose each frame is shown below.
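
The calibration image is provided as a PPM file. Below is a minimal sketch of turning the loaded pixels into an OpenGL texture; it assumes a loadPPM() helper like the one linked above that returns a tightly packed RGB buffer and reports the image dimensions (the exact signature in the provided code may differ slightly).

 #include <GL/glew.h>

 // Assumed signature of the helper from the PPM loader linked above.
 unsigned char* loadPPM(const char* filename, int& width, int& height);

 GLuint loadCalibrationTexture(const char* filename)
 {
     int width = 0, height = 0;
     unsigned char* pixels = loadPPM(filename, width, height);
     if (!pixels) return 0;

     GLuint textureId = 0;
     glGenTextures(1, &textureId);
     glBindTexture(GL_TEXTURE_2D, textureId);
     glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, width, height, 0,
                  GL_RGB, GL_UNSIGNED_BYTE, pixels);
     glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);
     glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
     glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
     glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
     glGenerateMipmap(GL_TEXTURE_2D);
     glBindTexture(GL_TEXTURE_2D, 0);

     delete[] pixels;   // assumes the loader allocates the pixel buffer with new[]
     return textureId;
 }

Whether the image appears right side up and unmirrored is determined by the texture coordinates you assign to each cube face, so check those per face rather than flipping the image itself.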
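To get the stereo effect in the skybox, each eye's render pass has to bind a different texture. A minimal sketch, assuming your render loop already passes the current ovrEyeType into the drawing code and that the two panorama halves have been loaded into textures; the names leftEyeTexture, rightEyeTexture and drawSkyboxGeometry() are placeholders for whatever your own code uses.

 #include <GL/glew.h>
 #include <OVR_CAPI.h>

 // Placeholder: your own call that draws the large cube surrounding the user.
 extern void drawSkyboxGeometry();

 // Called once per eye from the render loop.
 void renderSkyboxForEye(ovrEyeType eye, GLuint leftEyeTexture, GLuint rightEyeTexture)
 {
     // Each eye sees the half of the stereo panorama captured for that eye,
     // which is what creates the 3D impression in the skybox.
     GLuint texture = (eye == ovrEye_Left) ? leftEyeTexture : rightEyeTexture;
     glBindTexture(GL_TEXTURE_CUBE_MAP, texture);
     drawSkyboxGeometry();
     glBindTexture(GL_TEXTURE_CUBE_MAP, 0);
 }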
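The 'A', 'B', and 'X' button requirements all come down to detecting a button press (not a held button) once per frame and advancing a small state machine. Here is a sketch using the Oculus Touch controller input from the Oculus SDK; it assumes an ovrSession named session, and the enum and variable names are invented for illustration.

 #include <OVR_CAPI.h>

 enum class DisplayMode  { Stereo, Mono, LeftOnly, RightOnly };         // 'A' button
 enum class TrackingMode { Full, OrientationOnly, PositionOnly, None }; // 'B' button

 DisplayMode  displayMode  = DisplayMode::Stereo;
 TrackingMode trackingMode = TrackingMode::Full;
 unsigned int lastButtons  = 0;   // button state from the previous frame

 void pollController(ovrSession session)
 {
     ovrInputState state;
     if (OVR_SUCCESS(ovr_GetInputState(session, ovrControllerType_Touch, &state)))
     {
         // Bits that are down now but were up last frame: rising edges only,
         // so holding a button does not keep cycling through modes.
         unsigned int pressed = state.Buttons & ~lastButtons;

         if (pressed & ovrButton_A)
             displayMode = static_cast<DisplayMode>((static_cast<int>(displayMode) + 1) % 4);

         if (pressed & ovrButton_B)
             trackingMode = static_cast<TrackingMode>((static_cast<int>(trackingMode) + 1) % 4);

         // 'X' (ovrButton_X) would cycle cubes / panorama / both in the same way.

         lastButtons = state.Buttons;
     }
 }

In the left-only or right-only display mode, the disabled eye can simply be cleared to black instead of rendering the scene; in mono mode, render both eyes from a single viewpoint halfway between the two eye poses.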
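For the 'B' button modes, one possible approach is to remember the head pose at the moment the tracking mode was switched and substitute the frozen components into the live pose each frame, before building the view matrices. A sketch under that assumption (TrackingMode is the enum from the previous sketch):

 #include <OVR_CAPI.h>

 // livePose is the current head pose from ovr_GetTrackingState();
 // frozenPose is the pose saved when the tracking mode was last changed.
 ovrPosef applyTrackingMode(ovrPosef livePose, ovrPosef frozenPose, TrackingMode mode)
 {
     ovrPosef result = livePose;
     if (mode == TrackingMode::None || mode == TrackingMode::PositionOnly)
         result.Orientation = frozenPose.Orientation;   // orientation stays frozen
     if (mode == TrackingMode::None || mode == TrackingMode::OrientationOnly)
         result.Position = frozenPose.Position;         // position stays frozen
     return result;
 }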
More to follow...