Project2S17

From Immersive Visualization Lab Wiki
Revision as of 16:23, 27 April 2017 by Jschulze (Talk | contribs)


Levels of Immersion

In this project we are going to explore different levels of immersion we can create for the Oculus Rift. Like in project 1, we're going to use the Oculus SDK, OpenGL and C++.

Starter Code

We recommend starting either with your project 1 code, or going back to the Minimal Example you started your project 1 with. It's very compact, yet uses most of the API functions you will need.

Here is the minimal example .cpp file with comments on where the hooks are for the various functions required for this homework project. Note that this file has a main() function, so that it can be built as a console application, and will start up with a console window into which debug messages can be printed. In Visual Studio, you need to enable console mode in the project's properties window with Configuration Properties -> Linker -> System -> SubSystem -> Console.

Project Description (100 Points)

You need to do the following things:

  • Modify the minimal example (in which many cubes are rendered) to render a single cube with this calibration image on all its faces, across the entire width and height of the respective face. The image needs to be right side up on the vertical faces. On top and bottom there also needs to be an image but the orientation is your choice - as long as the image isn't mirrored. The cube should be 20cm wide and its closest face should be about 30cm in front of the user. Here is source code to load the PPM file into memory so that you can apply it as a texture. Sample code for texture loading can be found in the OpenVR OpenGL example's function SetupTexturemaps().
  • Cycle between the following four modes with the 'A' button: 3D stereo, mono (both eyes see identical view), left eye only (right eye dark), right eye only (left eye dark). Cycling means that repeated pressing of the 'A' button will change viewing modes from 3D stereo to mono to left only to right only and back to 3D stereo. Regardless of which viewing mode is active, head tracking should continue to work according to the tracking mode selected with the 'B' button (described below).
  • Cycle between head tracking modes with the 'B' button: no tracking, orientation only, position only, both. When you turn off one or both of the tracking types, freeze the last measured values (rather than defaulting to hard coded values) so that the image won't jump when you freeze tracking.
  • Gradually vary the interocular distance (IOD) with the right thumb stick left/right. For this you have to learn about how the Oculus SDK specifies the IOD. They don't just use one number for the separation distance, but each eye has a 3D offset from a central point instead. Find out what these offsets are in the default case, and modify only their horizontal offsets to change the IOD. Support an IOD range from zero to 100cm.
  • Gradually vary the physical size of the cube with the left thumb stick left/right. This means changing the width of the cube from its default of 20cm to smaller or larger values. Support a range from 1cm to 100cm. Compare the effect with changing the IOD (observe and tell us about it during grading).
  • Render this stereo panorama image around the user as a sky box. The trick here is to render different textures on the sky box for each eye, which is necessary to achieve a 3D effect. Cycle between calibration cube, stereo panorama, and both displayed simultaneously with the 'X' button.

Notes on Panorama Rendering

There are six cube map images in PPM format in the ZIP file. Each is 2k x 2k pixels in size. The files are named nx, ny and nz for the negative x, y and z axis images. The positive axis files are named px, py and pz. Here is a downsized picture of the panorama image:

[Image: Bear-thumb.jpg]

And this is how the cube map faces are labeled:

[Image: Bear-left-cubemap-labeled.jpg]
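When uploading the six faces, the file naming above maps directly onto OpenGL's cube map face targets, which are defined as six consecutive enums starting at GL_TEXTURE_CUBE_MAP_POSITIVE_X in the order +X, -X, +Y, -Y, +Z, -Z. A small helper (ours, not from the starter code) makes the mapping explicit:

```cpp
#include <cstring>

// Map the PPM base names px/nx/py/ny/pz/nz to a face index 0..5.
// The matching GL target is GL_TEXTURE_CUBE_MAP_POSITIVE_X + faceIndex(name),
// since OpenGL numbers the six face targets consecutively.
int faceIndex(const char* name) {
    const char* order[6] = { "px", "nx", "py", "ny", "pz", "nz" };
    for (int i = 0; i < 6; ++i)
        if (strcmp(name, order[i]) == 0) return i;
    return -1;  // unknown name
}
```

You would then loop over the six files and call glTexImage2D once per face with target GL_TEXTURE_CUBE_MAP_POSITIVE_X + faceIndex(name), passing the pixels returned by the PPM loader.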

The panorama was shot with camera lenses parallel to one another, so the resulting cube maps will need to be separated by a human eye distance when rendered, i.e., the two cube maps need to be horizontally offset from one another, one per eye. Support all of the functions you implemented for the calibration cube also for the sky box (A, B and X buttons, both thumb sticks).

Make the sky box very large - at least 20 meters wide, and centered around the user's head tracking location.
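Centering the sky box on the tracked head position just means building its model matrix each frame from the head position and the cube size. A minimal sketch, assuming OpenGL's column-major 4x4 matrix layout (the helper name is ours):

```cpp
// Build a column-major model matrix that scales a unit cube to `size`
// meters and translates it to the tracked head position, so the sky box
// stays centered on the user no matter where they move.
void skyboxModelMatrix(float m[16], float size, const float headPos[3]) {
    for (int i = 0; i < 16; ++i) m[i] = 0.0f;
    m[0] = m[5] = m[10] = size;   // uniform scale on the diagonal
    m[12] = headPos[0];           // translation lives in the last column
    m[13] = headPos[1];
    m[14] = headPos[2];
    m[15] = 1.0f;
}
```

Per frame, query the head pose, pass its position here with a size of at least 20 meters, and use the result as the sky box's model matrix for both eyes (with each eye sampling its own cube map texture).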

Extra Credit (up to 10 points)

  • Create your own monoscopic sky box: borrow the Samsung Gear 360 camera from the media lab, or use your cell phone's panorama function to capture a 360 degree panorama picture. Process it into cube maps - this on-line conversion tool may be useful to you. Texture the sky box with this texture. Make it an alternate option to the Bear image when the 'X' button is pressed. (5 points)
  • Create your own stereo sky box: this one is a lot more difficult than the mono option. Just shooting 360s from two eye locations isn't enough, because it will only give you stereo in front of and behind you. You need to take stereo pairs in many directions and stitch them together; at Qualcomm Institute we use PTGui for this, but there are other options. (10 points) You'll get 7 points if you manage to get stereo working in at least the forward and backward directions, which you can do by placing the Gear 360 camera in two locations an eye distance apart and using each as a separate cube map, like the Bear panorama. (7 points)

[check back later - more options may be added]