Project2S17


Levels of Immersion

In this project we are going to explore different levels of immersion we can create with the Oculus Rift. As in project 1, we will use the Oculus SDK, OpenGL and C++.

Starter Code

We recommend starting either with your project 1 code, or going back to the Minimal Example you started your project 1 with. It is very compact, yet it uses most of the API functions you will need.

Here is the minimal example .cpp file with comments on where the hooks are for the various functions required for this homework project. Note that this file has a main() function, so it can be built as a console application and will start up with a console window into which debug messages can be printed. In Visual Studio, you need to enable console mode under the project's properties: Configuration Properties -> Linker -> System -> SubSystem -> Console.

Project Description (100 Points)

You need to do the following things:

  • Modify the minimal example to render a single cube with this calibration image on all of its faces. The image needs to be right side up on the vertical faces. The cube should be 20cm wide and its closest face should be about 30cm in front of the user. Here is source code to load the PPM file into memory so that you can apply it as a texture (see the texture upload sketch after this list).
  • Cycle between 3D stereo, mono (both eyes see an identical view), left eye only, and right eye only with the 'A' button.
  • Cycle between head tracking modes with the 'B' button: no tracking, orientation only, position only, both. When you turn off one or both of the tracking types, freeze the last measured values (rather than falling back to hard-coded defaults) so that the image won't jump when tracking is frozen (see the pose-freezing sketch after this list).
  • Gradually vary the interocular distance (IOD) with the right thumb stick (left/right). For this you have to learn how the Oculus SDK specifies the IOD: it does not use a single number for the separation distance; instead, each eye has a 3D offset from a central point. Find out what these offsets are in the default case and modify only their horizontal components to change the IOD (see the IOD sketch after this list). Support an IOD range from zero to 100cm.
  • Gradually vary the physical size of the cube with the left thumb stick (left/right). This means changing the width of the cube from its default of 20cm to smaller or larger values. Support a range from 1cm to 100cm. Compare the effect with that of changing the IOD (observe it and tell us about it during grading).
  • Render this stereo panorama image around the user as a sky box. The trick here is to render different textures on the sky box for each eye, which is necessary to achieve a 3D effect. Switch between cube and stereo panorama with the 'X' button.
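
The snippet below is a minimal sketch of how the pixels returned by the provided PPM loader could be uploaded as an OpenGL texture for the cube faces. It assumes a loader that returns tightly packed 8-bit RGB data through a signature like the one declared here; adjust the declaration to whatever the actual loader in the starter code uses.

 // Sketch: turn the loaded PPM pixels into an OpenGL texture for the calibration cube.
 #include <GL/glew.h>
 
 // Assumed signature of the provided PPM loader (adjust to the real starter code).
 unsigned char* loadPPM(const char* filename, int& width, int& height);
 
 GLuint createCalibrationTexture(const char* filename)
 {
     int width = 0, height = 0;
     unsigned char* pixels = loadPPM(filename, width, height); // 8-bit RGB
 
     GLuint texId;
     glGenTextures(1, &texId);
     glBindTexture(GL_TEXTURE_2D, texId);
     glPixelStorei(GL_UNPACK_ALIGNMENT, 1); // rows are tightly packed
     glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, width, height, 0,
                  GL_RGB, GL_UNSIGNED_BYTE, pixels);
     glGenerateMipmap(GL_TEXTURE_2D);
     glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);
     glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
 
     delete[] pixels;
     return texId;
 }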
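For the 'B' button behavior, one way to keep the image from jumping is to keep a copy of the head pose and only update the components that are currently being tracked. The sketch below assumes the ovrPosef struct returned by the SDK's head tracking query; TrackingMode and the helper name are made up for illustration.

 // Sketch: freeze orientation and/or position when the corresponding tracking mode is off.
 #include <OVR_CAPI.h>
 
 enum class TrackingMode { None, OrientationOnly, PositionOnly, Full }; // hypothetical
 
 ovrPosef applyTrackingMode(const ovrPosef& livePose, ovrPosef& frozenPose, TrackingMode mode)
 {
     // Copy only the currently tracked components into the stored pose;
     // untracked components keep their last measured values.
     if (mode == TrackingMode::Full || mode == TrackingMode::OrientationOnly)
         frozenPose.Orientation = livePose.Orientation;
     if (mode == TrackingMode::Full || mode == TrackingMode::PositionOnly)
         frozenPose.Position = livePose.Position;
     return frozenPose; // use this pose for rendering
 }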
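For the IOD requirement: in the 1.x Oculus SDK, each eye is described by a 3D offset from the head center (the HmdToEyeOffset field of ovrEyeRenderDesc; newer SDK versions renamed it to HmdToEyePose). Below is a possible sketch, assuming the session, hmdDesc and frameIndex variables from the minimal example; treat it as an outline, not the definitive implementation.

 // Sketch: query the default per-eye offsets, override only their horizontal
 // components to get a custom IOD, then ask the SDK for the eye poses.
 #include <OVR_CAPI.h>
 
 void getEyePosesWithIOD(ovrSession session, const ovrHmdDesc& hmdDesc,
                         long long frameIndex, float iodMeters,
                         ovrPosef outEyePoses[2], double* outSampleTime)
 {
     ovrVector3f offsets[2];
     for (int eye = 0; eye < 2; ++eye)
     {
         ovrEyeRenderDesc desc = ovr_GetRenderDesc(session, (ovrEyeType)eye,
                                                   hmdDesc.DefaultEyeFov[eye]);
         offsets[eye] = desc.HmdToEyeOffset;  // default 3D offset for this eye
         // Replace only the horizontal component: half the IOD to each side.
         offsets[eye].x = (eye == ovrEye_Left ? -0.5f : 0.5f) * iodMeters;
     }
     ovr_GetEyePoses(session, frameIndex, ovrTrue, offsets, outEyePoses, outSampleTime);
 }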

Notes on Panorama Rendering

There are six cube map images in PPM format in the ZIP file. Each is 2k x 2k pixels in size. The files are named nx, ny and nz for the negative x, y and z axis images. The positive axis files are named px, py and pz. Here is a downsized picture of the panorama image:

[Image: Bear-thumb.jpg]

And this is how the cube map faces are labeled:

[Image: Bear-left-cubemap-labeled.jpg]

The panorama was shot with the camera lenses parallel to one another, so the two resulting cube maps need to be separated by a human eye distance when rendered, i.e., each eye's sky box must be horizontally offset to that eye's position (a sketch of loading and assigning the cube maps follows below). Support all of the functions you implemented for the calibration cube for the sky box as well (A, B and X buttons, both thumb sticks).
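
Here is a minimal sketch of loading one eye's six faces into an OpenGL cube map texture, again assuming the same hypothetical loadPPM signature as above and the nx/ny/nz/px/py/pz naming from the ZIP file (the exact file extensions and face orientations may need adjusting for your data).

 // Sketch: load six PPM faces into a GL_TEXTURE_CUBE_MAP texture.
 #include <GL/glew.h>
 #include <string>
 
 unsigned char* loadPPM(const char* filename, int& width, int& height); // assumed loader
 
 GLuint loadCubeMap(const std::string& dir)
 {
     // Order must match GL_TEXTURE_CUBE_MAP_POSITIVE_X .. NEGATIVE_Z.
     const char* faces[6] = { "px.ppm", "nx.ppm", "py.ppm", "ny.ppm", "pz.ppm", "nz.ppm" };
 
     GLuint texId;
     glGenTextures(1, &texId);
     glBindTexture(GL_TEXTURE_CUBE_MAP, texId);
     glPixelStorei(GL_UNPACK_ALIGNMENT, 1);
     for (int i = 0; i < 6; ++i)
     {
         int w = 0, h = 0;
         unsigned char* pixels = loadPPM((dir + "/" + faces[i]).c_str(), w, h);
         glTexImage2D(GL_TEXTURE_CUBE_MAP_POSITIVE_X + i, 0, GL_RGB, w, h, 0,
                      GL_RGB, GL_UNSIGNED_BYTE, pixels);
         delete[] pixels;
     }
     glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
     glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
     glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
     glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
     glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_WRAP_R, GL_CLAMP_TO_EDGE);
     return texId;
 }

At render time, bind the left-eye cube map while drawing the left eye and the right-eye cube map while drawing the right eye, and translate each sky box by that eye's horizontal offset so the two panoramas end up separated by the IOD.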

Extra Credit (up to 10 points)

  • Create your own monoscopic sky box: borrow the Samsung Gear 360 camera from the media lab, or use your cell phone's panorama function to capture a 360 degree panorama picture. Process it into cube maps - this on-line conversion tool may be useful to you. Texture the sky box with it. Make it the third option when you click the 'X' button (besides the calibration cube and the bear panorama). (5 points)
  • Create your own stereo sky box: this one is a lot more difficult than the mono option. Just shooting 360 panoramas from two eye locations isn't enough, because it will only give you stereo in front of and behind you. So you need to take stereo pairs in many directions and stitch them together. At the Qualcomm Institute we use PT GUI for this, but there are other options. (10 points) You'll get 7 points if you manage to get stereo working in at least the forward and backward directions, which you can do by placing the Gear 360 camera in two locations an eye distance apart and using each capture as a separate cube map, like the bear panorama.

[check back later - more options may be added]