Project2S19

Levels of Immersion

In this project we are going to explore different levels of immersion with the Oculus Rift. Like in project 1, we're going to use the Oculus SDK, OpenGL and C++.

Starter Code

We recommend starting with the project 2 starter code that we provide. It renders a monoscopic sky box around the user, as well as two textured cubes in front of the user, one behind the other. Alternatively, you can start with your code from project 1 and add the relevant sections for the cubes and the sky box from the new starter code.

Project Description (100 Points)

You need to add the following features to the starter code:

  1. The sky box is currently monoscopic, which makes it look flat like a poster on a wall. To make it look 3D, render a different texture on the sky box for each of the user's eyes (Skybox textures for right eye). Cycle with the 'X' button between showing the entire scene (both cubes and the stereo sky box), just the sky box in stereo, and just the sky box in mono (i.e., one of the panorama images rendered to both eyes). A sketch of the per-eye texture selection follows this list. (15 points)
  2. Gradually vary the physical size of both cubes with the left thumb stick left/right. This means changing the size of the cubes from their initial 30cm to smaller or larger values. Pushing down on the thumb stick should reset the cubes to their initial size (2 points). Support a range from 0.01m to 0.5m. Make sure the cubes' center points do not move when their size changes, i.e., scale the cubes about their centers, as in the scaling sketch after this list (2 points). (10 points total)
  3. Cycle between the following five modes with the 'A' button: 3D stereo, mono (the same image rendered to both eyes), left eye only (right eye black), right eye only (left eye black), and inverted stereo (left eye image rendered to the right eye and vice versa). Regardless of which display mode is active, head tracking should keep working correctly, in whichever tracking mode is currently selected (see the next item). A sketch of the eye mapping follows this list. (10 points, 2 points per mode)
  4. Cycle between the following head tracking modes with the 'B' button: regular tracking (both position and orientation), orientation only (position frozen to what it was just before the mode was selected), position only (orientation frozen to what it just was), and no tracking (position and orientation frozen to what they were when the user pressed the button). See the tracking-mode sketch after this list. (20 points, 5 points per tracking mode)
  5. Gradually vary the interocular distance (IOD) with the right thumb stick left/right. Pushing down on the thumb stick should reset the IOD to the default. You'll have to learn how the Oculus SDK specifies the IOD: it doesn't use a single number for the separation distance; instead, each eye has a 3D offset from a central point. Find out what these offsets are in the default case, and modify only their horizontal components to change the IOD, leaving the other two components untouched (see the eye-offset sketch after this list). Support an IOD range from -0.1m to +0.3m. (15 points)
  6. Explore what a lag (i.e., time delay) in tracking (head and controllers) would look like. Start by rendering a sphere at your dominant hand's controller position, just like in project 1. Then, instead of rendering with the current camera matrix you get from the Oculus SDK, save it into a ring buffer with at least 30 entries and render with the camera matrix from however many frames ago the current lag setting specifies (see the ring-buffer sketch after this list). The default lag should be zero frames, but with every click of the right index trigger you add one frame of tracking lag. The left index trigger reduces the tracking lag by one frame. Display the number of frames of lag in the terminal window with a label, e.g., "Tracking lag: 0 frames". (20 points)
  7. Explore what it would look like if rendering a frame took more than 1/90th of a second (i.e., longer than one refresh interval of the Oculus Rift's 90 Hz display). The default is no additional delay, but when the user pulls the right middle finger trigger you add one frame of rendering lag: in every other frame you re-render the same image as in the frame before, without updating the camera matrix. Every right middle finger trigger pull should increase the rendering duration by one more frame; the left middle finger trigger should reduce the rendering duration by one frame. Display the duration of rendering (i.e., the number of frames a rendered image is repeated for) in the terminal window with a label such as "Rendering duration: 2 frames" (see the frame-repeat sketch after this list). (10 points)
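
For item 1, one possible way to organize the per-eye sky box texture choice is sketched below: keep two cubemap handles and pick one based on the eye being rendered and the current scene mode. This is only a sketch; SceneMode, skyboxTexLeft, skyboxTexRight, renderSkybox and renderCubes are hypothetical names, not part of the starter code, and GLM is assumed to be available as in project 1.

    #include <glm/glm.hpp>

    enum class SceneMode { FULL_SCENE, SKYBOX_STEREO, SKYBOX_MONO };
    SceneMode sceneMode = SceneMode::FULL_SCENE;

    unsigned int skyboxTexLeft;   // OpenGL cubemap texture id (GLuint) shown to the left eye
    unsigned int skyboxTexRight;  // OpenGL cubemap texture id (GLuint) shown to the right eye

    // Hypothetical helpers provided elsewhere in the application.
    void renderSkybox(unsigned int cubemap, const glm::mat4& projection, const glm::mat4& view);
    void renderCubes(const glm::mat4& projection, const glm::mat4& view);

    void onXButtonPressed() {
        // Advance to the next mode, wrapping back to the first.
        sceneMode = static_cast<SceneMode>((static_cast<int>(sceneMode) + 1) % 3);
    }

    void renderEye(int eye, const glm::mat4& projection, const glm::mat4& view) {
        // In mono mode both eyes see the same panorama; otherwise each eye
        // gets its own cubemap so the sky box appears 3D.
        unsigned int skyboxTex = (sceneMode == SceneMode::SKYBOX_MONO)
                                     ? skyboxTexLeft
                                     : (eye == 0 ? skyboxTexLeft : skyboxTexRight);
        renderSkybox(skyboxTex, projection, view);

        if (sceneMode == SceneMode::FULL_SCENE) {
            renderCubes(projection, view);  // the two textured cubes
        }
    }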
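
For item 2, the key is to build each cube's model matrix by translating to the cube's center first and scaling second, so the scaling happens about the center. The sketch below assumes GLM; cubeSize, cubeCenter and the 0.2 m/s scaling speed are hypothetical choices, not requirements.

    #include <glm/glm.hpp>
    #include <glm/gtc/matrix_transform.hpp>

    float cubeSize = 0.3f;                          // initial edge length: 30 cm
    const float kMinSize = 0.01f, kMaxSize = 0.5f;  // required size range

    // Called once per frame with the left thumb stick's horizontal deflection.
    void updateCubeSize(float thumbstickX, float deltaSeconds, bool thumbstickPressed) {
        if (thumbstickPressed) {
            cubeSize = 0.3f;                        // pushing down resets the size
            return;
        }
        cubeSize += thumbstickX * 0.2f * deltaSeconds;        // gradual change
        cubeSize = glm::clamp(cubeSize, kMinSize, kMaxSize);
    }

    glm::mat4 cubeModelMatrix(const glm::vec3& cubeCenter) {
        // Translate to the center first, then scale, so the cube grows and
        // shrinks about its own center rather than about the world origin.
        return glm::translate(glm::mat4(1.0f), cubeCenter) *
               glm::scale(glm::mat4(1.0f), glm::vec3(cubeSize));
    }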
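
For item 3, it can help to separate "which physical eye is being rendered" from "which eye's camera and image it should receive". A sketch of that mapping is below; DisplayMode and sourceEyeFor are hypothetical names, and the mode is cycled with the 'A' button the same way as in the item-1 sketch.

    enum class DisplayMode { STEREO, MONO, LEFT_ONLY, RIGHT_ONLY, INVERTED };
    DisplayMode displayMode = DisplayMode::STEREO;

    // For a physical eye (0 = left, 1 = right), return the eye whose camera
    // and image should be shown there, or -1 to render that eye black.
    int sourceEyeFor(int physicalEye) {
        switch (displayMode) {
            case DisplayMode::STEREO:     return physicalEye;
            case DisplayMode::MONO:       return 0;               // same image on both eyes
            case DisplayMode::LEFT_ONLY:  return physicalEye == 0 ? 0 : -1;
            case DisplayMode::RIGHT_ONLY: return physicalEye == 1 ? 1 : -1;
            case DisplayMode::INVERTED:   return 1 - physicalEye; // swap the two eyes
        }
        return physicalEye;
    }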
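
For item 4, one way to implement the freezing is to capture the head position and orientation at the moment the 'B' button switches modes, and then substitute the frozen components each frame. The sketch below assumes the head pose has already been decomposed into GLM types; all names are hypothetical.

    #include <glm/glm.hpp>
    #include <glm/gtc/quaternion.hpp>

    enum class TrackingMode { FULL, ORIENTATION_ONLY, POSITION_ONLY, NONE };
    TrackingMode trackingMode = TrackingMode::FULL;

    glm::vec3 frozenPosition;       // captured at the moment of the mode switch
    glm::quat frozenOrientation;

    void onBButtonPressed(const glm::vec3& currentPosition,
                          const glm::quat& currentOrientation) {
        trackingMode = static_cast<TrackingMode>((static_cast<int>(trackingMode) + 1) % 4);
        frozenPosition = currentPosition;        // freeze to what the pose just was
        frozenOrientation = currentOrientation;
    }

    // Called every frame with the freshly tracked head pose; overwrites the
    // components that are frozen in the current mode.
    void applyTrackingMode(glm::vec3& position, glm::quat& orientation) {
        switch (trackingMode) {
            case TrackingMode::FULL:             break;
            case TrackingMode::ORIENTATION_ONLY: position = frozenPosition; break;
            case TrackingMode::POSITION_ONLY:    orientation = frozenOrientation; break;
            case TrackingMode::NONE:             position = frozenPosition;
                                                 orientation = frozenOrientation; break;
        }
    }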
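
For item 5, the per-eye offsets come from ovr_GetRenderDesc. Note that the exact field name depends on the LibOVR version: older versions expose ovrEyeRenderDesc::HmdToEyeOffset as an ovrVector3f, while newer versions expose HmdToEyePose (an ovrPosef) whose Position member holds the offset. The sketch below assumes the HmdToEyeOffset variant; desiredIod is a hypothetical variable driven (and clamped to the required range) wherever the right thumb stick is read, and pushing the stick down would reset it to the SDK default.

    #include <OVR_CAPI.h>

    // Desired eye separation in meters; a negative value swaps the eyes horizontally.
    float desiredIod = 0.064f;   // overwritten with the SDK default at startup

    void getEyeOffsets(ovrSession session, const ovrHmdDesc& hmdDesc,
                       ovrVector3f outOffsets[2]) {
        for (int eye = 0; eye < 2; ++eye) {
            ovrEyeRenderDesc desc = ovr_GetRenderDesc(
                session, (ovrEyeType)eye, hmdDesc.DefaultEyeFov[eye]);
            outOffsets[eye] = desc.HmdToEyeOffset;   // SDK default per-eye offset
        }
        // Keep the midpoint and the y/z components untouched; only the
        // horizontal components change, so the separation becomes desiredIod.
        float mid = 0.5f * (outOffsets[0].x + outOffsets[1].x);
        outOffsets[0].x = mid - 0.5f * desiredIod;   // left eye
        outOffsets[1].x = mid + 0.5f * desiredIod;   // right eye
    }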
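
For item 6, a simple ring buffer of per-frame tracking data works: write the current frame's data, then read back the entry from however many frames ago the lag is set to. The sketch below stores the head and dominant-hand controller poses (storing the derived camera matrices works just as well); all names and the buffer size of 45 are hypothetical.

    #include <algorithm>
    #include <array>
    #include <iostream>
    #include <OVR_CAPI.h>

    struct TrackedFrame {
        ovrPosef headPose;   // HMD pose sampled this frame
        ovrPosef handPose;   // dominant-hand controller pose sampled this frame
    };

    constexpr int kBufferSize = 45;                 // at least 30 entries required
    std::array<TrackedFrame, kBufferSize> history;
    int writeIndex = 0;
    int trackingLagFrames = 0;                      // 0 = no lag (the default)

    // Called once per frame: stores the fresh poses and returns the delayed ones.
    TrackedFrame delayedTracking(const TrackedFrame& current) {
        history[writeIndex] = current;
        int readIndex = (writeIndex - trackingLagFrames + kBufferSize) % kBufferSize;
        writeIndex = (writeIndex + 1) % kBufferSize;
        return history[readIndex];
    }

    void onRightIndexTrigger() {   // the left index trigger does the opposite
        trackingLagFrames = std::min(trackingLagFrames + 1, kBufferSize - 1);
        std::cout << "Tracking lag: " << trackingLagFrames << " frames" << std::endl;
    }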
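
For item 7, only a little bookkeeping is needed to decide, each frame, whether to render a fresh image or to re-submit the previous one. How the previously rendered eye textures are re-submitted without being re-rendered depends on how the starter code manages its swap chains, so the sketch below only shows the counter logic; the names are hypothetical.

    #include <iostream>

    int renderingDuration = 1;     // frames each image is shown; 1 = no extra delay
    int framesUntilNewImage = 0;

    bool shouldRenderNewImage() {
        if (framesUntilNewImage == 0) {
            framesUntilNewImage = renderingDuration - 1;
            return true;           // render a fresh image this frame
        }
        --framesUntilNewImage;
        return false;              // re-submit the previous image, camera matrices unchanged
    }

    void onRightMiddleFingerTrigger() {   // the left trigger decrements (not below 1)
        ++renderingDuration;
        std::cout << "Rendering duration: " << renderingDuration << " frames" << std::endl;
    }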

Notes:

  • Cycling means that each time the respective button is pressed the viewing mode will change from one mode to the next, and eventually back to the first mode.
  • The view in the Rift always needs to look like the control window on the screen: the render texture should never shift off the display panels in the Rift.
  • In modes in which the sky box is rendered in mono, you still render the scene in stereo, but the texture on the sky box is the same for the left and right eye.

Extra Credit (up to 10 points)

There are four options for extra credit.

  1. Stereo Image Viewer: Take two regular, non-panoramic photos an eye distance (about 65mm) apart with a regular camera such as the one in your cell phone. Use the widest angle your camera can be set to, as close to a 90 degree field of view as you can get. Cut the edges off to make the images square and exactly the same size. Modify your texturing code for the cubes so that they support stereo images (showing a different image to each eye). Use your custom images as the textures and render them in stereo so that you see a 3D image on the cube. You may have to make the cube bigger to see a correct stereo image - the image should fill as much of your field of view as the camera's field of view covered when it took the pictures. (5 points)
  2. Custom Sky Box: Create your own (monoscopic) sky box: borrow a Samsung Gear 360 camera from the media lab, or use your cell phone's panorama function to capture a 360 degree panorama picture (or use Google's StreetView app, which is free for Android and iPhone). Process it into cube maps - this on-line conversion tool can do this for you. Texture the sky box with the resulting textures. Note you'll have to download each of the six textures separately. Make it an alternate option to the Bear image when the 'X' button is pressed. (5 points)
  3. Super-Rotation: Modify the regular orientation tracking so that it exaggerates horizontal head rotations by a factor of two. This means that, starting from the user's head facing straight forward, any rotation to the left or right (i.e., heading) is multiplied by two, and this exaggerated head orientation is used to render the image. Do not modify pitch or roll. In this mode the user will be able to look behind them by rotating their head just 90 degrees to either side. Get this mode to work with your sky box and calibration cubes, with tracking fully on and correct stereo rendering (a sketch of doubling the heading follows this list). This publication gives more information about this technique. (5 points)
  4. Smoother controller tracking: Render a sphere at your dominant hand's controller position, just like in homework #1. Move your hand around and notice how the sphere follows it. Now push a previously unused controller button to enter 'Smoothing Mode'. In this mode, calculate the moving average over the past n frames' positional tracking values (we won't average over orientation here) and use this averaged position to render the sphere (a moving-average sketch also follows this list). Allow the user to change the number of frames (n) you're averaging over within a range of 1 to 45 in increments of 1. Notice how with larger values of n the tracking gets smoother, but there's also more lag between controller motion and the motion of the sphere. (5 points)
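
For the super-rotation option, one robust way to exaggerate only the heading is to measure the current yaw from the tracked orientation's forward vector and apply that same yaw once more on top of the original orientation, which doubles the heading while leaving pitch and roll intact. The sketch below assumes GLM quaternions; superRotate is a hypothetical name, and the heading is undefined when the user looks straight up or down.

    #include <cmath>
    #include <glm/glm.hpp>
    #include <glm/gtc/quaternion.hpp>

    glm::quat superRotate(const glm::quat& headOrientation) {
        // Heading (yaw) of the current view direction around the world up axis.
        glm::vec3 forward = headOrientation * glm::vec3(0.0f, 0.0f, -1.0f);
        float yaw = atan2f(-forward.x, -forward.z);
        // Applying the same yaw once more in world space doubles the heading
        // while preserving pitch and roll.
        glm::quat extraYaw = glm::angleAxis(yaw, glm::vec3(0.0f, 1.0f, 0.0f));
        return extraYaw * headOrientation;
    }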
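
For the smoothing option, a straightforward moving average over the last n controller positions is enough. The sketch below assumes GLM; smoothingWindow is a hypothetical name for n and is adjusted elsewhere within the required range of 1 to 45.

    #include <deque>
    #include <glm/glm.hpp>

    std::deque<glm::vec3> positionHistory;   // most recent controller positions
    int smoothingWindow = 1;                 // n, adjustable from 1 to 45

    // Called once per frame with the tracked controller position; returns the
    // position to render the sphere at while Smoothing Mode is active.
    glm::vec3 smoothedPosition(const glm::vec3& currentPosition) {
        positionHistory.push_back(currentPosition);
        while ((int)positionHistory.size() > smoothingWindow) {
            positionHistory.pop_front();     // keep only the last n samples
        }
        glm::vec3 sum(0.0f);
        for (const glm::vec3& p : positionHistory) {
            sum += p;
        }
        return sum / (float)positionHistory.size();
    }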