Project2S20

Levels of Immersion

THIS PROJECT IS UNDER CONSTRUCTION. DO NOT START YET.

In this project we are going to explore different levels of immersion with your VR headset. We are again using Unity with C#, as in project 1.

We recommend starting with your code from project 1 and adding the relevant sections for the cubes and the skybox.

Milestones

  • Week 1: skybox and control buttons
  • Week 2:
  • Week 3:

Project Description (100 Points)

You need to add the following features to your project:

  1. 3D skybox: The skybox is currently monoscopic, which makes it look flat, like a poster on a wall. To make it look 3D, render a different texture on the skybox for each of the user's eyes (Skybox textures). Cycle with the 'X' button between showing the entire scene in stereo (cube and skybox), just the skybox in stereo, and just the skybox in mono (i.e., one of the panorama images is rendered to both eyes); a sketch of this button-cycling pattern follows the list. (15 points)
  2. Rendering scale: Gradually vary the physical size of the cube with the left thumbstick's right/left axis. This means increasing or decreasing the size of the cube. Pushing down on the thumbstick should reset the cube to its initial size (2 points). Support at least a range from 0.01m to 0.5m. Make sure the cube's center point does not move when its size changes, i.e., scale the cube about its center (2 points); see the scaling sketch after the list. (10 points total)
  3. Stereo modes: Cycle between the following five modes with the 'A' button: 3D stereo, mono (the same image rendered to both eyes), left eye only (right eye black), right eye only (left eye black), and inverted stereo (left eye image rendered to the right eye and vice versa). Regardless of which stereo mode is active, head tracking should keep working correctly, according to whichever tracking mode (described below) is selected. (10 points, 2 points per mode)
  4. Head tracking: Cycle between different head tracking modes with the 'B' button: regular tracking (both position and orientation), orientation only (position frozen), position only (orientation frozen), and no tracking (both frozen). (20 points, 5 points per tracking mode)
  5. Variable IOD: Gradually vary the interocular distance (IOD) with the right thumbstick's left/right axis. Pushing down on the thumbstick should reset the IOD to its default. The SDK doesn't use just one number for the eye separation: each eye has a 3D offset from a central point. Find out what these offsets are in the default case, and modify only their horizontal components to change the IOD, leaving the other two components untouched (see the IOD sketch after the list). Support an IOD range from -0.1m to +0.3m. (15 points)
  6. Tracking lag: Explore what a lag (i.e., a time delay) in controller tracking would look like. Instead of using the current controller pose to render your cursor/reticle, save the pose each frame into a ring buffer (or similar data structure) with at least 20 entries, and render the cursor with the pose from as many frames ago as the current lag setting; see the ring-buffer sketch after the list. The default lag should be zero frames; every click of the right index trigger adds one frame of tracking lag, and the left index trigger reduces the lag by one frame. Display the number of frames of lag in the terminal window with a label, e.g., "Tracking lag: 0 frames". (20 points)
  7. Rendering lag: Explore what it would look like if rendering a frame took longer than the time allotted for it (e.g., 1/60th of a second at 60 fps on smartphones). The default is no additional delay, but every pull of the right middle finger trigger adds one frame of rendering lag: the previously rendered image is repeated, and rendering of updated images is skipped, for that many frames. The left middle finger trigger should reduce the rendering delay by one frame. Display the rendering delay (i.e., the number of frames a rendered image is repeated for) in the terminal window with a label such as "Rendering delay: 2 frames". Allow for up to 10 frames of delay. (10 points)
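All three cycling requirements (items 1, 3 and 4) follow the same pattern: one button press advances to the next mode and eventually wraps around. Below is a minimal sketch, assuming the Oculus Integration's OVRInput mapping (Button.Three = 'X', Button.One = 'A', Button.Two = 'B'); the enums are illustrative, and actually applying each mode to your cameras is up to you.

```csharp
using UnityEngine;

public class ModeCycler : MonoBehaviour
{
    public enum SkyboxMode   { FullStereo, SkyboxStereo, SkyboxMono }
    public enum StereoMode   { Stereo, Mono, LeftOnly, RightOnly, Inverted }
    public enum TrackingMode { Full, OrientationOnly, PositionOnly, None }

    public SkyboxMode   skyboxMode;
    public StereoMode   stereoMode;
    public TrackingMode trackingMode;

    void Update()
    {
        // GetDown fires once per press, so each click advances exactly one mode.
        if (OVRInput.GetDown(OVRInput.Button.Three)) // 'X'
            skyboxMode = Next(skyboxMode);
        if (OVRInput.GetDown(OVRInput.Button.One))   // 'A'
            stereoMode = Next(stereoMode);
        if (OVRInput.GetDown(OVRInput.Button.Two))   // 'B'
            trackingMode = Next(trackingMode);
    }

    // Advance to the next enum value, wrapping back to the first.
    static T Next<T>(T value) where T : System.Enum
    {
        T[] values = (T[])System.Enum.GetValues(typeof(T));
        int next = (System.Array.IndexOf(values, value) + 1) % values.Length;
        return values[next];
    }
}
```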
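For item 2, a sketch of thumbstick-driven scaling, again assuming OVRInput. It relies on Unity's default cube having its pivot at its center, so changing localScale scales the cube about its center; the speed constant is arbitrary.

```csharp
using UnityEngine;

public class CubeScaler : MonoBehaviour
{
    const float MinSize = 0.01f, MaxSize = 0.5f; // meters, per the spec
    const float Speed = 0.2f;                    // arbitrary scaling speed, m/s
    Vector3 initialScale;

    void Start() { initialScale = transform.localScale; }

    void Update()
    {
        // Left thumbstick x axis grows/shrinks the cube gradually.
        float x = OVRInput.Get(OVRInput.Axis2D.PrimaryThumbstick).x;
        float size = Mathf.Clamp(transform.localScale.x + x * Speed * Time.deltaTime,
                                 MinSize, MaxSize);
        transform.localScale = Vector3.one * size;

        // Clicking the left stick resets the cube to its initial size.
        if (OVRInput.GetDown(OVRInput.Button.PrimaryThumbstick))
            transform.localScale = initialScale;
    }
}
```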
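For item 5, one possible approach, assuming an OVRCameraRig whose leftEyeAnchor and rightEyeAnchor transforms carry the per-eye 3D offsets. Since the rig re-poses the eye anchors from tracking every frame, the sketch applies its override in the rig's UpdatedAnchors callback; check how your version of the Oculus Integration handles this.

```csharp
using UnityEngine;

public class IodController : MonoBehaviour
{
    public OVRCameraRig rig;       // assign in the Inspector
    float defaultIod, iod;

    void Start()
    {
        defaultIod = iod = rig.rightEyeAnchor.localPosition.x
                         - rig.leftEyeAnchor.localPosition.x;
        // The rig overwrites the eye anchors from tracking each frame,
        // so apply our IOD override after it has updated them.
        rig.UpdatedAnchors += ApplyIod;
    }

    void Update()
    {
        // Right thumbstick x axis varies the IOD; stick click resets it.
        float x = OVRInput.Get(OVRInput.Axis2D.SecondaryThumbstick).x;
        iod = Mathf.Clamp(iod + x * 0.05f * Time.deltaTime, -0.1f, 0.3f);
        if (OVRInput.GetDown(OVRInput.Button.SecondaryThumbstick))
            iod = defaultIod;
    }

    void ApplyIod(OVRCameraRig r)
    {
        Vector3 l = r.leftEyeAnchor.localPosition;
        Vector3 rt = r.rightEyeAnchor.localPosition;
        float center = (l.x + rt.x) * 0.5f;  // keep the midpoint fixed
        l.x  = center - iod * 0.5f;          // only the horizontal components
        rt.x = center + iod * 0.5f;          // change; y and z stay untouched
        r.leftEyeAnchor.localPosition  = l;
        r.rightEyeAnchor.localPosition = rt;
    }
}
```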
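For item 6, a sketch of the ring buffer, assuming OVRInput for the trigger clicks. trackedController is the live controller transform, and cursor is the reticle that is actually rendered, using the pose from lagFrames frames ago.

```csharp
using UnityEngine;

public class TrackingLag : MonoBehaviour
{
    const int Capacity = 64;                 // >= 20 entries, per the spec
    readonly Vector3[] positions    = new Vector3[Capacity];
    readonly Quaternion[] rotations = new Quaternion[Capacity];
    int head;                                // index of the newest entry
    public int lagFrames;                    // 0 = no lag (the default)
    public Transform trackedController;      // live controller transform
    public Transform cursor;                 // the reticle that is rendered

    void Start()
    {
        // Pre-fill the buffer so early frames with lag have valid poses.
        for (int i = 0; i < Capacity; i++)
        {
            positions[i] = trackedController.position;
            rotations[i] = trackedController.rotation;
        }
    }

    void Update()
    {
        // Right index trigger adds a frame of lag, left one removes it.
        if (OVRInput.GetDown(OVRInput.Button.SecondaryIndexTrigger))
        {
            lagFrames = Mathf.Min(lagFrames + 1, Capacity - 1);
            Debug.Log("Tracking lag: " + lagFrames + " frames");
        }
        if (OVRInput.GetDown(OVRInput.Button.PrimaryIndexTrigger))
        {
            lagFrames = Mathf.Max(lagFrames - 1, 0);
            Debug.Log("Tracking lag: " + lagFrames + " frames");
        }

        // Record the current pose at the head of the ring buffer.
        head = (head + 1) % Capacity;
        positions[head] = trackedController.position;
        rotations[head] = trackedController.rotation;

        // Render the cursor with the pose from lagFrames frames ago.
        int delayed = (head - lagFrames + Capacity) % Capacity;
        cursor.SetPositionAndRotation(positions[delayed], rotations[delayed]);
    }
}
```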

Tips for Rendering the Skybox

There are six cube map images in JPG format in the ZIP file. Each is 2k x 2k pixels in size. The files are named nx, ny and nz for the negative x, y and z axis images. The positive axis files are named px, py and pz. Here is a downsized picture of the panorama image:

[Image: Bear-thumb.jpg]

And this is how the cube map faces are labeled:

[Image: Bear-left-cubemap-labeled.jpg]

The panorama was shot with the camera lenses parallel to one another, so the resulting cube maps need to be separated by a human eye distance when rendered, i.e., the two skyboxes' physical placements need to be horizontally offset from each other, one for each eye. One possible setup is sketched below.
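One way to show each eye its own skybox in Unity is to build two textured skybox cubes, put them on separate layers (e.g., LeftEye and RightEye, which you would create in the project settings), offset them horizontally by the eye distance as described above, and render each eye with its own camera that culls the other eye's layer. A minimal sketch:

```csharp
using UnityEngine;

public class StereoSkyboxSetup : MonoBehaviour
{
    public Camera leftCam, rightCam;  // one camera per eye

    void Start()
    {
        // Each camera renders to one eye only.
        leftCam.stereoTargetEye  = StereoTargetEyeMask.Left;
        rightCam.stereoTargetEye = StereoTargetEyeMask.Right;

        // Hide the right eye's skybox cube from the left camera, and vice versa.
        leftCam.cullingMask  &= ~(1 << LayerMask.NameToLayer("RightEye"));
        rightCam.cullingMask &= ~(1 << LayerMask.NameToLayer("LeftEye"));
    }
}
```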

Notes:

  • Cycling means that each time the respective button is pressed the viewing mode will change from one mode to the next, and eventually back to the first mode.
  • In modes in which the skybox is rendered in mono, you are still rendering the scene in stereo, but the texture on the skybox is the same for the left and right eye.
  • Here is a great tutorial on 3D skyboxes in Unity

Extra Credit (up to 10 points)

There are three options for extra credit, for a total of 10 points maximum.

  1. Viewmaster Simulator: Take two regular, non-panoramic photos from an eye distance apart (about 65mm) with a regular camera such as the one in your cell phone. Use the widest angle your camera can be set to, as close to a 90-degree field of view as you can get. Cut the edges off to make the images square and exactly the same size. Use your custom images as the textures on a rectangle for each eye. You may have to shift one of the images left or right to see a correct stereo image that doesn't hurt your eyes. (5 points)
  2. Custom Skybox: Create your own (monoscopic) skybox: use your cell phone's panorama function to capture a 360-degree panorama picture (or use Google's StreetView app, which is free for Android and iPhone). Process it into cube maps; this online conversion tool can do it for you. Texture the skybox with the resulting textures; note that you'll have to download each of the six textures separately. Make it an alternate option to the Bear image when the 'X' button is pressed. (5 points)
  3. Super-Rotation: Modify the regular orientation tracking so that it exaggerates horizontal head rotations by a factor of two. This means that, starting from the user's head facing straight forward, any rotation to the left or right (i.e., heading) is multiplied by two, and this new head orientation is used to render the image. Do not modify pitch or roll. In this mode the user will be able to look behind them by rotating their head just 90 degrees to either side. Get this mode to work with your skybox and calibration cubes, with tracking fully on and correct stereo rendering; a sketch follows below. This publication gives more information about this technique. (5 points)
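A minimal sketch of the super-rotation idea: yaw the whole player rig by the head's own tracked heading, so the rendered heading doubles while pitch and roll pass through unchanged. rig and centerEye are assumptions for your tracking-space root and tracked head transform; reading localEulerAngles.y is a yaw approximation that works while pitch and roll are small.

```csharp
using UnityEngine;

public class SuperRotation : MonoBehaviour
{
    public Transform rig;        // tracking-space root the camera sits under
    public Transform centerEye;  // tracked head transform (child of the rig)
    Quaternion baseRotation;

    void Start() { baseRotation = rig.rotation; }

    void LateUpdate()
    {
        // The head's tracked heading as yaw only (relative to the rig,
        // assuming the tracking space itself is not rotated).
        float yaw = centerEye.localEulerAngles.y;
        if (yaw > 180f) yaw -= 360f;  // map [0, 360) to (-180, 180]

        // Yawing the rig by the same amount doubles the effective heading
        // (head yaw + rig yaw). Note this rotates about the rig's origin.
        rig.rotation = baseRotation * Quaternion.Euler(0f, yaw, 0f);
    }
}
```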