Levels of Immersion

In this project we are going to explore different levels of immersion with the Oculus Rift. Like in project 1, we're going to use the Oculus SDK, OpenGL and C++.

Starter Code

We recommend starting either with your project 1 code or going back to the starter code. It is very compact, yet it uses most of the API functions you will need.

Here is the minimal example .cpp file (starter-project2.cpp) with comments on where the hooks are for the various functions required for this homework project. Note that this file has a main() function, so it can be built as a console application and will start up with a console window into which debug messages can be printed.

Note:

  • The minimal example code uses the OGLplus libraries to render the cubes, but you are expected to replace the OGLplus-based rendering with your own rendering code (see the sketch below).
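For orientation, here is a minimal sketch of what replacing the OGLplus cube rendering with plain OpenGL could look like. The class name TexturedCube, the attribute locations and the single spelled-out face are illustrative assumptions, not part of the starter code; the remaining five faces follow the same pattern, and the shader and GLEW setup are your own.

```cpp
// Minimal sketch (not the official solution): a cube drawn with raw OpenGL
// instead of OGLplus. Attribute locations 0 (position) and 1 (texcoord) are
// assumptions that must match your own shader.
#include <GL/glew.h>
#include <vector>

class TexturedCube {
public:
    void init() {
        // One face (front, +z); the other five faces are built the same way.
        // Each vertex is x, y, z, u, v on a unit cube centered at the origin.
        std::vector<GLfloat> verts = {
            -0.5f, -0.5f, 0.5f, 0.f, 0.f,
             0.5f, -0.5f, 0.5f, 1.f, 0.f,
             0.5f,  0.5f, 0.5f, 1.f, 1.f,
            -0.5f, -0.5f, 0.5f, 0.f, 0.f,
             0.5f,  0.5f, 0.5f, 1.f, 1.f,
            -0.5f,  0.5f, 0.5f, 0.f, 1.f,
            // ... repeat for the remaining five faces ...
        };
        vertexCount = (GLsizei)(verts.size() / 5);

        glGenVertexArrays(1, &vao);
        glGenBuffers(1, &vbo);
        glBindVertexArray(vao);
        glBindBuffer(GL_ARRAY_BUFFER, vbo);
        glBufferData(GL_ARRAY_BUFFER, verts.size() * sizeof(GLfloat), verts.data(), GL_STATIC_DRAW);
        glEnableVertexAttribArray(0);   // position
        glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 5 * sizeof(GLfloat), (void*)0);
        glEnableVertexAttribArray(1);   // texture coordinate
        glVertexAttribPointer(1, 2, GL_FLOAT, GL_FALSE, 5 * sizeof(GLfloat), (void*)(3 * sizeof(GLfloat)));
        glBindVertexArray(0);
    }

    void draw(GLuint textureId) {
        glBindTexture(GL_TEXTURE_2D, textureId);
        glBindVertexArray(vao);
        glDrawArrays(GL_TRIANGLES, 0, vertexCount);
        glBindVertexArray(0);
    }

private:
    GLuint vao = 0, vbo = 0;
    GLsizei vertexCount = 0;
};
```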

Project Description (100 Points)

You need to do the following things:

  1. Modify the starter code (in which many cubes are rendered) to render just two cubes, one behind another along the user's line of sight. (10 points)
  2. Use texture mapping to display this calibration image (vr_test_pattern.zip) on all faces of both cubes. The image needs to be right side up on the vertical faces, and make sure it is not mirrored. Each cube should be 20cm wide and its closest face should be about 30cm in front of the user. Source code to load the PPM file into memory (loadppm.txt) is provided so that you can apply it as a texture; sample code for texture loading can also be found in the OpenVR OpenGL example's (Hellovr_opengl_main.cpp) function SetupTexturemaps(). A texture-loading sketch is given after this list. (20 points)
  3. Render this stereo panorama image (http://web.eng.ucsd.edu/~jschulze/tmp/bear-stereo-cubemaps.zip) around the user as a 10m wide sky box with the user at its center. The trick here is to render a different texture on the sky box for each eye, which is necessary to achieve a 3D effect. Cycle with the 'X' button between showing the entire scene (both cubes and the stereo sky box), just the sky box in stereo, and just the sky box in mono (i.e., one of the panorama images is rendered to both eyes). (15 points)
  4. Gradually vary the physical size of both cubes with the left thumb stick left/right. This means changing the cubes' edge length from the initial 20cm (see step 2) to smaller or larger values. Pushing down on the thumb stick should reset the cubes to their initial sizes (2 points). Support a range from 0.01m to 0.5m. Make sure the cubes' center points do not move when their size changes (i.e., scale the cubes about their centers) (2 points). A thumb stick and scaling sketch follows this list. (10 points total)
  5. Cycle between the following five modes with the 'A' button: 3D stereo, mono (the same image rendered to both eyes), left eye only (right eye black), right eye only (left eye black), and inverted stereo (left eye image rendered to the right eye and vice versa). Regardless of which rendering mode is active, head tracking should keep working according to the tracking mode described below (see the mode-cycling sketch after this list). (10 points, 2 points per mode)
  6. Cycle between different head tracking modes with the 'B' button: regular tracking (both position and orientation), orientation only (position frozen to what it was just before the mode was selected), position only (orientation frozen to what it just was), and no tracking (position and orientation frozen to what they were when the user pressed the button). The cycling sketch after this list also covers these tracking modes. (20 points, 5 points per tracking mode)
  7. Gradually vary the interocular distance (IOD) with the right thumb stick left/right. Pushing down on the thumb stick should reset the IOD to the default. You'll have to learn how the Oculus SDK specifies the IOD: it does not use a single separation value; instead, each eye has a 3D offset from a central point. Find out what these offsets are in the default case and modify only their horizontal components to change the IOD, leaving the other two components untouched. Support an IOD range from -0.1m to +0.3m (a sketch of the offset handling follows this list). (15 points)
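For step 2, here is a minimal sketch of turning a loaded RGB image into an OpenGL 2D texture. loadPPM() is a placeholder for the loader provided with the assignment (loadppm.txt) or your own PPM reader; the filter and mipmap choices are assumptions, not the required solution.

```cpp
// Sketch: upload an RGB image (e.g. the calibration pattern loaded from a PPM
// file) into an OpenGL 2D texture. loadPPM() is a placeholder for the provided
// loader; it is assumed to return a tightly packed RGB buffer (3 bytes/pixel).
#include <GL/glew.h>

unsigned char* loadPPM(const char* filename, int& width, int& height);  // placeholder loader

GLuint createTextureFromPPM(const char* filename) {
    int width = 0, height = 0;
    unsigned char* pixels = loadPPM(filename, width, height);
    if (!pixels) return 0;

    GLuint tex = 0;
    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_2D, tex);
    // PPM rows are tightly packed, so disable the default 4-byte row alignment.
    glPixelStorei(GL_UNPACK_ALIGNMENT, 1);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, width, height, 0,
                 GL_RGB, GL_UNSIGNED_BYTE, pixels);
    glGenerateMipmap(GL_TEXTURE_2D);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
    glBindTexture(GL_TEXTURE_2D, 0);

    delete[] pixels;   // assuming the loader allocates with new[]
    return tex;
}
```

Keep in mind that PPM files store rows top to bottom while OpenGL's texture coordinate origin is at the bottom, so you may need to flip the rows or the texture coordinates to keep the calibration image right side up and unmirrored.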
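For step 4, a sketch of reading the left thumb stick with the Oculus SDK and scaling the cubes about their centers. Names such as cubeSize, cubeCenter and the scaling speed are assumptions; only ovr_GetInputState(), the Thumbstick array and the ovrButton_LThumb flag come from the Oculus SDK.

```cpp
// Sketch: left thumb stick grows/shrinks the cubes, clicking it resets the size.
// Scaling about the cube's own center keeps the center point fixed.
#include <OVR_CAPI.h>
#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>

const float kMinSize = 0.01f, kMaxSize = 0.5f, kDefaultSize = 0.2f;  // meters
float cubeSize = kDefaultSize;

void updateCubeSize(ovrSession session, double deltaSeconds) {
    ovrInputState input;
    if (OVR_SUCCESS(ovr_GetInputState(session, ovrControllerType_Touch, &input))) {
        // Left/right deflection of the left thumb stick changes the cube size.
        float stickX = input.Thumbstick[ovrHand_Left].x;
        cubeSize += stickX * 0.2f * (float)deltaSeconds;   // 0.2 m/s is an arbitrary speed
        cubeSize = glm::clamp(cubeSize, kMinSize, kMaxSize);
        // Pressing down on the left thumb stick resets to the initial size.
        if (input.Buttons & ovrButton_LThumb) cubeSize = kDefaultSize;
    }
}

// Model matrix for a unit cube: translate to the cube's fixed center, then
// scale, so the cube grows and shrinks about its own center.
glm::mat4 cubeModelMatrix(const glm::vec3& cubeCenter) {
    return glm::translate(glm::mat4(1.0f), cubeCenter) *
           glm::scale(glm::mat4(1.0f), glm::vec3(cubeSize));
}
```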
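For steps 5 and 6, a sketch of the cycling logic. Edge detection (reacting only to the released-to-pressed transition) matters so that one button press advances the mode exactly once per press; the enum names and how the modes are consumed by the renderer are assumptions, not part of the starter code.

```cpp
// Sketch: the 'A' button cycles the eye-rendering modes, the 'B' button the
// head-tracking modes. Only the ovr_* calls and ovrButton_* flags are SDK API.
#include <OVR_CAPI.h>

enum RenderMode   { STEREO, MONO, LEFT_ONLY, RIGHT_ONLY, INVERTED, RENDER_MODE_COUNT };
enum TrackingMode { FULL_TRACKING, ORIENTATION_ONLY, POSITION_ONLY, FROZEN, TRACKING_MODE_COUNT };

RenderMode   renderMode   = STEREO;
TrackingMode trackingMode = FULL_TRACKING;
unsigned int previousButtons = 0;   // controller button state from the previous frame

void updateModes(ovrSession session) {
    ovrInputState input;
    if (!OVR_SUCCESS(ovr_GetInputState(session, ovrControllerType_Touch, &input))) return;

    // React only to the released -> pressed transition so one press = one step.
    bool aPressed = (input.Buttons & ovrButton_A) && !(previousButtons & ovrButton_A);
    bool bPressed = (input.Buttons & ovrButton_B) && !(previousButtons & ovrButton_B);
    previousButtons = input.Buttons;

    if (aPressed) renderMode   = RenderMode((renderMode + 1) % RENDER_MODE_COUNT);
    if (bPressed) trackingMode = TrackingMode((trackingMode + 1) % TRACKING_MODE_COUNT);
}

// Apply the current tracking mode to a freshly polled head (or eye) pose.
// frozenPose must be captured at the moment the mode is switched (not shown).
ovrPosef applyTrackingMode(ovrPosef livePose, const ovrPosef& frozenPose) {
    switch (trackingMode) {
        case ORIENTATION_ONLY: livePose.Position    = frozenPose.Position;    break;  // freeze position
        case POSITION_ONLY:    livePose.Orientation = frozenPose.Orientation; break;  // freeze orientation
        case FROZEN:           livePose             = frozenPose;             break;  // freeze both
        default:               break;                                                 // full tracking
    }
    return livePose;
}
```

In the per-eye render pass, renderMode then decides what each eye sees: MONO renders the same camera for both eyes, LEFT_ONLY/RIGHT_ONLY clear the other eye's buffer to black, and INVERTED swaps the two eye cameras.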
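For step 7, a sketch of modifying the per-eye offsets. This assumes an Oculus SDK 1.x version in which ovrEyeRenderDesc exposes HmdToEyeOffset as an ovrVector3f; newer SDK versions renamed it to HmdToEyePose, in which case the position inside that pose is what you would change. The function name and clamping range follow the assignment, nothing here is the official solution.

```cpp
// Sketch: replace only the horizontal component of each eye's offset from the
// head center; the y and z components stay at their SDK defaults.
#include <OVR_CAPI.h>

void getEyeOffsets(ovrSession session, const ovrHmdDesc& hmdDesc,
                   float iod,                    // desired eye separation in meters
                   ovrVector3f outOffsets[2]) {
    for (int eye = 0; eye < 2; ++eye) {
        ovrEyeRenderDesc desc =
            ovr_GetRenderDesc(session, (ovrEyeType)eye, hmdDesc.DefaultEyeFov[eye]);
        ovrVector3f offset = desc.HmdToEyeOffset;               // SDK default per-eye offset
        offset.x = (eye == ovrEye_Left ? -0.5f : 0.5f) * iod;   // half the IOD to each side
        outOffsets[eye] = offset;
    }
}

// Usage sketch: clamp the user-controlled IOD to [-0.1m, +0.3m], rebuild the
// offsets, and hand them to ovr_GetEyePoses() every frame:
//   ovrVector3f offsets[2];
//   getEyeOffsets(session, hmdDesc, iod, offsets);
//   ovr_GetEyePoses(session, frameIndex, ovrTrue, offsets, eyePoses, &sensorSampleTime);
```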

Notes:

  • Cycling means that each time the respective button is pressed the viewing mode will change from one mode to the next, and eventually back to the first mode.
  • The view in the Rift always needs to look like the control window on the screen: the render texture should never shift off the display panels in the Rift.
  • In modes where the sky box is rendered in mono, you still render the scene in stereo (once per eye), but the texture on the sky box is the same for the left and the right eye (see the sketch below).
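A sketch of the per-eye sky box texture choice implied by the note above: the scene is always drawn once per eye, and only which cube map gets bound changes. The scene-mode enum and texture handles are assumptions.

```cpp
// Sketch: choose which cube map texture the sky box uses for a given eye.
// In the mono mode the left panorama is simply reused for both eyes.
#include <GL/glew.h>

enum SceneMode { CUBES_AND_STEREO_SKYBOX, SKYBOX_STEREO, SKYBOX_MONO, SCENE_MODE_COUNT };
SceneMode sceneMode = CUBES_AND_STEREO_SKYBOX;   // cycled with the 'X' button

GLuint leftEyeCubeMap  = 0;   // cube map built from the left-eye panorama
GLuint rightEyeCubeMap = 0;   // cube map built from the right-eye panorama

GLuint skyboxTextureForEye(int eye /* 0 = left, 1 = right */) {
    if (sceneMode == SKYBOX_MONO) return leftEyeCubeMap;    // same image on both eyes
    return (eye == 0) ? leftEyeCubeMap : rightEyeCubeMap;   // stereo: one map per eye
}
```

Cycling the scene mode with the 'X' button can reuse the same pressed-this-frame edge detection shown earlier, just with ovrButton_X.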

Notes on Panorama Rendering

There are six cube map images in PPM format in the ZIP file. Each is 2k x 2k pixels in size. The files are named nx, ny and nz for the negative x, y and z axis images. The positive axis files are named px, py and pz. Here is a downsized picture of the panorama image:

[Image: Bear-thumb.jpg]

And this is how the cube map faces are labeled:

[Image: Bear-left-cubemap-labeled.jpg]
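To turn one eye's six faces into a single OpenGL cube map texture, a sketch like the following could be used. loadPPM() is again a placeholder for the provided loader, the ".ppm" file extension is an assumption, and the face order follows the standard OpenGL convention (GL_TEXTURE_CUBE_MAP_POSITIVE_X through NEGATIVE_Z mapped to px/nx/py/ny/pz/nz); depending on how the faces were exported you may still need to flip or rotate individual faces.

```cpp
// Sketch: build a cube map from the six 2k x 2k PPM faces of one panorama eye.
#include <GL/glew.h>
#include <string>

unsigned char* loadPPM(const char* filename, int& width, int& height);  // placeholder loader

GLuint createCubeMap(const std::string& dir) {
    // Order must match GL_TEXTURE_CUBE_MAP_POSITIVE_X + i for i = 0..5.
    const char* faces[6] = { "px.ppm", "nx.ppm", "py.ppm", "ny.ppm", "pz.ppm", "nz.ppm" };

    GLuint tex = 0;
    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_CUBE_MAP, tex);
    glPixelStorei(GL_UNPACK_ALIGNMENT, 1);
    for (int i = 0; i < 6; ++i) {
        int w = 0, h = 0;
        unsigned char* pixels = loadPPM((dir + "/" + faces[i]).c_str(), w, h);
        if (!pixels) continue;   // in real code: report the missing face
        glTexImage2D(GL_TEXTURE_CUBE_MAP_POSITIVE_X + i, 0, GL_RGB, w, h, 0,
                     GL_RGB, GL_UNSIGNED_BYTE, pixels);
        delete[] pixels;
    }
    glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
    glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
    glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_WRAP_R, GL_CLAMP_TO_EDGE);
    glBindTexture(GL_TEXTURE_CUBE_MAP, 0);
    return tex;
}
```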

The panorama was shot with camera lenses parallel to one another, so the resulting cube maps will need to be separated by a human eye distance when rendered, i.e., their physical placement needs to be horizontally offset from each other for each eye.
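One possible way to realize this per-eye placement, assuming the per-eye poses returned by ovr_GetEyePoses() (which already include the eye separation): center each eye's sky box on that eye's position and bind that eye's cube map, so the 3D effect comes entirely from the two different panoramas. The 10m width comes from step 3 of the project description; everything else here is an assumption, not the required approach.

```cpp
// Sketch: model matrix for one eye's sky box. eyePosition comes from the pose
// that ovr_GetEyePoses() returned for this eye, so the box stays centered on
// the eye and the two boxes end up horizontally offset by the eye separation.
#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>

glm::mat4 skyboxModelMatrix(const glm::vec3& eyePosition) {
    const float kSkyboxWidth = 10.0f;   // meters, per step 3 of the project description
    return glm::translate(glm::mat4(1.0f), eyePosition) *
           glm::scale(glm::mat4(1.0f), glm::vec3(kSkyboxWidth));
}
```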

Extra Credit (up to 10 points)

There are four options for extra credit.

  1. Stereo Image Viewer: Take two regular, non-panoramic photos about an eye distance (roughly 65mm) apart with a regular camera such as the one in your cell phone. Use the widest angle your camera can be set to, as close to a 90 degree field of view as you can get. Crop the images so they are square and exactly the same size. Modify your texturing code for the cubes so that they support stereo images (showing a different image to each eye). Use your custom images as the textures and render them in stereo so that you see a 3D image on the cube. You may have to make the cube bigger to see a correct stereo image: the image should fill roughly as much of your field of view as the camera's field of view covered when the photos were taken. (5 points)
  2. Custom Sky Box: Create your own (monoscopic) sky box: borrow a Samsung Gear 360 camera from the media lab, or use your cell phone's panorama function to capture a 360 degree panorama picture (or use Google's StreetView app, which is free for Android and iPhone). Process it into cube maps; this on-line conversion tool (https://jaxry.github.io/panorama-to-cubemap/) can do it for you. Texture the sky box with the resulting textures. Note that you'll have to download each of the six textures separately. Make it an alternate option to the Bear image when the 'X' button is pressed. (5 points)
  3. Super-Rotation: Modify the regular orientation tracking so that it exaggerates horizontal head rotations by a factor of two. This means that, starting when the user's head faces straight forward, any rotation to the left or right (i.e., heading) is multiplied by two and this new head orientation is used to render the image. Do not modify pitch or roll. In this mode the user will be able to look behind them by rotating their head only 90 degrees to either side. Get this mode to work with your sky box and calibration cubes, with tracking fully on and correct stereo rendering (a heading-doubling sketch follows this list). (5 points)
  4. Smoother controller tracking: Render a sphere at the location of your dominant hand's controller, just like in homework #1. Move your hand around and notice how the sphere follows it. Now push a previously unused controller button to enter 'Smoothing Mode'. In this mode, calculate the moving average over the past n frames' positional tracking values (we won't average over orientation here). Use this averaged position to render the sphere. Allow the user to change the number of frames (n) you are averaging over, within a range of 1 to 45 in increments of 1. Notice how with larger values of n the tracking gets smoother, but there is also more lag between the controller motion and the motion of the sphere (a moving-average sketch follows this list). (5 points)
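For the Super-Rotation option, one way to exaggerate heading with glm is sketched below: extract the heading angle by projecting the tracked forward vector onto the horizontal plane, then pre-multiply an extra rotation by that same angle about the world up axis, which doubles heading while leaving pitch and roll unchanged. The quaternion and axis conventions here are assumptions to verify against your own math setup.

```cpp
// Sketch: double the horizontal head rotation (heading) but keep pitch/roll.
#include <cmath>
#include <glm/glm.hpp>
#include <glm/gtc/quaternion.hpp>

glm::quat superRotate(const glm::quat& headOrientation) {
    // Forward vector of the tracked head (default view direction is -z).
    glm::vec3 fwd = headOrientation * glm::vec3(0.0f, 0.0f, -1.0f);
    // Heading: angle of the forward vector around the world up axis (y).
    float heading = std::atan2(-fwd.x, -fwd.z);
    // One extra heading rotation about world up doubles the heading without
    // tilting the head, so pitch and roll are unaffected.
    glm::quat extraYaw = glm::angleAxis(heading, glm::vec3(0.0f, 1.0f, 0.0f));
    return extraYaw * headOrientation;
}
```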
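For the Smoother controller tracking option, a sketch of a simple moving average over the last n controller positions using a history buffer; the class name and how n is changed by the user are assumptions.

```cpp
// Sketch: moving average over the last n positional samples of the controller.
#include <algorithm>
#include <deque>
#include <glm/glm.hpp>

class PositionSmoother {
public:
    void setWindow(int n) { windowSize = std::min(45, std::max(1, n)); }

    // Add this frame's raw controller position and return the smoothed position.
    glm::vec3 smooth(const glm::vec3& rawPosition) {
        history.push_back(rawPosition);
        while ((int)history.size() > windowSize) history.pop_front();

        glm::vec3 sum(0.0f);
        for (const glm::vec3& p : history) sum += p;
        return sum / (float)history.size();
    }

private:
    int windowSize = 1;              // n = 1 means no smoothing
    std::deque<glm::vec3> history;   // most recent positions, newest last
};
```

With a large window the sphere visibly lags behind the controller, which is exactly the smoothness-versus-latency trade-off the assignment asks you to observe.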