CAVE Simulator

In this project you will need to create a VR application for the Oculus Rift, which simulates a virtual reality CAVE. Like in projects 1 and 2, you're going to need to use the Oculus SDK, OpenGL and C++.

In the discussions we're going to go over off-center projection with OpenGL and off-screen rendering in greater detail. Note that there won't be a discussion on Monday May 7th, but we will discuss this project in class on Tuesday, May 8th.

Resources:

Starter Code

You can start with your code from project 1 or 2, or go back to the Minimal Example. We will re-use the calibration cube and the sky box from project 2, so at least those parts of that project will be useful to you.

Project Description (100 Points)

This picture shows what a 3-sided virtual reality CAVE can look like:

3-sided-cave.jpg

Looking closely at the image, you will notice that the yellow crossbar at the top does not appear straight, although it does to the primary user. It is bent right where the edge between the two vertical screens is. This is typical behavior of VR CAVE systems, as they normally render the images for only one head-tracked viewer. All other viewers see a usable image, but it can be distorted.

In this project you are tasked with creating a simulator for a 3-sided VR CAVE. This means that you need to create a virtual CAVE, which you can view with the Oculus Rift. In this CAVE you will need to display a sky box and the calibration cube from project 2. You will need to be able to switch between the view from the user's head and a view from one of the Oculus controllers.

The virtual CAVE should consist of three display screens. Each screen should be square with a width of 2.4 meters (roughly 8 feet). One of the screens should be on the floor, the other two should be vertical and seamlessly attached to the screen on the floor and to each other at right angles, just like in the picture. This application should be used standing, with the viewpoint set to your height so that it appears as if you're standing on the floor screen. The initial user position should be with their feet in the center of the floor screen, facing the edge between the two vertical screens.
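For the CAVE geometry (milestone 1 in the grading list below), it can help to store each screen as three corner points. The following is only a sketch of one possible layout, under assumptions the assignment doesn't prescribe: C++ with GLM, CAVE space measured in meters, the origin at the center of the floor screen, +y up, and the user initially looking down -z toward the shared edge of the two vertical screens. The corner ordering is chosen so that each screen's normal (as computed in the projection sketch further below) points into the CAVE.

  #include <glm/glm.hpp>

  // One screen, described by three of its corners; the projection sketch below
  // treats pa as the lower-left, pb as the lower-right and pc as the upper-left corner.
  struct Screen {
      glm::vec3 pa;
      glm::vec3 pb;
      glm::vec3 pc;
  };

  const float kSide     = 2.4f;                    // edge length of each square screen (m)
  const float kHalfDiag = kSide * 0.70710678f;     // half diagonal of the floor square (m)

  // Floor screen: near, right and left corners of the diamond;
  // this ordering makes its normal point up, into the CAVE.
  const Screen kFloorScreen = { {  0.0f,      0.0f,  kHalfDiag },
                                {  kHalfDiag, 0.0f,  0.0f      },
                                { -kHalfDiag, 0.0f,  0.0f      } };

  // Left wall: rises 2.4 m from the left floor edge.
  const Screen kLeftScreen  = { { -kHalfDiag, 0.0f,  0.0f      },
                                {  0.0f,      0.0f, -kHalfDiag },
                                { -kHalfDiag, kSide, 0.0f      } };

  // Right wall: rises 2.4 m from the right floor edge.
  const Screen kRightScreen = { {  0.0f,      0.0f, -kHalfDiag },
                                {  kHalfDiag, 0.0f,  0.0f      },
                                {  0.0f,      kSide, -kHalfDiag } };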

In this position, the user should see the calibration cube from project 2 in front of them, again 20cm wide and at a distance of about 30cm. The sky box should also be visible. You can use the stereo sky box from project 2, or any other sky box even if it's mono. In this project it is not necessary that the user can turn the boxes off individually.

The big difference between this project and project 2 is that you need to render the calibration cube and the sky box to the simulated CAVE walls, instead of directly to the Rift's display. You need to do this by rendering the scene six times to off-screen buffers: there are three displays, and each needs to be rendered separately from each eye position. To render from the two eye positions of a head-tracked user, you are going to need to render with an asymmetrical (off-center) projection matrix. This picture illustrates the skewed projection pyramids originating from the two eyes:

Offaxis.gif

The pictures below illustrate what an off axis view looks like on the screen and from the user's point of view:

Off-axis-viewing.png
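A widely used recipe for this off-center projection is Robert Kooima's "Generalized Perspective Projection": from the eye position and the three stored corners of a screen you compute the frustum extents on the near plane plus a rotation into the screen's coordinate frame. The sketch below assumes GLM and the Screen struct from the geometry sketch above; the returned matrix plays the role of projection times view for everything rendered in CAVE space to that screen.

  #include <glm/glm.hpp>
  #include <glm/gtc/matrix_transform.hpp>

  // Off-axis projection for one eye and one screen (Kooima's formulation).
  // eye is the eye position in CAVE space; nearZ/farZ are the clip distances.
  glm::mat4 offAxisProjection(const Screen& s, const glm::vec3& eye,
                              float nearZ, float farZ)
  {
      glm::vec3 vr = glm::normalize(s.pb - s.pa);          // screen-space right
      glm::vec3 vu = glm::normalize(s.pc - s.pa);          // screen-space up
      glm::vec3 vn = glm::normalize(glm::cross(vr, vu));   // screen normal, toward the viewer

      glm::vec3 va = s.pa - eye;                           // eye to lower-left corner
      glm::vec3 vb = s.pb - eye;                           // eye to lower-right corner
      glm::vec3 vc = s.pc - eye;                           // eye to upper-left corner

      float d = -glm::dot(vn, va);                         // distance from eye to screen plane
      float l = glm::dot(vr, va) * nearZ / d;              // frustum extents on the near plane
      float r = glm::dot(vr, vb) * nearZ / d;
      float b = glm::dot(vu, va) * nearZ / d;
      float t = glm::dot(vu, vc) * nearZ / d;

      glm::mat4 P = glm::frustum(l, r, b, t, nearZ, farZ);

      glm::mat4 M(1.0f);                                   // screen basis as columns
      M[0] = glm::vec4(vr, 0.0f);
      M[1] = glm::vec4(vu, 0.0f);
      M[2] = glm::vec4(vn, 0.0f);

      // Rotate the world into the screen's frame and move the frustum apex to the eye.
      return P * glm::transpose(M) * glm::translate(glm::mat4(1.0f), -eye);
  }

For CAVE-space geometry you would then draw with something like offAxisProjection(screen, eyePos, 0.01f, 1000.0f) * model, recomputing the matrix per screen, per eye, and per frame as the tracked eye position changes.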

Head-in-Hand Mode: To see the effect of rendering from the point of view of another user, you need to be able to switch the head position to one of the controllers. The switch should happen when the right controller's trigger is pulled, and the viewpoint should remain at the controller and update while the controller is moving, until the trigger is released. When the trigger is released, the viewpoint should switch back to the user's actual head position. Note that because we're rendering in stereo, you'll need to create two camera (=eye) positions at the controller, offset to its left and right by half the average human eye distance of 65 millimeters (i.e., 32.5 mm each). When switching the viewpoint, this should only affect the viewpoint used for rendering on the CAVE displays. The HMD's viewpoint should always be tracked from its correct position for a view of the space the CAVE is in. Note that rotations of the controller should not rotate the scene, but because the eye/camera positions rotate about a common center point, the stereo effect on the CAVE walls should change: a 180 degree rotation should invert the stereo effect.
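A minimal sketch of how the two eye positions at the controller could be derived, assuming your starter code already hands you the controller pose as a GLM position and orientation (the function and variable names below are made up for the example):

  #include <glm/glm.hpp>
  #include <glm/gtc/quaternion.hpp>

  const float kEyeDistance = 0.065f;   // average human inter-eye distance in meters

  // Offsets the two eyes half the eye distance to each side of the controller,
  // along the controller's local +x ("right") axis. Rotating the controller by
  // 180 degrees swaps the two positions, which is what inverts the stereo effect.
  void eyesFromController(const glm::vec3& ctrlPos, const glm::quat& ctrlRot,
                          glm::vec3& leftEye, glm::vec3& rightEye)
  {
      glm::vec3 right = ctrlRot * glm::vec3(1.0f, 0.0f, 0.0f);
      leftEye  = ctrlPos - right * (kEyeDistance * 0.5f);
      rightEye = ctrlPos + right * (kEyeDistance * 0.5f);
  }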

Freeze Mode: We also need a way to freeze the viewpoint the CAVE renders from, regardless of whether it's rendering from head or controller location. For this we'll use the 'B' button. When it is pressed, the viewpoint the CAVE renders from should freeze. When the 'B' button is pressed again, the view should unfreeze.
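Freezing amounts to caching the eye positions the CAVE walls are rendered from and toggling on the button's rising edge. A sketch, assuming you can query per frame whether 'B' is currently held down:

  #include <glm/glm.hpp>

  // Toggle-on-press freeze logic; the cached positions are only used for the
  // CAVE-wall rendering, never for the Rift's own view of the room.
  bool frozen   = false;
  bool bWasDown = false;
  glm::vec3 frozenLeftEye, frozenRightEye;

  void updateFreeze(bool bIsDown, const glm::vec3& leftEye, const glm::vec3& rightEye)
  {
      if (bIsDown && !bWasDown) {            // rising edge: toggle the mode
          frozen = !frozen;
          if (frozen) {                      // remember the current eye positions
              frozenLeftEye  = leftEye;
              frozenRightEye = rightEye;
          }
      }
      bWasDown = bIsDown;
  }
  // When rendering a CAVE wall, feed frozen ? frozenLeftEye : leftEye (and the
  // same for the right eye) into the off-axis projection.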

Calibration Cube: To make the calibration cube more useful, you need to be able to move it in all three dimensions, and make it bigger or smaller. Use both thumbsticks to allow for this. For instance, the left thumbstick could move the cube left/right and forward/backward, while the right thumbstick could move it up/down with one axis and make it smaller/larger with the other.
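One way to map the thumbsticks, sketched under the assumption that your input code delivers each stick axis as a float in [-1, 1] and that the cube's position and edge length live in CAVE space (the names are made up for the example):

  #include <glm/glm.hpp>

  // Moves and scales the calibration cube from the two thumbsticks.
  void updateCube(float leftX, float leftY, float rightX, float rightY,
                  float dt, glm::vec3& cubePos, float& cubeSize)
  {
      const float kMoveSpeed  = 0.5f;   // meters per second at full stick deflection
      const float kScaleSpeed = 0.5f;   // relative growth per second at full deflection

      cubePos.x += leftX  * kMoveSpeed * dt;           // left stick X:  left/right
      cubePos.z -= leftY  * kMoveSpeed * dt;           // left stick Y:  forward/backward
      cubePos.y += rightY * kMoveSpeed * dt;           // right stick Y: up/down
      cubeSize  *= 1.0f + rightX * kScaleSpeed * dt;   // right stick X: smaller/larger
  }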

Debug Mode: To help with debugging, when you've switched the viewpoint to the controller, you should enable debug mode with the 'A' button (activate debug mode while 'A' is depressed, disable upon release). In debug mode, the user sees not only the images on the virtual CAVE screens, but also all six viewing pyramids (technically they're sheared pyramids). These pyramids start in each of the eye positions and go to the corners of each of the screens. Three screens times two eyes is six pyramids. You can visualize the pyramids in wireframe mode with lines outlining them, or use solid surfaces. In either case, draw the pyramids that indicate the view from the left eye in green, and those from the right eye in red. You need to render the debug pyramids in rift space, whereas the calibration cube and the sky box need to be rendered in CAVE space.
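The pyramid geometry falls out of data you already have: each pyramid consists of the four edges from an eye position to the four corners of a screen, plus the screen outline. A wireframe sketch, assuming the Screen struct from the geometry sketch and a simple colored-line renderer of your own:

  #include <vector>
  #include <glm/glm.hpp>

  // Returns GL_LINES vertex pairs for one eye's viewing pyramid on one screen.
  // Draw the result in green for the left eye and in red for the right eye,
  // in rift space (directly to the Rift, not into the wall render targets).
  std::vector<glm::vec3> pyramidLines(const Screen& s, const glm::vec3& eye)
  {
      glm::vec3 pd = s.pb + (s.pc - s.pa);                 // upper-right corner
      glm::vec3 corners[4] = { s.pa, s.pb, pd, s.pc };

      std::vector<glm::vec3> v;
      for (int i = 0; i < 4; ++i) {                        // eye to each corner
          v.push_back(eye);
          v.push_back(corners[i]);
      }
      for (int i = 0; i < 4; ++i) {                        // outline of the screen
          v.push_back(corners[i]);
          v.push_back(corners[(i + 1) % 4]);
      }
      return v;
  }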

The difference between the two coordinate spaces is that what you render in rift space can be rendered directly to your OpenGL canvas, whereas everything in CAVE space needs to be rendered to the off-screen buffers.
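A minimal render-to-texture setup for one wall image, assuming plain OpenGL 3+ and a square target resolution (the size is an arbitrary choice for the example). You need one such target per wall and per eye, i.e. six in total; the resulting texture is what you map onto the wall quad when you draw the virtual CAVE in rift space.

  #include <GL/glew.h>   // or whichever OpenGL loader your starter code already uses

  GLuint wallFbo = 0, wallColorTex = 0, wallDepthRbo = 0;
  const int kWallTexSize = 1024;   // texels per wall edge (example value)

  void createWallTarget()
  {
      // Color texture the CAVE-space scene will be rendered into.
      glGenTextures(1, &wallColorTex);
      glBindTexture(GL_TEXTURE_2D, wallColorTex);
      glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, kWallTexSize, kWallTexSize, 0,
                   GL_RGBA, GL_UNSIGNED_BYTE, nullptr);
      glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
      glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);

      // Depth buffer so the cube and sky box occlude correctly off screen.
      glGenRenderbuffers(1, &wallDepthRbo);
      glBindRenderbuffer(GL_RENDERBUFFER, wallDepthRbo);
      glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH_COMPONENT24, kWallTexSize, kWallTexSize);

      // Framebuffer object tying the two together.
      glGenFramebuffers(1, &wallFbo);
      glBindFramebuffer(GL_FRAMEBUFFER, wallFbo);
      glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                             GL_TEXTURE_2D, wallColorTex, 0);
      glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT,
                                GL_RENDERBUFFER, wallDepthRbo);
      glBindFramebuffer(GL_FRAMEBUFFER, 0);
  }

  // Per frame, per wall, per eye:
  //   glBindFramebuffer(GL_FRAMEBUFFER, wallFbo);
  //   glViewport(0, 0, kWallTexSize, kWallTexSize);
  //   glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
  //   ...draw sky box and calibration cube with the off-axis matrix for this wall/eye...
  //   glBindFramebuffer(GL_FRAMEBUFFER, 0);
  //   ...then, in rift space, draw the wall quad textured with wallColorTex...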

Grading

You'll get points for the following milestones. We recommend working on them in the sequence given below.

  1. Create CAVE geometry: 3 squares: 5 points
  2. Render the calibration cube to one of the CAVE walls in mono using a frame buffer object (FBO), from a fixed viewpoint: 20 points
  3. Render the scene with the sky box in mono to the CAVE walls, from a fixed viewpoint: 5 points
  4. Render scene to one CAVE wall in mono from the user's eyes (need to use head tracking and off-center projection math): 20 points
  5. Render scene to one CAVE wall in stereo from user's eyes: 10 points
  6. Render scene to two CAVE walls in stereo from user's eyes: 5 points
  7. Render scene to all three CAVE screens in stereo from user's eyes: 5 points
  8. Ability to switch viewpoint to controller with the trigger: 5 points
  9. Allow freezing/unfreezing the viewpoint with the 'B' button: 5 points
  10. Add the wire frame debugging functionality to the 'A' button: 10 points
  11. Thumbstick motion of calibration cube in 3D: 5 points
  12. Thumbstick resizing of calibration cube: 5 points

Grading:

  • -5 if off-center projection "almost" works: the only issue is that images don't align between CAVE walls, but head tracking works

Notes:

  • You need to always render stereo images to the Oculus Rift. When the above milestone list mentions rendering in mono, it means that the image on the CAVE wall can be monoscopic, but the Rift should still render that image separately from each eye point's perspective.
  • It can be helpful to render the sphere from project 1 at each controller position, in CAVE space, to make sure it renders in the right place.
  • Many of the milestones are cumulative. For instance, if you show us that you can render a stereo image to a screen, you don't also have to show us that you can render in mono to get those points.

Extra Credit (10 points max.)

You can choose between the following options for extra credit. If you work on multiple options, their points will add up but will not exceed 10 points in total. For each option, assign a button on one of the controllers to turn the option on or off.

  1. Allow the user to fly around the virtual world that is displayed on the virtual CAVE walls. Allow flight in six degrees of freedom (6DOF). Only use one of the index trigger buttons, along with the position and orientation of that controller, to control the flight. Use the UCSD campus model as the world to fly around in. Scale it up to its true size. (10 points)
  2. You already have a sky box that's displayed in the virtual CAVE. Add a sky box of your own choosing around the user, which is in the same coordinate system as the virtual CAVE (meaning that it stays fixed in relation to the CAVE's displays). (3 points)
  3. Assume that the CAVE is driven by two projectors for each wall (one for each eye) using passive polarized stereo. Simulate the failure of one projector (just one eye, not an entire wall): a button press disables a random projector, which means you render an all-black square for that eye on that wall. (3 points)
  4. Simulate the virtual CAVE more realistically by implementing one or more of the options below.
    1. Brightness falloff on LCD screens: the brightness of the pixels on an LCD screen depends on the angle the user looks at the screen from. When directly in front of the screen, the screen is brightest. When looking from a steep angle, it is darkest. Use this simplified model to simulate the brightness falloff for an LCD-display-based CAVE: calculate the angle between the vector from the viewer (eye point) to the center of each screen, and the normal of the screen. If the angle is zero, render the image unmodified. For angles between 0 and 90 degrees, reduce the brightness of the image proportionally to the angle, down to zero (black) at 90 degrees. Reducing the brightness of the texture can be done by reducing the brightness of the polygon it's mapped to (see the per-screen sketch after this list). (4 points) You get an additional 4 points if you do this on a per-pixel basis: calculate the angle from the eye to each pixel separately and reduce the pixel's brightness accordingly, in a GLSL shader.
    2. Vignetting on projected screens: projection screens aren't illuminated homogeneously across their surface; they are brightest where the projector lamp is and fall off towards the edges. Assume the projectors are on the optical axis of the walls (i.e., centered) at 2.4 meters distance behind the screens. Vignetting is a circular effect. Render the pixels at normal brightness at the spot where the line from the eye to the projector intersects the screen, then make the brightness fall off linearly with the pixel's angular distance from that spot. You'll have to use a GLSL shader like in the extended version of problem 4.1 because the brightness reduction is different for each pixel. (10 points)
    3. Linear polarization effect in a passive stereo CAVE: assume the virtual CAVE was built with linearly polarizing projectors and glasses. Linear polarization filters work best only when they're oriented the same way as the filter in the projector. When you turn the glasses, they switch the polarization axis every 90 degrees and only partly filter the light in between. For each combination of screen and eye, calculate the angle at which the eye (i.e., that lens of the glasses) is oriented relative to the screen. Left and right eye should have opposite polarization (one vertical, the other horizontal). As the user's head turns, calculate the angle between the head and each screen and linearly interpolate the blending factors for each image. When rendering the screens, blend the left and right eye images, each with its blending factor. (7 points)
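For option 4.1, the per-screen (4-point) version of the brightness falloff can be computed on the CPU. This is a sketch, again assuming GLM and the Screen struct from the geometry sketch above; multiply the returned factor into the wall quad's color when you draw it. The per-pixel version would do the same math per fragment in a GLSL shader, using the eye-to-pixel vector instead of the eye-to-center vector.

  #include <cmath>
  #include <glm/glm.hpp>

  // Brightness factor in [0, 1] for one screen and one eye position:
  // 1 when the viewer is on the screen's normal through its center,
  // falling off linearly to 0 at a 90 degree (grazing) viewing angle.
  float screenBrightness(const Screen& s, const glm::vec3& eye)
  {
      glm::vec3 pd     = s.pb + (s.pc - s.pa);                          // upper-right corner
      glm::vec3 center = (s.pa + pd) * 0.5f;                            // screen center
      glm::vec3 normal = glm::normalize(glm::cross(s.pb - s.pa, s.pc - s.pa));
      glm::vec3 toEye  = glm::normalize(eye - center);
      float cosAngle   = glm::clamp(glm::dot(normal, toEye), 0.0f, 1.0f);
      float angleDeg   = glm::degrees(std::acos(cosAngle));             // 0 = head-on, 90 = grazing
      return 1.0f - angleDeg / 90.0f;
  }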