Levels of Immersion

In this project we are going to explore different levels of immersion we can create for the Oculus Rift. As in project 1, we're going to use the Oculus SDK, OpenGL and C++.

Starter Code

We recommend starting either with your project 1 code, or going back to the Minimal Example you started your project 1 with. It's very compact, yet it uses most of the API functions you will need.

Here is the minimal example .cpp file with comments on where the hooks are for the various functions required for this homework project. Note that this file has a main() function so that it can be built as a console application; it will start up with a console window into which debug messages can be printed. In Visual Studio, you need to enable console mode in the project's properties window under Configuration Properties -> Linker -> System -> SubSystem -> Console.

Project Description (100 Points)

You need to do the following things:

Modify the minimal example to render a single cube with this test image on all its faces. The image needs to be right side up on the vertical faces. The cube should be 20 cm wide and its closest face should be about 30 cm in front of the user. Here is source code to load the PPM file into memory so that you can apply it as a texture.
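A minimal sketch of turning the loaded image into an OpenGL texture is shown below. The loadPPM() declaration stands in for the provided loader, whose exact signature may differ, and the GL header is only an example of what the minimal example might already include.

 // Sketch: upload a loaded PPM image as an OpenGL texture.
 #include <GL/glew.h>   // or whatever GL header the minimal example uses
 
 unsigned char* loadPPM(const char* filename, int& width, int& height);  // provided loader (signature assumed)
 
 GLuint createTextureFromPPM(const char* filename)
 {
     int width = 0, height = 0;
     unsigned char* pixels = loadPPM(filename, width, height);
 
     GLuint texId = 0;
     glGenTextures(1, &texId);
     glBindTexture(GL_TEXTURE_2D, texId);
     glPixelStorei(GL_UNPACK_ALIGNMENT, 1);               // PPM rows are not 4-byte aligned
     glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, width, height, 0,
                  GL_RGB, GL_UNSIGNED_BYTE, pixels);
     glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
     glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
     glBindTexture(GL_TEXTURE_2D, 0);
     delete[] pixels;                                      // assuming the loader allocates with new[]
     return texId;
 }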

Cycle between 3D stereo, mono (both eyes see an identical view), left eye only, and right eye only with the 'A' button.
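One way to keep track of the cycling display mode is sketched below; the enum and function names are made up for illustration, and the per-eye check assumes the minimal example's render loop iterates over ovrEye_Left and ovrEye_Right.

 // Sketch: cycling the display mode with the 'A' button (names illustrative).
 enum class DisplayMode { Stereo, Mono, LeftOnly, RightOnly };
 DisplayMode displayMode = DisplayMode::Stereo;
 
 void onButtonA()   // call once per 'A' press (edge-triggered, not while held)
 {
     displayMode = static_cast<DisplayMode>((static_cast<int>(displayMode) + 1) % 4);
 }
 
 // In the per-eye render pass:
 bool shouldRenderEye(DisplayMode mode, ovrEyeType eye)
 {
     if (mode == DisplayMode::LeftOnly  && eye == ovrEye_Right) return false;  // leave right eye black
     if (mode == DisplayMode::RightOnly && eye == ovrEye_Left)  return false;  // leave left eye black
     return true;
 }
 // For Mono, render both eye buffers from the same (e.g. left-eye) camera pose
 // so the two images are identical and the depth effect disappears.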

Cycle between head tracking modes with the 'B' button: no tracking, orientation only, position only, and both.
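One possible approach is to filter the tracked head pose before the eye poses are computed. Below is a rough sketch assuming the Oculus SDK's ovrPosef layout (Orientation quaternion plus Position vector); zeroing out components every frame is only one option, and freezing the pose captured at the moment the mode was switched would work as well.

 // Sketch: masking out parts of the tracked head pose per tracking mode.
 enum class TrackingMode { None, OrientationOnly, PositionOnly, Both };
 TrackingMode trackingMode = TrackingMode::Both;
 
 ovrPosef applyTrackingMode(ovrPosef tracked, TrackingMode mode)
 {
     ovrPosef result = tracked;
     if (mode == TrackingMode::None || mode == TrackingMode::PositionOnly) {
         result.Orientation = { 0.0f, 0.0f, 0.0f, 1.0f };   // identity quaternion (x, y, z, w): no rotation
     }
     if (mode == TrackingMode::None || mode == TrackingMode::OrientationOnly) {
         result.Position = { 0.0f, 0.0f, 0.0f };            // keep the head at a fixed point
     }
     return result;
 }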

Vary the interocular distance (IOD) by pushing the right thumbstick left/right.
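A sketch of one possible IOD control follows, assuming the Oculus SDK 1.x input API (ovr_GetInputState, Thumbstick[ovrHand_Right]) and the HmdToEyeOffset vectors passed to ovr_CalcEyePoses; the rate constant is arbitrary.

 // Sketch: adjusting the interocular distance with the right thumbstick.
 float iodOffset = 0.0f;   // metres added to (or removed from) the default IOD
 
 void updateIOD(ovrSession session, double deltaTime)
 {
     ovrInputState input;
     if (OVR_SUCCESS(ovr_GetInputState(session, ovrControllerType_Touch, &input))) {
         // Push the stick right to widen the eye separation, left to narrow it.
         iodOffset += input.Thumbstick[ovrHand_Right].x * 0.05f * (float)deltaTime;
     }
 }
 
 // Each frame, shift each eye half the extra IOD outward along its local x axis
 // before computing the eye poses, e.g.:
 //   hmdToEyeOffset[ovrEye_Left].x  -= iodOffset * 0.5f;
 //   hmdToEyeOffset[ovrEye_Right].x += iodOffset * 0.5f;
 //   ovr_CalcEyePoses(headPose, hmdToEyeOffset, eyeRenderPoses);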

Vary the physical size of the cube by pushing the left thumbstick left/right.
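The cube size can be driven the same way from the left thumbstick; a short sketch, reusing the ovrInputState from above and a hypothetical cubeScale factor applied to the cube's model matrix (GLM shown purely as an example).

 // Sketch: scaling the cube with the left thumbstick.
 float cubeScale = 1.0f;   // 1.0 corresponds to the default 20 cm cube
 
 void updateCubeScale(const ovrInputState& input, double deltaTime)
 {
     cubeScale += input.Thumbstick[ovrHand_Left].x * (float)deltaTime;
     if (cubeScale < 0.1f) cubeScale = 0.1f;   // keep the cube from collapsing or inverting
 }
 
 // In the render loop, e.g. with GLM:
 //   glm::mat4 model = glm::scale(glm::mat4(1.0f), glm::vec3(cubeScale));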

Render this stereo panorama image (http://web.eng.ucsd.edu/~jschulze/tmp/bear-stereo-cubemaps.zip) around the user as a sky box. The trick here is to render a different texture on the sky box for each eye, which is necessary to achieve a 3D effect. Switch between the cube and the stereo panorama with the 'X' button. There are six cube map images in PPM format in the ZIP file, each 2k x 2k pixels in size. The files are named nx, ny and nz for the negative x, y and z axis images; the positive axis files are named px, py and pz. Here is a picture of the entire panorama image: [Image: bear-thumb.jpg]
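A sketch of loading six PPM faces into one OpenGL cube map texture is given below, reusing the assumed loadPPM() helper from above; the exact file names and extensions in the ZIP may differ, and for the stereo effect you would build one cube map per eye from the corresponding image set and bind the matching one while drawing each eye's sky box.

 // Sketch: build a cube map texture from six PPM faces in a directory.
 #include <string>
 
 GLuint loadCubeMap(const std::string& dir)
 {
     const char* faces[6] = { "px.ppm", "nx.ppm", "py.ppm", "ny.ppm", "pz.ppm", "nz.ppm" };  // extensions assumed
     const GLenum targets[6] = {
         GL_TEXTURE_CUBE_MAP_POSITIVE_X, GL_TEXTURE_CUBE_MAP_NEGATIVE_X,
         GL_TEXTURE_CUBE_MAP_POSITIVE_Y, GL_TEXTURE_CUBE_MAP_NEGATIVE_Y,
         GL_TEXTURE_CUBE_MAP_POSITIVE_Z, GL_TEXTURE_CUBE_MAP_NEGATIVE_Z
     };
 
     GLuint texId = 0;
     glGenTextures(1, &texId);
     glBindTexture(GL_TEXTURE_CUBE_MAP, texId);
     glPixelStorei(GL_UNPACK_ALIGNMENT, 1);
     for (int i = 0; i < 6; ++i) {
         int w = 0, h = 0;
         unsigned char* pixels = loadPPM((dir + "/" + faces[i]).c_str(), w, h);
         glTexImage2D(targets[i], 0, GL_RGB, w, h, 0, GL_RGB, GL_UNSIGNED_BYTE, pixels);
         delete[] pixels;
     }
     glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
     glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
     glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
     glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
     glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_WRAP_R, GL_CLAMP_TO_EDGE);
     return texId;
 }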


[more details to follow]