User:Kalice

From Immersive Visualization Lab Wiki

Revision as of 15:25, 23 May 2007


Karen Lin

I am a second-year CSE master's student at UCSD specializing in graphics and vision. I work with Matthias Zwicker and Jurgen Schulze on a joint project in the Visualization Lab of Cal-IT2. There's nothing like playing with the latest multi-million-dollar visual technology every day.

Depth Of Focus Project

I am developing a real-time auto-focusing feature for video streamed to 3D displays. It will cut bandwidth usage by limiting detail outside the viewer's region of interest, and by reducing the number of raw data frames required to produce 3D video. The region of interest is defined as the depth of the viewer's focal point within the displayed scene.

I will reduce the required number of captured data frames by creating interpolated images from the video capture while retaining comparable quality in the area of interest. Additionally, the interpolation scheme can simulate natural depth of focus by interpolating cleanly only between pixels at the depth of the viewer's focal point. I will determine this depth from a mouse pointer, and eventually via eye tracking. This mode of image filtering adaptively blurs the remaining areas of the image.
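The adaptive blur described above can be sketched as a depth-dependent Gaussian filter. This is an illustrative stand-in, not the thesis code: the linear focus_sigma falloff, the quantization into a few blur levels, and all function names here are assumptions.

```python
import numpy as np

def focus_sigma(depth, focal_depth, k=2.0):
    """Blur radius grows with distance from the focal depth
    (hypothetical linear model; the actual falloff may differ)."""
    return k * np.abs(depth - focal_depth)

def gaussian_kernel1d(sigma, radius):
    x = np.arange(-radius, radius + 1, dtype=float)
    g = np.exp(-0.5 * (x / max(sigma, 1e-6)) ** 2)
    return g / g.sum()

def blur(img, sigma):
    """Separable Gaussian blur of a 2D grayscale image."""
    if sigma <= 0:
        return img.copy()
    r = int(np.ceil(3 * sigma))
    k = gaussian_kernel1d(sigma, r)
    padded = np.pad(img, r, mode="edge")
    # horizontal pass, then vertical pass
    tmp = np.apply_along_axis(lambda row: np.convolve(row, k, "valid"), 1, padded)
    return np.apply_along_axis(lambda col: np.convolve(col, k, "valid"), 0, tmp)

def depth_of_focus(img, depth_map, focal_depth, levels=4):
    """Adaptive blur: quantize per-pixel sigma into a few levels, blur the
    image once per level, and pick each pixel from its level's result.
    Each level uses its *lower* sigma so in-focus pixels stay sharp."""
    sig = focus_sigma(depth_map, focal_depth)
    edges = np.linspace(0.0, sig.max() + 1e-6, levels + 1)
    out = np.zeros_like(img, dtype=float)
    for i in range(levels):
        mask = (sig >= edges[i]) & (sig < edges[i + 1])
        if mask.any():
            out[mask] = blur(img, edges[i])[mask]
    return out
```

Quantizing sigma into discrete levels trades blur accuracy for speed: only a handful of full-image blurs are needed per frame instead of one per pixel.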


Algorithm

Input

  • A set of images of a 3D scene, recorded in 2D by a camera tracking/panning horizontally along a plane. This can also be thought of as images captured from a camera array in which a row of cameras is evenly spaced and identically calibrated.
  • Headtracking keeps track of the user's viewing position, allowing the screen image to change with the viewer's natural perspective.
  • A wand/mouse click signals the user's new focal point, i.e. a new depth/object to focus on.
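The three inputs above might be gathered in a small container like the following sketch. All names and fields are illustrative assumptions, not the project's actual data structures:

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class LightFieldInput:
    """Hypothetical bundle of the algorithm's three inputs."""
    frames: np.ndarray           # (num_cameras, height, width) stack from the camera row
    camera_spacing: float        # even spacing between adjacent camera positions
    viewer_x: float = 0.0        # head-tracked horizontal viewing position
    focal_point: tuple = (0, 0)  # last wand/mouse click, in pixel coordinates

    def nearest_camera(self):
        """Index of the captured frame closest to the viewer's position."""
        idx = round(self.viewer_x / self.camera_spacing)
        return min(max(idx, 0), len(self.frames) - 1)
```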

Depth Finder

By varying the amount of shear applied to an elliptical Gaussian filter, the depth of a pixel can be determined. Imagine all the images stacked together like a deck of cards. Now redefine a scanline in 2D as a slice through the deck, seen from the side of the stack. This slice records how quickly an object/pixel changes position as the camera moves: nearby objects shift more between adjacent views than distant ones, so the slope of a pixel's track through the slice encodes its depth.
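A minimal brute-force version of this idea: slice the stack at one scanline, then for each column search over candidate shears for the one that makes the sheared slice most constant along the camera axis. The thesis filters with a sheared elliptical Gaussian instead; this sketch, with all names assumed, only illustrates the shear-search geometry.

```python
import numpy as np

def epi_slice(stack, y):
    """2D slice through the 'deck of cards': one scanline from every camera."""
    return stack[:, y, :]

def estimate_disparity(epi, max_shear=5):
    """For each column, find the shear (pixels per camera step) that makes
    the sheared slice most constant along the camera axis; that shear is
    proportional to the pixel's disparity, and hence its depth."""
    num_cams, width = epi.shape
    center = num_cams // 2
    best_var = np.full(width, np.inf)
    best_shear = np.zeros(width)
    for s in np.linspace(-max_shear, max_shear, 4 * max_shear + 1):
        sheared = np.empty((num_cams, width))
        for c in range(num_cams):
            # resample row c shifted by s * (camera offset from center)
            xs = np.clip(np.arange(width) + s * (c - center), 0, width - 1)
            sheared[c] = np.interp(xs, np.arange(width), epi[c].astype(float))
        var = sheared.var(axis=0)        # low variance = good shear match
        better = var < best_var
        best_var[better] = var[better]
        best_shear[better] = s
    return best_shear
```

For example, a point that moves one pixel per camera step traces a line of slope 1 through the slice, and the search recovers a shear of 1 at its column.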

Image set processing

add something here

[Images: OverviewScanline.png, Patio_0_95_77.png, Patio_0_95_193.png, Patio_0_95_329.png]

Output

In the end, an image from the viewer's vantage point gets rendered to the screen, not the exact input image from a camera at the viewer's position. In fact, this allows new perspectives to be rendered without any actual camera data from that precise point in the scene. The scene will also be focused on the object the user has clicked on, and the focus will look realistic because each pixel's blur is directly proportional to its depth offset from the focal point.
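As a simplified sketch of the rendering step, the view for an in-between vantage point can be approximated by blending the two captured frames that bracket the viewer's position. The actual interpolation is depth-aware, shifting pixels by their disparity before blending, so this position-weighted blend (with assumed names) is only a placeholder for the geometry:

```python
import numpy as np

def render_view(frames, camera_spacing, viewer_x):
    """Linearly blend the two captured frames bracketing the viewer's
    position; viewer positions beyond the camera row clamp to the end frames."""
    pos = viewer_x / camera_spacing           # viewer position in camera units
    lo = int(np.clip(np.floor(pos), 0, len(frames) - 2))
    t = np.clip(pos - lo, 0.0, 1.0)           # blend weight toward frame lo+1
    return (1 - t) * frames[lo] + t * frames[lo + 1]
```

A plain blend ghosts objects that are off the focal plane, which is exactly why depth-aware interpolation (and the deliberate blur away from the focal depth) is needed.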

Implementation

Schedule/Milestones

MUST finish thesis by June 8th. =p

Results

  • Downloads
  • Images

References/Links