= Karen Lin =

I am a 2nd year CSE Master's student at UCSD specializing in graphics and vision. I work with [http://graphics.ucsd.edu/~matthias/ Matthias Zwicker] and [http://www.calit2.net/~jschulze/ Jurgen Schulze] on a joint project in the Visualization Lab of Cal-IT2. There's nothing like playing with the latest multi-million dollar visual technology every day.

= Depth Of Focus Project =

I am developing a real-time auto-focusing feature for streaming video to 3D displays. It will cut bandwidth usage by limiting detail outside the viewer's region of interest and by reducing the number of raw data frames required to produce 3D video. The region of interest is defined as the depth of the viewer's focal point within the displayed scene.

I will reduce the required number of captured data frames by creating interpolated images from the video capture while retaining comparable quality in the area of interest. Additionally, the interpolation scheme can simulate natural depth of focus by interpolating cleanly only between pixels at the depth of the viewer's focal point. I will determine this depth from a mouse pointer, and eventually via eye tracking. This mode of image filtering adaptively blurs out the remaining areas of the image.
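
As a rough illustration of the depth-of-focus idea (not the project's actual filtering, which uses the sheared elliptical Gaussian filters described under Algorithm), the sketch below shift-and-averages a horizontally spaced camera stack so that only pixels at a chosen focal depth stay sharp. The array layout, the refocus name, and the d_focus disparity value are assumptions for the example.

 import numpy as np
 def refocus(images, d_focus):
     """Average a horizontal camera stack after shifting each frame so that
     pixels whose disparity equals d_focus line up exactly (stay sharp);
     pixels at other depths land in different places and blur naturally."""
     n_cams = images.shape[0]            # images: (n_cams, height, width) floats
     center = (n_cams - 1) / 2.0
     out = np.zeros_like(images[0])
     for i in range(n_cams):
         # integer shift keeps the sketch simple; np.roll wraps at the border,
         # so a real implementation would pad or crop instead
         shift = int(round((i - center) * d_focus))
         out += np.roll(images[i], shift, axis=1)
     return out / n_cams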

== Algorithm ==

=== Input ===
* Given a set of images of a 3D scene recorded into 2D by a camera tracking/panning horizontally along a plane. This can also be thought of as images captured from a camera array in which a row of evenly spaced, identically calibrated cameras records the scene. (A minimal sketch of the data layout follows this list.)
* Head tracking will keep track of the user's viewing position, allowing the screen image to change to match the natural shift in perspective.
* A wand or mouse click will signal the user's new focal point, resulting in a new depth/object to focus on.
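
A minimal sketch of how the input could be laid out in memory, assuming the captured frames are decoded into equally sized arrays; the file names and the load_camera_stack helper are placeholders, not part of the project:

 import numpy as np
 from PIL import Image
 def load_camera_stack(paths):
     """Load the horizontally tracked frames, ordered left to right by camera
     position, into one array of shape (n_cams, height, width).  Frames are
     converted to grayscale floats to keep the later sketches simple."""
     frames = [np.asarray(Image.open(p).convert("L"), dtype=np.float32) / 255.0
               for p in paths]
     return np.stack(frames, axis=0)
 # hypothetical file names for a five-camera capture
 stack = load_camera_stack(["patio_%02d.png" % i for i in range(5)])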

=== Depth Finder ===

By varying the amount of shear applied to an elliptical Gaussian filter, the depth of a pixel can be determined. Imagine all the images stacked together like a deck of cards. Now think of a scanline not as a 1D row but as a 2D slice cut through the deck and viewed from the side of the stack. Such a slice records how quickly an object/pixel changes position as the camera position moves.
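
A minimal sketch of what one sheared elliptical Gaussian filter could look like when sampled on a pixel grid; the kernel size, standard deviations, and shear value are arbitrary choices for illustration:

 import numpy as np
 def sheared_gaussian(size, sigma_x, sigma_y, shear):
     """Sample a 2D elliptical Gaussian whose x coordinate is sheared by
     'shear' pixels per row, so its major axis tilts to follow a line of
     that slope in a slice image."""
     half = size // 2
     ys, xs = np.mgrid[-half:half + 1, -half:half + 1].astype(np.float64)
     g = np.exp(-0.5 * (((xs - shear * ys) / sigma_x) ** 2 + (ys / sigma_y) ** 2))
     return g / g.sum()                 # normalize so the kernel sums to 1
 kernel = sheared_gaussian(size=21, sigma_x=1.5, sigma_y=4.0, shear=0.6)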
+ | |||
+ | === Image set processing === | ||
+ | by stacking the same scanline from each image in the set, we create slice images. The slopes of these lines correspond to how quickly an object moves in the scene relative to the camera movement. The more horizontal a line is, the closer the object is. However, if we think in terms of pixels, each pixel in the scanline will have it's own depth, which is equal to the slope of the line. The depth of every pixel in the image can be differentiated this way. | ||
+ | |||
+ | [[Image:overviewScanline.png]] | ||
+ | [[Image:Patio_0_95_77.png]] | ||
+ | [[Image:Patio_0_95_193.png]] | ||
+ | [[Image:Patio_0_95_329.png]] | ||
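
A minimal sketch of cutting such slice images out of the stack, assuming the (n_cams, height, width) grayscale layout from the Input sketch:

 def slice_image(stack, y):
     """Take scanline y from every camera image and stack the scanlines into
     one image with a row per camera.  An object's horizontal drift across
     cameras shows up as a tilted line whose slope encodes its depth."""
     return stack[:, y, :]              # shape (n_cams, width)
 # one slice image per scanline:
 # slices = [slice_image(stack, y) for y in range(stack.shape[1])]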
+ | |||
+ | To actually determine the slope of a line, differently sheared elliptical filters are used to match up with the slope of the line. The filter with the minimum variance will give the correct shape of the filter necessary to focus the image on only the desired depth. | ||
+ | |||
+ | [[Image:Filtersq000.png|200px|shear 0]] [[Image:Filtersq006.png|200px|shear 6]] [[Image:Filtersq027.png|200px|shear 27]] | ||
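
One way this selection could look, as a sketch only: test a set of candidate shears at a pixel of a slice image and keep the shear whose sheared Gaussian window has the smallest weighted intensity variance, since low variance means the window is following a line of constant colour. The candidate shears, window size, and sigmas are assumptions for the example.

 import numpy as np
 def sheared_gaussian(size, sigma_x, sigma_y, shear):
     # same kernel as the sketch in the Depth Finder section
     half = size // 2
     ys, xs = np.mgrid[-half:half + 1, -half:half + 1].astype(np.float64)
     g = np.exp(-0.5 * (((xs - shear * ys) / sigma_x) ** 2 + (ys / sigma_y) ** 2))
     return g / g.sum()
 def best_shear(slice_img, row, col, shears, size=5, sigma_x=1.0, sigma_y=2.0):
     """Return the candidate shear with the smallest weighted variance in the
     window around (row, col); the window must lie inside the slice image."""
     half = size // 2
     patch = slice_img[row - half:row + half + 1, col - half:col + half + 1]
     best, best_var = None, np.inf
     for s in shears:
         w = sheared_gaussian(size, sigma_x, sigma_y, s)
         mean = (w * patch).sum()                  # weighted mean (w sums to 1)
         var = (w * (patch - mean) ** 2).sum()     # weighted variance
         if var < best_var:
             best, best_var = s, var
     return best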
+ | |||
+ | === Output === | ||
+ | In the end, an image from the vantage point of the viewer gets rendered to the screen, not the exact input image from a camera at the viewer's position. In fact this allows new perspectives to be rendered without having any actual camera data from that precise point in the scene. The scene will also be focused on the object that the user has clicked on. The focus will look realistic because each pixel's blurring is directly proportional to its depth from the focal point. | ||
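
A minimal sketch of that final blur pass, assuming a per-pixel depth map has already been recovered; the blur_per_unit scale and the number of pre-blurred levels are arbitrary, and pre-blurring then selecting per pixel is just one simple way to get a blur that grows with depth offset:

 import numpy as np
 from scipy.ndimage import gaussian_filter
 def depth_of_focus(image, depth, focal_depth, blur_per_unit=2.0, levels=6):
     """Blur each pixel in proportion to how far its depth lies from the
     focal depth: the image is pre-blurred at a few sigma levels and each
     pixel takes the level matching its own depth offset."""
     offset = np.abs(depth - focal_depth)               # per-pixel depth offset
     level = np.round(np.minimum(offset * blur_per_unit, levels - 1)).astype(int)
     out = np.empty_like(image)
     for s in range(levels):
         mask = level == s
         blurred = image if s == 0 else gaussian_filter(image, sigma=s)
         out[mask] = blurred[mask]
     return out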

== Implementation ==

* Using [http://www.openscenegraph.com OpenSceneGraph], OpenGL, GLSL, OpenCOVER, and COVISE

== Schedule/Milestones ==

MUST finish thesis by June 8th. =p

== Results ==

* Downloads
* Images

== References/Links ==

* [http://www.cs.cmu.edu/~ph/texfund/texfund.pdf Ellipses and Gaussians, Appendix B]
* [http://delivery.acm.org/10.1145/1010000/1006088/p247-zwicker.pdf?key1=1006088&key2=1783443611&coll=GUIDE&dl=GUIDE&CFID=4464737&CFTOKEN=65734564 Bounding Box Equations 8 and 9]