Karen Lin
I am a second-year CSE Master's student at UCSD specializing in graphics and vision. I work with Matthias Zwicker and Jurgen Schulze on a joint project in the Visualization Lab of Cal-IT2. There's nothing like playing with the latest multi-million-dollar visual technology every day.
Depth Of Focus Project
I am developing a real-time auto-focusing feature for video streamed to 3D displays. It will cut bandwidth usage by limiting detail outside the viewer's region of interest and by reducing the number of raw data frames required to produce 3D video. The region of interest is defined as the depth at the viewer's focal point within the displayed scene.
I will reduce the required number of captured data frames by creating interpolated images from the video capture while retaining comparable quality in the area of interest. Additionally, the interpolation scheme can simulate natural depth of focus by interpolating cleanly only between pixels at the depth of the viewer's focal point. I will determine this depth from a mouse pointer, and eventually via eye tracking. This mode of image filtering will adaptively blur the remaining areas of the image.
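To illustrate the interpolation idea, here is a minimal C++ sketch (not the project's actual pipeline; the Image type and the even-spacing parameterization are my assumptions). Each camera image is shifted in proportion to that camera's offset from the desired viewpoint and the shifted images are averaged: pixels at the focal depth line up across images and stay sharp, while pixels at other depths misalign and blur naturally.

    #include <algorithm>
    #include <cmath>
    #include <vector>

    // Grayscale image as a flat row-major array (hypothetical minimal type).
    struct Image {
        int w, h;
        std::vector<float> px;
        float at(int x, int y) const {
            // Clamp to the border so shifted samples stay valid.
            x = std::min(std::max(x, 0), w - 1);
            y = std::min(std::max(y, 0), h - 1);
            return px[y * w + x];
        }
    };

    // Synthesize a view focused at 'focalDisparity' (pixels of shift per
    // unit of camera spacing). Camera i sits at offset (i - viewerPos)
    // from the desired viewpoint along the camera row.
    Image refocus(const std::vector<Image>& cams, float viewerPos,
                  float focalDisparity) {
        Image out{cams[0].w, cams[0].h,
                  std::vector<float>(cams[0].w * cams[0].h, 0.0f)};
        for (int y = 0; y < out.h; ++y)
            for (int x = 0; x < out.w; ++x) {
                float sum = 0.0f;
                for (std::size_t i = 0; i < cams.size(); ++i) {
                    // Objects at the focal depth appear shifted by exactly
                    // this amount in camera i, so they align after the shift.
                    float shift = (static_cast<float>(i) - viewerPos) * focalDisparity;
                    sum += cams[i].at(x + static_cast<int>(std::lround(shift)), y);
                }
                out.px[y * out.w + x] = sum / cams.size();
            }
        return out;
    }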
Algorithm
Input
- The input is a set of 2D images of a 3D scene recorded by a camera tracking/panning horizontally along a plane. Equivalently, it can be thought of as the output of a camera array in which a row of cameras is evenly spaced and identically calibrated.
- Head tracking follows the user's viewing position, allowing the screen image to update with the natural perspective change (see the sketch after this list).
- A wand/mouse click signals the user's new focal point, giving a new depth/object to focus on.
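As a rough sketch of how the tracked head position might select source views (the names and the centered, evenly spaced layout are my assumptions, not from the project), the horizontal head position maps to a fractional camera index; the two neighboring cameras and a blend weight then drive the interpolation:

    #include <algorithm>

    // The two nearest cameras for a given head position, plus a blend
    // weight toward the right one.
    struct ViewBlend {
        int left, right;   // indices of the neighboring cameras
        float t;           // blend weight toward 'right', in [0,1]
    };

    // Assumes numCams cameras spaced 'spacing' meters apart, centered
    // on the screen (hypothetical parameterization).
    ViewBlend selectCameras(float headX, int numCams, float spacing) {
        float idx = headX / spacing + (numCams - 1) * 0.5f;
        idx = std::clamp(idx, 0.0f, static_cast<float>(numCams - 1));
        int left = static_cast<int>(idx);
        int right = std::min(left + 1, numCams - 1);
        return {left, right, idx - left};
    }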
Depth Finder
By varying the amount of shear applied to an elliptical Gaussian filter, the depth of a pixel can be determined. Imagine all the images stacked together like a deck of cards. Now reinterpret a scanline as a 2D slice through the deck, seen from the side of the stack. This slice records how quickly an object/pixel changes position as the camera moves: nearby objects sweep across the views faster than distant ones, so the slope of a pixel's streak in the slice encodes its depth.
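The project's filter is an elliptical Gaussian; as a simplified stand-in for the same idea (the function names and the variance score are my assumptions), the sketch below scans candidate shears through the image stack and keeps the one along which a pixel's color varies least. That winning shear is the pixel's disparity, which is inversely related to depth.

    #include <algorithm>
    #include <cmath>
    #include <vector>

    // stack[k] is camera k's scanline; a point at depth d appears at
    // x + k*disparity(d) in camera k, tracing a slanted streak through
    // the stack. The shear whose samples agree most is the disparity.
    float estimateDisparity(const std::vector<std::vector<float>>& stack,
                            int x, float maxDisparity, float step) {
        float bestD = 0.0f, bestVar = 1e30f;
        for (float d = -maxDisparity; d <= maxDisparity; d += step) {
            float sum = 0.0f, sumSq = 0.0f;
            int n = 0;
            for (std::size_t k = 0; k < stack.size(); ++k) {
                int xs = x + static_cast<int>(std::lround(k * d));
                if (xs < 0 || xs >= static_cast<int>(stack[k].size())) continue;
                float v = stack[k][xs];
                sum += v; sumSq += v * v; ++n;
            }
            if (n < 2) continue;
            float mean = sum / n;
            float var = sumSq / n - mean * mean;  // low variance = consistent streak
            if (var < bestVar) { bestVar = var; bestD = d; }
        }
        return bestD;  // depth then follows from depth ~ baseline * focal / disparity
    }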
Image set processing
add something here
Output
In the end, the image rendered to the screen is synthesized from the vantage point of the viewer rather than taken directly from a camera at the viewer's position. In fact, this allows new perspectives to be rendered without any actual camera data from that precise point in the scene. The scene will also be focused on the object the user has clicked on. The focus will look realistic because each pixel's blur is directly proportional to its distance from the focal depth.
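A minimal sketch of that final blur pass follows (the kernel shape and the proportionality constant are assumptions; the real renderer would do this on the GPU). Each pixel gets a Gaussian blur whose radius grows with its distance from the focal depth; only the horizontal pass is shown for brevity.

    #include <algorithm>
    #include <cmath>
    #include <vector>

    // Depth-dependent blur for one image row. A full implementation
    // would also run a vertical pass.
    std::vector<float> focusBlurRow(const std::vector<float>& row,
                                    const std::vector<float>& depth,
                                    float focalDepth, float blurScale) {
        std::vector<float> out(row.size());
        for (int x = 0; x < static_cast<int>(row.size()); ++x) {
            // Blur radius proportional to distance from the focal depth.
            float sigma = blurScale * std::fabs(depth[x] - focalDepth);
            if (sigma < 1e-3f) { out[x] = row[x]; continue; }  // in focus
            int r = static_cast<int>(std::ceil(3.0f * sigma));
            float sum = 0.0f, wsum = 0.0f;
            for (int k = -r; k <= r; ++k) {
                int xs = std::clamp(x + k, 0, static_cast<int>(row.size()) - 1);
                float w = std::exp(-0.5f * (k * k) / (sigma * sigma));
                sum += w * row[xs]; wsum += w;
            }
            out[x] = sum / wsum;
        }
        return out;
    }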
Implementation
- Using OpenSceneGraph, OpenGL, GLSL, OpenCOVER, and COVISE (a hookup sketch follows below)
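As a rough sketch of how a GLSL focus shader might be attached through OpenSceneGraph (the shader file name and uniform name are hypothetical; OpenCOVER/COVISE integration is omitted):

    #include <osg/Node>
    #include <osg/Program>
    #include <osg/Shader>
    #include <osg/StateSet>
    #include <osg/Uniform>

    // Attach a depth-of-focus fragment shader to a node's state set.
    // "focus.frag" and "focalDepth" are placeholder names.
    void attachFocusShader(osg::Node* node, float focalDepth) {
        osg::ref_ptr<osg::Program> program = new osg::Program;
        program->addShader(osg::Shader::readShaderFile(
            osg::Shader::FRAGMENT, "focus.frag"));
        osg::StateSet* ss = node->getOrCreateStateSet();
        ss->setAttributeAndModes(program.get(), osg::StateAttribute::ON);
        // The shader scales each pixel's blur by its distance from this depth.
        ss->addUniform(new osg::Uniform("focalDepth", focalDepth));
    }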
Schedule/Milestones
MUST finish thesis by June 8th. =p
Results
- Downloads
- Images

References/Links

- Ellipses and Gaussians, Appendix B: http://www.cs.cmu.edu/~ph/texfund/texfund.pdf
- Bounding Box Equations 8 and 9: http://delivery.acm.org/10.1145/1010000/1006088/p247-zwicker.pdf?key1=1006088&key2=1783443611&coll=GUIDE&dl=GUIDE&CFID=4464737&CFTOKEN=65734564
- Testing out Porter's Pub: http://ivl.calit2.net/wiki/index.php/Porters