From Immersive Visualization Lab Wiki
Revision as of 09:33, 1 June 2007 by Azavodny (Talk | contribs)



Project Overview

The goal of this project is to implement a scalable real-time solution for streaming multiple high-definition videos to be stitched into a "super-high-definition" panoramic video. This is accomplished by streaming feeds from multiple calibrated HD cameras to a centralized location, where they are processed for storage or live viewing on multi-panel displays. To correct the perspective distortion that affects displays with large vertical or horizontal fields of view, the videos are passed through a spherical warping algorithm on graphics hardware before being displayed.
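The spherical warp can be sketched as mapping each flat-image pixel to angles on a sphere. The pinhole model below and the `focal` parameter (in pixel units) are assumptions for illustration; the page does not specify the exact mapping used on the graphics hardware:

```python
import math

def spherical_warp(x, y, focal):
    """Map a flat-image pixel offset (x, y) from the optical center to
    spherical angles (longitude, latitude), assuming a pinhole camera
    with focal length `focal` in pixel units (a hypothetical parameter)."""
    lon = math.atan2(x, focal)                 # horizontal angle
    lat = math.atan2(y, math.hypot(x, focal))  # vertical angle
    return lon, lat

# A pixel at the optical center maps to the sphere's origin.
print(spherical_warp(0.0, 0.0, 1000.0))  # (0.0, 0.0)
```

In practice this lookup would run per fragment on the GPU, but the geometry is the same: pixels farther from the center subtend larger angles, which is what removes the stretching at the edges of a wide display.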

Another important aspect of the project is efficiently streaming the video across a network with unreliable bandwidth: if the bandwidth between the sender and receiver drops, we want the user to choose which aspect of the video to degrade in order to maintain the desired performance (e.g. lowering the frame rate to preserve image quality, or vice versa).
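This trade-off can be illustrated with a toy degradation policy. The function name, the units (kbit/s and kbit per frame), and the two-way `prefer` switch are hypothetical; they are not taken from the project's actual implementation:

```python
def adapt_stream(bandwidth_kbps, frame_rate, kbit_per_frame, prefer="quality"):
    """Sketch of a degradation policy: when the available bandwidth falls
    below what the stream needs, reduce either the frame rate (preserving
    per-frame quality) or the per-frame bitrate (preserving frame rate)."""
    required_kbps = frame_rate * kbit_per_frame
    if required_kbps <= bandwidth_kbps:
        return frame_rate, kbit_per_frame  # bandwidth is sufficient; no change
    if prefer == "quality":
        # Keep per-frame quality; drop frames until the stream fits.
        frame_rate = bandwidth_kbps / kbit_per_frame
    else:
        # Keep the frame rate; lower the per-frame bitrate instead.
        kbit_per_frame = bandwidth_kbps / frame_rate
    return frame_rate, kbit_per_frame

# 30 fps at 500 kbit/frame needs 15 Mbit/s; on a 10 Mbit/s link,
# preferring quality drops the frame rate to 20 fps.
print(adapt_stream(10000, 30, 500, prefer="quality"))  # (20.0, 500)
```

A real controller would also need to measure bandwidth continuously and smooth its decisions, but the core choice the user makes is exactly this one-dimensional trade-off.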

Project Status

Currently, we are able to stream video from the two HD cameras available to us and display the warped videos simultaneously on the receiving end. The user can then interactively change the bitrate of the stream, change the focal depth of the virtual camera, and manually align the images.

Technical Specs


Andrew Prudhomme: Andrew has worked on the back end of the project, specifically in receiving input from the cameras, performing the necessary steps to obtain the data in a usable format, and compressing and streaming it efficiently across a network.

Alex Zavodny: Alex has worked on the front end of the project, which involves receiving the individual streams from the cameras, processing them, and displaying them in an efficient and timely manner.