Rincon

From Immersive Visualization Lab Wiki
Revision as of 10:33, 1 June 2007


Project Overview

The goal of this project is to implement a scalable, real-time solution for streaming multiple high-definition videos to be stitched into a "super high-definition" panoramic video. This is accomplished by streaming feeds from multiple calibrated HD cameras to a centralized location, where they are processed for storage or live viewing on multi-panel displays. To account for the perspective distortion that affects displays with large vertical or horizontal fields of view, the videos are passed through a spherical warping algorithm on graphics hardware before being displayed.
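The warp amounts to remapping each rectilinear camera pixel into spherical (longitude/latitude) coordinates before display. The project performs this on graphics hardware; purely as an illustration, a minimal CPU sketch of that mapping might look like the following (the function name and parameters are ours, not taken from the project code):

```python
import math

def rectilinear_to_spherical(x, y, width, height, hfov):
    """Map a pixel (x, y) of a flat (rectilinear) camera image to
    spherical (longitude, latitude) angles in radians.

    hfov is the camera's horizontal field of view in radians.
    """
    # Focal length in pixels implied by the horizontal field of view.
    f = (width / 2) / math.tan(hfov / 2)
    # Pixel offsets from the image center.
    dx = x - width / 2
    dy = y - height / 2
    # Angles of the pixel's view ray on the sphere.
    lon = math.atan2(dx, f)
    lat = math.atan2(dy, math.hypot(dx, f))
    return lon, lat
```

In practice a lookup table of such angles would be built once per calibrated camera and applied per frame on the GPU.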

Another important aspect of the project is efficiently streaming the video across a network with unreliable bandwidth: if the bandwidth between the sender and receiver drops, we want to let the user choose which aspect of the video to degrade in order to maintain the desired performance (e.g. lowering the frame rate to preserve image quality, or vice versa).
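As a toy sketch of that trade-off (the function and its parameters are illustrative, not the project's actual API): when available bandwidth falls below the stream's target bitrate, one of the two properties is scaled back proportionally while the preferred one is preserved.

```python
def adapt_stream(available_kbps, target_kbps, fps, quality, prefer="quality"):
    """Degrade either frame rate or per-frame quality when available
    bandwidth falls below the stream's target bitrate.

    prefer names the property the user wants to keep; the other is
    scaled down proportionally. All names here are hypothetical.
    """
    if available_kbps >= target_kbps:
        return fps, quality  # enough bandwidth: leave the stream alone
    scale = available_kbps / target_kbps
    if prefer == "quality":
        # Drop frames, keep per-frame detail.
        return max(1, int(fps * scale)), quality
    # Keep motion smooth, reduce per-frame detail.
    return fps, quality * scale
```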

Project Status

Currently, we are able to stream video from the two HD cameras we have available to us and display the warped videos simultaneously on the receiving end. The user can then interactively change the bitrate of the stream, change the focal depth of the virtual camera, and manually align the images.

The immediate goals are to fix a bug that currently prevents all of the streams from being shown on a multi-panel display at once, and to optimize the pipeline by performing certain parts of the video encoding in hardware rather than in software.
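Manually aligning the two streams can be thought of as finding the pixel offset at which their overlap regions agree best. As a toy illustration only (not the project's code), a sum-of-absolute-differences search over one overlapping column of grayscale values:

```python
def best_overlap_offset(left_col, right_col, max_shift):
    """Return the vertical shift of right_col (within +/- max_shift)
    that best aligns it with left_col, scored by the mean absolute
    difference over the pixels the two columns share.
    """
    best_shift, best_score = 0, float("inf")
    for shift in range(-max_shift, max_shift + 1):
        diffs = [abs(left_col[i] - right_col[i + shift])
                 for i in range(len(left_col))
                 if 0 <= i + shift < len(right_col)]
        if not diffs:
            continue  # no overlap at this shift
        score = sum(diffs) / len(diffs)
        if score < best_score:
            best_shift, best_score = shift, score
    return best_shift
```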

Technical Specs

Students

Andrew Prudhomme: Andrew has worked on the back end of the project, specifically receiving input from the cameras, converting the raw data into a usable format, and compressing and streaming it efficiently across a network.

Alex Zavodny: Alex has worked on the front end of the project, which involves receiving the individual streams from the cameras, processing them, and displaying them in an efficient and timely manner.