From Immersive Visualization Lab Wiki
Contents
- 1 Past Projects
- 2 Active Projects
- 2.1 Ground Penetrating Radar (Philip Weber, Albert Lin, 2011)
- 2.2 San Diego Wildfires (Philip Weber, Jessica Block, 2011)
- 2.3 PanoView360 (Andrew Prudhomme, Dan Sandin, 2010-)
- 2.4 Android Navigator (Brooklyn Schlamp, Summer 2011)
- 2.5 ArtifactVis (Kyle Knabb, Jurgen Schulze, Connor DeFanti, 2008-)
- 2.6 OpenAL Audio Server (Shreenidhi Chowkwale, Summer 2011)
- 2.7 GreenLight BlackBox 2.0 (John Mangan, Summer 2011)
- 2.8 3D Reconstruction of Photographs (Matthew Religioso, 2011-)
- 2.9 Real-Time Geometry Scanning System (Daniel Tenedorio, 2011-)
- 2.10 Real-Time Meshing of Dynamic Point Clouds (Robert Pardridge, James Lue, 2011-)
- 2.11 LSystems (Sarah Larsen 2011-)
- 2.12 Android Controller (Jeanne Wang 2011-)
- 2.13 Object-Oriented Interaction with Large High Resolution Displays (Lynn Nguyen 2011-)
- 2.14 ScreenMultiViewer (John Mangan, Phi Hung Nguyen 2010-2011)
- 2.15 Kinect UI for 3D Pacman (Tony Lu, 2011)
- 2.16 CaveCAD (Lelin Zhang 2009-)
- 2.17 Neuroscience and Architecture (Daniel Rohrlick, Michael Bajorek, Mabel Zhang, Lelin Zhang 2007)
- 2.18 VOX and Virvo (Jurgen Schulze, 1999-)
Active Projects
Ground Penetrating Radar (Philip Weber, Albert Lin, 2011)
Calit2's Albert Yu-Min Lin has been named to the 2010 class of National Geographic Emerging Explorers. His current research goal is to find the tomb of Genghis Khan in Mongolia without physically turning a single rock, relying instead on a variety of data modalities such as satellite imagery, aerial photography, and sub-surface radar. This demonstration shows three scanning modalities in one visualization application: magnetic, electromagnetic, and ground penetrating radar. The latter penetrates the ground the deepest, so we chose to focus on it for visualizing depth data. Spheres with a depth-based color scheme are used to visualize the data. The user can interactively select how deep they want to go underground, and then fly around to examine the data and put it in perspective with the other two modalities.
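The depth-based coloring and interactive depth cutoff described above can be sketched as follows. This is a minimal illustration, not the lab's actual code: the function names, the red-to-blue color ramp, and the 10 m maximum depth are all assumptions.

```python
# Hypothetical sketch: each radar return becomes a sphere whose color
# interpolates from red at the surface to blue at max_depth_m.

def depth_to_rgb(depth_m, max_depth_m=10.0):
    """Map a depth in meters to an RGB triple (red shallow, blue deep)."""
    t = max(0.0, min(1.0, depth_m / max_depth_m))  # clamp to [0, 1]
    return (1.0 - t, 0.0, t)

def visible_points(points, depth_cutoff_m):
    """Keep only returns above the user-selected depth cutoff,
    mirroring the interactive depth selection in the application."""
    return [(x, y, d, depth_to_rgb(d)) for (x, y, d) in points
            if d <= depth_cutoff_m]
```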
San Diego Wildfires (Philip Weber, Jessica Block, 2011)
The Cedar Fire was a human-caused wildfire that destroyed a large number of buildings and infrastructure in San Diego County in October 2003. This application shows high resolution aerial data of the areas affected by the fires. The data resolution is extremely high at 0.5 meters for the imagery and 2 meters for elevation. Our demonstration shows this data embedded into an osgEarth-based visualization framework, which allows adding such data to any place on our planet and viewing it in a way similar to Google Earth, but with full support for high-end visualization systems.
PanoView360 (Andrew Prudhomme, Dan Sandin, 2010-)
Researchers at UIC/EVL and UCSD/Calit2 have developed a method to acquire very high resolution, surround and stereo panorama images using dual SLR cameras. This VR application allows viewing these approximately gigabyte-sized images in real-time and supports real-time changes of the viewing direction and zooming.
Android Navigator (Brooklyn Schlamp, Summer 2011)
This project implements an Android phone-based navigation tool for the CalVR environment.
ArtifactVis (Kyle Knabb, Jurgen Schulze, Connor DeFanti, 2008-)
For the past ten years, a joint University of California, San Diego and Department of Antiquities of Jordan research team led by Professor Tom Levy and Dr. Mohammad Najjar has been investigating the role of mining and metallurgy in social evolution from the Neolithic period (ca. 7500 BC) to medieval Islamic times (ca. 12th century AD). Kyle Knabb has been working with the IVL as a master's student under Professor Thomas Levy from the archaeology department. He created a 3D visualization for the StarCAVE which displays several excavation sites in Jordan, along with artifacts found there, and radiocarbon dating sites. The data resides in a PostgreSQL database with the PostGIS extension, from which the VR application pulls the data at run-time.
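A run-time query against a PostgreSQL/PostGIS store like the one described might look as follows. This is a hedged sketch: the table and column names (`artifacts`, `geom`, `period`) are illustrative assumptions, not the project's actual schema.

```python
# Build a parameterized PostGIS query for artifacts near a point.
# Geography casts make ST_DWithin measure the radius in meters.

def artifacts_near(lon, lat, radius_m):
    """Return (sql, params) selecting artifacts within radius_m meters."""
    sql = (
        "SELECT id, period, ST_X(geom) AS lon, ST_Y(geom) AS lat "
        "FROM artifacts "
        "WHERE ST_DWithin(geom::geography, "
        "ST_SetSRID(ST_MakePoint(%s, %s), 4326)::geography, %s);"
    )
    return sql, (lon, lat, radius_m)
```

At run-time the application would execute this through a driver such as psycopg2, e.g. `cur.execute(*artifacts_near(35.43, 30.68, 500))`.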
OpenAL Audio Server (Shreenidhi Chowkwale, Summer 2011)
A Linux-based audio server that uses the OpenAL API to deliver surround sound to virtual visualization environments.
GreenLight BlackBox 2.0 (John Mangan, Summer 2011)
The SUN Mobile Data Center at UCSD has been equipped with a myriad of sensors within the NSF-funded GreenLight project. This virtual reality application attempts to convey the sensor information to the user while retaining the spatial information inherent in the arrangement of hardware in the data center. The application links online to a database which collects all sensor information as it becomes available and stores a complete history of it. The VR application can show current energy consumption and temperatures of the hardware in the container, but it can also be used to query arbitrary time windows in the past.
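The arbitrary-time-window queries mentioned above amount to filtering the stored sensor history by sensor and timestamp. A minimal sketch, with sensor names and record fields as illustrative assumptions:

```python
from datetime import datetime

def window(history, sensor_id, start, end):
    """Readings for sensor_id with start <= timestamp <= end."""
    return [r for r in history
            if r["sensor"] == sensor_id and start <= r["time"] <= end]

# Toy history standing in for the online database.
history = [
    {"sensor": "rack3-temp",  "time": datetime(2011, 7, 1, 12, 0),  "value": 24.5},
    {"sensor": "rack3-temp",  "time": datetime(2011, 7, 1, 13, 0),  "value": 26.1},
    {"sensor": "rack3-watts", "time": datetime(2011, 7, 1, 12, 30), "value": 410.0},
]
readings = window(history, "rack3-temp",
                  datetime(2011, 7, 1, 12, 0), datetime(2011, 7, 1, 12, 59))
```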
3D Reconstruction of Photographs (Matthew Religioso, 2011-)
Reconstruct static objects from photographs into 3D models using the Bundler algorithm and a texturing algorithm developed by prior students. This project's goal is to optimize the texturing algorithm to maximize photorealism and efficiency, and to run the resulting application in the StarCAVE.
Real-Time Geometry Scanning System (Daniel Tenedorio, 2011-)
This interactive system constructs a 3D model of the environment as a user moves an infrared geometry camera around a room. We display the intermediate representation of the scene in real-time on virtual reality displays ranging from a single computer monitor to immersive, stereoscopic projection systems like the StarCAVE.
Real-Time Meshing of Dynamic Point Clouds (Robert Pardridge, James Lue, 2011-)
This project involves generating a triangle mesh over a point cloud that grows dynamically. The goal is to implement a meshing algorithm that is fast enough to keep up with the streaming input from a scanning device. We are using a CUDA implementation of the Marching Cubes algorithm to triangulate in real-time a point cloud obtained from the Kinect depth camera.
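The first stage of such a pipeline, back-projecting Kinect depth samples into a 3D point cloud with a pinhole camera model, can be sketched as below. The intrinsics are typical published values for the Kinect depth camera, used here only as illustrative assumptions; the CUDA Marching Cubes meshing itself is not reproduced.

```python
FX, FY = 594.2, 591.0   # focal lengths in pixels (assumed)
CX, CY = 320.0, 240.0   # principal point (assumed)

def backproject(u, v, depth_m):
    """Pixel (u, v) with metric depth -> camera-space (x, y, z)."""
    x = (u - CX) * depth_m / FX
    y = (v - CY) * depth_m / FY
    return (x, y, depth_m)

def depth_to_points(depth_image):
    """depth_image: dict {(u, v): depth_m}; zero depth = no return."""
    return [backproject(u, v, d)
            for (u, v), d in depth_image.items() if d > 0.0]
```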
LSystems (Sarah Larsen 2011-)
Creates an L-system and displays it with either line or cylinder connections.
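The string-rewriting step that precedes drawing the line or cylinder segments can be sketched in a few lines. The rules below are Lindenmayer's classic algae system, a standard textbook example rather than the rules this project necessarily uses.

```python
def expand(axiom, rules, generations):
    """Apply the L-system production rules in parallel for n generations;
    symbols without a rule are copied unchanged."""
    s = axiom
    for _ in range(generations):
        s = "".join(rules.get(ch, ch) for ch in s)
    return s

algae = {"A": "AB", "B": "A"}
# expand("A", algae, 3) -> "ABAAB"
```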
Android Controller (Jeanne Wang 2011-)
An Android-based controller for a visualization system such as the StarCAVE or a multi-screen grid.
Object-Oriented Interaction with Large High Resolution Displays (Lynn Nguyen 2011-)
Investigate the practicality of using smartphones to interact with large high resolution displays. To accomplish this task it is not necessary to find the spatial location of the phone relative to the display; instead, the object a user wants to interact with can be identified through image recognition. The interaction with the object itself can then be done using the smartphone as the medium. The feasibility of this concept is investigated by implementing a prototype.
ScreenMultiViewer (John Mangan, Phi Hung Nguyen 2010-2011)
A display mode within CalVR that allows two users to simultaneously use head trackers within either the StarCAVE or NexCAVE, with minimal immersion loss.
Kinect UI for 3D Pacman (Tony Lu, 2011)
An experiment with the Kinect to implement a device-free, gesture-controlled user interface in the StarCAVE to run a 3D Pacman game.
CaveCAD (Lelin Zhang 2009-)
Calit2 researcher Lelin Zhang provides architectural designers with a purely immersive 3D design experience in the StarCAVE virtual reality environment.
Neuroscience and Architecture (Daniel Rohrlick, Michael Bajorek, Mabel Zhang, Lelin Zhang 2007)
This project started off as a Calit2 seed-funded pilot study with the Swartz Center for Computational Neuroscience in which a human subject has to find their way to specific locations in the Calit2 building while their brain waves are being scanned by a high resolution EEG. Michael's responsibility was the interface between the StarCAVE and the EEG system, transferring tracker data and other application parameters to allow for the correlation of EEG data with VR parameters. Daniel created the 3D model of the New Media Arts wing of the building using 3ds Max. Mabel refined the Calit2 building geometry. This project has been receiving funding from HMC.
VOX and Virvo (Jurgen Schulze, 1999-)
Ongoing development of real-time volume rendering algorithms for interactive display at the desktop (DeskVOX) and in virtual environments (CaveVOX). Virvo is the name of the GUI-independent, OpenGL-based volume rendering library that both DeskVOX and CaveVOX use.
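At the core of a volume ray caster such as Virvo is the compositing of samples along each viewing ray. A minimal sketch of front-to-back alpha compositing with early ray termination, using illustrative scalar colors rather than the library's actual data types:

```python
def composite(samples):
    """samples: list of (color, opacity) pairs ordered front to back.
    Returns the accumulated (color, alpha), stopping early once the
    ray is effectively opaque (early ray termination)."""
    acc_c, acc_a = 0.0, 0.0
    for color, alpha in samples:
        acc_c += (1.0 - acc_a) * alpha * color
        acc_a += (1.0 - acc_a) * alpha
        if acc_a >= 0.99:
            break
    return acc_c, acc_a
```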