 
<hr>
  
===[[3D Reconstruction of Photographs]] (Matthew Religioso, 2011-)===
 
<table>
 
  <tr>
 
  <td>[[Image:Wiki.jpg]]</td>
 
  <td width=20></td>
 
  <td>Reconstructs static objects from photographs into 3D models using the Bundler algorithm and a texturing algorithm developed by prior students. The project's goal is to optimize the texturing step to maximize photorealism and efficiency, and to run the resulting application in the StarCAVE.</td>
 
  <td width=20></td>
 
  </tr>
 
</table>
 
<hr>
 
 
===[[Real-Time Geometry Scanning System]] (Daniel Tenedorio, 2011-)===
 
<table>
 
  <tr>
 
  <td>[[Image:dtenedor-wiki-icon.png]]</td>
 
  <td width=20></td>
 
  <td>This interactive system constructs a 3D model of the environment as a user moves an infrared geometry camera around a room. We display the intermediate representation of the scene in real-time on virtual reality displays ranging from a single computer monitor to immersive, stereoscopic projection systems like the StarCAVE. A sketch of the depth-image back-projection at the core of such a system appears below the table.</td>
 
  <td width=20></td>
 
  </tr>
 
</table>
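As a rough illustration of the first step such a scanning system performs, the following sketch back-projects a depth image into camera-space 3D points using a pinhole camera model. The intrinsics (fx, fy, cx, cy) and the millimeter depth scale are assumed placeholders, not parameters of the actual system.

<pre>
// Sketch: back-project a depth image into camera-space 3D points with a
// pinhole model. The intrinsics and the depth scale are hypothetical
// placeholders, not values from the actual scanning system.
#include <cstdint>
#include <vector>

struct Point3 { float x, y, z; };

std::vector<Point3> depthToPoints(const std::vector<uint16_t>& depth,
                                  int width, int height,
                                  float fx, float fy, float cx, float cy)
{
    std::vector<Point3> points;
    points.reserve(depth.size());
    for (int v = 0; v < height; ++v) {
        for (int u = 0; u < width; ++u) {
            uint16_t d = depth[v * width + u];
            if (d == 0) continue;              // no measurement at this pixel
            float z = d * 0.001f;              // assume depth is given in millimeters
            points.push_back({ (u - cx) * z / fx,
                               (v - cy) * z / fy,
                               z });
        }
    }
    return points;
}
</pre>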
 
<hr>
 
 
===[[Real-Time Meshing of Dynamic Point Clouds]] (Robert Pardridge, James Lue, 2011-)===
 
<table>
 
  <tr>
 
  <td>[[Image:Marchingcubes.png]]</td>
 
  <td>This project involves generating a triangle mesh over a point cloud that grows dynamically. The goal is to implement a meshing algorithm that is fast enough to keep up with the streaming input from a scanning device. We are using a CUDA implementation of the Marching Cubes algorithm to triangulate, in real-time, a point cloud obtained from the Kinect depth camera. A sketch of the per-voxel classification step of Marching Cubes appears below the table.</td>
 
  </tr>
 
</table>
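The following is a minimal CPU-side sketch of the per-voxel classification step of Marching Cubes, which assigns each voxel an 8-bit case index by thresholding its corner samples against an isovalue; in the project this step runs as a CUDA kernel over a grid derived from the Kinect point cloud. The grid layout and scalar field used here are assumptions for illustration only.

<pre>
// Sketch of Marching Cubes voxel classification: the eight corner samples of a
// voxel are thresholded against an isovalue to form the case index into the
// standard edge/triangle tables. Grid, field, and isovalue are hypothetical.
#include <cstdint>
#include <vector>

// Scalar field on a regular grid (e.g., point density from the Kinect cloud).
struct Grid {
    int nx, ny, nz;
    std::vector<float> values;   // size nx * ny * nz
    float at(int x, int y, int z) const { return values[(z * ny + y) * nx + x]; }
};

// Returns the Marching Cubes case index (0..255) for the voxel at (x, y, z).
uint8_t cubeIndex(const Grid& g, int x, int y, int z, float isovalue)
{
    // Corner offsets in the conventional Marching Cubes ordering.
    static const int corner[8][3] = {
        {0,0,0}, {1,0,0}, {1,1,0}, {0,1,0},
        {0,0,1}, {1,0,1}, {1,1,1}, {0,1,1}
    };
    uint8_t index = 0;
    for (int i = 0; i < 8; ++i) {
        if (g.at(x + corner[i][0], y + corner[i][1], z + corner[i][2]) < isovalue)
            index |= (1u << i);
    }
    return index;   // 0 or 255 means no triangles; other values index the edge table
}
</pre>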
 
<hr>
 
 
===[[LSystems]] (Sarah Larsen 2011-)===
 
<table>
 
  <tr>
 
  <td>[[Image:LSystems2.png]]</td>
 
  <td>Generates an L-system and displays it with either line or cylinder connections between segments. A sketch of the underlying string-rewriting step appears below the table.</td>
 
  </tr>
 
</table>
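A minimal sketch of L-system string rewriting is shown below. The axiom and the single production rule are hypothetical examples, and the conversion of the resulting string into line or cylinder geometry is not shown.

<pre>
// Minimal L-system string rewriting: repeatedly replace each symbol by its
// production. The axiom and rule are hypothetical; rendering the final string
// as line or cylinder geometry happens in a separate step.
#include <iostream>
#include <map>
#include <string>

std::string rewrite(std::string s, const std::map<char, std::string>& rules, int iterations)
{
    for (int i = 0; i < iterations; ++i) {
        std::string next;
        for (char c : s) {
            auto it = rules.find(c);
            next += (it != rules.end()) ? it->second : std::string(1, c);
        }
        s = next;
    }
    return s;
}

int main()
{
    // Example rule for a simple branching plant: 'F' draws a segment,
    // '+'/'-' turn, '[' and ']' push/pop the turtle state.
    std::map<char, std::string> rules = { {'F', "F[+F]F[-F]F"} };
    std::cout << rewrite("F", rules, 3) << "\n";
    return 0;
}
</pre>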
 
<hr>
 
 
===[[Android Controller]] (Jeanne Wang 2011-)===
 
<table>
 
  <tr>
 
  <td>[[Image:androidscreenshot.png]]</td>
 
  <td>An Android-based controller for visualization systems such as the StarCAVE or a multi-screen display grid.</td>
 
  </tr>
 
</table>
 
<hr>
 
 
===[[Object-Oriented Interaction with Large High Resolution Displays]] (Lynn Nguyen 2011-)===
 
<table>
 
  <tr>
 
  <td>[[Image:Lynn_selected.jpg]]</td>
 
  <td>This project investigates the practicality of using smartphones to interact with large high-resolution displays. To accomplish such a task, it is not necessary to find the spatial location of the phone relative to the display; instead, the object a user wants to interact with can be identified through image recognition, and the interaction with the object itself can be carried out with the smartphone as the medium. The feasibility of this concept is investigated with a prototype implementation. A sketch of a feature-matching approach to such image recognition appears below the table.</td>
 
  </tr>
 
</table>
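One common way to recognize which displayed object the phone is pointed at is to match local image features between the phone's snapshot and reference images of the candidate objects. The sketch below uses OpenCV's ORB features with brute-force matching; this is an assumed, illustrative approach, not necessarily the method used in the prototype.

<pre>
// Sketch: pick the candidate object whose reference image shares the most
// "good" ORB feature matches with the phone snapshot. The distance threshold
// is arbitrary; the prototype's actual recognition method is not documented here.
#include <opencv2/opencv.hpp>
#include <vector>

// Returns the index of the best-matching candidate image, or -1 if none match.
int bestMatch(const cv::Mat& query, const std::vector<cv::Mat>& candidates)
{
    cv::Ptr<cv::ORB> orb = cv::ORB::create();
    cv::BFMatcher matcher(cv::NORM_HAMMING);

    std::vector<cv::KeyPoint> qk;
    cv::Mat qd;
    orb->detectAndCompute(query, cv::noArray(), qk, qd);

    int best = -1;
    size_t bestCount = 0;
    for (size_t i = 0; i < candidates.size(); ++i) {
        std::vector<cv::KeyPoint> ck;
        cv::Mat cd;
        orb->detectAndCompute(candidates[i], cv::noArray(), ck, cd);
        if (qd.empty() || cd.empty()) continue;

        std::vector<cv::DMatch> matches;
        matcher.match(qd, cd, matches);

        size_t good = 0;
        for (const auto& m : matches)
            if (m.distance < 40) ++good;     // arbitrary Hamming-distance threshold
        if (good > bestCount) { bestCount = good; best = static_cast<int>(i); }
    }
    return best;
}
</pre>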
 
<hr>
 
  
 


===Ground Penetrating Radar (Philip Weber, Albert Lin, 2011)===

Calit2's Albert Yu-Min Lin has been named to the 2010 class of National Geographic Emerging Explorers. His current research goal is to find the tomb of Genghis Khan in Mongolia without physically turning a single rock, by analyzing a variety of data modalities such as satellite imagery, aerial photography, and sub-surface radar. This demonstration shows three scanning modalities in one visualization application: magnetic, electro-magnetic, and ground penetrating radar. The latter penetrates the ground the deepest, so we chose to focus on it for visualizing depth data. Spheres with a depth-based color scheme are used to visualize the data. The user can interactively select how deep underground they want to go, and then fly around to examine the data and put it in perspective with the other two modalities.
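A minimal sketch of a depth-based color ramp like the one described above, assuming a simple linear blue-to-red mapping; the actual color scheme and depth range used in the demo are not documented here, so both are placeholders.

<pre>
// Sketch: map a sphere's depth below the surface to a blue-to-red gradient.
// The maximum depth and the specific ramp are hypothetical placeholders.
struct Color { float r, g, b; };

Color depthToColor(float depth, float maxDepth)
{
    float t = depth / maxDepth;            // 0 at the surface, 1 at maxDepth
    if (t < 0.f) t = 0.f;
    if (t > 1.f) t = 1.f;
    return { t, 0.2f, 1.0f - t };          // shallow = blue, deep = red
}
</pre>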

===San Diego Wildfires (Philip Weber, Jessica Block, 2011)===

The Cedar Fire was a human-caused wildfire which destroyed a large number of buildings and infrastructure in San Diego County in October 2003. This application shows high resolution aerial data of the areas affected by the fires. The data resolution is extremely high at 0.5 meters for the imagery and 2 meters for elevation. Our demonstration shows this data embedded into an osgEarth-based visualization framework, which allows adding such data to any place on our planet and viewing it in a way similar to Google Earth, but with full support for high-end visualization systems.

===PanoView360 (Andrew Prudhomme, Dan Sandin, 2010-)===

Researchers at UIC/EVL and UCSD/Calit2 have developed a method to acquire very high resolution, surround and stereo panorama images using dual SLR cameras. This VR application allows viewing these approximately gigabyte-sized images in real-time and supports real-time changes of the viewing direction and zooming.

===Android Navigator (Brooklyn Schlamp, Summer 2011)===

This project implements an Android-phone-based navigation tool for the CalVR environment.

===ArtifactVis (Kyle Knabb, Jurgen Schulze, Connor DeFanti, 2008-)===

[[Image:Khirbit.jpg]] For the past ten years, a joint University of California, San Diego and Department of Antiquities of Jordan research team led by Professor Tom Levy and Dr. Mohammad Najjar has been investigating the role of mining and metallurgy in social evolution from the Neolithic period (ca. 7500 BC) to medieval Islamic times (ca. 12th century AD). Kyle Knabb has been working with the IVL as a master's student under Professor Thomas Levy of the archaeology department. He created a 3D visualization for the StarCAVE which displays several excavation sites in Jordan, along with artifacts found there and radiocarbon dating sites. The data resides in a PostgreSQL database with the PostGIS extension, from which the VR application pulls the data at run-time.
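A minimal sketch of how a VR application can pull artifact positions from a PostgreSQL/PostGIS database at run-time using libpq. The connection string, table name, and column names are hypothetical; the application's real schema is not documented here.

<pre>
// Sketch: fetch artifact positions from PostGIS via libpq. Connection
// parameters, the "artifacts" table, and its columns are hypothetical.
#include <libpq-fe.h>
#include <cstdio>

int main()
{
    PGconn* conn = PQconnectdb("host=localhost dbname=artifacts user=viewer");
    if (PQstatus(conn) != CONNECTION_OK) {
        std::fprintf(stderr, "connection failed: %s\n", PQerrorMessage(conn));
        PQfinish(conn);
        return 1;
    }

    // Hypothetical query: extract coordinates from a 3D PostGIS geometry column.
    PGresult* res = PQexec(conn,
        "SELECT id, ST_X(geom), ST_Y(geom), ST_Z(geom) FROM artifacts;");
    if (PQresultStatus(res) == PGRES_TUPLES_OK) {
        for (int i = 0; i < PQntuples(res); ++i)
            std::printf("artifact %s at (%s, %s, %s)\n",
                        PQgetvalue(res, i, 0), PQgetvalue(res, i, 1),
                        PQgetvalue(res, i, 2), PQgetvalue(res, i, 3));
    }
    PQclear(res);
    PQfinish(conn);
    return 0;
}
</pre>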

===OpenAL Audio Server (Shreenidhi Chowkwale, Summer 2011)===

A Linux-based audio server that uses the OpenAL API to deliver surround sound to virtual visualization environments.
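A minimal sketch of positional audio with the OpenAL API: open the default device, create a context, and place a sound source in 3D relative to the listener. Buffer handling and the audio server's network protocol are omitted.

<pre>
// Sketch: set up an OpenAL context and position a source in 3D space.
// The source position is an arbitrary example; buffer data is not shown.
#include <AL/al.h>
#include <AL/alc.h>

int main()
{
    ALCdevice* device = alcOpenDevice(nullptr);           // default output device
    if (!device) return 1;
    ALCcontext* context = alcCreateContext(device, nullptr);
    alcMakeContextCurrent(context);

    // Listener at the origin; a source two meters to the right, one meter ahead.
    alListener3f(AL_POSITION, 0.f, 0.f, 0.f);
    ALuint source;
    alGenSources(1, &source);
    alSource3f(source, AL_POSITION, 2.f, 0.f, -1.f);

    // ... attach audio with alSourcei(source, AL_BUFFER, buffer) and alSourcePlay(source) ...

    alDeleteSources(1, &source);
    alcMakeContextCurrent(nullptr);
    alcDestroyContext(context);
    alcCloseDevice(device);
    return 0;
}
</pre>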

===GreenLight BlackBox 2.0 (John Mangan, Summer 2011)===

[[Image:GreenLight.png]] The SUN Mobile Data Center at UCSD has been equipped with a myriad of sensors within the NSF-funded GreenLight project. This virtual reality application conveys the sensor information to the user while retaining the spatial information inherent in the arrangement of the hardware in the data center. The application links to an on-line database which collects all sensor readings as they become available and stores their complete history. The VR application can show current energy consumption and temperatures of the hardware in the container, but it can also be used to query arbitrary time windows in the past.


===ScreenMultiViewer (John Mangan, Phi Hung Nguyen 2010-2011)===

A display mode within CalVR that allows two users to simultaneously use head trackers within either the StarCAVE or the NexCAVE, with minimal loss of immersion.



===CaveCAD (Lelin Zhang 2009-)===

[[Image:CaveCAD.jpg]] Calit2 researcher Lelin Zhang provides architectural designers with a fully immersive 3D design experience in the StarCAVE virtual reality environment.


===Neuroscience and Architecture (Daniel Rohrlick, Michael Bajorek, Mabel Zhang, Lelin Zhang 2007)===

[[Image:Neuroscience.jpg]] This project started as a Calit2 seed-funded pilot study with the Swartz Center for Neuroscience, in which a human subject has to find their way to specific locations in the Calit2 building while their brain waves are recorded by a high-resolution EEG. Michael's responsibility was the interface between the StarCAVE and the EEG system, transferring tracker data and other application parameters to allow the EEG data to be correlated with the VR parameters. Daniel created the 3D model of the New Media Arts wing of the building using 3ds Max. Mabel refined the Calit2 building geometry. This project has been receiving funding from HMC.


===VOX and Virvo (Jurgen Schulze, 1999-)===

[[Image:Deskvox.jpg]] Ongoing development of real-time volume rendering algorithms for interactive display at the desktop (DeskVOX) and in virtual environments (CaveVOX). Virvo is the name of the GUI-independent, OpenGL-based volume rendering library that both DeskVOX and CaveVOX use.
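A minimal sketch of the front-to-back compositing loop at the core of a ray-casting volume renderer like the ones in DeskVOX and CaveVOX; the sampling and transfer functions below are stand-ins, and the real library's interfaces are not reproduced here.

<pre>
// Sketch: front-to-back compositing along one ray. Samples of the volume are
// mapped through a transfer function and accumulated until the ray saturates.
// sampleVolume() and transferFunction() are placeholders for the volume lookup
// and the user-defined transfer function.
struct RGBA { float r, g, b, a; };

RGBA castRay(int numSteps, float stepSize,
             float (*sampleVolume)(float t),
             RGBA (*transferFunction)(float value))
{
    RGBA dst = { 0.f, 0.f, 0.f, 0.f };
    for (int i = 0; i < numSteps && dst.a < 0.99f; ++i) {
        float value = sampleVolume(i * stepSize);
        RGBA src = transferFunction(value);
        float w = (1.f - dst.a) * src.a;   // remaining transparency along the ray
        dst.r += w * src.r;
        dst.g += w * src.g;
        dst.b += w * src.b;
        dst.a += w;
    }
    return dst;
}
</pre>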