Past Projects
3D Reconstruction of Photographs (Matthew Religioso, 2011-)
Reconstructs static objects from photographs into 3D models using the Bundler algorithm and the texturing algorithm developed by prior students. The project's goal is to optimize the texturing step to maximize photorealism and efficiency, and to run the resulting application in the StarCAVE.
Real-Time Geometry Scanning System (Daniel Tenedorio, 2011-)
This interactive system constructs a 3D model of the environment as a user moves an infrared geometry camera around a room. We display the intermediate representation of the scene in real time on virtual reality displays ranging from a single computer monitor to immersive, stereoscopic projection systems like the StarCAVE.
Real-Time Meshing of Dynamic Point Clouds (Robert Pardridge, James Lue, 2011-)
This project involves generating a triangle mesh over a point cloud that grows dynamically. The goal is to implement a meshing algorithm that is fast enough to keep up with the streaming input from a scanning device. We are using a CUDA implementation of the Marching Cubes algorithm to triangulate, in real time, a point cloud obtained from the Kinect depth camera.
LSystems (Sarah Larsen, 2011-)
Creates an L-system and displays it with either line or cylinder connections.
Android Controller (Jeanne Wang, 2011-)
An Android-based controller for a visualization system such as the StarCAVE or a multi-screen grid.
Object-Oriented Interaction with Large High Resolution Displays (Lynn Nguyen 2011-)
Investigates the practicality of using smartphones to interact with large high-resolution displays. To accomplish such a task it is not necessary to find the spatial location of the phone relative to the display; instead, we identify the object a user wants to interact with through image recognition. The interaction with the object itself is done using the smartphone as the medium. The feasibility of this concept is investigated by implementing a prototype.
MatEdit (Khanh Luc, 2011)
Kinect UI for 3D Pacman (Tony Lu, 2011)
An experiment with the Kinect to implement a device-free, gesture-controlled user interface in the StarCAVE to run a 3D Pacman game.
TelePresence (Seth Rotkin, Mabel Zhang, 2010-)
GreenLight Blackbox (Mabel Zhang, Andrew Prudhomme, Seth Rotkin, Philip Weber, Grant van Horn, Connor Worley, Quan Le, Hesler Rodriguez, 2008-2011)
We created a 3D model of the SUN Mobile Data Center, a core component of the instrument procured by the GreenLight project. We added an online connection to the physical container to display the output of the power modules. The project was demonstrated at SIGGRAPH, ISC, and Supercomputing.
Volumetric Blood Flow Rendering (Yuri Bazilevs, Jurgen Schulze, Alison Marsden, Greg Long, Han Kim 2011)
Meshing and Texturing Point Clouds (Robert Pardridge, Vikash Nandkeshwar, James Lue, 2011)
These students are using data from the previous PhotosynthVR project to create 3D geometry and textures.
BloodFlow (Yuri Bazilevs, Jurgen Schulze, Alison Marsden, Ming-Chen Hsu, Kenneth Benner, Sasha Koruga; 2009-2010)
In this project, we are working on visualizing the blood flow in an artery, as simulated by Professor Bazilevs at UCSD. Read the Blood Flow Manual for usage instructions. Videos and pictures of the 2D visualizations can be found here, and the corresponding iPhone versions of the videos can be downloaded here.
PanoView360 (Andrew Prudhomme, 2008-2010)
In collaboration with Professor Dan Sandin from EVL, Andrew created a COVISE plugin to display photographer Richard Ainsworth's panoramic stereo images in the StarCAVE and the Varrier.
PhotosynthVR (Sasha Koruga, Haili Wang, Phi Nguyen, Velu Ganapathy; 2009)
UCSD student Sasha Koruga created a Photosynth-like system that can display a number of photographs in the StarCAVE. The images appear and disappear as the user moves around the photographed object. Read the PhotosynthVR Manual for usage instructions.
Multi-Volume Rendering (Han Kim, 2009)
The goal of multi-volume rendering is to visualize multiple volume data sets. Each volume has three or more channels.
How Much Information (Andrew Prudhomme, 2008-2009)
In this project we visualize data from various collaborating companies, which provide us with data stored on hard disks or transferred over networks. In the first stage, Andrew created an application that can display the directory structures of 70,000 hard disk drives of Microsoft employees, sampled over the course of five years. The visualization uses an interactive hyperbolic 3D graph to display the directory trees and to compare different users' trees, and it employs novel data display methods such as wheel graphs to show file sizes. More information about this project can be found at [1].
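As a rough illustration of the wheel-graph idea (the data and names below are invented, not taken from the application): each child of a directory can be assigned an angular span proportional to its size, so large files and folders visually dominate the wheel.

<source lang="cpp">
#include <cstdio>
#include <string>
#include <vector>

struct Node {
    std::string name;
    double size;                     // subtree size in bytes (made-up data)
    double angle0 = 0, angle1 = 0;   // assigned angular span in radians
};

int main() {
    std::vector<Node> children = { {"docs", 120e6}, {"src", 45e6},
                                   {"media", 835e6} };
    double total = 0;
    for (const auto& c : children) total += c.size;
    const double twoPi = 6.283185307179586;
    double a = 0;
    for (auto& c : children) {
        c.angle0 = a;
        a += twoPi * c.size / total;   // span proportional to size
        c.angle1 = a;
        std::printf("%s: %.2f - %.2f rad\n", c.name.c_str(), c.angle0, c.angle1);
    }
    return 0;
}
</source>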
Hotspot Mitigation (Jordan Rhee, 2008-2009)
ATLAS in Parallel (Ruth West, Daniel Tenedorio, Todd Margolis, 2008-2009)
Animated Point Clouds (Daniel Tenedorio, Rachel Chu, Sasha Koruga, 2008)
6DOF Tracking with Wii Remotes (Sage Browning, Philip Weber, 2008)
Spatialized Sound (Toshiro Yamada, Suketu Kamdar, 2008)
Video in Virtual Environments (Han Kim, 2008-2010)
LOOING/ORION (Philip Weber, 2007-2009)
OssimPlanet (Philip Weber, Jurgen Schulze, 2007)
In this project we ported the open source OssimPlanet library to COVISE, so that it can run in our VR environments, including the Varrier tiled display wall and the StarCAVE.
CineGrid (Leo Liu, 2007)
Virtual Calit2 Building (Daniel Rohrlick, Mabel Zhang, 2006-2009)
Interaction with Multi-Spectral Images (Philip Weber, Praveen Subramani, Andrew Prudhomme, 2006-2009)
Finite Elements Simulation (Fabian Gerold, 2008-2009)
Palazzo Vecchio (Philip Weber, 2008)
Virtual Architectural Walkthroughs (Edward Kezeli, 2008)
NASA (Andrew Prudhomme, 2008)
Digital Lightbox (Philip Weber, 2007-2008)
Research Intelligence Portal (Alex Zavodny, Andrew Prudhomme, 2007-2008)
New San Francisco Bay Bridge (Andre Barbosa, 2007-2008)
Birch Aquarium (Daniel Rohrlick, 2007-2008)
CAMERA Meta-Data Visualization (Sara Richardson, Andrew Prudhomme, 2007-2008)
Depth of Field (Karen Lin, 2007)
HD Camera Array (Alex Zavodny, Andrew Prudhomme, 2007)
Atlas in Silico for Varrier (Ruth West, Iman Mostafavi, Todd Margolis, 2007)
Screen (Noah Wardrip-Fruin, 2007)
Under the guidance of Noah Wardrip-Fruin and Jurgen Schulze, Ava Pierce, David Coughlan, Jeffrey Kuramoto, and Stephen Boyd adapted the multimedia art installation Screen from the four-wall cave system at Brown University to the StarCAVE. The piece was displayed at SIGGRAPH 2007 and was the first virtual reality application to be demoed in the StarCAVE. It was also displayed at the Beall Center at UC Irvine in the fall of 2007; for that venue it was ported to a single stereo wall display.
Children's Hospital (Jurgen Schulze, 2007)
Through our collaboration with Dr. Peter Newton of San Diego's Children's Hospital we have a few computer tomography (CT) data sets of children's upper bodies, showing irregularities of their spines.
Super Browser (Vinh Huynh, Andrew Prudhomme, 2006)
Cell Structures (Iman Mostafavi, 2006)
Terashake Volume Visualization (Jurgen Schulze, 2006)
As part of the NSF-funded OptIPuter project, Jurgen visualized part of the 4.5-terabyte TeraShake earthquake data set on the 100-megapixel LambdaVision display at Calit2. For this project, he integrated his volume visualization tool VOX into EVL's SAGE.
Protein Visualization (Philip Weber, Andrew Prudhomme, Krishna Subramanian, Sendhil Panchadsaram, 2005-2009)
A VR application to view protein structures from UCSD Professor Philip Bourne's Protein Data Bank (PDB). The popular molecular biology toolkit PyMol is used to create the 3D models from the PDB files. Our application also supports protein alignment, an amino acid sequence viewer, integration of TOPSAN annotations, and a variety of visualization modes. Among the users of this application are UC Riverside (Peter Atkinson), UCSD Pharmacy (Zoran Radic), and the Scripps Research Institute (James Fee/Jon Huntoon).
Earthquake Visualization (Jurgen Schulze, 2005)
Together with Debi Kilb from the Scripps Institution of Oceanography (SIO), we visualized 3D earthquake locations on a worldwide scale.