Difference between revisions of "Projects"
From Immersive Visualization Lab Wiki
<hr>
===PanoView360 (Andrew Prudhomme, 2008)===
<table>
<tr>
<td>[[Image:image-missing.jpg]]</td>
<td>UCSD graduate Daniel Tenedorio is working on parallelizing the simulation algorithm of the Atlas in Silico art piece, supported by an NSF-funded SGER grant. Daniel's goal is to support the entire CAMERA data set, which consists of 17 million data points (open reading frames). He is going to use the CUDA architecture of NVIDIA's graphics cards to achieve a considerable performance increase.</td>
− | |||
− | |||
− | |||
− | |||
− | |||
− | |||
− | |||
− | |||
− | |||
− | |||
− | |||
− | |||
− | |||
− | |||
− | |||
− | |||
− | |||
− | |||
</tr>
</table>
<td>[[Image:image-missing.jpg]]</td>
<td>Past research at Calit2 has developed an affordable means of creating large high-resolution computer displays, the OptiPortals. Contemporary input devices for tiled display walls, which go beyond the capabilities of a desktop mouse, are either very expensive or less than satisfactory. Some controllers cost $40,000, which defeats the benefit of the relatively inexpensive OptiPortal. Alternatively, the currently available inexpensive methods leave much to be desired: the image shown on the tiled display wall is mirrored on a standard-sized control monitor, with a standard mouse as input, leaving the user without a means of directly interacting with the OptiPortal. In order to create an improved interface for the OptiPortal, research must be done on 3D location tracking that is accurate to within a few millimeters, yet maintains a reasonable cost of production and an intuitive design. The focus of this project is to use multiple Wii Remotes for six-degree-of-freedom tracking. We are going to integrate this new input device scheme into the COVISE software, so that existing software applications can directly benefit from this research.</td>
− | |||
− | |||
− | |||
− | |||
− | |||
− | |||
− | |||
− | |||
− | |||
</tr>
</table>
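The project's actual tracking code is part of COVISE and is not reproduced on this page. As a rough illustration of one building block of camera-based 6DOF tracking, the sketch below triangulates the 3D position of a single IR marker seen by two cameras, by finding the midpoint of the shortest segment between the two viewing rays (all positions and directions are made-up example values):

```python
def closest_point_between_rays(p1, d1, p2, d2):
    """Least-squares triangulation of one 3D point seen by two cameras.

    Each ray is given by an origin p and a direction d; returns the
    midpoint of the shortest segment connecting the two rays.
    """
    # Solve for ray parameters s, t minimizing |(p1 + s*d1) - (p2 + t*d2)|^2.
    a = sum(x * x for x in d1)
    b = sum(x * y for x, y in zip(d1, d2))
    c = sum(x * x for x in d2)
    w = [x - y for x, y in zip(p1, p2)]
    d = sum(x * y for x, y in zip(d1, w))
    e = sum(x * y for x, y in zip(d2, w))
    denom = a * c - b * b
    if abs(denom) < 1e-12:          # rays are (nearly) parallel
        raise ValueError("rays are parallel; cannot triangulate")
    s = (b * e - c * d) / denom
    t = (a * e - b * d) / denom
    q1 = [x + s * y for x, y in zip(p1, d1)]
    q2 = [x + t * y for x, y in zip(p2, d2)]
    return [(x + y) / 2 for x, y in zip(q1, q2)]

# Two hypothetical camera positions one meter apart, both seeing an
# IR marker at (0.5, 0.0, 2.0):
marker = closest_point_between_rays(
    [0.0, 0.0, 0.0], [0.5, 0.0, 2.0],   # camera 1 origin and viewing ray
    [1.0, 0.0, 0.0], [-0.5, 0.0, 2.0])  # camera 2 origin and viewing ray
```

A full 6DOF solution would track several such markers rigidly attached to the user and fit a rotation and translation to them; the triangulation step above is only the position part.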
<td>[[Image:ossimplanet.jpg]]</td>
<td>In this project we ported the open source OssimPlanet library to COVISE, so that it can run in our VR environments, including the Varrier tiled display wall and the StarCAVE.</td>
− | |||
− | |||
− | |||
− | |||
− | |||
− | |||
− | |||
− | |||
− | |||
</tr>
</table>
<td>[[Image:fem-vis.jpg]]</td>
<td>In this project, Fabian created a simulation module with attached visualization capability. The simulation calculates the stress on a 3D structure which the user can design directly in the StarCAVE. The user can then run a pre-recorded earthquake on the structure to see where the forces act on its various elements and how strong they are.</td>
</tr>
</table>
<hr>

===Palazzo Vecchio (Philip Weber, 2008)===
<table>
<tr>
<td>[[Image:image-missing.jpg]]</td>
<td>Philip Weber created the PointModel viewer, which renders LIDAR point data sets such as the Palazzo Vecchio in the StarCAVE. Philip implemented a hierarchical rendering algorithm which allows rendering two million points in real time. This application also supports other point data sets, such as UCSD's shake table at the Englekirk site.</td>
</tr>
</table>
<hr>
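The PointModel viewer's actual rendering algorithm is not documented on this page. The sketch below illustrates the general technique of hierarchical point rendering as one might implement it: an octree whose inner nodes store a random subsample of their points, so that nodes far from the viewer can be drawn with only a few points (all thresholds and data are arbitrary example values):

```python
import random

class OctreeNode:
    """A node in a point octree; inner nodes keep a coarse subsample of
    their points so distant geometry can be drawn cheaply."""
    MAX_LEAF = 64      # points per leaf before subdividing
    SAMPLE = 16        # representative points stored per inner node

    def __init__(self, points, center, half):
        self.center, self.half = center, half
        if len(points) <= self.MAX_LEAF:
            self.points, self.children = points, []
        else:
            self.points = None
            self.sample = random.sample(points, self.SAMPLE)
            buckets = {}
            for p in points:
                key = tuple(p[i] >= center[i] for i in range(3))
                buckets.setdefault(key, []).append(p)
            self.children = []
            for key, pts in buckets.items():
                c = [center[i] + (half / 2 if key[i] else -half / 2)
                     for i in range(3)]
                self.children.append(OctreeNode(pts, c, half / 2))

    def visible_points(self, eye, detail=8.0):
        """Collect points to draw: recurse while a node looks big on
        screen (size/distance above 1/detail), else use its subsample."""
        if not self.children:
            return list(self.points)
        dist = max(1e-6, sum((e - c) ** 2
                             for e, c in zip(eye, self.center)) ** 0.5)
        if self.half / dist < 1.0 / detail:
            return list(self.sample)
        out = []
        for child in self.children:
            out.extend(child.visible_points(eye, detail))
        return out

random.seed(1)  # deterministic for the demo below
cloud = [[random.random() for _ in range(3)] for _ in range(1000)]
root = OctreeNode(cloud, [0.5, 0.5, 0.5], 0.5)
```

Moving the viewpoint closer makes the traversal descend further and return more points, which is the behavior that makes millions of points renderable in real time.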
+ | |||
+ | ===Virtual Architectural Walkthroughs (Edward Kezeli, 2008)=== | ||
+ | <table> | ||
+ | <tr> | ||
+ | <td>[[Image:image-missing.jpg]]</td> | ||
+ | <td>We have 3D models of several buildings on and off UCSD's campus which we can bring up in the StarCAVE to view them life size. We got the Structural Engineering and Visual Arts building from Professor Kuester. The architectural firm HMC made available to use the following CAD models: the Rady School of Management at UCSD, HMC's offices in Los Angeles, and a section of the library at San Francisco State University. In collaboration with HMC, Edward created an art piece showing a gigantic Moebius torus floating over Los Angeles.</td> | ||
</tr>
</table>
<td>[[Image:image-missing.jpg]]</td>
<td>In collaboration with scientists from NASA, we have created several data sets which can be viewed in the StarCAVE: a 3D model of a site on Mars which the rover Spirit photographed shortly after its right front wheel had jammed. Other demonstrations are a 3D model of the International Space Station, a 3D model of a Mars rover, as well as several 2D and 3D surround image panoramas of sites on Mars.</td>
</tr>
</table>
<hr>

===Digital Lightbox (Philip Weber, 2007-2008)===
<table>
<tr>
<td>[[Image:image-missing.jpg]]</td>
<td>In collaboration with Professor Jacopo Annese's Brain Observatory, Philip created an application for Jacopo's 5x3 tiled display wall which allows displaying up to 90 different cross sections of monkey brains at once. Selected scans exist at super-high resolutions of hundreds of millions of pixels and can be magnified across the whole wall.</td>
</tr>
</table>
<td>[[Image:image-missing.jpg]]</td>
<td>For a project funded by the Birch Aquarium, Daniel created a 3D underwater scene with a remote-controllable submarine, showing the ocean floor around a hydrothermal vent. This project incorporates sound effects created in collaboration with Peter Otto's group.</td>
</tr>
</table>
<hr>

===[[CAMERA Meta-Data Visualization]] (Sara Richardson, Andrew Prudhomme, 2007-2008)===
<table>
<tr>
<td>[[Image:image-missing.jpg]]</td>
<td>This project started with Sara's Calit2 Undergraduate Scholarship and was later continued by Andrew. The visualization tool they created can display the meta-data of the CAMERA sample sites, as well as real-time usage data from the CAMERA project's BLAST server, on a world map.</td>
</tr>
</table>
Revision as of 00:46, 6 March 2009
Active Projects
PhotosynthVR (Sasha Koruga, 2009)
Blood Flow (Yuri Bazilevs, 2009)
In this project, we are working on visualizing the blood flow in an artery, as simulated by Professor Bazilevs at UCSD. We use COVISE's Tecplot reader to load the data into the StarCAVE.
How Much Information (Andrew Prudhomme, 2008)
In this project we visualize data from various collaborating companies, which provide us with data stored on hard disks or transferred over networks. In the first stage, Andrew created an application which can display the directory structures of 70,000 hard disk drives of Microsoft employees, sampled over the course of five years. The visualization uses an interactive hyperbolic 3D graph to visualize the directory trees and to compare different users' trees, and it uses various novel data display methods, such as wheel graphs to display file sizes. More information about this project can be found at [1].
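The hyperbolic graph code used in this project is not shown on this page. Purely as an illustration of why hyperbolic-style layouts suit large directory trees, the sketch below places a nested-dict tree inside the unit (Poincare) disk, pushing each level toward the rim with tanh so that deep subtrees compress near the boundary (the tree and the step size are made-up examples, not the project's algorithm):

```python
import cmath
import math

def radial_hyperbolic_layout(tree, wedge=(0.0, 2 * math.pi),
                             depth=1, step=0.8):
    """Lay out a nested-dict tree in the unit disk.

    Each node gets an angular wedge inside its parent's wedge, and each
    level sits at radius tanh(step * depth), strictly inside the disk.
    Returns a {name: complex position} dictionary.
    """
    pos = {}
    lo, hi = wedge
    names = list(tree)
    for i, name in enumerate(names):
        # Give each child an equal angular wedge inside the parent's wedge.
        a0 = lo + (hi - lo) * i / len(names)
        a1 = lo + (hi - lo) * (i + 1) / len(names)
        angle = (a0 + a1) / 2
        r = math.tanh(step * depth)
        pos[name] = cmath.rect(r, angle)
        pos.update(radial_hyperbolic_layout(tree[name], (a0, a1),
                                            depth + 1, step))
    return pos

# Hypothetical miniature directory tree, in the spirit of the hard disk study:
layout = radial_hyperbolic_layout({
    "Windows": {"system32": {}, "Fonts": {}},
    "Users": {"alice": {"Documents": {}}, "bob": {}},
})
```

Because tanh never reaches 1, arbitrarily deep trees always fit inside the disk, which is the property a hyperbolic browser exploits when it lets the user refocus on any subtree.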
Animated Point Clouds (Daniel Tenedorio, Rachel Chu, Sasha Koruga, 2008)
PanoView360 (Andrew Prudhomme, 2008)
In collaboration with Professor Dan Sandin from EVL, Andrew created a COVISE plugin to display photographer Richard Ainsworth's panoramic stereo images in the StarCAVE and the Varrier.
SUN Blackbox (Mabel Zhang, Andrew Prudhomme, 2008)
Hotspot Mitigation (Jordan Rhee, 2008)
ATLAS in Parallel (Ruth West, Daniel Tenedorio, Todd Margolis, 2008)
Khirbat en-Nahas (Kyle Knabb, Jurgen Schulze, 2008)
For the past ten years, a joint University of California, San Diego and Department of Antiquities of Jordan research team led by Professor Tom Levy and Dr. Mohammad Najjar has been investigating the role of mining and metallurgy in social evolution from the Neolithic period (ca. 7500 BC) to medieval Islamic times (ca. 12th century AD). Kyle Knabb has been working with the IVL as a master's student under Professor Thomas Levy of the archaeology department. He created a 3D visualization for the StarCAVE which displays several excavation sites in Jordan, along with artifacts found there and radiocarbon dating sites. In a related project, we acquired stereo photography from the excavation site in Jordan.
Spatialized Sound (Toshiro Yamada, Suketu Kamdar, 2008)
Video in Virtual Environments (Han Kim, 2008)
6DOF Tracking with Wii Remotes (Sage Browning, Philip Weber, 2008)
CineGrid (Leo Liu, 2007)
LOOKING/ORION (Philip Weber, 2007)
Neuroscience and Architecture (Daniel Rohrlick, Michael Bajorek, Mabel Zhang, 2007)
OssimPlanet (Philip Weber, Jurgen Schulze, 2007)
In this project we ported the open source OssimPlanet library to COVISE, so that it can run in our VR environments, including the Varrier tiled display wall and the StarCAVE.
Virtual Calit2 Building (Daniel Rohrlick, Mabel Zhang, 2006)
Interaction with Multi-Spectral Images (Philip Weber, Praveen Subramani, Andrew Prudhomme, 2006)
Protein Visualization (Philip Weber, Andrew Prudhomme, Krishna Subramanian, Sendhil Panchadsaram, 2005)
A VR application to view protein structures from UCSD Professor Philip Bourne's Protein Data Bank (PDB). The popular molecular biology toolkit PyMOL is used to create the 3D models from the PDB files. Our application also supports protein alignment, an amino acid sequence viewer, integration of TOPSAN annotations, as well as a variety of visualization modes. Among the users of this application are UC Riverside (Peter Atkinson), UCSD Pharmacy (Zoran Radic), and the Scripps Research Institute (James Fee/Jon Huntoon).
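PDB files, from which the application's 3D models are generated, use a fixed-column text layout. As a small self-contained illustration (not code from the project), here is a minimal reader for ATOM/HETATM records; the sample record is a made-up alanine C-alpha atom:

```python
def parse_pdb_atoms(lines):
    """Extract atom records from PDB-format text using the format's
    fixed column layout (columns are 1-based in the PDB spec)."""
    atoms = []
    for line in lines:
        if not line.startswith(("ATOM", "HETATM")):
            continue
        atoms.append({
            "serial": int(line[6:11]),       # cols  7-11: atom serial
            "name": line[12:16].strip(),     # cols 13-16: atom name
            "res_name": line[17:20].strip(), # cols 18-20: residue name
            "chain": line[21],               # col  22:    chain ID
            "res_seq": int(line[22:26]),     # cols 23-26: residue number
            "xyz": (float(line[30:38]),      # cols 31-38: x (angstroms)
                    float(line[38:46]),      # cols 39-46: y
                    float(line[46:54])),     # cols 47-54: z
        })
    return atoms

# One made-up ATOM record in standard PDB column layout:
sample = ["ATOM      1  CA  ALA A   1      11.104   6.134  -6.504  1.00  0.00           C"]
atoms = parse_pdb_atoms(sample)
```

Real applications typically rely on an established parser rather than hand-written slicing, but the fixed columns are the reason PDB lines must not be reflowed or tab-expanded.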
VOX and Virvo (Jurgen Schulze, 1999)
Inactive Projects
Finite Elements Simulation (Fabian Gerold, 2008-2009)
Palazzo Vecchio (Philip Weber, 2008)
Virtual Architectural Walkthroughs (Edward Kezeli, 2008)
NASA (Andrew Prudhomme, 2008)
Digital Lightbox (Philip Weber, 2007-2008)
Research Intelligence Portal (Alex Zavodny, Andrew Prudhomme, 2007-2008)
New San Francisco Bay Bridge (Andre Barbosa, 2007-2008)
Birch Aquarium (Daniel Rohrlick, 2007-2008)
CAMERA Meta-Data Visualization (Sara Richardson, Andrew Prudhomme, 2007-2008)
Depth of Field (Karen Lin, 2007)
HD Camera Array (Alex Zavodny, Andrew Prudhomme, 2007)
Atlas in Silico for Varrier (Ruth West, Iman Mostafavi, Todd Margolis, 2007)
Screen (Noah Wardrip-Fruin, 2007)
Under the guidance of Noah Wardrip-Fruin and Jurgen Schulze, Ava Pierce, David Coughlan, Jeffrey Kuramoto, and Stephen Boyd are adapting the multimedia art installation Screen from the four-wall cave system at Brown University to the StarCAVE. This piece was displayed at SIGGRAPH 2007 and was the first virtual reality application to be demoed in the StarCAVE. It was also displayed at the Beall Center at UC Irvine in the fall of 2007; for this purpose, it was ported to a single stereo wall display.
Children's Hospital (Jurgen Schulze, 2007)
From our collaboration with Dr. Peter Newton from San Diego's Children's Hospital we have a few computed tomography (CT) data sets of children's upper bodies, showing irregularities of their spines.
Super Browser (Vinh Huynh, Andrew Prudhomme, 2006)
Cell Structures (Iman Mostafavi, 2006)
Terashake Volume Visualization (Jurgen Schulze, 2006)
As part of the NSF-funded OptIPuter project, Jurgen visualized part of the 4.5-terabyte TeraShake earthquake data set on the 100-megapixel LambdaVision display at Calit2. For this project, he integrated his volume visualization tool VOX into EVL's SAGE.
Earthquake Visualization (Jurgen Schulze, 2005)
Together with Debi Kilb from the Scripps Institution of Oceanography (SIO), we visualized 3D earthquake locations on a worldwide scale.