From Immersive Visualization Lab Wiki
Contents
- 1 Active Projects
- 1.1 3D Reconstruction of Photographs (Matthew Religioso, 2011-)
- 1.2 Real-Time Geometry Scanning System (Daniel Tenedorio, 2011-)
- 1.3 Real-Time Meshing of Dynamic Point Clouds (Robert Pardridge, James Lue, 2011-)
- 1.4 LSystems (Sarah Larsen 2011-)
- 1.5 Android Controller (Jeanne Wang 2011-)
- 1.6 Object-Oriented Interaction with Large High Resolution Displays (Lynn Nguyen 2011-)
- 1.7 ScreenMultiViewer (John Mangan, Phi Hung Nguyen 2010-)
- 1.8 MatEdit (Khanh Luc, 2011)
- 1.9 Kinect UI for 3D Pacman (Tony Lu, 2011)
- 1.10 TelePresence (Seth Rotkin, Mabel Zhang, 2010-)
- 1.11 CaveCAD (Lelin Zhang 2009-)
- 1.12 GreenLight Blackbox (Mabel Zhang, Andrew Prudhomme, Seth Rotkin, Philip Weber, Grant van Horn, Connor Worley, Quan Le, Hesler Rodriguez, 2008-)
- 1.13 Neuroscience and Architecture (Daniel Rohrlick, Michael Bajorek, Mabel Zhang, Lelin Zhang 2007)
- 1.14 Volumetric Blood Flow Rendering (Yuri Bazilevs, Jurgen Schulze, Alison Marsden, Greg Long, Han Kim 2011)
- 2 Inactive Projects
- 2.1 Meshing and Texturing Point Clouds (Robert Pardridge, Vikash Nandkeshwar, James Lue, 2011)
- 2.2 Khirbat en-Nahas (Kyle Knabb, Jurgen Schulze, Connor DeFanti, 2008-2010)
- 2.3 BloodFlow (Yuri Bazilevs, Jurgen Schulze, Alison Marsden, Ming-Chen Hsu, Kenneth Benner, Sasha Koruga; 2009-2010)
- 2.4 PanoView360 (Andrew Prudhomme, 2008-2010)
- 2.5 PhotosynthVR (Sasha Koruga, Haili Wang, Phi Nguyen, Velu Ganapathy; 2009-)
- 2.6 Multi-Volume Rendering (Han Kim, 2009)
- 2.7 How Much Information (Andrew Prudhomme, 2008-2009)
- 2.8 Hotspot Mitigation (Jordan Rhee, 2008-2009)
- 2.9 ATLAS in Parallel (Ruth West, Daniel Tenedorio, Todd Margolis, 2008-2009)
- 2.10 Animated Point Clouds (Daniel Tenedorio, Rachel Chu, Sasha Koruga, 2008)
- 2.11 6DOF Tracking with Wii Remotes (Sage Browning, Philip Weber, 2008)
- 2.12 Spatialized Sound (Toshiro Yamada, Suketu Kamdar, 2008)
- 2.13 Video in Virtual Environments (Han Kim, 2008-2010)
- 2.14 LOOING/ORION (Philip Weber, 2007-2009)
- 2.15 OssimPlanet (Philip Weber, Jurgen Schulze, 2007)
- 2.16 CineGrid (Leo Liu, 2007)
- 2.17 Virtual Calit2 Building (Daniel Rohrlick, Mabel Zhang, 2006-2009)
- 2.18 Interaction with Multi-Spectral Images (Philip Weber, Praveen Subramani, Andrew Prudhomme, 2006-2009)
- 2.19 Finite Elements Simulation (Fabian Gerold, 2008-2009)
- 2.20 Palazzo Vecchio (Philip Weber, 2008)
- 2.21 Virtual Architectural Walkthroughs (Edward Kezeli, 2008)
- 2.22 NASA (Andrew Prudhomme, 2008)
- 2.23 Digital Lightbox (Philip Weber, 2007-2008)
- 2.24 Research Intelligence Portal (Alex Zavodny, Andrew Prudhomme, 2007-2008)
- 2.25 New San Francisco Bay Bridge (Andre Barbosa, 2007-2008)
- 2.26 Birch Aquarium (Daniel Rohrlick, 2007-2008)
- 2.27 CAMERA Meta-Data Visualization (Sara Richardson, Andrew Prudhomme, 2007-2008)
- 2.28 Depth of Field (Karen Lin, 2007)
- 2.29 HD Camera Array (Alex Zavodny, Andrew Prudhomme, 2007)
- 2.30 Atlas in Silico for Varrier (Ruth West, Iman Mostafavi, Todd Margolis, 2007)
- 2.31 Screen (Noah Wardrip-Fruin, 2007)
- 2.32 Children's Hospital (Jurgen Schulze, 2007)
- 2.33 Super Browser (Vinh Huynh, Andrew Prudhomme, 2006)
- 2.34 Cell Structures (Iman Mostafavi, 2006)
- 2.35 Terashake Volume Visualization (Jurgen Schulze, 2006)
- 2.36 Protein Visualization (Philip Weber, Andrew Prudhomme, Krishna Subramanian, Sendhil Panchadsaram, 2005-2009)
- 2.37 Earthquake Visualization (Jurgen Schulze, 2005)
- 2.38 Geoscience (Jurgen Schulze, 2005)
- 2.39 VOX and Virvo (Jurgen Schulze, 1999)
Active Projects
3D Reconstruction of Photographs (Matthew Religioso, 2011-)
Reconstruct static objects from photographs into 3D models using the Bundler Algorithm and the Texturing Algorithm developed by prior students. This project's goal is to optimize the Texturing Algorithm to maximize photorealism and efficiency, and to run the resulting application in the StarCAVE.
Real-Time Geometry Scanning System (Daniel Tenedorio, 2011-)
This interactive system constructs a 3D model of the environment as a user moves an infrared geometry camera around a room. We display the intermediate representation of the scene in real time on virtual reality displays ranging from a single computer monitor to immersive, stereoscopic projection systems like the StarCAVE.
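As a rough illustration of what such a scanner computes for every incoming frame, the sketch below back-projects one depth image into a camera-space point cloud with a pinhole model. The focal length, the 16-bit millimeter depth format, and the function names are assumptions typical of Kinect-class sensors, not details taken from the project's code.

    #include <cstdint>
    #include <vector>

    struct Point3 { float x, y, z; };

    // Convert one 16-bit depth frame (millimeters, 0 = invalid) into a
    // camera-space point cloud with a pinhole camera model.
    std::vector<Point3> backproject(const std::vector<std::uint16_t>& depthMM,
                                    int width, int height)
    {
        const float fx = 575.8f, fy = 575.8f;  // focal lengths in pixels (assumed)
        const float cx = width / 2.0f;         // principal point (assumed centered)
        const float cy = height / 2.0f;

        std::vector<Point3> cloud;
        cloud.reserve(depthMM.size());
        for (int v = 0; v < height; ++v) {
            for (int u = 0; u < width; ++u) {
                const std::uint16_t d = depthMM[v * width + u];
                if (d == 0) continue;          // skip invalid depth samples
                const float z = d * 0.001f;    // millimeters -> meters
                cloud.push_back({ (u - cx) * z / fx, (v - cy) * z / fy, z });
            }
        }
        return cloud;  // one point per valid pixel, in the camera frame
    }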
Real-Time Meshing of Dynamic Point Clouds (Robert Pardridge, James Lue, 2011-)
This project involves generating a triangle mesh over a point cloud that grows dynamically. The goal is to implement a meshing algorithm that is fast enough to keep up with the streaming input from a scanning device. We are using a CUDA implementation of the Marching Cubes algorithm to triangulate, in real time, a point cloud obtained from the Kinect depth camera.
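A minimal serial sketch of the two data-parallel stages that precede the actual Marching Cubes table lookup: binning points into a voxel grid, then computing the 8-bit corner code per cell. The binary occupancy field, grid layout, and names are illustrative; the project's CUDA kernels may well use a smoother scalar field.

    #include <cmath>
    #include <vector>

    struct P { float x, y, z; };

    // Count cells that straddle the surface; Marching Cubes would emit
    // triangles for exactly these cells, using the 8-bit corner code as
    // an index into its standard 256-entry edge/triangle tables.
    int classifyCells(const std::vector<P>& cloud, int N,
                      float minC, float cellSize)
    {
        std::vector<unsigned char> occ((N + 1) * (N + 1) * (N + 1), 0);
        auto idx = [N](int x, int y, int z) {
            return (z * (N + 1) + y) * (N + 1) + x;
        };

        // Stage 1: scatter points into grid vertices
        // (on the GPU: one CUDA thread per point).
        for (const P& p : cloud) {
            int x = (int)std::floor((p.x - minC) / cellSize);
            int y = (int)std::floor((p.y - minC) / cellSize);
            int z = (int)std::floor((p.z - minC) / cellSize);
            if (x < 0 || y < 0 || z < 0 || x > N || y > N || z > N) continue;
            occ[idx(x, y, z)] = 1;
        }

        // Stage 2: build the corner code per cell
        // (on the GPU: one CUDA thread per cell).
        int surfaceCells = 0;
        for (int z = 0; z < N; ++z)
            for (int y = 0; y < N; ++y)
                for (int x = 0; x < N; ++x) {
                    int code = 0;
                    for (int c = 0; c < 8; ++c)
                        if (occ[idx(x + (c & 1), y + ((c >> 1) & 1), z + (c >> 2))])
                            code |= 1 << c;
                    if (code != 0 && code != 255) ++surfaceCells;  // straddles surface
                }
        return surfaceCells;
    }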
LSystems (Sarah Larsen 2011-)
Creates an L-system and displays it with either line or cylinder connections.
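For context, the heart of any L-system is a string-rewriting step followed by a turtle-graphics interpretation. The sketch below shows the rewriting, with a Koch-curve rule set as a stand-in for whatever grammars the project actually supports.

    #include <iostream>
    #include <map>
    #include <string>
    #include <utility>

    // Apply the production rules to every symbol, 'generations' times.
    std::string rewrite(std::string s, const std::map<char, std::string>& rules,
                        int generations)
    {
        for (int g = 0; g < generations; ++g) {
            std::string next;
            for (char c : s) {
                auto it = rules.find(c);
                next += (it != rules.end()) ? it->second : std::string(1, c);
            }
            s = std::move(next);
        }
        return s;
    }

    int main()
    {
        // Koch curve: F -> F+F-F-F+F ('+'/'-' turn the turtle, 'F' draws a segment).
        std::map<char, std::string> rules = { { 'F', "F+F-F-F+F" } };
        std::string s = rewrite("F", rules, 3);
        // A renderer walks this string, emitting a GL line (or a cylinder,
        // for the tube-style display) for every 'F'.
        std::cout << s << '\n';
    }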
Android Controller (Jeanne Wang 2011-)
An Android-based controller for a visualization system such as the StarCAVE or a multi-screen grid.
Object-Oriented Interaction with Large High Resolution Displays (Lynn Nguyen 2011-)
This project investigates the practicality of using smartphones to interact with large high-resolution displays. To accomplish such a task, it is not necessary to find the spatial location of the phone relative to the display; instead, the object a user wants to interact with can be identified through image recognition, and the interaction with the object itself can be done using the smartphone as the medium. The feasibility of this concept is investigated by implementing a prototype.
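One plausible implementation of the recognition step is sketched below: match ORB features from the phone's photo against reference shots of each on-screen object using OpenCV. The thresholds and the surrounding function are assumptions, not the prototype's actual code.

    #include <opencv2/opencv.hpp>
    #include <vector>

    // Returns the index of the best-matching reference object, or -1.
    int identifyObject(const cv::Mat& phonePhoto,
                       const std::vector<cv::Mat>& referenceShots)
    {
        cv::Ptr<cv::ORB> orb = cv::ORB::create();
        cv::BFMatcher matcher(cv::NORM_HAMMING);

        std::vector<cv::KeyPoint> kp;
        cv::Mat queryDesc;
        orb->detectAndCompute(phonePhoto, cv::noArray(), kp, queryDesc);

        int best = -1;
        size_t bestGood = 0;
        for (size_t i = 0; i < referenceShots.size(); ++i) {
            std::vector<cv::KeyPoint> rkp;
            cv::Mat refDesc;
            orb->detectAndCompute(referenceShots[i], cv::noArray(), rkp, refDesc);
            if (refDesc.empty() || queryDesc.empty()) continue;

            std::vector<cv::DMatch> matches;
            matcher.match(queryDesc, refDesc, matches);
            size_t good = 0;
            for (const cv::DMatch& m : matches)
                if (m.distance < 40) ++good;   // Hamming distance cutoff (assumed)
            if (good > bestGood) { bestGood = good; best = (int)i; }
        }
        return bestGood >= 15 ? best : -1;     // minimum-evidence threshold (assumed)
    }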
ScreenMultiViewer (John Mangan, Phi Hung Nguyen 2010-)
A display mode within CalVR that allows two users to simultaneously use head trackers within either the StarCAVE or the NexCAVE, with minimal loss of immersion.
MatEdit (Khanh Luc, 2011)
A graphical user interface for programmers to adjust and preview material properties on an object. Once programmers determine the proper parameters for the look of their material, they can generate code to achieve that look.
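The code-generation step might look like the sketch below, which prints an OpenSceneGraph material setup from the chosen parameters. The Material struct, the emitted OSG calls, and the assumption that MatEdit targets OSG at all are illustrative, not MatEdit's actual output.

    #include <cstdio>

    struct Material {
        float ambient[3], diffuse[3], specular[3];
        float shininess;
    };

    // Print ready-to-paste C++ that reproduces the previewed material.
    void emitCode(const Material& m)
    {
        std::printf("osg::ref_ptr<osg::Material> mat = new osg::Material;\n");
        std::printf("mat->setAmbient(osg::Material::FRONT, osg::Vec4(%.3ff, %.3ff, %.3ff, 1.0f));\n",
                    m.ambient[0], m.ambient[1], m.ambient[2]);
        std::printf("mat->setDiffuse(osg::Material::FRONT, osg::Vec4(%.3ff, %.3ff, %.3ff, 1.0f));\n",
                    m.diffuse[0], m.diffuse[1], m.diffuse[2]);
        std::printf("mat->setSpecular(osg::Material::FRONT, osg::Vec4(%.3ff, %.3ff, %.3ff, 1.0f));\n",
                    m.specular[0], m.specular[1], m.specular[2]);
        std::printf("mat->setShininess(osg::Material::FRONT, %.1ff);\n", m.shininess);
    }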
Kinect UI for 3D Pacman (Tony Lu, 2011)
An experiment with the Kinect to implement a device-free, gesture-controlled user interface in the StarCAVE to run a 3D Pacman game.
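One simple way such a gesture interface can steer the game is to compare hand and shoulder joints from the Kinect skeleton, as in the hypothetical sketch below; the project's actual gesture set is not documented here.

    #include <cmath>

    enum class Dir { None, Left, Right, Up, Down };

    struct Vec3 { float x, y, z; };

    // Compare the hand to the shoulder: whichever axis the hand is pushed
    // furthest along wins, with a dead zone so a resting hand does nothing.
    Dir steer(const Vec3& hand, const Vec3& shoulder)
    {
        const float deadZone = 0.15f;          // meters (assumed)
        float dx = hand.x - shoulder.x;
        float dy = hand.y - shoulder.y;
        if (std::fabs(dx) < deadZone && std::fabs(dy) < deadZone) return Dir::None;
        if (std::fabs(dx) > std::fabs(dy))
            return dx > 0 ? Dir::Right : Dir::Left;
        return dy > 0 ? Dir::Up : Dir::Down;
    }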
TelePresence (Seth Rotkin, Mabel Zhang, 2010-)
A virtual representation of the Cisco TelePresence telecommunications system. The room includes a full 3D reproduction of an actual TelePresence 3000 room, live streaming video footage, pre-recorded stereo videos, a 3D PowerPoint projection zone, and a movable floating 3D model.
CaveCAD (Lelin Zhang 2009-)
Calit2 researcher Lelin Zhang provides architectural designers with a purely immersive 3D design experience in the virtual reality environment of the StarCAVE.
GreenLight Blackbox (Mabel Zhang, Andrew Prudhomme, Seth Rotkin, Philip Weber, Grant van Horn, Connor Worley, Quan Le, Hesler Rodriguez, 2008-)
Mabel Zhang has been working on 3D modeling tasks at IVL. Her first task was to model the Calit2 building, which she completed as a Calit2 summer intern. We then hired her as a student worker to create a 3D model of the SUN Mobile Data Center, which is a core component of the instrument procured by the GreenLight project. Andrew added an online connection to the Blackbox at UCSD to display the output of the power modules. Their project was demonstrated at the Supercomputing Conference 2008 in Austin, Texas.
Neuroscience and Architecture (Daniel Rohrlick, Michael Bajorek, Mabel Zhang, Lelin Zhang 2007)
This project started off as a Calit2 seed-funded project to do a pilot study with the Swartz Center for Neuroscience in which a human subject has to find their way to specific locations in the Calit2 building while their brain waves are being scanned by a high-resolution EEG. Michael's responsibility was the interface between the StarCAVE and the EEG system, to transfer tracker data and other application parameters to allow for the correlation of EEG data with VR parameters. Daniel created the 3D model of the New Media Arts wing of the building using 3ds Max. Mabel refined the Calit2 building geometry. This project has been receiving funding from HMC.
Volumetric Blood Flow Rendering (Yuri Bazilevs, Jurgen Schulze, Alison Marsden, Greg Long, Han Kim 2011)
An extension of the BloodFlow (2009-2010) project. We attempt to create more useful visualizations of blood flow in a blood vessel through volume rendering techniques, and to tackle the challenges that result from volumetric rendering of large datasets.
Inactive Projects
Meshing and Texturing Point Clouds (Robert Pardridge, Vikash Nandkeshwar, James Lue, 2011)
These students are using data from the previous PhotosynthVR project to create 3D geometry and textures.
Khirbat en-Nahas (Kyle Knabb, Jurgen Schulze, Connor DeFanti, 2008-2010)
For the past ten years, a joint University of California, San Diego and Department of Antiquities of Jordan research team led by Professor Tom Levy and Dr. Mohammad Najjar has been investigating the role of mining and metallurgy in social evolution from the Neolithic period (ca. 7500 BC) to medieval Islamic times (ca. 12th century AD). Kyle Knabb has been working with the IVL as a master's student under Professor Thomas Levy from the archaeology department. He created a 3D visualization for the StarCAVE which displays several excavation sites in Jordan, along with artifacts found there, and radiocarbon dating sites. In a related project we acquired stereo photography from the excavation site in Jordan.
BloodFlow (Yuri Bazilevs, Jurgen Schulze, Alison Marsden, Ming-Chen Hsu, Kenneth Benner, Sasha Koruga; 2009-2010)
In this project, we are working on visualizing the blood flow in an artery, as simulated by Professor Bazilevs at UCSD. Read the Blood Flow Manual for usage instructions. Videos and pictures of the visualizations in 2D can be found here, and the corresponding iPhone versions of the videos can be downloaded here.
PanoView360 (Andrew Prudhomme, 2008-2010)
In collaboration with Professor Dan Sandin from EVL, Andrew created a COVISE plugin to display photographer Richard Ainsworth's panoramic stereo images in the StarCAVE and the Varrier.
PhotosynthVR (Sasha Koruga, Haili Wang, Phi Nguyen, Velu Ganapathy; 2009-)
UCSD student Sasha Koruga has created a Photosynth-like system with which he can display a number of photographs in the StarCAVE. The images appear and disappear as the user moves around the photographed object. Read the PhotosynthVR Manual for usage instructions.
Multi-Volume Rendering (Han Kim, 2009)
The goal of multi-volume rendering is to visualize multiple volume data sets. Each volume has three or more channels.
How Much Information (Andrew Prudhomme, 2008-2009)
In this project we visualize data from various collaborating companies which provide us with data stored on hard disks or data transferred over networks. In the first stage, Andrew created an application which can display the directory structures of 70,000 hard disk drives of Microsoft employees, sampled over the course of five years. The visualization uses an interactive hyperbolic 3D graph to visualize the directory trees and to compare different users' trees, and it uses various novel data display methods, like wheel graphs, to display file sizes, etc. More information about this project can be found at [1].
Hotspot Mitigation (Jordan Rhee, 2008-2009)
ECE undergraduate student Jordan Rhee has been working on algorithms for hotspot mitigation in projected virtual reality environments such as the StarCAVE. His COVISE plugin is based on a GLSL shader which performs the mitigation in real time, minimizing the performance impact on the application.
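The idea behind such a shader can be sketched on the CPU: build a per-pixel attenuation factor that dims the image most where the projector hotspot is brightest, so the product looks uniform. The Gaussian hotspot model and its parameters below are illustrative; the plugin evaluates its correction per fragment in GLSL.

    #include <cmath>
    #include <vector>

    // Returns one attenuation factor per pixel; multiplying the rendered
    // frame by this mask flattens the hotspot.
    std::vector<float> attenuationMask(int w, int h,
                                       float hotU, float hotV,  // hotspot center, 0..1
                                       float radius, float strength)
    {
        std::vector<float> mask(w * h);
        for (int y = 0; y < h; ++y)
            for (int x = 0; x < w; ++x) {
                float du = x / (float)w - hotU;
                float dv = y / (float)h - hotV;
                // Gaussian falloff: brightest at the hotspot center.
                float hot = std::exp(-(du * du + dv * dv) / (radius * radius));
                mask[y * w + x] = 1.0f - strength * hot;
            }
        return mask;
    }

Because the mask is a pure function of screen position, it can be computed once per projector and applied as a single multiply per fragment, which is why the real-time cost stays negligible.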
ATLAS in Parallel (Ruth West, Daniel Tenedorio, Todd Margolis, 2008-2009)
UCSD graduate student Daniel Tenedorio is working on parallelizing the simulation algorithm of the Atlas in Silico art piece, supported by an NSF-funded SGER grant. Daniel's goal is to be able to support the entire CAMERA data set, consisting of 17 million data points (open reading frames). Daniel is going to use the CUDA architecture of NVIDIA's graphics cards to achieve a considerable performance increase.
Animated Point Clouds (Daniel Tenedorio, Rachel Chu, Sasha Koruga, 2008)
UCSD students Daniel Tenedorio and Rachel Chu collaborated as PRIME students between NCHC in Taiwan and Osaka University. They worked on a 3D teleconferencing project, for which they used data from a 3D video scanner at Osaka University and streamed it across the network between NCHC and Osaka. Their VR application now runs in the StarCAVE at UCSD. Sasha Koruga continued this project in the winter quarter of 2009.
6DOF Tracking with Wii Remotes (Sage Browning, Philip Weber, 2008)
Past research at Calit2 has developed an affordable means of creating large high-resolution computer displays, the OptIPortals. Contemporary input devices for tiled display walls, which go beyond the capabilities a desktop mouse can offer, are either very expensive or less than satisfactory. Some controllers can cost $40,000, which defeats the benefits of the relatively inexpensive OptIPortal. Alternatively, currently available inexpensive methods leave much to be desired: the same image output on the tiled display wall is also shown on a standard-sized control monitor with a standard mouse as input, leaving the user without a means of direct interaction with the OptIPortal. In order to create an improved interface for the OptIPortal, research must be done in the area of 3D location tracking that is accurate to within a few millimeters, yet maintains a reasonable cost of production and an intuitive design. The focus of this project is to use multiple Wii Remotes for six-degree-of-freedom tracking. We are going to integrate this new input device scheme into the COVISE software, so that existing software applications can directly benefit from this research.
Spatialized Sound (Toshiro Yamada, Suketu Kamdar, 2008)
Peter Otto's group created a 3D sound system which we can control from the StarCAVE by positioning a sound source in a 3D environment. In our demonstration, the sound source is represented as a cone and can be placed in the pre-function area on the 1st floor of the virtual Calit2 building.
Video in Virtual Environments (Han Kim, 2008-2010)
Computer science graduate student Han Kim has developed an efficient "mip-map" algorithm that "shrinks" high-resolution video content so that it can be played interactively in VEs. He has also created several optimization solutions for sustaining a stable video playback frame rate when the video is projected onto non-rectangular VE screens.
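The core decision in a mip-mapped video player of this kind is which pre-shrunk resolution level to decode for the video's current on-screen size. A minimal sketch with assumed names:

    #include <algorithm>
    #include <cmath>

    // levels: 0 = full resolution; each next level halves width and height.
    int chooseMipLevel(float onScreenWidthPx, float sourceWidthPx, int numLevels)
    {
        if (onScreenWidthPx <= 0.0f) return numLevels - 1;
        // How many halvings until the source is no wider than its screen footprint?
        float ratio = sourceWidthPx / onScreenWidthPx;
        int level = (int)std::floor(std::log2(std::max(ratio, 1.0f)));
        return std::clamp(level, 0, numLevels - 1);
    }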
LOOING/ORION (Philip Weber, 2007-2009)
Under the direction of Matt Arrott, Philip created an interactive 3D exploration tool for simulated ocean currents in Monterey Bay. The software uses OssimPlanet to display terrain and bathymetry of the area to give context to streamlines and iso-surfaces of the vector field representing the ocean currents.
OssimPlanet (Philip Weber, Jurgen Schulze, 2007)
In this project we ported the open source OssimPlanet library to COVISE, so that it can run in our VR environments, including the Varrier tiled display wall and the StarCAVE.
CineGrid (Leo Liu, 2007)
Computer science graduate student Leo Liu has been working on new algorithms for long-distance 4K video streaming, as well as 4K movie data management. He has been working in close collaboration with the iRODS group at the San Diego Supercomputer Center (SDSC) and has helped create iRODS servers for CineGrid content in San Diego, Amsterdam, and Keio University in Japan.
Virtual Calit2 Building (Daniel Rohrlick, Mabel Zhang, 2006-2009)
The goal of the Virtual Calit2 Building is to provide an accurate and accessible virtual replica of the Calit2 building that can be used by any researcher or scientist to conduct projects or tests that require a detailed architectural layout.
Interaction with Multi-Spectral Images (Philip Weber, Praveen Subramani, Andrew Prudhomme, 2006-2009)
In this project, a versatile high-resolution image viewer was created which allows loading multi-gigabyte image files to display in the StarCAVE and on tiled display walls. Images can consist of multiple layers, like Maurizio Seracini's multi-spectral painting scans. The "Walking into a DaVinci Masterpiece" demonstration in the auditorium uses this COVISE module. Other images we can display with this technique are the Golden Gate Bridge, a USGS map of La Jolla, NCMIR's mouse cerebellum, the Brain Observatory's monkey brain, and cancer cells from Andy Kummel's lab.
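Viewers like this typically store the image as a tiled multi-resolution pyramid and load only the tiles visible at the current zoom. The sketch below shows such an addressing scheme under an assumed tile size and layout; it is not the plugin's actual file format.

    #include <vector>

    struct Tile { int level, row, col; };

    // Enumerate the tiles a viewport touches at one pyramid level.
    std::vector<Tile> visibleTiles(int level,            // 0 = full resolution
                                   long imgW, long imgH, // level-0 pixel dimensions
                                   long viewX, long viewY,
                                   long viewW, long viewH, // viewport, in level pixels
                                   int tileSize = 256)
    {
        long levelW = imgW >> level, levelH = imgH >> level;
        std::vector<Tile> tiles;
        for (long y = viewY / tileSize; y * tileSize < viewY + viewH
                                        && y * tileSize < levelH; ++y)
            for (long x = viewX / tileSize; x * tileSize < viewX + viewW
                                            && x * tileSize < levelW; ++x)
                tiles.push_back({ level, (int)y, (int)x });
        return tiles;  // each tile then maps to one fixed-size block on disk
    }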
Finite Elements Simulation (Fabian Gerold, 2008-2009)
In this project, Fabian created a simulation module with attached visualization capability. The simulation calculates the stress on a 3D structure which the user can design directly in the StarCAVE. Then the user can run a pre-recorded earthquake on the structure to see where and how strong the forces are on the various elements of the structure.
Palazzo Vecchio (Philip Weber, 2008)
Philip Weber created the PointModel viewer, which renders LIDAR point data sets like Palazzo Vecchio in the StarCAVE. Philip implemented a hierarchical rendering algorithm which allows rendering two million points in real time. This application also supports other point data sets, like UCSD's shake table at the Englekirk site.
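A hierarchical point renderer of this kind usually hangs point subsets off a spatial tree and draws a node's coarse subset once its projected size falls below an error budget, recursing otherwise. The sketch below illustrates that test with invented names and thresholds, not the PointModel viewer's actual data structure.

    #include <algorithm>
    #include <cmath>
    #include <vector>

    struct Node {
        float cx, cy, cz, halfSize;    // bounding-cube center and half extent
        std::vector<Node> children;    // empty at the leaves
        // per-node point subset assumed to live on the GPU
    };

    // Draw coarse nodes when far away; refine into children when close.
    void drawNode(const Node& n, float eyeX, float eyeY, float eyeZ,
                  float pixelsPerRadian, float maxErrorPx)
    {
        float dx = n.cx - eyeX, dy = n.cy - eyeY, dz = n.cz - eyeZ;
        float dist = std::sqrt(dx * dx + dy * dy + dz * dz);
        // Approximate on-screen size of this node's point spacing.
        float errorPx = pixelsPerRadian * n.halfSize / std::max(dist, 1e-3f);
        if (errorPx <= maxErrorPx || n.children.empty()) {
            // renderPoints(n);  // draw this node's coarse point set
            return;
        }
        for (const Node& c : n.children)
            drawNode(c, eyeX, eyeY, eyeZ, pixelsPerRadian, maxErrorPx);
    }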
Virtual Architectural Walkthroughs (Edward Kezeli, 2008)
We have 3D models of several buildings on and off UCSD's campus which we can bring up in the StarCAVE to view them life-size. We got the Structural Engineering and Visual Arts building from Professor Kuester. The architectural firm HMC made the following CAD models available to us: the Rady School of Management at UCSD, HMC's offices in Los Angeles, and a section of the library at San Francisco State University. In collaboration with HMC, Edward created an art piece showing a gigantic Moebius torus floating over Los Angeles.
NASA (Andrew Prudhomme, 2008)
In collaboration with scientists from NASA, we have created several data sets which can be viewed in the StarCAVE: a 3D model of a site the Mars rover Spirit took a picture of shortly after its right front wheel had jammed. Other demonstrations are a 3D model of the International Space Station, a 3D model of a Mars rover, as well as several 2D and 3D surround image panoramas of sites on Mars.
Digital Lightbox (Philip Weber, 2007-2008)
In collaboration with Professor Jacopo Annese's Brain Observatory, Philip created an application for Jacopo's 5x3 tiled display wall which allows displaying up to 90 different cross sections of monkey brains at once. Selected scans exist at super-high resolutions of hundreds of millions of pixels and can be magnified across the whole wall.
Research Intelligence Portal (Alex Zavodny, Andrew Prudhomme, 2007-2008)
Alex and Andrew created the Research Intelligence kiosk application under a Calit2 grant. They use a visualization technique which is based on smooth transitions between different states of the displayed 2D and 3D geometry. This demonstration runs on a 2x2 tiled display wall on the 5th floor.
New San Francisco Bay Bridge (Andre Barbosa, 2007-2008)
Structural engineering graduate student Andre Barbosa developed a virtual reality application for the StarCAVE. The software allows the user to view and fly through a 3D model of a large part of the new San Francisco Bay Bridge. His application uses a layered approach to reduce the number of concurrently displayed polygons in order to achieve real-time frame rates. The original data set was created by CalTrans with Bentley's Microstation CAD software. One of Andre's accomplishments was to convert the original, highly detailed CAD data set to a 3D model that could be rendered at interactive frame rates in the StarCAVE.
Birch Aquarium (Daniel Rohrlick, 2007-2008)
For a project funded by the Birch Aquarium, Daniel created a 3D underwater scene with a remote-controllable submarine, showing the ocean floor around a hydrothermal vent. This project incorporates sound effects, created in collaboration with Peter Otto's group.
CAMERA Meta-Data Visualization (Sara Richardson, Andrew Prudhomme, 2007-2008)
This project started with Sara's Calit2 Undergraduate Scholarship and was later continued by Andrew. The visualization tool they created can display the meta-data of the CAMERA sample sites, as well as real-time usage data from the CAMERA project's BLAST server, on a world map.
Depth of Field (Karen Lin, 2007)
Karen Lin wrote her computer science master's thesis under the direction of Professor Matthias Zwicker and Jurgen Schulze. She created a software application which can introduce depth of field into a scene rendered using an image-based technique.
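Image-based depth-of-field methods typically evaluate the thin-lens circle of confusion per pixel to decide how strongly to blur it. The sketch below shows that standard formula; the parameter names are illustrative, and the thesis's actual method may differ.

    #include <cmath>

    // All distances in meters; returns the blur-circle diameter on the sensor.
    float circleOfConfusion(float depth,        // pixel's scene depth
                            float focusDist,    // plane in perfect focus
                            float focalLen,     // lens focal length
                            float aperture)     // lens aperture diameter
    {
        return aperture * std::fabs(depth - focusDist) / depth
                        * focalLen / (focusDist - focalLen);
    }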
HD Camera Array (Alex Zavodny, Andrew Prudhomme, 2007)
In this Rincon-funded project, we are able to stream video from the two Ethernet HD cameras and display the warped videos simultaneously on the receiving end. The user can then interactively change the bitrate of the stream, change the focal depth of the virtual camera, and manually align the images. The tiled display wall is accessed through a SAGE module.
Atlas in Silico for Varrier (Ruth West, Iman Mostafavi, Todd Margolis, 2007)
As genomics digitizes life, the organism and self are initially lost to data, but ultimately a broader meaning is re-attained. ATLAS in silico blends art, science, dynamic media and emerging technologies to reflect upon humanity's long-standing quest for an understanding of the nature, origins, and unity of life. This multimedia art installation was displayed at SIGGRAPH 2007.
Screen (Noah Wardrip-Fruin, 2007)
Under the guidance of Noah Wardrip-Fruin and Jurgen Schulze, Ava Pierce, David Coughlan, Jeffrey Kuramoto, and Stephen Boyd adapted the multimedia art installation Screen from the four-wall cave system at Brown University to the StarCAVE. This piece was displayed at SIGGRAPH 2007 and was the first virtual reality application to be demoed in the StarCAVE. It was also displayed at the Beall Center at UC Irvine in the fall of 2007; for this purpose, it was ported to a single stereo wall display.
Children's Hospital (Jurgen Schulze, 2007)
From our collaboration with Dr. Peter Newton from San Diego's Children's Hospital we have a few computed tomography (CT) data sets of children's upper bodies, showing irregularities of their spines.
Super Browser (Vinh Huynh, Andrew Prudhomme, 2006)
In this project, a gnuhtml-based web browser was developed which can display various types of HTML tags, as well as very large images on web pages. There is no limit to the spatial extent of the web page, because all content is displayed with vector graphics rather than on a pixel grid.
Cell Structures (Iman Mostafavi, 2006)
NCMIR-funded graduate student Iman Mostafavi created an interactive visualization tool for the visualization of mitochondria. The user can click on the various components of a mitochondrion and take them apart to understand what it is composed of.
Terashake Volume Visualization (Jurgen Schulze, 2006)
As part of the NSF-funded OptIPuter project, Jurgen visualized part of the 4.5-terabyte TeraShake earthquake data set on the 100-megapixel LambdaVision display at Calit2. For this project, he integrated his volume visualization tool VOX into EVL's SAGE.
Protein Visualization (Philip Weber, Andrew Prudhomme, Krishna Subramanian, Sendhil Panchadsaram, 2005-2009)
A VR application to view protein structures from UCSD Professor Philip Bourne's Protein Data Bank (PDB). The popular molecular biology toolkit PyMOL is used to create the 3D models of the PDB files. Our application also supports protein alignment, an amino acid sequence viewer, integration of TOPSAN annotations, as well as a variety of visualization modes. Among the users of this application are UC Riverside (Peter Atkinson), UCSD Pharmacy (Zoran Radic), and the Scripps Research Institute (James Fee/Jon Huntoon).
Earthquake Visualization (Jurgen Schulze, 2005)
Geoscience (Jurgen Schulze, 2005)
In collaboration with SIO research scientist Graham Kent, we created a 3D reconstruction of an area of the floor of the Pacific Ocean. Sonar scans allow us to see the rock formations under the sea floor. The data we used in this project is typical of what oil and gas companies use when looking for oil reservoirs underground.
VOX and Virvo (Jurgen Schulze, 1999)
Ongoing development of real-time volume rendering algorithms for interactive display at the desktop (DeskVOX) and in virtual environments (CaveVOX). Virvo is the name of the GUI-independent, OpenGL-based volume rendering library which both DeskVOX and CaveVOX use.
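For reference, the inner loop of a software volume renderer in this family is a front-to-back ray march with "over" compositing, sketched below. The sampling and transfer functions are placeholders supplied by the caller, and Virvo's actual pipeline also includes texture-hardware backends.

    struct RGBA { float r, g, b, a; };

    // sampleDensity: scalar volume lookup returning a value in [0,1];
    // transfer: maps density to color and opacity. Both are assumptions
    // standing in for the renderer's real data access and transfer function.
    template <class SampleFn, class TransferFn>
    RGBA marchRay(float ox, float oy, float oz,    // ray origin (volume space)
                  float dx, float dy, float dz,    // normalized ray direction
                  float tNear, float tFar, float step,
                  SampleFn sampleDensity, TransferFn transfer)
    {
        RGBA out = { 0, 0, 0, 0 };
        // Step front to back; stop early once the ray is nearly opaque.
        for (float t = tNear; t < tFar && out.a < 0.99f; t += step) {
            float d = sampleDensity(ox + t * dx, oy + t * dy, oz + t * dz);
            RGBA s = transfer(d);
            // Front-to-back "over" compositing.
            out.r += (1.0f - out.a) * s.a * s.r;
            out.g += (1.0f - out.a) * s.a * s.g;
            out.b += (1.0f - out.a) * s.a * s.b;
            out.a += (1.0f - out.a) * s.a;
        }
        return out;
    }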