Projects
Revision as of 17:00, 6 June 2011


Active Projects

3D Reconstruction of Photographs (Matthew Religioso, 2011-)

Reconstruct static objects from photographs into 3D models using the Bundler algorithm and the texturing algorithm developed by prior students. This project's goal is to optimize the texturing step to maximize photorealism and efficiency, and to run the resulting application in the StarCAVE.

Real-Time Geometry Scanning System (Daniel Tenedorio, 2011-)

This interactive system constructs a 3D model of the environment as a user moves an infrared geometry camera around a room. We display the intermediate representation of the scene in real-time on virtual reality displays ranging from a single computer monitor to immersive, stereoscopic projection systems like the StarCAVE.

Real-Time Meshing of Dynamic Point Clouds (Robert Pardridge, James Lue, 2011-)

This project involves generating a triangle mesh over a point cloud that grows dynamically. The goal is to implement a meshing algorithm fast enough to keep up with the streaming input from a scanning device. We use a CUDA implementation of the Marching Cubes algorithm to triangulate, in real time, a point cloud obtained from the Kinect depth camera.
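The first stage of such a pipeline is binning the streamed points into a voxel grid that Marching Cubes can then triangulate. The sketch below (plain NumPy, not the project's CUDA code; the function name, grid size, and bounds are illustrative assumptions) shows that binning step for one batch of points.

```python
# Hypothetical sketch: bin a streamed point cloud into a voxel
# occupancy grid, the kind of scalar field Marching Cubes consumes.
# Grid size and bounds are illustrative, not the project's values.
import numpy as np

def occupancy_grid(points, grid_size=32, bounds=(0.0, 1.0)):
    """Mark every voxel that contains at least one input point."""
    lo, hi = bounds
    grid = np.zeros((grid_size,) * 3, dtype=bool)
    # Map coordinates to voxel indices, clamped to the grid.
    idx = ((points - lo) / (hi - lo) * grid_size).astype(int)
    idx = np.clip(idx, 0, grid_size - 1)
    grid[idx[:, 0], idx[:, 1], idx[:, 2]] = True
    return grid

pts = np.random.rand(1000, 3)  # stand-in for one Kinect depth frame
grid = occupancy_grid(pts)
print(grid.sum(), "occupied voxels")
```

In a real-time system this accumulation would run on the GPU alongside the Marching Cubes kernel, so each incoming depth frame updates the grid without a round trip to the CPU.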

LSystems (Sarah Larsen 2011-)

Generates an L-system and displays it with either line or cylinder segments.
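An L-system grows a string by repeatedly applying rewriting rules to every symbol; each symbol is then drawn as a line or cylinder segment. A minimal sketch of the rewriting step (hypothetical code, not the project's implementation):

```python
# Minimal L-system rewriting sketch (illustrative, not the project's code).
def expand(axiom, rules, iterations):
    """Apply the production rules to every symbol, `iterations` times."""
    s = axiom
    for _ in range(iterations):
        # Symbols without a rule (e.g. turn commands) pass through unchanged.
        s = "".join(rules.get(ch, ch) for ch in s)
    return s

# Koch-curve-style rules: F = draw forward, + / - = turn.
rules = {"F": "F+F-F-F+F"}
print(expand("F", rules, 2))
```

The renderer would then walk the final string turtle-graphics style, emitting a line or cylinder for each `F` and rotating the heading on `+` and `-`.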

Android Controller (Jeanne Wang 2011-)

An Android-based controller for a visualization system such as the StarCAVE or a multi-screen display grid.

Object-Oriented Interaction with Large High Resolution Displays (Lynn Nguyen 2011-)

Investigates the practicality of using smartphones to interact with large high-resolution displays. It is not necessary to find the spatial location of the phone relative to the display; instead, the object a user wants to interact with can be identified through image recognition, and the interaction itself is carried out with the smartphone as the medium. The feasibility of this concept is investigated by implementing a prototype.

ScreenMultiViewer (John Mangan, Phi Hung Nguyen 2010-)

A display mode within CalVR that allows two users to simultaneously use head trackers within either the StarCAVE or the NexCAVE, with minimal loss of immersion.

MatEdit (Khanh Luc, 2011)

A graphical user interface for programmers to adjust and preview material properties on an object. Once the proper parameters for the desired look are found, the tool generates the code needed to achieve that look.
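The code-generation step might look like the following sketch, which renders a set of material parameters as OpenGL-style C++ calls. This is a guess at the idea, not MatEdit's actual output format; the function name and template are assumptions.

```python
# Hypothetical sketch of a material-editor code-generation step:
# format chosen parameters as OpenGL-style C++ the user can paste in.
def generate_material_code(ambient, diffuse, specular, shininess):
    """Render the chosen material parameters as C++ source text."""
    def vec(rgba):
        return "{%s}" % ", ".join("%.2ff" % c for c in rgba)
    return "\n".join([
        "GLfloat ambient[] = %s;" % vec(ambient),
        "GLfloat diffuse[] = %s;" % vec(diffuse),
        "GLfloat specular[] = %s;" % vec(specular),
        "glMaterialfv(GL_FRONT, GL_AMBIENT, ambient);",
        "glMaterialfv(GL_FRONT, GL_DIFFUSE, diffuse);",
        "glMaterialfv(GL_FRONT, GL_SPECULAR, specular);",
        "glMaterialf(GL_FRONT, GL_SHININESS, %.1ff);" % shininess,
    ])

print(generate_material_code((0.2, 0.2, 0.2, 1.0),
                             (0.8, 0.1, 0.1, 1.0),
                             (1.0, 1.0, 1.0, 1.0),
                             32.0))
```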

Kinect UI for 3D Pacman (Tony Lu, 2011)

An experiment using the Kinect to implement a device-free, gesture-controlled user interface for running a 3D Pacman game in the StarCAVE.

TelePresence (Seth Rotkin, Mabel Zhang, 2010-)

A virtual representation of the Cisco TelePresence telecommunications system. The room includes a full 3D reproduction of an actual TelePresence 3000 room, live streaming video footage, pre-recorded stereo videos, a 3D PowerPoint projection zone, and a movable floating 3D model.

CaveCAD (Lelin Zhang 2009-)

Developed by Calit2 researcher Lelin Zhang, CaveCAD provides architectural designers with a purely immersive 3D design experience in the StarCAVE virtual reality environment.

GreenLight Blackbox (Mabel Zhang, Andrew Prudhomme, Seth Rotkin, Philip Weber, Grant van Horn, Connor Worley, Quan Le, Hesler Rodriguez, 2008-)

We created a 3D model of the Sun Mobile Data Center, a core component of the instrument procured by the GreenLight project. We added an online connection to the physical container to display the output of its power modules. The project was demonstrated at SIGGRAPH, ISC, and Supercomputing.

Neuroscience and Architecture (Daniel Rohrlick, Michael Bajorek, Mabel Zhang, Lelin Zhang 2007)

This project started as a Calit2 seed-funded pilot study with the Swartz Center for Neuroscience, in which a human subject has to find their way to specific locations in the Calit2 building while their brain waves are scanned by a high-resolution EEG. Michael's responsibility was the interface between the StarCAVE and the EEG system, transferring tracker data and other application parameters to allow the correlation of EEG data with VR parameters. Daniel created the 3D model of the New Media Arts wing of the building using 3ds Max. Mabel refined the Calit2 building geometry. This project has been receiving funding from HMC.

Volumetric Blood Flow Rendering (Yuri Bazilevs, Jurgen Schulze, Alison Marsden, Greg Long, Han Kim 2011)

An extension of the BloodFlow project (2009-2010). We attempt to create more useful visualizations of blood flow in a blood vessel through volume rendering techniques, and to tackle the challenges of volume rendering large datasets.

Past Projects