=<b>Most Recent Projects</b>=
  
===Immersive Visualization Center (Jurgen Schulze, 2020-2021)===
 
<table>
  <tr>
    <td>[[Image:IVC-Color-Logo-01.png|250px]]</td>
    <td>Dr. Schulze founded the [http://ivc.ucsd.edu Immersive Visualization Center (IVC)] to bring together immersive visualization researchers and students with researchers from domains that need immersive visualization using virtual reality or augmented reality technologies. The IVC is located at UCSD's [https://qi.ucsd.edu/ Qualcomm Institute].</td>
  </tr>
</table>

<hr>
  
===3D Medical Imaging Pilot (Larry Smarr, Jurgen Schulze, 2018-2021)===
 
<table>
  <tr>
    <td>[[Image:3dmip.jpg|250px]]</td>
    <td>In this [https://ucsdnews.ucsd.edu/feature/helmsley-charitable-trust-grants-uc-san-diego-4.7m-to-study-crohns-disease $1.7M collaborative grant with Dr. Smarr] we have been developing software tools to support surgical planning, training and patient education for Crohn's disease.<br>
'''Publications:'''
* Lucknavalai, K., Schulze, J.P., [http://web.eng.ucsd.edu/~jschulze/publications/Lucknavalai2020.pdf "Real-Time Contrast Enhancement for 3D Medical Images using Histogram Equalization"], In Proceedings of the International Symposium on Visual Computing (ISVC 2020), San Diego, CA, Oct 5, 2020
* Zhang, M., Schulze, J.P., [http://web.eng.ucsd.edu/~jschulze/publications/Zhang2021.pdf "Server-Aided 3D DICOM Image Stack Viewer for Android Devices"], In Proceedings of IS&T The Engineering Reality of Virtual Reality, San Francisco, CA, January 21, 2021
'''Videos:'''
* [https://drive.google.com/file/d/1pi1veISuSlj00y82LfuPEQJ1B8yXWkGi/view?usp=sharing VR Application for Oculus Rift S]
* [https://drive.google.com/file/d/1MH2L6yc5Un1mo4t37PRUTXq8ptFIbQf3/view?usp=sharing Android companion app]
  </td>
  </tr>
</table>
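
The contrast enhancement described in the ISVC 2020 paper above is based on histogram equalization of the scan volume. The following is a minimal sketch of the basic technique only, assuming a dense 8-bit voxel array; the published method runs in real time and differs in detail.

<pre>
// Minimal sketch: global histogram equalization of an 8-bit volume.
// The dense uint8 voxel layout is an assumption for illustration;
// the published real-time method differs in detail.
#include <array>
#include <cstdint>
#include <vector>

void equalizeVolume(std::vector<uint8_t>& voxels)
{
    if (voxels.empty()) return;

    // 1. Intensity histogram over all voxels.
    std::array<size_t, 256> hist{};
    for (uint8_t v : voxels) ++hist[v];

    // 2. Cumulative distribution function (CDF).
    std::array<size_t, 256> cdf{};
    size_t sum = 0;
    for (int i = 0; i < 256; ++i) { sum += hist[i]; cdf[i] = sum; }

    // 3. Remap intensities so the CDF becomes (nearly) linear,
    //    spreading the used range across 0..255.
    size_t cdfMin = 0;
    for (size_t c : cdf) { if (c > 0) { cdfMin = c; break; } }
    if (voxels.size() == cdfMin) return;  // uniform volume: nothing to do
    const double scale = 255.0 / double(voxels.size() - cdfMin);
    for (uint8_t& v : voxels)
        v = uint8_t(double(cdf[v] - cdfMin) * scale + 0.5);
}
</pre>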
 
<hr>
  
===[https://www.darpa.mil/program/explainable-artificial-intelligence XAI] (Jurgen Schulze, 2017-2021)===
 
<table>
  <tr>
    <td>[[Image:xai-icon.jpg|250px]]</td>
    <td>The effectiveness of AI systems is limited by machines' current inability to explain their decisions and actions to human users. The Department of Defense (DoD) is facing challenges that demand more intelligent, autonomous, and symbiotic systems. The Explainable AI (XAI) program aims to '''create a suite of machine learning techniques that produce more explainable models''' while maintaining a high level of learning performance (prediction accuracy), and to enable human users to understand, appropriately trust, and effectively manage the emerging generation of artificially intelligent partners. This project is a collaboration with [https://www.sri.com/ SRI International] in Princeton, NJ. Our responsibility is the development of the web interface, as well as the in-person user studies, for which we have recruited over 100 subjects so far.<br>
'''Publications:'''
* Alipour, K., Schulze, J.P., Yao, Y., Ziskind, A., Burachas, G., [http://web.eng.ucsd.edu/~jschulze/publications/Alipour2020a.pdf "A Study on Multimodal and Interactive Explanations for Visual Question Answering"], In Proceedings of the Workshop on Artificial Intelligence Safety (SafeAI 2020), New York, NY, Feb 7, 2020
* Alipour, K., Ray, A., Lin, X., Schulze, J.P., Yao, Y., Burachas, G.T., [http://web.eng.ucsd.edu/~jschulze/publications/Alipour2020b.pdf "The Impact of Explanations on AI Competency Prediction in VQA"], In Proceedings of the IEEE Conference on Humanized Computing and Communication with Artificial Intelligence (HCCAI 2020), Sept 22, 2020
  </td>
  </tr>
</table>
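
The simplest form of model explanation is a per-feature breakdown of what drove a prediction. Below is a toy sketch of that idea; the feature names, weights, and values are invented for illustration, and the project's actual VQA explanations (e.g. attention maps and textual rationales) are far richer.

<pre>
// Toy sketch of one basic explanation technique: per-feature
// contributions of a linear scorer. All names and numbers here are
// invented; this is not the project's VQA explanation method.
#include <cstdio>
#include <string>
#include <vector>

struct Feature { std::string name; double weight; double value; };

int main()
{
    std::vector<Feature> features = {
        {"question mentions color",          1.8, 1.0},
        {"red pixels in attended region",    2.4, 0.7},
        {"object detector fired for 'car'",  0.9, 1.0},
    };

    double score = 0.0;
    for (const auto& f : features) score += f.weight * f.value;
    std::printf("prediction score: %.2f\n", score);

    // The "explanation": show how much each feature contributed, so a
    // user can judge whether the model relied on sensible evidence.
    for (const auto& f : features)
        std::printf("  %-32s contributed %+.2f\n",
                    f.name.c_str(), f.weight * f.value);
}
</pre>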
 
<hr>
  
===DataCube (Jurgen Schulze, Wanze Xie, Nadir Weibel, 2018-2019)===
 
<table>
  <tr>
    <td>[[Image:datacube.jpg|250px]]</td>
    <td>In this project we developed a [https://www.pwc.com/us/en/industries/health-industries/library/doublejump/bodylogical-healthcare-assistant.html Bodylogical]-powered augmented reality tool for the Microsoft HoloLens to analyze the health of a population, such as the employees of a corporation.<br>
'''Publications:'''
* Xie, W., Liang, Y., Johnson, J., Mower, A., Burns, S., Chelini, C., D'Alessandro, P., Weibel, N., Schulze, J.P., [http://web.eng.ucsd.edu/~jschulze/publications/Xie2020.pdf "Interactive Multi-User 3D Visual Analytics in Augmented Reality"], In Proceedings of IS&T The Engineering Reality of Virtual Reality, San Francisco, CA, January 30, 2020
'''Videos:'''
* [https://drive.google.com/file/d/1Ba6uS9PFiHTh4IKcVwQWPgMGERNr-5Fd/view?usp=sharing Live demonstration]
* [https://drive.google.com/file/d/1_DcW8RZJvUhGmYEwIbLgJ2W0lWVdsLw7/view?usp=sharing Feature summary]
  </td>
  </tr>
</table>
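
A data cube slices a population's metrics along categorical dimensions and aggregates one value per cell. The sketch below illustrates only that aggregation idea; the record fields and numbers are invented, and the real tool derives its values from the Bodylogical simulation engine.

<pre>
// Minimal sketch of the data-cube idea: aggregate a population's
// health metric per (department, age band) cell. Fields and data are
// invented for illustration. Requires C++17.
#include <cstdio>
#include <map>
#include <string>
#include <utility>
#include <vector>

struct Record { std::string department; std::string ageBand; double risk; };

int main()
{
    std::vector<Record> population = {
        {"Engineering", "30-39", 0.12}, {"Engineering", "40-49", 0.21},
        {"Sales",       "30-39", 0.18}, {"Sales",       "40-49", 0.27},
        {"Engineering", "30-39", 0.10},
    };

    // One cube cell per (department, age band): running sum and count.
    std::map<std::pair<std::string, std::string>,
             std::pair<double, int>> cube;
    for (const auto& r : population) {
        auto& cell = cube[{r.department, r.ageBand}];
        cell.first  += r.risk;
        cell.second += 1;
    }

    for (const auto& [key, cell] : cube)
        std::printf("%-12s %-6s mean risk %.3f (n=%d)\n",
                    key.first.c_str(), key.second.c_str(),
                    cell.first / cell.second, cell.second);
}
</pre>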
 
<hr>
  
===[[Catalyst]] (Tom Levy, Jurgen Schulze, 2017-2019)===
 
<table>
  <tr>
    <td>[[Image:cavekiosk.jpg|250px]]</td>
    <td>This collaborative grant with Prof. Levy is a UCOP-funded $1.07M Catalyst project on cyber-archaeology, titled [https://ucsdnews.ucsd.edu/pressrelease/new_3_d_cavekiosk_at_uc_san_diego_brings_cyber_archaeology_to_geisel "3-D Digital Preservation of At-Risk Global Cultural Heritage"]. The goal was the development of software and hardware infrastructure to support the digital preservation and dissemination of 3D cyber-archaeology data, such as point clouds, panoramic images and 3D models.<br>
'''Publications:'''
* Levy, T.E., Smith, C., Agcaoili, K., Kannan, A., Goren, A., Schulze, J.P., Yago, G., [http://web.eng.ucsd.edu/~jschulze/publications/Levy2020.pdf "At-Risk World Heritage and Virtual Reality Visualization for Cyber-Archaeology: The Mar Saba Test Case"], In Forte, M., Murteira, H. (Eds.): "Digital Cities: Between History and Archaeology", April 2020, pp. 151-171, DOI: 10.1093/oso/9780190498900.003.0008
* Schulze, J.P., Williams, G., Smith, C., Weber, P.P., Levy, T.E., "CAVEkiosk: Cultural Heritage Visualization and Dissemination", a chapter in the book "A Sense of Urgency: Preserving At-Risk Cultural Heritage in the Digital Age", Editors: Lercari, N., Wendrich, W., Porter, B., Burton, M.M., Levy, T.E., accepted by Equinox Publishing for publication in 2021
  </td>
  </tr>
</table>
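
A recurring step when serving scan data to kiosks is thinning dense point clouds down to a size that renders interactively. The sketch below shows plain voxel-grid downsampling under assumed inputs (point format, cell size, averaging strategy); it is an illustration of the general technique, not the CAVEkiosk pipeline.

<pre>
// Minimal sketch: voxel-grid downsampling of a point cloud, keeping
// one averaged point per occupied voxel. The Point layout and cell
// size are assumptions for illustration. Requires C++17.
#include <cmath>
#include <map>
#include <tuple>
#include <vector>

struct Point { double x, y, z; };

std::vector<Point> downsample(const std::vector<Point>& cloud, double cell)
{
    // Accumulate a running sum and count per occupied voxel.
    std::map<std::tuple<long, long, long>, std::pair<Point, int>> voxels;
    for (const auto& p : cloud) {
        auto key = std::make_tuple(long(std::floor(p.x / cell)),
                                   long(std::floor(p.y / cell)),
                                   long(std::floor(p.z / cell)));
        auto& v = voxels[key];
        v.first.x += p.x; v.first.y += p.y; v.first.z += p.z;
        v.second  += 1;
    }

    // Emit the centroid of each voxel as its representative point.
    std::vector<Point> out;
    for (const auto& [key, v] : voxels)
        out.push_back({v.first.x / v.second,
                       v.first.y / v.second,
                       v.first.z / v.second});
    return out;
}
</pre>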
 
<hr>
  
===[[CalVR]] (Andrew Prudhomme, Philip Weber, Giovanni Aguirre, Jurgen Schulze, since 2010)===
 
<table>
  <tr>
    <td>[[Image:Calvr-logo4-200x75.jpg|250px]]</td>
    <td>CalVR is our virtual reality middleware (a.k.a. VR engine), which we have been developing for our graphics clusters. It runs on anything from a laptop to a large multi-node CAVE, and builds under Linux, Windows and macOS. More information about how to obtain the code and build it can be found on our [[CalVR | CalVR page for software developers]].<br>
'''Publications:'''
* J.P. Schulze, A. Prudhomme, P. Weber, T.A. DeFanti, [http://web.eng.ucsd.edu/~jschulze/publications/Schulze2013.pdf "CalVR: An Advanced Open Source Virtual Reality Software Framework"], In Proceedings of IS&T/SPIE Electronic Imaging, The Engineering Reality of Virtual Reality, San Francisco, CA, February 4, 2013, ISBN 9780819494221
  </td>
  </tr>
</table>
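
New functionality is added to CalVR through C++ plugins. The skeleton below follows the pattern used by existing CalVR plugins as we recall it; the header path, base class, and registration macro should be verified against the [[CalVR]] developer page and the current sources.

<pre>
// Sketch of a CalVR plugin skeleton. Header path, base class and the
// CVRPLUGIN registration macro follow the pattern used by existing
// CalVR plugins; verify against the current sources before building.
#include <cvrKernel/CVRPlugin.h>
#include <iostream>

class HelloPlugin : public cvr::CVRPlugin
{
public:
    // Called once when the plugin is loaded; return false to disable it.
    bool init()
    {
        std::cerr << "HelloPlugin initialized" << std::endl;
        return true;
    }

    // Called by the kernel every frame before rendering.
    void preFrame()
    {
        // Per-frame logic (animation, interaction) goes here.
    }
};

// Makes the plugin discoverable by CalVR's plugin loader.
CVRPLUGIN(HelloPlugin)
</pre>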
 
<hr>
  
=<b>[[Past Projects|Older Projects]]</b>=