=<b>Most Recent Projects</b>=
  
===Immersive Visualization Center (Jurgen Schulze, 2020-2021)===
 
<table>
  <tr>
    <td>[[Image:IVC-Color-Logo-01.png|250px]]</td>
    <td>Dr. Schulze founded the [http://ivc.ucsd.edu Immersive Visualization Center (IVC)] to bring together immersive visualization researchers and students with researchers from domains that need immersive visualization, whether through virtual reality or augmented reality technologies. The IVC is located at UCSD's [https://qi.ucsd.edu/ Qualcomm Institute].</td>
  </tr>
</table>
<hr>
  
===3D Medical Imaging Pilot (Larry Smarr, Jurgen Schulze, 2018-2021)===
 
<table>
  <tr>
    <td>[[Image:3dmip.jpg|250px]]</td>
    <td>In this [https://ucsdnews.ucsd.edu/feature/helmsley-charitable-trust-grants-uc-san-diego-4.7m-to-study-crohns-disease $1.7M collaborative grant with Dr. Smarr] we have been developing software tools to support surgical planning, training, and patient education for Crohn's disease. The contrast-enhancement algorithm from the first publication is sketched below the table.<br>
'''Publications:'''
* Lucknavalai, K., Schulze, J.P., [http://web.eng.ucsd.edu/~jschulze/publications/Lucknavalai2020.pdf "Real-Time Contrast Enhancement for 3D Medical Images using Histogram Equalization"], In Proceedings of the International Symposium on Visual Computing (ISVC 2020), San Diego, CA, October 5, 2020
* Zhang, M., Schulze, J.P., [http://web.eng.ucsd.edu/~jschulze/publications/Zhang2021.pdf "Server-Aided 3D DICOM Image Stack Viewer for Android Devices"], In Proceedings of IS&T The Engineering Reality of Virtual Reality, San Francisco, CA, January 21, 2021
'''Videos:'''
* [https://drive.google.com/file/d/1pi1veISuSlj00y82LfuPEQJ1B8yXWkGi/view?usp=sharing VR application for Oculus Rift S]
* [https://drive.google.com/file/d/1MH2L6yc5Un1mo4t37PRUTXq8ptFIbQf3/view?usp=sharing Android companion app]
  </td>
  </tr>
</table>
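Histogram equalization is a standard contrast-enhancement algorithm, so a minimal sketch may help make the first publication's topic concrete. The C++ snippet below equalizes a single 8-bit slice of an image stack; it is only an illustration of the general technique, not code from the lab's viewer, and the function name is hypothetical.
<pre>
// Minimal sketch of histogram equalization on one 8-bit slice
// (illustrative only; not the IVL implementation).
#include <array>
#include <cstdint>
#include <vector>

std::vector<std::uint8_t> equalizeSlice(const std::vector<std::uint8_t>& slice)
{
    if (slice.empty()) return {};

    // 1. Histogram: count occurrences of each of the 256 intensity values.
    std::array<std::size_t, 256> hist{};
    for (std::uint8_t v : slice) ++hist[v];

    // 2. Cumulative distribution function (CDF) over the histogram.
    std::array<std::size_t, 256> cdf{};
    std::size_t running = 0;
    for (int i = 0; i < 256; ++i) { running += hist[i]; cdf[i] = running; }

    // 3. Remap each voxel so the intensities spread over the full 0..255 range.
    std::vector<std::uint8_t> out(slice.size());
    for (std::size_t i = 0; i < slice.size(); ++i)
        out[i] = static_cast<std::uint8_t>((cdf[slice[i]] * 255) / slice.size());
    return out;
}
</pre>
For real-time use on full 3D volumes the remapping table is typically computed once and applied in a shader, and clinical DICOM data usually has 12 or 16 bits per voxel, so the 8-bit histogram here is a simplification.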
 
<hr>
  
===[https://www.darpa.mil/program/explainable-artificial-intelligence XAI] (Jurgen Schulze, 2017-2021)===
 
<table>
  <tr>
    <td>[[Image:xai-icon.jpg|250px]]</td>
    <td>The effectiveness of AI systems is limited by the machines' current inability to explain their decisions and actions to human users. The Department of Defense (DoD) is facing challenges that demand more intelligent, autonomous, and symbiotic systems. The Explainable AI (XAI) program aims to '''create a suite of machine learning techniques that produce more explainable models''' while maintaining a high level of learning performance (prediction accuracy), and to enable human users to understand, appropriately trust, and effectively manage the emerging generation of artificially intelligent partners. This project is a collaboration with [https://www.sri.com/ SRI International] in Princeton, NJ. We are responsible for developing the web interface and for running in-person user studies, for which we have recruited over 100 subjects so far.<br>
'''Publications:'''
* Alipour, K., Schulze, J.P., Yao, Y., Ziskind, A., Burachas, G., [http://web.eng.ucsd.edu/~jschulze/publications/Alipour2020a.pdf "A Study on Multimodal and Interactive Explanations for Visual Question Answering"], In Proceedings of the Workshop on Artificial Intelligence Safety (SafeAI 2020), New York, NY, February 7, 2020
* Alipour, K., Ray, A., Lin, X., Schulze, J.P., Yao, Y., Burachas, G.T., [http://web.eng.ucsd.edu/~jschulze/publications/Alipour2020b.pdf "The Impact of Explanations on AI Competency Prediction in VQA"], In Proceedings of the IEEE Conference on Humanized Computing and Communication with Artificial Intelligence (HCCAI 2020), September 22, 2020
  </td>
  </tr>
</table>
 
<hr>
  
===DataCube (Jurgen Schulze, Wanze Xie, Nadir Weibel, 2018-2019)===
 
<table>
  <tr>
    <td>[[Image:datacube.jpg|250px]]</td>
    <td>In this project we developed a [https://www.pwc.com/us/en/industries/health-industries/library/doublejump/bodylogical-healthcare-assistant.html Bodylogical]-powered augmented reality tool for the Microsoft HoloLens to analyze the health of a population, such as the employees of a corporation.<br>
'''Publications:'''
* Xie, W., Liang, Y., Johnson, J., Mower, A., Burns, S., Chelini, C., D'Alessandro, P., Weibel, N., Schulze, J.P., [http://web.eng.ucsd.edu/~jschulze/publications/Xie2020.pdf "Interactive Multi-User 3D Visual Analytics in Augmented Reality"], In Proceedings of IS&T The Engineering Reality of Virtual Reality, San Francisco, CA, January 30, 2020
'''Videos:'''
* [https://drive.google.com/file/d/1Ba6uS9PFiHTh4IKcVwQWPgMGERNr-5Fd/view?usp=sharing Live demonstration]
* [https://drive.google.com/file/d/1_DcW8RZJvUhGmYEwIbLgJ2W0lWVdsLw7/view?usp=sharing Feature summary]
  </td>
  </tr>
</table>
 
<hr>
  
===[[Catalyst]] (Tom Levy, Jurgen Schulze, 2017-2019)===
 
<table>
  <tr>
    <td>[[Image:cavekiosk.jpg|250px]]</td>
    <td>This collaborative grant with Prof. Levy was a UCOP-funded $1.07M Catalyst project on cyber-archaeology, titled [https://ucsdnews.ucsd.edu/pressrelease/new_3_d_cavekiosk_at_uc_san_diego_brings_cyber_archaeology_to_geisel "3-D Digital Preservation of At-Risk Global Cultural Heritage"]. Its goal was to develop software and hardware infrastructure for the digital preservation and dissemination of 3D cyber-archaeology data such as point clouds, panoramic images, and 3D models; a sketch of a typical point-cloud preprocessing step follows the table.<br>
'''Publications:'''
* Levy, T.E., Smith, C., Agcaoili, K., Kannan, A., Goren, A., Schulze, J.P., Yago, G., [http://web.eng.ucsd.edu/~jschulze/publications/Levy2020.pdf "At-Risk World Heritage and Virtual Reality Visualization for Cyber-Archaeology: The Mar Saba Test Case"], In Forte, M., Murteira, H. (Eds.): "Digital Cities: Between History and Archaeology", April 2020, pp. 151-171, DOI: 10.1093/oso/9780190498900.003.0008
* Schulze, J.P., Williams, G., Smith, C., Weber, P.P., Levy, T.E., "CAVEkiosk: Cultural Heritage Visualization and Dissemination", a chapter in the book "A Sense of Urgency: Preserving At-Risk Cultural Heritage in the Digital Age", Editors: Lercari, N., Wendrich, W., Porter, B., Burton, M.M., Levy, T.E., accepted by Equinox Publishing for publication in 2021</td>
  </tr>
</table>
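Because scanned heritage sites arrive at very different scales and coordinate origins, kiosk-style viewers commonly recenter and rescale each point cloud before display. The sketch below shows that kind of preprocessing step; it is purely illustrative, with hypothetical names, and is not code from the CAVEkiosk project.
<pre>
// Hypothetical preprocessing step for displaying a scanned point cloud:
// recenter it on the origin and scale it to fit a unit cube, so scans of
// very different extents can be shown consistently.
#include <algorithm>
#include <vector>

struct Point { float x, y, z; };

void fitToUnitCube(std::vector<Point>& cloud)
{
    if (cloud.empty()) return;

    // Axis-aligned bounding box of the cloud.
    Point lo = cloud.front(), hi = cloud.front();
    for (const Point& p : cloud) {
        lo.x = std::min(lo.x, p.x); hi.x = std::max(hi.x, p.x);
        lo.y = std::min(lo.y, p.y); hi.y = std::max(hi.y, p.y);
        lo.z = std::min(lo.z, p.z); hi.z = std::max(hi.z, p.z);
    }

    // Translate the box center to the origin, scale by the largest extent.
    const float cx = 0.5f * (lo.x + hi.x);
    const float cy = 0.5f * (lo.y + hi.y);
    const float cz = 0.5f * (lo.z + hi.z);
    const float extent = std::max({hi.x - lo.x, hi.y - lo.y, hi.z - lo.z});
    const float s = extent > 0.0f ? 1.0f / extent : 1.0f;

    for (Point& p : cloud) {
        p.x = (p.x - cx) * s;
        p.y = (p.y - cy) * s;
        p.z = (p.z - cz) * s;
    }
}
</pre>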
 
<hr>
  
===[[CalVR]] (Andrew Prudhomme, Philip Weber, Giovanni Aguirre, Jurgen Schulze, since 2010)===
 
<table>
  <tr>
    <td>[[Image:Calvr-logo4-200x75.jpg|250px]]</td>
    <td>CalVR is our virtual reality middleware (a.k.a. VR engine), which we have been developing for our graphics clusters. It runs on anything from a laptop to a large multi-node CAVE, builds under Linux, Windows, and macOS, and is extended at runtime through C++ plugins (the general pattern is sketched below the table). More information about how to obtain and build the code can be found on our [[CalVR | CalVR page for software developers]].<br>
'''Publications:'''
* Schulze, J.P., Prudhomme, A., Weber, P., DeFanti, T.A., [http://web.eng.ucsd.edu/~jschulze/publications/Schulze2013.pdf "CalVR: An Advanced Open Source Virtual Reality Software Framework"], In Proceedings of IS&T/SPIE Electronic Imaging, The Engineering Reality of Virtual Reality, San Francisco, CA, February 4, 2013, ISBN 9780819494221</td>
  </tr>
</table>
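In a plugin-based engine of this kind, the middleware owns the frame loop and calls hooks on each loaded plugin. The self-contained sketch below shows that general pattern; all class and method names are hypothetical, not CalVR's actual plugin interface, which is documented on the [[CalVR]] page.
<pre>
// Illustration of the plugin pattern used by VR middleware such as CalVR:
// the engine drives the frame loop and invokes hooks on each plugin.
// All names are hypothetical; see the CalVR developer page for the real API.
#include <iostream>
#include <memory>
#include <vector>

class Plugin {
public:
    virtual ~Plugin() = default;
    virtual bool init() = 0;       // called once after the plugin is loaded
    virtual void preFrame() = 0;   // called every frame before rendering
};

class HelloPlugin : public Plugin {
public:
    bool init() override {
        std::cout << "HelloPlugin loaded\n";  // e.g. create scene graph nodes here
        return true;
    }
    void preFrame() override {
        // e.g. update animations or poll tracker data here
    }
};

int main() {
    std::vector<std::unique_ptr<Plugin>> plugins;
    plugins.emplace_back(std::make_unique<HelloPlugin>());

    for (auto& p : plugins)
        if (!p->init()) return 1;              // abort on plugin failure

    for (int frame = 0; frame < 3; ++frame)    // stand-in for the render loop
        for (auto& p : plugins) p->preFrame();
}
</pre>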
 
<hr>
  
=<b>[[Past Projects|Older Projects]]</b>=