Projects

From Immersive Visualization Lab Wiki

=<b>Most Recent Projects</b>=

===Immersive Visualization Center (Jurgen Schulze, 2020-2021)===
<table>
  <tr>
    <td>[[Image:IVC-Color-Logo-01.png|250px]]</td>
    <td>Dr. Schulze founded the [http://ivc.ucsd.edu Immersive Visualization Center (IVC)] to bring together immersive visualization researchers and students with researchers from domains that need immersive visualization using virtual reality or augmented reality technologies. The IVC is located at UCSD's [https://qi.ucsd.edu/ Qualcomm Institute].</td>
  </tr>
</table>
<hr>
  
===3D Medical Imaging Pilot (Larry Smarr, Jurgen Schulze, 2018-2021)===
<table>
  <tr>
    <td>[[Image:3dmip.jpg|250px]]</td>
    <td>In this [https://ucsdnews.ucsd.edu/feature/helmsley-charitable-trust-grants-uc-san-diego-4.7m-to-study-crohns-disease $1.7M collaborative grant with Dr. Smarr] we have been developing software tools to support surgical planning, training, and patient education for Crohn's disease.<br>
'''Publications:'''
* Lucknavalai, K., Schulze, J.P., [http://web.eng.ucsd.edu/~jschulze/publications/Lucknavalai2020.pdf "Real-Time Contrast Enhancement for 3D Medical Images using Histogram Equalization"], In Proceedings of the International Symposium on Visual Computing (ISVC 2020), San Diego, CA, Oct 5, 2020
* Zhang, M., Schulze, J.P., [http://web.eng.ucsd.edu/~jschulze/publications/Zhang2021.pdf "Server-Aided 3D DICOM Image Stack Viewer for Android Devices"], In Proceedings of IS&T The Engineering Reality of Virtual Reality, San Francisco, CA, January 21, 2021
'''Videos:'''
* [https://drive.google.com/file/d/1pi1veISuSlj00y82LfuPEQJ1B8yXWkGi/view?usp=sharing VR Application for Oculus Rift S]
* [https://drive.google.com/file/d/1MH2L6yc5Un1mo4t37PRUTXq8ptFIbQf3/view?usp=sharing Android companion app]
    </td>
  </tr>
</table>
<hr>
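The ISVC 2020 paper listed above applies histogram equalization for contrast enhancement of 3D medical images. As a rough illustration of the basic technique only (the paper's real-time, GPU-based method is more involved; all names here are our own, not the paper's code):

```python
import numpy as np

def equalize_histogram(volume, levels=256):
    """Classic histogram equalization: remap intensities so their
    cumulative distribution becomes approximately uniform."""
    hist, _ = np.histogram(volume, bins=levels, range=(0, levels))
    cdf = hist.cumsum()
    cdf_min = cdf[cdf > 0].min()                # first nonzero CDF value
    scale = (levels - 1) / (volume.size - cdf_min)
    # Build a lookup table and apply it to every voxel
    lut = np.clip(np.round((cdf - cdf_min) * scale), 0, levels - 1).astype(np.uint8)
    return lut[volume]

# A low-contrast 2x2x1 "volume" gets stretched to the full intensity range.
vol = np.array([[[50], [50]], [[51], [52]]], dtype=np.uint8)
out = equalize_histogram(vol)
```

This operates on the whole volume at once; slice-wise or windowed variants trade global contrast for local detail.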
  
===[https://www.darpa.mil/program/explainable-artificial-intelligence XAI] (Jurgen Schulze, 2017-2021)===
<table>
  <tr>
    <td>[[Image:xai-icon.jpg|250px]]</td>
    <td>The effectiveness of AI systems is limited by machines' current inability to explain their decisions and actions to human users. The Department of Defense (DoD) is facing challenges that demand more intelligent, autonomous, and symbiotic systems. The Explainable AI (XAI) program aims to '''create a suite of machine learning techniques that produce more explainable models''' while maintaining a high level of learning performance (prediction accuracy), and to enable human users to understand, appropriately trust, and effectively manage the emerging generation of artificially intelligent partners. This project is a collaboration with the [https://www.sri.com/ Stanford Research Institute (SRI)] in Princeton, NJ. Our responsibility is the development of the web interface, as well as in-person user studies, for which we have recruited over 100 subjects so far.<br>
'''Publications:'''
* Alipour, K., Schulze, J.P., Yao, Y., Ziskind, A., Burachas, G., [http://web.eng.ucsd.edu/~jschulze/publications/Alipour2020a.pdf "A Study on Multimodal and Interactive Explanations for Visual Question Answering"], In Proceedings of the Workshop on Artificial Intelligence Safety (SafeAI 2020), New York, NY, Feb 7, 2020
* Alipour, K., Ray, A., Lin, X., Schulze, J.P., Yao, Y., Burachas, G.T., [http://web.eng.ucsd.edu/~jschulze/publications/Alipour2020b.pdf "The Impact of Explanations on AI Competency Prediction in VQA"], In Proceedings of the IEEE Conference on Humanized Computing and Communication with Artificial Intelligence (HCCAI 2020), Sept 22, 2020
    </td>
  </tr>
</table>
<hr>
  
===DataCube (Jurgen Schulze, Wanze Xie, Nadir Weibel, 2018-2019)===
<table>
  <tr>
    <td>[[Image:datacube.jpg|250px]]</td>
    <td>In this project we developed a [https://www.pwc.com/us/en/industries/health-industries/library/doublejump/bodylogical-healthcare-assistant.html Bodylogical]-powered augmented reality tool for the Microsoft HoloLens to analyze the health of a population, such as the employees of a corporation.<br>
'''Publications:'''
* Xie, W., Liang, Y., Johnson, J., Mower, A., Burns, S., Chelini, C., D'Alessandro, P., Weibel, N., Schulze, J.P., [http://web.eng.ucsd.edu/~jschulze/publications/Xie2020.pdf "Interactive Multi-User 3D Visual Analytics in Augmented Reality"], In Proceedings of IS&T The Engineering Reality of Virtual Reality, San Francisco, CA, January 30, 2020
'''Videos:'''
* [https://drive.google.com/file/d/1Ba6uS9PFiHTh4IKcVwQWPgMGERNr-5Fd/view?usp=sharing Live demonstration]
* [https://drive.google.com/file/d/1_DcW8RZJvUhGmYEwIbLgJ2W0lWVdsLw7/view?usp=sharing Feature summary]
    </td>
  </tr>
</table>
<hr>
  
===[[Catalyst]] (Tom Levy, Jurgen Schulze, 2017-2019)===
<table>
  <tr>
    <td>[[Image:cavekiosk.jpg|250px]]</td>
    <td>This collaborative grant with Prof. Levy is a UCOP-funded $1.07M Catalyst project on cyber-archaeology, titled [https://ucsdnews.ucsd.edu/pressrelease/new_3_d_cavekiosk_at_uc_san_diego_brings_cyber_archaeology_to_geisel "3-D Digital Preservation of At-Risk Global Cultural Heritage"]. The goal was the development of software and hardware infrastructure to support the digital preservation and dissemination of 3D cyber-archaeology data, such as point clouds, panoramic images, and 3D models.<br>
'''Publications:'''
* Levy, T.E., Smith, C., Agcaoili, K., Kannan, A., Goren, A., Schulze, J.P., Yago, G., [http://web.eng.ucsd.edu/~jschulze/publications/Levy2020.pdf "At-Risk World Heritage and Virtual Reality Visualization for Cyber-Archaeology: The Mar Saba Test Case"], In Forte, M., Murteira, H. (Eds.): "Digital Cities: Between History and Archaeology", April 2020, pp. 151-171, DOI: 10.1093/oso/9780190498900.003.0008
* Schulze, J.P., Williams, G., Smith, C., Weber, P.P., Levy, T.E., "CAVEkiosk: Cultural Heritage Visualization and Dissemination", a chapter in the book "A Sense of Urgency: Preserving At-Risk Cultural Heritage in the Digital Age", Editors: Lercari, N., Wendrich, W., Porter, B., Burton, M.M., Levy, T.E., accepted by Equinox Publishing for publication in 2021
    </td>
  </tr>
</table>
<hr>
  
===[[CalVR]] (Andrew Prudhomme, Philip Weber, Giovanni Aguirre, Jurgen Schulze, since 2010)===
<table>
  <tr>
    <td>[[Image:Calvr-logo4-200x75.jpg|250px]]</td>
    <td>CalVR is our virtual reality middleware (a.k.a. VR engine), which we have been developing for our graphics clusters. It runs on anything from a laptop to a large multi-node CAVE, and builds under Linux, Windows, and MacOS. More information about how to obtain the code and build it can be found on our [[CalVR | CalVR page for software developers]].<br>
'''Publications:'''
* J.P. Schulze, A. Prudhomme, P. Weber, T.A. DeFanti, [http://web.eng.ucsd.edu/~jschulze/publications/Schulze2013.pdf "CalVR: An Advanced Open Source Virtual Reality Software Framework"], In Proceedings of IS&T/SPIE Electronic Imaging, The Engineering Reality of Virtual Reality, San Francisco, CA, February 4, 2013, ISBN 9780819494221
    </td>
  </tr>
</table>
<hr>
  
=<b>[[Past Projects|Older Projects]]</b>=

===[[Focal Stacks]] (Jurgen Schulze, 2013)===
<table>
  <tr>
    <td>[[Image:Coral-thumbnail.png|250px]]</td>
    <td>SIO will soon have a new microscope that can generate focal stacks faster than before. We are working on algorithms to visualize and analyze these focal stacks.</td>
  </tr>
</table>
<hr>

===[[Android Head Tracking]] (Ken Dang, 2013)===
<table>
  <tr>
    <td>[[Image:android-head-tracking2-250.jpg]]</td>
    <td>The goal of this project is to create an Android app that uses face detection algorithms to enable head tracking on mobile devices.</td>
  </tr>
</table>
<hr>

===[[Zspace Linux Fix | Automatic Stereo Switcher for the ZSpace]] (Matt Kubasak, Thomas Gray, 2013-2014)===
<table>
  <tr>
    <td>[[Image:ZspaceProduct.jpg|250px]]</td>
    <td>We created an Arduino-based solution to the problem that under Linux the ZSpace's left and right views initially appear in random order. The Arduino, along with custom software, senses which eye is displayed when, so that CalVR can swap the eyes if necessary in order to show a correct stereo image.</td>
  </tr>
</table>
<hr>

===[[ZSculpt | ZSculpt - 3D Sculpting with the Leap]] (Thinh Nguyen, 2013)===
<table>
  <tr>
    <td>[[Image:zsculpt-250.jpg]]</td>
    <td>The goal of this project is to explore the use of the Leap Motion device for 3D sculpting.</td>
  </tr>
</table>
<hr>

===[[Magic Lens]] (Tony Chan, Michael Chao, 2013)===
<table>
  <tr>
    <td>[[Image:MagicLens2.jpg]]</td>
    <td>The goal of this project is to research the use of smartphones in a virtual reality environment.</td>
  </tr>
</table>
<hr>

===[[Pose Estimation for a Mobile Device]] (Kuen-Han Lin, 2013)===
<table>
  <tr>
    <td>[[Image:kitchen-250.jpg]]</td>
    <td>The goal of this project is to develop an algorithm which runs on a PC to estimate the pose of a mobile Android device, linked via Wi-Fi.</td>
  </tr>
</table>
<hr>

===[[StreamingGraphics | Multi-User Graphics with Interactive Control (MUGIC)]] (Shahrokh Yadegari, Philip Weber, Andy Muehlhausen, 2012-)===
<table>
  <tr>
    <td>[[Image:TD_performance-250.jpg]]</td>
    <td>MUGIC allows users simplified and versatile access to CalVR systems via network rendering commands. Users can create computer graphics in their own environments and easily display the output on any CalVR wall or system. [http://www.youtube.com/watch?v=7-q8cl9EUD4&noredirect=1 See the project in action], and [http://www.youtube.com/watch?v=8bS0Borb-f8&noredirect=1 a condensed lecture on the mechanisms.]</td>
  </tr>
</table>
<hr>

===[[PanoView360]] (Andrew Prudhomme, Dan Sandin, 2010-)===
<table>
  <tr>
    <td>[[Image:panoview-in-tourcave-250.jpg]]</td>
    <td>Researchers at UIC/EVL and UCSD/Calit2 have developed a method to acquire very high-resolution, surround and stereo panorama images using dual SLR cameras. This VR application allows viewing these approximately gigabyte-sized images in real time and supports real-time changes of the viewing direction and zooming.</td>
  </tr>
</table>
<hr>

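For readers unfamiliar with how a panorama viewer turns a viewing direction into image pixels, the following is the generic equirectangular lookup (a textbook mapping, not PanoView360's actual code; the function name and parameters are illustrative):

```python
import math

def equirect_pixel(yaw, pitch, width, height):
    """Map a viewing direction (yaw/pitch in radians) to pixel coordinates
    in an equirectangular panorama: longitude -> x, latitude -> y."""
    u = (yaw / (2.0 * math.pi) + 0.5) * (width - 1)
    v = (0.5 - pitch / math.pi) * (height - 1)
    return u, v

# Looking straight ahead (yaw=0, pitch=0) hits the image center.
center = equirect_pixel(0.0, 0.0, 1001, 501)
```

A real-time viewer performs this lookup per screen pixel (typically in a shader) and streams in only the image tiles the current view touches.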
===[[VOX and Virvo]] (Jurgen Schulze, 1999-)===
<table>
  <tr>
    <td>[[Image:deskvox.jpg]]</td>
    <td>Ongoing development of real-time volume rendering algorithms for interactive display at the desktop (DeskVOX) and in virtual environments (CaveVOX). Virvo is the name of the GUI-independent, OpenGL-based volume rendering library which both DeskVOX and CaveVOX use.</td>
  </tr>
</table>
<hr>
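Volume renderers like DeskVOX integrate color and opacity along each viewing ray. A minimal sketch of the standard front-to-back compositing loop (the generic textbook formula, not Virvo's actual implementation):

```python
def composite_ray(samples):
    """Front-to-back alpha compositing of (color, opacity) samples along
    one ray, ordered from the eye into the volume."""
    color_acc, alpha_acc = 0.0, 0.0
    for color, alpha in samples:
        color_acc += (1.0 - alpha_acc) * alpha * color
        alpha_acc += (1.0 - alpha_acc) * alpha
        if alpha_acc >= 0.99:   # early ray termination: ray is nearly opaque
            break
    return color_acc, alpha_acc

# A fully opaque first sample hides everything behind it.
c, a = composite_ray([(1.0, 1.0), (0.0, 1.0)])
```

The early-termination test is what makes front-to-back order attractive for interactive rendering: opaque regions let the renderer skip the rest of the ray.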
Latest revision as of 22:10, 9 September 2021