Projects

From Immersive Visualization Lab Wiki
Revision as of 18:45, 4 August 2017

Active Projects

Soft Robotic Glove for Haptic Feedback in Virtual Reality (Saurabh Jadhav, 2017)

Cover1.jpg User interacting with the haptic glove in Virtual Reality

VR Deep Packet Inspector (Marlon West, 2017)

Screenshot 2 4-9-17.jpg Explore network communications in Virtual Reality.
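
Before anything can be shown in VR, a tool like this needs a capture step that pulls packets off the wire and extracts fields worth visualizing. The snippet below is a minimal, generic example of that step using libpcap; it is not the project's code, and the interface name, filter, and packet count are assumptions.

<pre>
// Minimal libpcap capture loop: print the IPv4 source/destination of each packet.
// Illustrative only -- the device name "eth0" and Ethernet framing are assumptions.
#include <pcap.h>
#include <netinet/ip.h>
#include <arpa/inet.h>
#include <cstdio>

static void onPacket(u_char*, const struct pcap_pkthdr* hdr, const u_char* bytes)
{
    // Skip the 14-byte Ethernet header and read the IPv4 header.
    const struct ip* iphdr = reinterpret_cast<const struct ip*>(bytes + 14);
    char src[INET_ADDRSTRLEN], dst[INET_ADDRSTRLEN];
    inet_ntop(AF_INET, &iphdr->ip_src, src, sizeof(src));
    inet_ntop(AF_INET, &iphdr->ip_dst, dst, sizeof(dst));
    std::printf("%u bytes  %s -> %s\n", hdr->len, src, dst);
}

int main()
{
    char errbuf[PCAP_ERRBUF_SIZE];
    pcap_t* handle = pcap_open_live("eth0", BUFSIZ, 1, 1000, errbuf);
    if (!handle) { std::fprintf(stderr, "pcap_open_live: %s\n", errbuf); return 1; }

    // Only look at IPv4 traffic so the header cast above is valid.
    struct bpf_program prog;
    if (pcap_compile(handle, &prog, "ip", 1, PCAP_NETMASK_UNKNOWN) == 0)
        pcap_setfilter(handle, &prog);

    pcap_loop(handle, 100, onPacket, nullptr);   // capture 100 packets, then stop
    pcap_close(handle);
    return 0;
}
</pre>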

MoveInVR (Lachlan Smith, 2016)

Move-and-eye.jpg If you want to use the Sony Move as a 3D controller for the PC, look here.
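
The project page describes a Playstation Move UDP API. As a rough illustration, the sketch below shows what a minimal PC-side listener for such a UDP stream could look like; the port number and packet layout (seven floats for position and orientation, matching byte order on both ends) are invented for this example, and the real format is described on the project page.

<pre>
// Sketch of a UDP listener for controller pose packets.
// Port and packet layout are assumptions for illustration.
#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>
#include <unistd.h>
#include <cstdio>

int main()
{
    int sock = socket(AF_INET, SOCK_DGRAM, 0);
    if (sock < 0) { perror("socket"); return 1; }

    sockaddr_in addr{};
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_ANY);
    addr.sin_port = htons(7001);                 // hypothetical port
    if (bind(sock, reinterpret_cast<sockaddr*>(&addr), sizeof(addr)) < 0) {
        perror("bind");
        return 1;
    }

    for (int i = 0; i < 100; ++i) {              // read a fixed number of packets
        float pose[7];                           // x, y, z, qx, qy, qz, qw (assumed layout)
        ssize_t n = recv(sock, pose, sizeof(pose), 0);
        if (n == sizeof(pose)) {
            std::printf("pos (%.3f, %.3f, %.3f)  quat (%.3f, %.3f, %.3f, %.3f)\n",
                        pose[0], pose[1], pose[2], pose[3], pose[4], pose[5], pose[6]);
        }
    }
    close(sock);
    return 0;
}
</pre>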

CalVR (Andrew Prudhomme, Philip Weber, Jurgen Schulze, since 2010)

Calvr-logo4-200x144.jpg CalVR is our virtual reality middleware (a.k.a. VR engine), which we have been developing for our graphics clusters. It runs on anything from a laptop to a large multi-node CAVE, and builds under Linux, Windows and MacOS. More information about how to obtain the code and build it can be found on our main CalVR page. We also wrote a paper on CalVR, and gave a presentation on it.
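
CalVR applications are typically structured as plugins that the engine loads and calls into every frame. The sketch below only illustrates that general pattern with a stand-in interface written for this example; the class and method names are assumptions, not the actual CalVR plugin API, which is documented on the CalVR page.

<pre>
// Illustration of the plugin pattern a VR engine like CalVR uses: the engine owns the
// render/update loop and calls into plugin hooks each frame. The interface below is a
// stand-in written for this sketch, NOT the real CalVR API.
#include <cstdio>
#include <memory>
#include <vector>

// Stand-in for the engine-provided plugin base class (hypothetical).
class PluginBase
{
public:
    virtual ~PluginBase() = default;
    virtual bool init() = 0;          // called once after the plugin is loaded
    virtual void preFrame() = 0;      // called every frame before rendering
};

// An application plugin: would create scene graph nodes, handle events, etc.
class HelloPlugin : public PluginBase
{
public:
    bool init() override
    {
        std::puts("HelloPlugin: init (load models, register menus, ...)");
        return true;
    }
    void preFrame() override
    {
        ++frame_;                     // per-frame update (animation, data polling, ...)
    }
private:
    long frame_ = 0;
};

int main()
{
    // The engine would normally load plugins from shared libraries listed in its
    // configuration; here we just instantiate one and drive a tiny fake frame loop.
    std::vector<std::unique_ptr<PluginBase>> plugins;
    plugins.push_back(std::make_unique<HelloPlugin>());

    for (auto& p : plugins) p->init();
    for (int frame = 0; frame < 3; ++frame)
        for (auto& p : plugins) p->preFrame();
    return 0;
}
</pre>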

Fringe Physics (Robert Maloney, 2014-2015)

PhysicsLab FinalScene.png

Boxing Simulator (Russell Larson, 2014)

Boxer.png

Parallel Raytracing (Rex West, 2014)

NightSky Frame Introduction 01.png

Altered Reality (Jonathan Shamblen, Cody Waite, Zach Lee, Larry Huynh, Liz Cai, 2013)

AR.jpg

Focal Stacks (Jurgen Schulze, 2013)

Coral-thumbnail.png SIO will soon have a new microscope which can generate focal stacks faster than before. We are working on algorithms to visualize and analyze these focal stacks.
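
One standard way to work with a focal stack is to compute a per-pixel sharpness measure and, for each pixel, keep the value from the slice where it is sharpest, which yields an all-in-focus composite. The sketch below does that with a Laplacian response using OpenCV; it is a generic illustration under those assumptions, not the lab's actual algorithm, and the file names are placeholders.

<pre>
// All-in-focus composite from a focal stack: for every pixel, keep the value from the
// slice with the strongest local Laplacian response (a common sharpness measure).
// Generic illustration only; file names are placeholders.
#include <opencv2/opencv.hpp>
#include <string>
#include <vector>

int main()
{
    std::vector<std::string> files = {"slice0.png", "slice1.png", "slice2.png"};

    cv::Mat bestSharpness, composite;
    for (const auto& f : files) {
        cv::Mat img = cv::imread(f, cv::IMREAD_COLOR);
        if (img.empty()) continue;

        // Sharpness measure: absolute Laplacian of the grayscale image, slightly
        // blurred to suppress pixel noise.
        cv::Mat gray, lap, sharp;
        cv::cvtColor(img, gray, cv::COLOR_BGR2GRAY);
        cv::Laplacian(gray, lap, CV_32F, 3);
        cv::GaussianBlur(cv::abs(lap), sharp, cv::Size(5, 5), 0);

        if (composite.empty()) {               // first slice initializes the result
            composite = img.clone();
            bestSharpness = sharp.clone();
            continue;
        }
        cv::Mat mask = sharp > bestSharpness;  // pixels where this slice is sharper
        img.copyTo(composite, mask);
        sharp.copyTo(bestSharpness, mask);
    }

    if (!composite.empty())
        cv::imwrite("all_in_focus.png", composite);
    return 0;
}
</pre>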

Android Head Tracking (Ken Dang, 2013)

Android-head-tracking2-250.jpg The goal of this project is to create an Android app that uses face detection algorithms to allow head tracking on mobile devices.
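
The usual way to turn a detected face into a head position is a pinhole-camera back-projection: the face's position in the image gives the viewing direction, and its apparent size gives the distance, once a physical face width and the camera's focal length in pixels are assumed. The function below shows that math; the constants are illustrative assumptions, and a real app would take the face rectangle from the platform's face detector.

<pre>
// Estimate a 3D head position (camera coordinates, meters) from a detected face
// rectangle, using a pinhole camera model. Constants are illustrative assumptions.
#include <cstdio>

struct Vec3 { float x, y, z; };

// faceX, faceY: center of the detected face in pixels; faceWidthPx: its width in pixels.
// imageW, imageH: camera image size in pixels.
Vec3 headFromFace(float faceX, float faceY, float faceWidthPx, float imageW, float imageH)
{
    const float focalPx    = 600.0f;   // assumed front-camera focal length, in pixels
    const float faceWidthM = 0.15f;    // assumed real-world face width, in meters

    // Similar triangles: faceWidthM / distance = faceWidthPx / focalPx.
    float z = faceWidthM * focalPx / faceWidthPx;

    // Back-project the face center through the principal point (image center).
    float x = (faceX - 0.5f * imageW) * z / focalPx;
    float y = (faceY - 0.5f * imageH) * z / focalPx;
    return {x, y, z};
}

int main()
{
    // Example: a 200-pixel-wide face detected left of center in a 1280x720 image.
    Vec3 head = headFromFace(500.0f, 360.0f, 200.0f, 1280.0f, 720.0f);
    std::printf("head at (%.2f, %.2f, %.2f) m\n", head.x, head.y, head.z);
    return 0;
}
</pre>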

Automatic Stereo Switcher for the ZSpace (Matt Kubasak, Thomas Gray, 2013-2014)

ZspaceProduct.jpg We created an Arduino-based solution to the problem that, under Linux, the zSpace's left and right views initially come up in random order. The Arduino, together with custom software, senses which eye is displayed when, so that CalVR can swap the eyes if necessary and show a correct stereo image.
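
A rough Arduino-style sketch of the sensing idea: a phototransistor behind one lens of the shutter glasses reports over the serial port whether light is currently passing through, while the host shows a bright image in only one eye's view; the host can then correlate the readings with the eye it is drawing and ask CalVR to swap eyes if they are reversed. The pin, threshold, and protocol here are assumptions for illustration, not the actual hardware setup.

<pre>
// Arduino-style sketch of the sensing side. It only streams "light seen / not seen";
// the host-side software does the correlation and the eye swap.
// Pin number, threshold, and protocol are assumptions.

const int SENSOR_PIN = A0;     // phototransistor behind the left lens (assumed wiring)
const int THRESHOLD  = 512;    // light/dark cutoff on the 0..1023 ADC scale (assumed)

void setup()
{
    Serial.begin(115200);
}

void loop()
{
    int level = analogRead(SENSOR_PIN);
    // '1' = light is passing through the lens right now, '0' = lens is dark.
    Serial.write(level > THRESHOLD ? '1' : '0');
    delay(2);                  // ~500 samples per second is plenty for 120 Hz shutters
}
</pre>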

ZSculpt - 3D Sculpting with the Leap (Thinh Nguyen, 2013)

Zsculpt-250.jpg The goal of this project is to explore the use of the Leap Motion device for 3D sculpting.

Magic Lens (Tony Chan, Michael Chao, 2013)

MagicLens2.jpg The goal of this project is to research the use of smart phones in a virtual reality environment.

Pose Estimation for a Mobile Device (Kuen-Han Lin, 2013)

Kitchen-250.jpg The goal of this project is to develop an algorithm that runs on a PC and estimates the pose of a mobile Android device linked to it via Wi-Fi.

Multi-User Graphics with Interactive Control (MUGIC) (Shahrokh Yadegari, Philip Weber, Andy Muehlhausen, 2012-)

TD performance-250.jpg Allows users simplified and versatile access to CalVR systems via network rendering commands. Users can create computer graphics in their own environments and easily display the output on any CalVR wall or system. See the project in action, and a condensed lecture on the mechanisms.
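
As a toy illustration of driving a remote renderer with network commands, the snippet below sends a single text command to a rendering host over UDP. The host address, port, and command syntax are invented for this sketch and are not MUGIC's actual protocol.

<pre>
// Toy client that sends one text "draw" command to a rendering host over UDP.
// Host, port, and command syntax are invented for this sketch.
#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>
#include <unistd.h>
#include <cstdio>
#include <cstring>

int main()
{
    int sock = socket(AF_INET, SOCK_DGRAM, 0);
    if (sock < 0) { perror("socket"); return 1; }

    sockaddr_in dest{};
    dest.sin_family = AF_INET;
    dest.sin_port = htons(9000);                          // hypothetical command port
    inet_pton(AF_INET, "192.168.1.50", &dest.sin_addr);   // hypothetical render host

    const char* cmd = "sphere x=0 y=1.5 z=-2 radius=0.25 color=red\n";
    sendto(sock, cmd, std::strlen(cmd), 0,
           reinterpret_cast<sockaddr*>(&dest), sizeof(dest));

    close(sock);
    return 0;
}
</pre>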

PanoView360 (Andrew Prudhomme, Dan Sandin, 2010-)

Panoview-in-tourcave-250.jpg Researchers at UIC/EVL and UCSD/Calit2 have developed a method to acquire very high-resolution, surround and stereo panorama images using dual SLR cameras. This VR application allows viewing these approximately gigabyte-sized images in real time and supports real-time changes of the viewing direction and zooming.

VOX and Virvo (Jurgen Schulze, 1999-)

Deskvox.jpg Ongoing development of real-time volume rendering algorithms for interactive display on the desktop (DeskVOX) and in virtual environments (CaveVOX). Virvo is the name of the GUI-independent, OpenGL-based volume rendering library that both DeskVOX and CaveVOX use.
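
The inner loop shared by most direct volume renderers is compositing samples along each viewing ray. The snippet below shows textbook front-to-back alpha compositing with early ray termination on a hand-made list of samples; it is a generic illustration of that technique, not code from Virvo or VOX.

<pre>
// Front-to-back alpha compositing along one viewing ray, with early ray termination --
// the standard inner loop of direct volume rendering. Sample data here is made up;
// in a real renderer the samples come from the volume via a transfer function.
#include <cstdio>

struct Sample { float r, g, b, a; };   // color and opacity after transfer-function lookup

int main()
{
    const Sample ray[] = {               // samples ordered front to back
        {0.1f, 0.1f, 0.9f, 0.05f},
        {0.2f, 0.8f, 0.2f, 0.30f},
        {0.9f, 0.2f, 0.1f, 0.60f},
        {0.9f, 0.9f, 0.9f, 0.90f},
    };

    float r = 0.0f, g = 0.0f, b = 0.0f, alpha = 0.0f;
    for (const Sample& s : ray) {
        float w = (1.0f - alpha) * s.a;  // how much of this sample is still visible
        r += w * s.r;
        g += w * s.g;
        b += w * s.b;
        alpha += w;
        if (alpha > 0.99f) break;        // early ray termination: ray is effectively opaque
    }
    std::printf("pixel = (%.3f, %.3f, %.3f), alpha = %.3f\n", r, g, b, alpha);
    return 0;
}
</pre>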