Jobs


Students for How Much Information Project

We are looking for one or more undergraduate or graduate students to work on the How Much Information (HMI) project, creating an interactive 3D visualization tool for the data we gather from businesses and other places where data is stored. This position requires a solid foundation in C++ and prior experience with OpenGL or DirectX. We use the Linux operating system for this work, so knowledge of shell usage and scripting languages is a plus. Any interest or experience with human-computer interfaces is also welcome.

Reference: a prior study on HMI was conducted at UC Berkeley in 2003

Hourly pay: $13-$15 (depending on qualifications), or possibly a GSR fellowship

Contact: Jurgen Schulze


Student Programmers for NCMIR Research Projects

The National Center for Microscopy and Imaging Research (NCMIR), located at UCSD and at Calit2 (http://www.ncmir.ucsd.edu), is looking for several talented, creative, and enthusiastic students with programming ability to develop software for a variety of research projects. The positions offer students the opportunity to work with leading-edge media and interactive technologies such as HD streams, high-resolution (100-megapixel-plus) displays, ultra-high-resolution imaging data, and 3D IR-based optical tracking systems. Students also have the opportunity to develop some of this research into thesis projects and to participate in authoring publications.

The range of possible projects includes:

  • approaches for the use of immersive virtual reality and ultra-high-resolution displays for real-time interaction with massive data sets, including real-time simulations
  • superimposition and alignment at interactive frame-rates of arbitrary numbers of volume datasets, each volume with an arbitrary number of channels, and of arbitrary sizes
  • visualization and visual analytics approaches for working with massively multi-scale, multi-modal, multi-resolution data and associated contextual metadata
  • approaches for automated or semi-automated segmentation of biomedical imaging data including 3D electron tomography and confocal wide-field light microscopy
  • approaches for up-cycling and improving the quality of meshes to generate geometries suitable for numerical/computational simulations of biological systems constrained by spatial realism
  • computer vision approaches to natural, full-body interfaces, including gesture recognition, for interaction with high-resolution (100-megapixel-plus) displays and autostereoscopic displays

The required qualifications vary by project. Being comfortable designing and developing software as part of a team is important, since many of these are collaborative research projects in which students will work with faculty and visualization scientists at Calit2, NCMIR, and CSE.

For more information, please contact: Ruth West (rwest at ncmir.ucsd.edu), Iman Mostafavi (imostafa at cs.ucsd.edu), and/or Matthias Zwicker (matthias at graphics.ucsd.edu).