Object-Oriented Interaction with Large High Resolution Displays

From Immersive Visualization Lab Wiki

Project Overview

Large high resolution displays such as tiled display walls are a common visualization tool in research laboratories and are becoming commonplace in public settings such as presentation and conference halls. The California Institute for Telecommunications and Information Technology (Calit2) at the University of California, San Diego alone has two: the 70-tile HiPerSpace wall and the AESOP wall, a 4x4 array of 46" monitors.

While these displays are great for showing data, their size makes them difficult to interact with. Common methods of interaction include a desktop control station with a standard USB mouse, an air mouse, or a tracking system [Thelen11]. Unfortunately, these methods are not without their shortcomings. A control station with a mouse is bothersome because every pixel the cursor moves on the control station may map to 20 pixels of movement on the high resolution display, and the user is physically bound to the station. An air mouse allows the user to move around but still cannot account for the user's physical location: because the cursor position is independent of where the user stands, it is easy to lose track of the cursor. Furthermore, to move the cursor from one end of the screen to the other, the user has to ratchet the mouse repeatedly instead of simply pointing at the desired location. A tracking system addresses the issues of a mouse, but it can be even more expensive than the display itself.

Another issue to keep in mind is that these displays tend to have multiple viewers at once. With many of the current navigation methods, however, a single person (the driver) controls the display while everyone else depends on the driver to navigate the data set. Such methods do not promote a collaborative environment and can be described as expensive as well as unintuitive.

Moreover, these methods assume the user wants to reach a precise location on the display. However, this level of granularity is not required by all applications. It may be possible instead to treat the data on the display as objects. In this way, interaction with the display can be thought of in two parts: a coarse-grained level, where the user selects which object on the display to interact with, and a fine-grained level, where the interaction is localized to that object.
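
As a rough illustration of this two-level model, the sketch below resolves a coarse pointing gesture to an object on the wall and then forwards input to that object in its own local coordinates. All types and names here (DisplayObject, pick, and so on) are hypothetical and chosen for the example; they are not taken from the project's actual code base.

    // Minimal sketch of object-oriented interaction with a display wall.
    // Coarse-grained step: decide which object the user is pointing at.
    // Fine-grained step: hand further input to that object, localized
    // to its own coordinate frame. All names here are illustrative.
    #include <iostream>
    #include <string>
    #include <utility>
    #include <vector>

    struct Pointer { float x, y; };  // pointing gesture mapped to wall coordinates

    class DisplayObject {            // anything shown on the wall: image, plot, window...
    public:
        DisplayObject(std::string name, float x, float y, float w, float h)
            : name_(std::move(name)), x_(x), y_(y), w_(w), h_(h) {}

        // Coarse-grained hit test: is the pointer over this object at all?
        bool contains(const Pointer& p) const {
            return p.x >= x_ && p.x <= x_ + w_ && p.y >= y_ && p.y <= y_ + h_;
        }

        // Fine-grained interaction: input expressed in object-local coordinates.
        void interact(const Pointer& p) const {
            std::cout << name_ << ": local input at ("
                      << p.x - x_ << ", " << p.y - y_ << ")\n";
        }

    private:
        std::string name_;
        float x_, y_, w_, h_;
    };

    // Resolve a rough pointing gesture to the first object it hits.
    const DisplayObject* pick(const std::vector<DisplayObject>& scene,
                              const Pointer& p) {
        for (const auto& obj : scene)
            if (obj.contains(p)) return &obj;
        return nullptr;
    }

    int main() {
        std::vector<DisplayObject> scene = {
            {"ImageViewer", 0,    0, 4000, 2000},
            {"ScatterPlot", 4000, 0, 4000, 2000},
        };
        Pointer p{4500, 800};  // user points roughly at the right half of the wall
        if (const DisplayObject* target = pick(scene, p))
            target->interact(p);  // subsequent input stays localized to this object
    }

The point of the split is that the coarse-grained step tolerates imprecise pointing (the user only has to indicate an object, not a pixel), while precision is only needed within the selected object, where the interaction area is much smaller.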

Participants

Development: Lynn Nguyen

Advisor: Jürgen Schulze

Status

Passed Master's exam!