Object-Oriented Interaction with Large High Resolution Displays

From Immersive Visualization Lab Wiki

==Project Overview==
Think of CViewer as a magic scanner. Suppose you have a tiled display with many objects on it. Using your Android device, you can use the device's screen as a viewfinder to point at an object on the display. Information related to the object the device is pointed at is then shown on the device, and as you move the device along, the information changes depending on what you are pointing at.

Large high resolution displays such as tiled wall displays are a common visualization tool in research laboratories and are becoming commonplace for the general public, for example in presentations or conference halls. In the California Institute for Telecommunications and Information Technology (Calit2) lab at the University of California, San Diego alone, there are two: the 70-tile HiPerSpace Wall and the AESOP Wall, a 4x4 array of 46" monitors.

While these displays are great for displaying data, they can be difficult to interact with because of their size. Common methods of interaction include using a desktop control station with a standard USB mouse, an air mouse, or a tracking system [Thelen11]. Unfortunately, these methods are not without their shortcomings. Using a control station with a mouse is bothersome because every pixel the cursor moves on the control station may map to twenty pixels on the high resolution display, and because it physically ties the user to the station. An air mouse lets the user move around but still cannot account for the user's physical location: because the cursor position is independent of where the user stands, it is easy to lose track of the cursor. Furthermore, to move the cursor from one end of the screen to the other, the user has to ratchet the mouse into position instead of simply pointing at the desired location. A tracking system addresses the issues raised by a mouse, but it can be even more expensive than the display itself. Another issue to keep in mind is that these displays tend to have multiple viewers at once, yet with many of the current navigation methods there is only one person (the driver) who controls the display while everyone else depends on the driver to navigate the data set. These methods typically do not promote a collaborative environment and can be described as expensive as well as unintuitive.

Moreover, these methods assume the user wants a precise location on the display, but this level of granularity is not required by all applications. It may be possible to treat the data on the display as objects. In this way, we can think of interaction with the display in two parts: a coarse grain level, where the user determines which object on the display to interact with, and a fine grain level, where the interaction is localized to that object (see the sketch below).

'''TODO: Which direction to take / focus on?'''
* Single user: Should we then be able to use the phone as a dynamic menu per object (adjust image brightness/contrast, play/pause for videos)?
* Multi user: Or go with a multi-client model so that many users can use this with a large display? The advantage is that not every person needs to see the same information on the large display; each user only points their phone at what they want to see more data about, so there is no clutter on the large display.
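
As a rough illustration of the coarse/fine split described above, the C++ sketch below separates "which object is the user pointing at?" from "what can be done with that object?". None of these names come from the actual CViewer code; they are invented for this example.

<pre>
// Illustrative sketch only (invented names): one way to express the
// coarse grain / fine grain split described above.
#include <string>

// Fine grain: interactions that are local to a single object on the wall,
// e.g. driven from a per-object menu on the phone.
class DisplayObject {
public:
    virtual ~DisplayObject() {}
    virtual std::string describe() const = 0;  // metadata to show on the phone
    virtual void adjust(const std::string& property, double value) = 0;
};

// Coarse grain: decide which object the phone is pointed at, for example by
// matching the phone camera image against a library of reference images.
class ObjectResolver {
public:
    virtual ~ObjectResolver() {}
    // Returns the matched object, or NULL if nothing in the library matches.
    virtual DisplayObject* resolve(const unsigned char* jpegBytes, int length) = 0;
};
</pre>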
  
 
==Participants==
Development: Lynn Nguyen

Advisor: Jurgen Schulze

==Status==
===Week 1, Week 2===
Passed the Master's exam! The first two weeks of Spring 2011 were spent hashing out details for the Master's project.

===Week 3===
* Spent forever getting OpenCV 2.2 to play nicely on my laptop. I finally got a development environment set up using OpenCV 2.2, Windows 7, and Visual Studio 2010.
* Figured out, more or less, which technologies I need to be working with.
* Got naive image detection working, so that when a client sends an image to the server, the backend can recognize which image in the reference library the client is pointing at (if any). It uses SURF and a naive nearest neighbor algorithm to match features and determine the best match (see the sketch below).
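
For reference, here is a minimal sketch of that SURF + nearest-neighbor matching step using the OpenCV 2.2 C++ API. The function name and both threshold values are illustrative assumptions rather than the project's actual code; the backend would run something like this against every image in the reference library and report the best-scoring one, if any.

<pre>
// Sketch only: score how well a query photo matches one reference image using
// SURF features and a brute-force nearest-neighbor matcher (OpenCV 2.2 API).
// Threshold values are placeholders, not the ones used in the project.
#include <opencv2/core/core.hpp>
#include <opencv2/features2d/features2d.hpp>
#include <vector>

int countGoodMatches(const cv::Mat& query, const cv::Mat& reference)
{
    cv::SurfFeatureDetector detector(400.0);   // Hessian threshold (assumed)
    cv::SurfDescriptorExtractor extractor;

    // Detect keypoints and compute SURF descriptors for both images.
    std::vector<cv::KeyPoint> queryKeys, refKeys;
    detector.detect(query, queryKeys);
    detector.detect(reference, refKeys);

    cv::Mat queryDesc, refDesc;
    extractor.compute(query, queryKeys, queryDesc);
    extractor.compute(reference, refKeys, refDesc);

    // Naive nearest neighbor: for each query descriptor, take the single
    // closest reference descriptor by L2 distance.
    cv::BruteForceMatcher<cv::L2<float> > matcher;
    std::vector<cv::DMatch> matches;
    matcher.match(queryDesc, refDesc, matches);

    // Count matches under an (assumed) distance cutoff; the reference image
    // with the highest count would be reported as the best match.
    int good = 0;
    for (int i = 0; i < (int)matches.size(); ++i)
        if (matches[i].distance < 0.25f)
            ++good;
    return good;
}
</pre>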
 
 
{|
|+ Matching Magic
| [[Image:Boston_closeup.jpg|The query image. This image was taken with my phone camera.]]
| [[Image:Boston_reference.jpg|The reference image in the library. Taken in Boston myself!]]
| [[Image:Closeup_match_boston.jpg|A visual representation of the matches found by the backend.]]
|}
 
'''Goal for next week:''' Figure out how data will be sent over the wire.

===Week 4===
* Laptop crashed.
* Figured out how to send data from the client for the backend to match.
** Java client -> C++ server: sent a JPEG byte array to the server, and the server had to convert it to an IplImage for OpenCV to use. This was actually really hard... (see the sketch below)
* Was working on the client-server model... the code for which was on the aforementioned crashed laptop.
* Got the OpenCV 2.2 library set up with the Covise environment (thanks, Jurgen!).
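
As a rough sketch of that conversion step (not the exact project code; the function name is invented, and it assumes the full JPEG byte array has already been read from the socket), the received bytes can be wrapped in a CvMat header and decoded with the OpenCV 2.2 C API:

<pre>
// Sketch only: turn a JPEG byte array received from the Java client into an
// IplImage that the OpenCV-based backend can work with (OpenCV 2.2 C API).
#include <opencv2/core/core_c.h>
#include <opencv2/highgui/highgui_c.h>

IplImage* decodeJpegBuffer(unsigned char* bytes, int length)
{
    // Wrap the raw compressed bytes in a 1-row CvMat header (no copy is made).
    CvMat buf = cvMat(1, length, CV_8UC1, bytes);

    // Decode the JPEG data into a 3-channel BGR image; returns NULL on failure.
    // The caller is responsible for releasing the image with cvReleaseImage().
    return cvDecodeImage(&buf, CV_LOAD_IMAGE_COLOR);
}
</pre>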
'''Goal for next week:''' Get a client-server model working (preferably in Covise environment)

===Week 5===
'''Goal for next week:'''

===Week 6===
'''Goal for next week:'''

===Week 7===
'''Goal for next week:'''

===Week 8===
'''Goal for next week:'''

===Week 9===
'''Goal for next week:'''

===Week 10===
IVL Demo! Also prep for Master's presentation and paper.

==End of Quarter Goals==
A fully functioning and awesome product!
