Object-Oriented Interaction with Large High Resolution Displays

Project Overview

Large high resolution displays such as tiled wall displays are a common visualization tool in research laboratories and are becoming commonplace for the general public, for example in presentations or conference halls. In the California Institute for Telecommunications and Information Technology (Calit2) lab at the University of California, San Diego alone, there are two: the 70-tile HiPerSpace Wall and the AESOP Wall, a 4x4 array of 46" monitors.

While these displays are great for displaying data, they can be difficult to interact with because of their size. Common methods of interaction include using a desktop control station with a standard USB mouse, an air mouse, or a tracking system. Unfortunately, each of these methods has shortcomings. Using a control station with a mouse is bothersome because every pixel the cursor moves on the control station may map to 20 pixels of movement on the high resolution display, and the user is physically tied to the station. An air mouse allows the user to move around, but it still cannot account for the user's physical location: since the cursor position is independent of where the user stands, it is easy to lose track of the cursor. Furthermore, to move the cursor from one end of the screen to the other, the user has to ratchet the mouse to the correct position instead of simply pointing at the desired location. A tracking system addresses the issues raised by a mouse, but it can be even more expensive than the display itself.

Another issue to keep in mind is that these displays tend to have multiple viewers at once. With many of the current navigation methods, however, only one person (the driver) controls the display while everyone else depends on the driver to navigate the data set. These methods typically do not promote a collaborative environment, and they can be described as expensive as well as unintuitive.

Moreover, these methods assume the user wants a precise location on the display, but this level of granularity is not required by all applications. It may be possible to treat the data on the display as objects. In this way, we can think of interaction with the display in two parts: at a coarse-grained level, where the user determines which object on the display to interact with, and at a fine-grained level, where the interaction is localized to the object.

Link to slides: https://docs.google.com/leaf?id=1aES53F7fiC1Y_ixZ06T7akoV91lQsWOG-CdtXRnORp8&authkey=CP2csYIM

Participants

Development: Lynn Nguyen

Advisor: Jurgen Schulze

Status

Passed Master's exam!

Instructions

Source code lives in SRC=/home/covise/covise/src/renderer/OpenCOVER/plugins/calit2/ImagePlane/backend

ImagePlane is the plug-in that tiles photos on a large high resolution display.

Python/C++ modules needed: opencv, sqlite3, pyexiv2

source code hosted:

Running the server

We have two config files:

  • settings.xml // sets server and match configurations
  • database/*.db // table referencing library images and their data (and comments)

In settings.xml, the library config path is used to create the database defined in the path attribute; the server itself only ever uses the path attribute. This should be fixed to be more usable: because library creation is a preprocessing step, the burden is currently on the user to make sure the database that gets created is the one referenced by the path attribute in this XML file.
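
As a rough illustration of the server side of this (the element and attribute names below are assumptions based on the description above, not the actual schema of settings.xml):

  import xml.etree.ElementTree as ET

  # Hypothetical sketch: the real settings.xml layout may differ.
  # The point is that the server only ever reads the path attribute;
  # the rest of the library config only matters to populate_database.py.
  tree = ET.parse("settings.xml")
  library = tree.getroot().find("library")  # assumed element name
  db_path = library.get("path")             # the only attribute the server uses
  print("server will open database at:", db_path)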

To build library database

1. modify populate_database.py (see note)

2. run the following in the shell

>> cd $SRC/database

>> python populate_database.py


note on populate_database.py:

  • set db_out to specify path for the sqlite3 database to be created
  • set settings_xml to point to the settings file that the backend uses.
  • The settings XML file specifies where the library images are. This script strips EXIF data and creates a database with two tables, imagedata and comments. Each row in imagedata corresponds to one library image, keyed by its path (relative to the server executable), with the metadata in the remaining columns. The comments table defines the many-to-one relationship between an image and user comments. A sketch of the resulting schema follows.
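
For reference, a minimal sketch of the schema described above (the metadata columns here are illustrative assumptions; the real columns come from whatever EXIF fields the script extracts):

  import sqlite3

  conn = sqlite3.connect("database/library.db")  # db_out; example path
  cur = conn.cursor()
  # imagedata: one row per library image, keyed by its path
  # relative to the server executable; metadata columns are assumed.
  cur.execute("""CREATE TABLE IF NOT EXISTS imagedata (
                     path TEXT PRIMARY KEY,
                     width INTEGER,
                     height INTEGER,
                     datetime TEXT)""")
  # comments: many user comments may reference one image (many-to-one)
  cur.execute("""CREATE TABLE IF NOT EXISTS comments (
                     path TEXT REFERENCES imagedata(path),
                     comment TEXT)""")
  conn.commit()
  conn.close()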

To compile and run

>> cd $SRC

>> make

>> ./server settings.xml

Running the plugin

  • the library image directory must be specified by the data directory in the COVISE config file (I think)
  • make sure the ImagePlane plugin is turned on
  • for phone interaction support:
    • run the server first so that the shared memory location is created, then run the plugin
    • the server and plugin must run on the same machine

Future Work

Supporting Simultaneous Multiple Users

  • The main code is in server.cpp. This listens for clients and calls the matching service provided by SURFMatcher
  • I know I can connect multiple clients at a time (tested with a phone, then telnet), but I don't know what would happen if two clients sent data simultaneously. For the most part, interaction should be okay because each client already gets its own thread. The synchronization issues I can foresee are writing to the log file and using the matching service: there is currently only one matcher (the variable g_matcher) that is called by all clients. A sketch of the usual fix appears after this list.
  • I didn't actually implement the different log levels yet, and I have no idea how the logging would behave in a real multithreaded environment.
  • There is a hardcoded limit of 5 connected clients in network/Sockette.h or Sockette.cpp (#defined at the top). Run make to compile the code, then ./server <settings-file> to run it. The configuration best set up to work is settings-aesop.xml, because all of its library images are in the right place.
  • When a client connects, you can assign it an id, or you can have it send its device id, store that in the database, and derive an id from it. Then when writing to shared memory, each client would get its own offset into the shared memory segment. When COVISE reads from it, it should be able to walk along the shared memory (though it would need to know when to stop walking).
  • If you want to avoid hardcoding the image names as we did in COVISE, you could have COVISE run a SQL query against the database the backend gets its information from, to figure out which id corresponds to which image (there is example database interaction in server.cpp, like getEverything, that I never refactored). See the query sketch below.
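
For the g_matcher issue above, the usual fix is to serialize access to the shared matcher with a mutex. The server itself is C++ (so this would be a std::mutex around the match call); the pattern, in illustrative Python with stand-in names (handle_client, matcher.match):

  import threading

  g_matcher_lock = threading.Lock()

  def handle_client(conn, matcher):
      # Each client runs in its own thread, as in server.cpp.
      data = conn.recv(4096)
      # Only one thread at a time may use the shared matcher.
      with g_matcher_lock:
          result = matcher.match(data)  # stand-in for the SURFMatcher call
      conn.send(result)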
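
For the id-to-image lookup, the query itself would be simple. Here is a sketch against the imagedata table described earlier (Python for brevity; the COVISE side would go through the sqlite3 C API, and using rowid as the id is an assumption):

  import sqlite3

  def path_for_image(db_file, image_id):
      # Ask the backend's database which library image an id maps to,
      # instead of hardcoding image names on the COVISE side.
      conn = sqlite3.connect(db_file)
      row = conn.execute("SELECT path FROM imagedata WHERE rowid = ?",
                         (image_id,)).fetchone()
      conn.close()
      return row[0] if row else None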

Improving Matching

  • Instead of playing around with parameters to get a good number of descriptors, we should sort the descriptors by their Hessian response and then just take the top 100 or so, so that every image has the same number of descriptors. This is hard because I haven't figured out how to implement it yet; a possible approach is sketched below.
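
One way to implement this with OpenCV's Python bindings (SURF lives in the contrib xfeatures2d module in recent builds; the backend matcher is C++, but equivalent calls exist there): detect with a permissive Hessian threshold, sort the keypoints by their response field, keep the top N, and only then compute descriptors:

  import cv2

  def top_n_descriptors(image_path, n=100):
      # Detect with a low threshold so we get more keypoints than we
      # need, then keep only the n strongest.
      img = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
      surf = cv2.xfeatures2d.SURF_create(hessianThreshold=100)
      keypoints = surf.detect(img, None)
      # keypoint.response is the Hessian response for SURF; sort
      # descending and truncate so every image yields the same count.
      keypoints = sorted(keypoints, key=lambda k: k.response, reverse=True)[:n]
      keypoints, descriptors = surf.compute(img, keypoints)
      return keypoints, descriptors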