Object-Oriented Interaction with Large High Resolution Displays

Project Overview

Think of CViewer as a magic scanner. Suppose you have a large multi-grid display showing many objects. Using your Android device, you point its camera at an object on the display and use the device's screen as a viewfinder. Information about the object the device is pointed at is then displayed on the device, and as you move the device along, the information changes depending on what you are pointing at.

TODO: Which direction to take / focus on?

  • Single user: Should the phone then act as a dynamic per-object menu (e.g., adjust image brightness/contrast, play/pause videos)?
  • Multi user: Or go with a multi-client model so that many users can work with the large display at once. The advantage is that not everyone needs to see the same information on the large display; each person points their phone at whatever they want more data about, so the large display stays uncluttered.

Participants

Development: Lynn Nguyen

Advisor: Jurgen Schulze

Status

Weeks 1 and 2

The first two weeks of Spring 2011 were spent hashing out the details of a Master's project.

Week 3

  • Spent a long time getting OpenCV 2.2 to play nicely on my laptop. Finally got a development environment set up with OpenCV 2.2, Windows 7, and Visual Studio 2010.
  • Figured out, more or less, which technologies I need to be working with.
  • Got naive image recognition working: when a client sends an image to the server, the backend can recognize which image in the reference library the client is pointing at (if any). It uses SURF features and a naive nearest-neighbor match on the descriptors to pick the best match; see the sketch below.
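
This is not the backend code itself, just a minimal C++ sketch of that kind of SURF matching, assuming the OpenCV 2.2 C++ API (SurfFeatureDetector, SurfDescriptorExtractor, BruteForceMatcher). The file names, the Hessian threshold, and the count-the-close-matches scoring are illustrative placeholders.

 // Minimal SURF matching sketch (OpenCV 2.2-era C++ API).
 // Matches a query image against a small reference library and
 // picks the reference with the most close feature matches.
 #include <opencv2/core/core.hpp>
 #include <opencv2/highgui/highgui.hpp>
 #include <opencv2/features2d/features2d.hpp>
 #include <iostream>
 #include <string>
 #include <vector>
 
 int main()
 {
     // Hypothetical reference library; the real one lives on the server.
     std::vector<std::string> library;
     library.push_back("boston.jpg");
     library.push_back("poster.jpg");
 
     cv::Mat query = cv::imread("query.jpg", 0);   // grayscale query from the phone
     if (query.empty()) return 1;
 
     cv::SurfFeatureDetector detector(400.0);      // Hessian threshold (tunable)
     cv::SurfDescriptorExtractor extractor;
     cv::BruteForceMatcher<cv::L2<float> > matcher;
 
     std::vector<cv::KeyPoint> queryKeys;
     cv::Mat queryDesc;
     detector.detect(query, queryKeys);
     extractor.compute(query, queryKeys, queryDesc);
     if (queryDesc.empty()) return 1;
 
     int bestIndex = -1;
     size_t bestCount = 0;
     for (size_t i = 0; i < library.size(); ++i)
     {
         cv::Mat ref = cv::imread(library[i], 0);
         if (ref.empty()) continue;
 
         std::vector<cv::KeyPoint> refKeys;
         cv::Mat refDesc;
         detector.detect(ref, refKeys);
         extractor.compute(ref, refKeys, refDesc);
         if (refDesc.empty()) continue;
 
         // Naive nearest-neighbor matching: each query descriptor is
         // paired with its closest reference descriptor.
         std::vector<cv::DMatch> matches;
         matcher.match(queryDesc, refDesc, matches);
 
         // Crude score: keep only reasonably close matches and count them.
         size_t good = 0;
         for (size_t m = 0; m < matches.size(); ++m)
             if (matches[m].distance < 0.25f) ++good;
 
         if (good > bestCount) { bestCount = good; bestIndex = (int)i; }
     }
 
     if (bestIndex >= 0)
         std::cout << "Best match: " << library[bestIndex] << std::endl;
     else
         std::cout << "No match found" << std::endl;
     return 0;
 }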

[Images: the query image, taken with my phone camera; the reference image in the library, taken in Boston myself; and a visual representation of the matches found by the backend.]

Goal for next week: Figure out how data will be sent over the wire.

Week 4

  • Laptop crashed.
  • Figured out how to send data from the client for the backend to match.
    • Java client -> C++ server: the client sends a JPEG byte array, and the server has to convert it to an IplImage for OpenCV to use. This was actually really hard... (see the sketch after this list).
  • Was working on the client-server model... the code for which was on the aforementioned crashed laptop.
  • Got the OpenCV 2.2 library set up in the Covise environment (thanks, Jurgen!)
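
This is not the project's actual networking code, just a minimal sketch of the server-side conversion step, assuming the JPEG bytes from the Java client have already been read off the socket into a buffer. The function and buffer names are placeholders; cv::imdecode and the Mat-to-IplImage conversion are standard OpenCV 2.x calls.

 // Sketch of the server-side conversion: JPEG byte array -> IplImage.
 // Assumes the raw JPEG bytes sent by the Java client are already in a
 // buffer (e.g. read length-prefixed from the socket).
 #include <opencv2/core/core.hpp>
 #include <opencv2/highgui/highgui.hpp>
 #include <cstdio>
 #include <vector>
 
 void handleJpegFromClient(const std::vector<unsigned char>& jpegBytes)
 {
     // Wrap the raw bytes in a Mat header and let OpenCV decode the JPEG.
     cv::Mat encoded(jpegBytes, false);            // no copy of the buffer
     cv::Mat decoded = cv::imdecode(encoded, 1);   // 1 = load as 3-channel color
     if (decoded.empty())
     {
         std::printf("could not decode JPEG from client\n");
         return;
     }
 
     // An IplImage header sharing the Mat's pixel data, for the parts of
     // the backend (and older OpenCV calls) that still expect IplImage*.
     IplImage ipl = decoded;
     IplImage* img = &ipl;   // valid while 'decoded' stays in scope
 
     // ... hand 'img' to the SURF matcher here ...
     std::printf("decoded image: %d x %d\n", img->width, img->height);
 }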

Goal for next week: Get a client-server model working (preferably in the Covise environment).

Week 5

Goal for next week:

Week 6

Goal for next week:

Week 7

Goal for next week:

Week 8

Goal for next week:

Week 9

Goal for next week:

Week 10

IVL Demo! Also prep for Master's presentation and paper.

End of Quarter Goals

A fully functioning and awesome product!