CViewer

From Immersive Visualization Lab Wiki

==Project Overview==
Think of CViewer as a magic scanner. Suppose you have a multigrid display showing many objects. Using your Android device, you point its screen, acting as a viewfinder, at an object on the display. Information about the object the device is pointed at is then shown on the device screen. As you move the device along, the information changes to match whatever you are pointing at.
'''TODO: Which direction to take / focus on?'''

* Single user: should we use the phone as a dynamic menu for the selected object (e.g., adjust image brightness/contrast, play/pause videos)?
* Multi user: or go with a multi-client model so that many users can work with one large display? The advantage is that not every person needs to see the same information on the large display; each user points their phone at whatever they want more data about, which keeps the large display uncluttered.
 
==Participants==

Development: Lynn Nguyen

Advisor: Jurgen Schulze
  
 
==Status==

===Week 1, Week 2===

The first two weeks of Spring 2011 were spent hashing out the details of a Master's project.

===Week 3===

* Spent a long time getting OpenCV 2.2 to play nicely on my laptop. I finally got a development environment set up with OpenCV 2.2, Windows 7, and Visual Studio 2010.
* Figured out, more or less, which technologies I need to be working with.
* Got naive image detection working: when a client sends an image to the server, the backend can recognize which image in the reference library the client is pointing at (if any). It uses SURF and a naive nearest-neighbor algorithm to match features and determine a best match.
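The matching step above can be sketched in pure Python. The real backend uses OpenCV 2.2's SURF implementation in C++; the tiny descriptors, the 0.7 ratio threshold, and the function names below are illustrative assumptions, not the project's actual code:

```python
import math

def nearest_neighbor_matches(query_desc, ref_desc, ratio=0.7):
    """For each query descriptor, find its nearest and second-nearest
    neighbors in the reference set (Euclidean distance) and keep the
    match only if it passes the ratio test, i.e. the best match is
    clearly better than the runner-up."""
    matches = []
    for qi, q in enumerate(query_desc):
        dists = sorted((math.dist(q, r), ri) for ri, r in enumerate(ref_desc))
        best, second = dists[0], dists[1]
        if best[0] < ratio * second[0]:
            matches.append((qi, best[1]))  # (query index, reference index)
    return matches

def best_library_image(query_desc, library):
    """Score each reference image by the number of surviving matches
    and return the index of the best one, or None if nothing matched
    (i.e. the client isn't pointing at a known image)."""
    scores = [len(nearest_neighbor_matches(query_desc, ref)) for ref in library]
    best = max(range(len(scores)), key=scores.__getitem__)
    return best if scores[best] > 0 else None
```

In the real system the descriptors are 64- or 128-dimensional SURF vectors rather than 2-D toy points, but the ratio-test logic is the same.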
{|
| [[Image:Boston_closeup.jpg|The query image, taken with my phone camera.]]
| [[Image:Boston_reference.jpg|The reference image in the library. Taken in Boston myself!]]
| [[Image:Closeup match boston.jpg|A visual representation of the matches found by the backend.]]
|}

'''Goal for next week:''' Figure out how data will be sent over the wire.

===Week 4===
* Laptop crashed.
* Figured out how to send data from the client for the backend to match:
** Java client -> C++ server: the client sends a JPEG byte array, and the server has to convert it to an IplImage for OpenCV to use. This was actually really hard.
* Was working on the client-server model; the code was on the aforementioned crashed laptop.
* Got the OpenCV 2.2 library set up with the Covise environment (thanks, Jurgen!).
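One common way to send an image over the wire, sketched here in Python for brevity (the real pairing is a Java client and a C++ server, and the actual wire format isn't documented here, so the 4-byte length prefix is an assumption): the client prefixes the JPEG byte array with its length, and the server reads exactly that many bytes before decoding.

```python
import io
import struct

def frame_image(jpeg_bytes):
    """Client side: prefix the JPEG byte array with its length as a
    4-byte big-endian unsigned int, so the server knows how many
    bytes of payload to expect on the stream."""
    return struct.pack(">I", len(jpeg_bytes)) + jpeg_bytes

def read_framed_image(stream):
    """Server side: read the 4-byte length header, then exactly that
    many payload bytes. The real C++ server would then hand this
    buffer to OpenCV (e.g. cvDecodeImage) to obtain an IplImage."""
    header = stream.read(4)
    (length,) = struct.unpack(">I", header)
    return stream.read(length)
```

The same framing works over a plain TCP socket; `io.BytesIO` stands in for the socket stream here so the round trip can be exercised without a network.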
  
'''Goal for next week:''' Get a client-server model working (preferably in the Covise environment).
  
===Week 5===

'''Goal for next week:'''
  
===Week 6===

'''Goal for next week:'''
  
===Week 7===

'''Goal for next week:'''
  
===Week 8===

'''Goal for next week:'''
  
===Week 9===

'''Goal for next week:'''
  
===Week 10===

IVL Demo! Also prep for Master's presentation and paper.
  
 
==End of Quarter Goals==

A fully functioning and awesome product!

Latest revision as of 14:13, 25 April 2011