Kinect Controlled Pacman

From Immersive Visualization Lab Wiki

Latest revision as of 02:06, 31 May 2011

Spring 2011

Work done so far:

  • Completed modeling of the maze and ghosts and imported the models into the code.
  • Implemented navigation with the head tracker and wand.
  • Implemented collision detection, which still needs debugging.
  • Installed the Kinect software.
  • Linked the Kinect device to the code and established a depth buffer with the Kinect's depth camera.
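
The collision-detection item above can be pictured as a grid lookup: treat the maze as a 2D array of wall cells and block any move into one. This is only a minimal sketch; the maze layout and all names here are hypothetical, not the project's actual code.

```python
# Hypothetical sketch of grid-based maze collision detection
# (illustrative only; not the project's actual implementation).

MAZE = [
    "#########",
    "#.......#",
    "#.##.##.#",
    "#.......#",
    "#########",
]

def is_wall(x, y):
    """Return True if the cell at (x, y) is a wall or out of bounds."""
    if y < 0 or y >= len(MAZE) or x < 0 or x >= len(MAZE[y]):
        return True
    return MAZE[y][x] == "#"

def try_move(x, y, dx, dy):
    """Move the player one cell unless the target cell is a wall."""
    nx, ny = x + dx, y + dy
    if is_wall(nx, ny):
        return x, y  # blocked: stay in place
    return nx, ny
```

The same `is_wall` check would serve both the player and the ghosts, which is one reason to debug it before adding ghost movement.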

Goals for end of quarter:

  • Evaluate the depth buffer so that the head and arms can be tracked accurately.
  • Implement navigation with the corresponding arm movements / head directions.
  • Finish collision detection code.
  • Add a simple AI to the ghosts, at minimum having them chase the player or wander around.
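
The ghost-AI goal above could start from something as simple as chase-or-wander: step toward the player when nearby, otherwise pick a random neighboring cell. A minimal sketch, assuming a grid maze and hypothetical names (the chase radius is an illustrative choice):

```python
import random

def ghost_step(ghost, player, chase_radius=5, rng=random):
    """One step of a minimal ghost AI: chase the player when within
    chase_radius (Manhattan distance), otherwise wander randomly.
    Positions are (x, y) grid cells; wall checks are omitted here."""
    gx, gy = ghost
    px, py = player
    if abs(px - gx) + abs(py - gy) <= chase_radius:
        # Chase: step one cell along the axis with the larger gap.
        if abs(px - gx) >= abs(py - gy):
            return (gx + (1 if px > gx else -1), gy)
        return (gx, gy + (1 if py > gy else -1))
    # Wander: pick a random neighboring cell.
    dx, dy = rng.choice([(1, 0), (-1, 0), (0, 1), (0, -1)])
    return (gx + dx, gy + dy)
```

A real version would also reject steps into walls, which is where the collision-detection code comes in.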

End of Quarter: Description

A simple Kinect user interface allows players to control navigation by raising their arms. The Kinect faces the user from a fixed distance; it works best if nothing else is in view and the player stands at the center. The program retrieves depth data from the Kinect and cuts off all data beyond a certain distance, so that only a raised arm is "seen". It then calculates the arm's deviation from the center and decides whether to steer left, right, or straight.
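
The pipeline described above — discard depth pixels beyond a cutoff so only the raised arm survives, then steer by the arm's horizontal deviation from the image center — can be sketched roughly as follows. This operates on a plain 2D array standing in for a depth frame; the cutoff, dead zone, and function name are illustrative assumptions, not the project's actual values.

```python
# Sketch of the steering decision described above (assumed values:
# the depth cutoff and dead zone are illustrative, not the project's).

def steer_from_depth(depth, cutoff_mm=1200, dead_zone=0.15):
    """depth: 2D list of per-pixel depth values, row-major.
    Returns 'left', 'right', or 'straight'."""
    width = len(depth[0])
    # Keep only pixel columns closer than the cutoff (the raised arm);
    # zero readings are treated as invalid and skipped.
    xs = [x for row in depth for x, d in enumerate(row) if 0 < d < cutoff_mm]
    if not xs:
        return "straight"  # no arm visible
    # Deviation of the arm's mean column from the image center, in [-1, 1].
    deviation = (sum(xs) / len(xs) - width / 2) / (width / 2)
    if deviation < -dead_zone:
        return "left"
    if deviation > dead_zone:
        return "right"
    return "straight"
```

The dead zone keeps small, noisy deviations from constantly toggling the direction; since the Kinect faces the user, a real implementation would also have to decide whether to mirror left and right.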