Homework1W14

From Immersive Visualization Lab Wiki


Homework Assignment 1: Flying around the campus with a Kinect

For this assignment you can obtain 100 points, plus up to 10 points for optional work.

The goal of this assignment is to create an application which uses the Microsoft Kinect exclusively for user interaction.

This assignment is due on Friday, January 24th at 1:30pm. There will be a Q&A session, led by TA Matteo, on Wednesday, January 15th from 4-5pm in CSE basement lab 260.

Microsoft Kinect

Every team will be able to borrow a Kinect for Windows from the instructor until the end of the quarter. The Kinects can be picked up in the instructor's office at Atkinson Hall, room 2125, during office hours: Tuesday from 3:30-4:30pm. If you cannot make that time, let the instructor know so that other arrangements can be made.

The Kinect drivers and SDK should already be installed on the lab machines in room 260, but only on the even-numbered PCs. For your home computer, you can find the SDK on Microsoft's Website.

Update: Matteo created starter code for this project, which you will find here. At the bottom of this page you'll find more links, including code Jonathan is sharing that contains OSG and Kinect setup code.

Grading

On or before the due date, you will need to demonstrate your application to the instructor, the TA, or a tutor, using a Kinect, in CSE lab 260 on a lab computer or your own laptop.

Application (100 Points)

The goal is to create an interactive application which allows the user to fly from the Bear to the top of EBU1 and then to the Price Center's courtyard. I put a campus model on the Ted discussion board (I don't want to post it here). Someone posted the full-size model; what I posted is a downsized subset of the map. You are not allowed to use any user input other than the Kinect (i.e., no keyboard or mouse input once the application has started).

Your application needs to have the following features:

  • You will need to use the Kinect driver's skeleton extraction functionality to find the positions of the user's limbs (see the first sketch after this list). (10 points)
  • You need to develop a strategy to convert body motion to flight commands. (10 points)
  • You will need to be able to fly your avatar around by changing its position and at least heading (pitch and roll are optional). (20 points)
  • There needs to be a way to change the velocity of the flight. (10 points)
  • The camera should provide a third-person view. You need to render the user's avatar with at least head, torso, arms and legs using simple shapes such as cylinders (osg::Cylinder); see the avatar sketch after this list. (20 points)
  • You will need to show a small top-down map of the campus somewhere on the screen, with an indicator for where the user is (e.g., a red sphere, or a cone pointing in the direction the user is moving). This can be done by rendering the entire campus in screen space at a small size with a camera pointing down; see the mini-map sketch after this list. (10 points)
  • The software needs to detect when the user arrives at EBU1 and at the Price Center, and it needs to indicate this to the user. This can be done by using a proximity threshold around known points at these buildings; see the arrival/timer sketch after this list. (10 points)
  • You will need to measure the time it takes from when the user starts flying until they land at the Price Center, and display it somewhere on the screen. Who will get the fastest time? (10 points)

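To give you an idea of the skeleton and flight-command items, here is a rough sketch using the Kinect for Windows SDK 1.x C++ API. The helper name, joint choices, scale factors and dead zone are only illustrative placeholders; any reasonable mapping from body motion to a turn rate and a speed is fine.

  // Sketch: read one skeleton frame and derive a turn rate and speed from the arms.
  // Call NuiInitialize(NUI_INITIALIZE_FLAG_USES_SKELETON) and
  // NuiSkeletonTrackingEnable(NULL, 0) once at startup before using this.
  #include <Windows.h>
  #include <NuiApi.h>

  struct FlightCommand { float turnRate; float speed; };

  bool getFlightCommand(FlightCommand& cmd)
  {
      NUI_SKELETON_FRAME frame = {0};
      if (FAILED(NuiSkeletonGetNextFrame(0, &frame)))
          return false;                                   // no new frame yet

      for (int i = 0; i < NUI_SKELETON_COUNT; ++i)
      {
          const NUI_SKELETON_DATA& s = frame.SkeletonData[i];
          if (s.eTrackingState != NUI_SKELETON_TRACKED)
              continue;

          Vector4 lh = s.SkeletonPositions[NUI_SKELETON_POSITION_HAND_LEFT];
          Vector4 rh = s.SkeletonPositions[NUI_SKELETON_POSITION_HAND_RIGHT];
          Vector4 sp = s.SkeletonPositions[NUI_SKELETON_POSITION_SPINE];

          // One possible mapping: a height difference between the hands turns
          // the avatar, leaning the hands forward of the spine sets the speed.
          // The scale factors and the 0.1 m dead zone are arbitrary.
          cmd.turnRate = (lh.y - rh.y) * 2.0f;                // radians per second
          float lean   = sp.z - 0.5f * (lh.z + rh.z);         // metres toward the sensor
          cmd.speed    = (lean > 0.1f) ? lean * 50.0f : 0.0f; // world units per second
          return true;
      }
      return false;                                       // nobody tracked this frame
  }

In your main loop you would subtract the neutral pose captured during calibration (see the Tips below) before applying formulas like these, then integrate the turn rate and speed into the avatar's heading and position.
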
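For the stick-figure avatar and the third-person view, something along these lines should work in OSG. The cylinder sizes and the camera offset are guesses; also note that this version moves the camera, while the Tips below recommend the opposite approach (moving the world), which is sketched after that list.

  // Sketch: a stick-figure avatar built from osg::Cylinder parts, and a camera
  // placed a fixed distance behind and above it every frame (third-person view).
  // Assumes no camera manipulator is installed on the viewer.
  #include <cmath>
  #include <osg/Geode>
  #include <osg/MatrixTransform>
  #include <osg/ShapeDrawable>
  #include <osgViewer/Viewer>

  osg::ref_ptr<osg::MatrixTransform> makeAvatar()
  {
      osg::ref_ptr<osg::Geode> geode = new osg::Geode;
      // Head, torso, arms and legs as simple cylinders; sizes are placeholders.
      geode->addDrawable(new osg::ShapeDrawable(new osg::Cylinder(osg::Vec3( 0.00f, 0.0f, 1.60f), 0.12f, 0.25f))); // head
      geode->addDrawable(new osg::ShapeDrawable(new osg::Cylinder(osg::Vec3( 0.00f, 0.0f, 1.00f), 0.20f, 0.80f))); // torso
      geode->addDrawable(new osg::ShapeDrawable(new osg::Cylinder(osg::Vec3(-0.30f, 0.0f, 1.20f), 0.06f, 0.60f))); // left arm
      geode->addDrawable(new osg::ShapeDrawable(new osg::Cylinder(osg::Vec3( 0.30f, 0.0f, 1.20f), 0.06f, 0.60f))); // right arm
      geode->addDrawable(new osg::ShapeDrawable(new osg::Cylinder(osg::Vec3(-0.12f, 0.0f, 0.35f), 0.07f, 0.70f))); // left leg
      geode->addDrawable(new osg::ShapeDrawable(new osg::Cylinder(osg::Vec3( 0.12f, 0.0f, 0.35f), 0.07f, 0.70f))); // right leg

      osg::ref_ptr<osg::MatrixTransform> avatar = new osg::MatrixTransform;
      avatar->addChild(geode);   // set this transform to the avatar's pose each frame
      return avatar;
  }

  // Per frame: look at the avatar from a point behind and above it.
  void updateChaseCamera(osgViewer::Viewer& viewer,
                         const osg::Vec3& avatarPos, float headingRad)
  {
      osg::Vec3 back(-std::sin(headingRad), -std::cos(headingRad), 0.0f);
      osg::Vec3 eye = avatarPos + back * 15.0f + osg::Vec3(0.0f, 0.0f, 6.0f);
      viewer.getCamera()->setViewMatrixAsLookAt(eye, avatarPos, osg::Vec3(0, 0, 1));
  }
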
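The top-down map can be done with a second osg::Camera that draws the campus again into a small viewport, roughly like this; the viewport size and the campus extents are placeholders you would replace with your model's real bounds.

  // Sketch: a small top-down map camera in the lower-left corner of the window.
  #include <osg/Camera>

  osg::ref_ptr<osg::Camera> makeMiniMapCamera(osg::Node* campus, osg::Node* marker)
  {
      osg::ref_ptr<osg::Camera> map = new osg::Camera;
      map->setReferenceFrame(osg::Transform::ABSOLUTE_RF); // ignore the main camera
      map->setRenderOrder(osg::Camera::POST_RENDER);       // draw on top of the main view
      map->setClearMask(GL_DEPTH_BUFFER_BIT);              // keep the main view's color buffer
      map->setAllowEventFocus(false);
      map->setViewport(10, 10, 200, 200);                  // pixels; pick any corner/size

      // Orthographic projection covering the campus footprint, looking straight down.
      // Replace the +/-500 extents and the eye height with the model's real bounds.
      map->setProjectionMatrixAsOrtho(-500.0, 500.0, -500.0, 500.0, 1.0, 2000.0);
      map->setViewMatrixAsLookAt(osg::Vec3(0.0, 0.0, 1000.0),
                                 osg::Vec3(0.0, 0.0, 0.0),
                                 osg::Vec3(0.0, 1.0, 0.0));

      map->addChild(campus);   // the same campus subgraph, rendered a second time
      map->addChild(marker);   // e.g. a red osg::Sphere moved to the avatar's x/y
      return map;
  }

Add the returned camera to the scene root next to the main scene, and update the marker's transform from the avatar's position every frame.
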
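For the arrival check and the timer, a simple distance test against known waypoints plus osg's timer is enough. The coordinates and the 30-unit radius below are made-up placeholders; read the real positions of EBU1 and the Price Center out of the campus model.

  // Sketch: detect arrival near EBU1 / Price Center and time the flight.
  #include <sstream>
  #include <osg/Timer>
  #include <osg/Vec3>
  #include <osgText/Text>

  static const osg::Vec3 EBU1_POS (0.0f, 0.0f, 0.0f);   // placeholder, take from the model
  static const osg::Vec3 PRICE_POS(0.0f, 0.0f, 0.0f);   // placeholder, take from the model
  static const float ARRIVAL_RADIUS = 30.0f;            // world units, tune to the model

  osg::Timer_t startTick = 0;   // set to osg::Timer::instance()->tick() when flight starts

  bool arrivedAt(const osg::Vec3& avatarPos, const osg::Vec3& target)
  {
      osg::Vec3 d = avatarPos - target;
      d.z() = 0.0f;                            // ignore altitude in the check
      return d.length() < ARRIVAL_RADIUS;
  }

  // Call once per frame; 'hud' is an osgText::Text under a screen-space camera.
  void updateHud(osgText::Text* hud, const osg::Vec3& avatarPos)
  {
      std::ostringstream msg;
      if (arrivedAt(avatarPos, PRICE_POS))
      {
          double seconds = osg::Timer::instance()->delta_s(
              startTick, osg::Timer::instance()->tick());
          msg << "Landed at the Price Center in " << seconds << " s";
      }
      else if (arrivedAt(avatarPos, EBU1_POS))
      {
          msg << "You reached EBU1 - now head to the Price Center!";
      }
      hud->setText(msg.str());
  }
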
Tips:

  • At the beginning of the application you should have a calibration function which registers a neutral position for the user. This is needed to interpret the navigation gestures more accurately.
  • You don't need to check for collisions with other buildings, i.e., the user can fly through other buildings or the ground as if they were not there.
  • If you're adventurous you can use quaternions (osg::Quat) to control the avatar's orientation. This avoids problems with gimbal lock.
  • You'll need to decide how to implement navigation: move the camera around a fixed world, or move the world in front of a fixed camera. We recommend moving the world and leaving the camera in a fixed place; see the sketch after these tips.
  • Besides skeleton extraction, the Kinect can parse voice commands. While you shouldn't use voice commands for any of the tasks required in this project, you are welcome to play around with them and add useful additional commands, such as "restart" which would put the player back to the starting point, etc.
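
Putting the calibration, quaternion, and moving-world tips together, the navigation can be done by keeping the camera fixed and updating a single transform above the campus model. A rough sketch; the globals and helper names are only illustrative:

  // Sketch: fixed camera, moving world.  The whole campus hangs under one
  // osg::MatrixTransform whose matrix is the inverse of the avatar's pose.
  #include <osg/MatrixTransform>
  #include <osg/Quat>

  osg::ref_ptr<osg::MatrixTransform> gWorld;   // parent of the campus model
  osg::Vec3 gNeutralLean;                      // captured once during calibration

  // Calibration: record the user's relaxed pose (e.g. averaged over a second)
  // and subtract it from later measurements before computing flight commands.
  void calibrate(const osg::Vec3& currentLean)
  {
      gNeutralLean = currentLean;
  }

  // Per frame: heading as a quaternion about the up axis (no gimbal lock),
  // then translate; the inverse moves the world instead of the camera.
  void updateWorld(float headingRad, const osg::Vec3& avatarPos)
  {
      osg::Quat heading(headingRad, osg::Vec3(0.0f, 0.0f, 1.0f));
      osg::Matrix avatarToWorld = osg::Matrix::rotate(heading) *
                                  osg::Matrix::translate(avatarPos);
      gWorld->setMatrix(osg::Matrix::inverse(avatarToWorld));
  }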

Extra Credit (10 Points)

You can earn up to 10 points of extra credit, in two parts. First, display the RGB image the Kinect sees in a corner of the screen, mirrored so that it looks like a mirror image of the user (5 points).

Superimposed on this image, display the skeleton the Kinect driver extracts (5 points). A sketch of the mirrored image is given below.
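
For the mirror image, one option is to copy each Kinect color frame into an osg::Image, flip it horizontally, and show it on a small textured quad under a screen-space camera (the same kind used for the mini-map). The sketch below follows the Kinect SDK 1.x C++ color-stream pattern and assumes the stream was opened at 640x480 with NuiImageStreamOpen; double-check the calls against the SDK version you install. For the skeleton overlay, NuiTransformSkeletonToDepthImage converts each tracked joint into 2D image coordinates that you can scale to the color image and draw as small markers or line segments on top of the quad.

  // Sketch: copy one Kinect color frame into an osg::Image and mirror it.
  // gColorStream comes from NuiImageStreamOpen(NUI_IMAGE_TYPE_COLOR,
  // NUI_IMAGE_RESOLUTION_640x480, 0, 2, NULL, &gColorStream) at startup.
  #include <cstring>
  #include <Windows.h>
  #include <NuiApi.h>
  #include <osg/Image>
  #include <osg/Texture2D>   // bind gColorImage to a Texture2D on the quad

  HANDLE gColorStream = NULL;
  osg::ref_ptr<osg::Image> gColorImage = new osg::Image;

  void updateColorImage()
  {
      NUI_IMAGE_FRAME frame;
      if (FAILED(NuiImageStreamGetNextFrame(gColorStream, 0, &frame)))
          return;                                   // no new color frame yet

      NUI_LOCKED_RECT rect;
      frame.pFrameTexture->LockRect(0, &rect, NULL, 0);

      // 640x480 32-bit BGRA pixels (define GL_BGRA as 0x80E1 if your GL headers lack it).
      if (gColorImage->data() == NULL)
          gColorImage->allocateImage(640, 480, 1, GL_BGRA, GL_UNSIGNED_BYTE);
      std::memcpy(gColorImage->data(), rect.pBits, 640 * 480 * 4);
      gColorImage->flipHorizontal();                // mirror left/right
      gColorImage->dirty();                         // tell OSG to re-upload the texture

      frame.pFrameTexture->UnlockRect(0);
      NuiImageStreamReleaseFrame(gColorStream, &frame);
  }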

Useful Links