Homework4W14

From Immersive Visualization Lab Wiki
Revision as of 02:49, 22 February 2014 by Jschulze


Homework Assignment 4: Final Project

For this assignment you can obtain 100 points, plus up to 10 points for extra credit.

This homework assignment is due on March 20th, 2014 at 3pm in lab 260. Matteo and Thinh are going to have a Q&A session on Wednesday, February 26th at 4pm in lab 260.

The goal of this project is to create a 3D user interface for any one, or a combination, of the three input devices used in this course (Kinect, Hydra, Leap). The user interface needs to implement at least one instance of each of the following universal 3D interaction tasks discussed in class: selection, manipulation, navigation, wayfinding, travel, and system control. Your application may not use any input devices other than those mentioned above; specifically, you are not allowed to use the keyboard or mouse.

If you decide to use the Razer Hydra, your application needs to use no more than one button on each controller. This is to encourage you to make the application easier to use.

You are welcome to come up with any 3D interaction task that you get approved by the course staff.

If you do not come up with your own task, the following default task applies.

Default User Interaction Task

Implement a simple software tool for 3D modeling. The task is to model in 3D the scene shown in an image of your choice. A good way to find suitable images is a Google image search: searching for "still life" yields thousands of results. Pick an image of a 3D scene that contains multiple 3D objects.

Write a 3D modeling tool that allows the user to model the objects in the picture. It is sufficient to offer a few of the standard OSG shapes, as long as their sizes can be changed.

You will need to implement a grouping function, which will group multiple objects together, so that they can be moved around as a unit.

You are encouraged to use textures as images on the 3D geometry you create, but only if the texture was either selected from a list of stored textures, or if the user draws it within the application.

Have a movable point (or spot) light source in the scene that the user can move around; this light source needs to be visually represented somehow, for instance by drawing a sphere in its location.

You need to use a physics engine.

You need to support shadows. OSG has its own shadow rendering feature. Shadows help tremendously with the placement of 3D shapes in a 3D world, if no actual 3D display is used.

You need to have camera (or other navigation) controls, so that the user can place and orient the objects to match their appearance in the picture.

Besides geometry creation and material property functions, another useful function is a replication function to duplicate an object or group of objects. Implement it.

Grading

Besides the implementation of the demo program, this project requires weekly blog updates with at least one paragraph with a progress report, and at least one picture of the state of the program. The deadlines for the blog updates are on Wednesday evenings at 11:59pm. Updates are expected on the following dates: 2/26, 3/5, 3/12, 3/19. The first blog update on 2/26 needs to contain a link to the picture you are going to model in 3D. The last blog update on 3/19 needs to contain a picture showing the 3D-modeled scene representing the image picked in the first week.

The grade for this project will consist of the following components:

Grade:

  • 10% blog
  • 60% technical score
  • 20% creativity score
  • 10% ease of learning

The blog gets graded by how complete it is. Technical and creativity score will be determined by the course staff's objective and subjective opinion of the interface. Ease of learning represents how easy it is to learn the controls. This will be decided by an independent jury member who is familiar with 3D user interfaces in general, but not with your interface specifically.

Tips:

  • Link to the video of the MakeVR fire hydrant as an example, including grouping and camera placement.


Optional Work

You can achieve extra credit for the following things:

  • Camera path editor: an editor to define a series of camera positions/orientations, plus playback of smooth camera motion along the path by interpolating between the camera positions.

  • Save function: allow saving your 3D model to a file on disk, and demonstrate that it can be loaded back in later, with the same editability as before.