Project1S16


Note: this is a preliminary description of the project, so you can get started on it. The final version of the project will go online by 3/31 at 2 pm and will also be discussed in class.

We recommend that you start by getting your development software ready. More information on this is here.


The first homework project is aimed at getting you familiar with the tools and libraries that will be your friends throughout the quarter. These include GLFW and GLEW. You are free to use any other graphics or window management library; however, you must contact us first for approval, as we want to make sure that you are not using a library that does too much of the homework for you.

For the first homework project, you will be:

   Writing a parser for the .obj file extension (more info to come)
   Rendering the 3D object defined by the .obj file
   Manipulating the object (translation (moving), scaling, rotating (orbiting), etc.) using the keyboard (see the sketch after this list), and
   Implementing your own software rasterizer, which will be discussed next week.
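
To illustrate what is meant by manipulating the object, here is a minimal sketch of building a model matrix from a translation, a scale, and an orbit angle. It assumes the GLM math library and made-up parameter names (neither is required; any 4x4 matrix code works):

 #include <glm/glm.hpp>
 #include <glm/gtc/matrix_transform.hpp>
 
 // Build a model matrix from the current translation, scale and orbit angle.
 // These values would typically be updated by your keyboard handlers.
 glm::mat4 buildModelMatrix(const glm::vec3& translation,
                            float scale,
                            float orbitAngleDegrees)
 {
     glm::mat4 model(1.0f);
     // Applied right to left: scale, then translate, then orbit about the origin.
     model = glm::rotate(model, glm::radians(orbitAngleDegrees),
                         glm::vec3(0.0f, 1.0f, 0.0f));
     model = glm::translate(model, translation);
     model = glm::scale(model, glm::vec3(scale));
     return model;
 }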

The OBJ file format

For the purposes of keeping your sanity intact, all .obj files we provide will only contain information about vertices (lines starting with v), normals (lines starting with vn), and faces (lines starting with f). You won't need to worry about normals and faces until Project 2, but for those curious, they are very important when dealing with shading (lighting, colors, that kind of stuff).

The general format for vertex lines is:

v v_x v_y v_z r g b

Where v_x, v_y, v_z are the vertex x, y, and z coordinates and are strictly floats.

The values r, g, b define the color of the vertex, and are optional (i.e. they will be missing from most files). Like the vertex coordinates, they are strictly floats, and can only take on values between 0.0 and 1.0.

All values are delimited by a single whitespace character.
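
As a sketch of how a single vertex line could be parsed (the struct and function names are made up for illustration, and the default color is an assumption), sscanf's return value tells you whether the optional color was present:

 #include <cstdio>
 
 struct Vertex { float x, y, z, r, g, b; };
 
 // 'line' holds one line of the file that is known to start with "v ".
 // sscanf returns how many values it converted, so we can tell whether
 // the optional r g b color was present.
 Vertex parseVertexLine(const char* line)
 {
     Vertex v;
     int n = std::sscanf(line, "v %f %f %f %f %f %f",
                         &v.x, &v.y, &v.z, &v.r, &v.g, &v.b);
     if (n < 6)
         v.r = v.g = v.b = 1.0f;   // no color given: default to white (an assumption)
     return v;
 }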

The general format for normal lines is the same as for vertex lines, minus the color info.

The general format for face lines in the files you will be working with this quarter is

f v1 v2 v3

or

f v1//n1 v2//n2 v3//n3

Where v1, v2, v3 are the indices of the vertices (i.e., the order in which they appear in the .obj file, so the 5th line that starts with v has index 5), and n1, n2, n3 are the indices of the normals. If the n1, n2, n3 values are missing, the normal indices are assumed to be the same as the vertex indices. Please note that OBJ indexing starts at 1, not 0.
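
A minimal sketch of parsing such a line (illustrative names only; note the conversion from OBJ's 1-based indices to the 0-based indices you will use for C++ arrays):

 #include <cstdio>
 
 struct Face { unsigned int v[3]; unsigned int n[3]; };
 
 // 'line' holds one line of the file that is known to start with "f ".
 Face parseFaceLine(const char* line)
 {
     Face f;
     unsigned int v0, v1, v2, n0, n1, n2;
     if (std::sscanf(line, "f %u//%u %u//%u %u//%u",
                     &v0, &n0, &v1, &n1, &v2, &n2) != 6) {
         // Fall back to "f v1 v2 v3": normal indices equal the vertex indices.
         std::sscanf(line, "f %u %u %u", &v0, &v1, &v2);
         n0 = v0; n1 = v1; n2 = v2;
     }
     // OBJ indexing starts at 1, so subtract 1 for 0-based storage.
     f.v[0] = v0 - 1; f.v[1] = v1 - 1; f.v[2] = v2 - 1;
     f.n[0] = n0 - 1; f.n[1] = n1 - 1; f.n[2] = n2 - 1;
     return f;
 }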

You can find more info about the .obj file format on Wikipedia. Again, you are only responsible for the v, vn, and f lines; however, being able to parse other types of lines can be very helpful, as extra credit portions of the homework may involve more complicated .obj objects.


Reading 3D Points from Files

A point cloud is a collection of 3D points representing a 3D object. These point clouds are often acquired by laser scanning, but can also be acquired with the Microsoft Kinect and special software, or by processing a large number of photographs of an object with Structure from Motion techniques (see Microsoft Photosynth or Autodesk 123D Catch).

In this project we're going to render the points defined in OBJ files. Note that OBJ files are normally used to define polygonal models, but for now we're using only the vertex definitions to render points, ignoring all connectivity data.

Write a parser for the vertices and normals defined in OBJ files. It should be a simple 'for' loop in which you read each line from the file, for instance with the fscanf command. Your parser does not need to do any error handling - you can assume that the files do not contain errors. Add the parser to the starter code.
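
A minimal sketch of such a loop, assuming the vertices and normals go into std::vector containers and that GLM is available for the vec3 type (both assumptions; plain structs work just as well):

 #include <cstdio>
 #include <vector>
 #include <glm/glm.hpp>
 
 std::vector<glm::vec3> vertices, normals;
 
 void parseOBJ(const char* filename)
 {
     FILE* fp = std::fopen(filename, "r");
     if (!fp) return;                                  // no error handling required
     char tag[16];
     while (std::fscanf(fp, "%15s", tag) == 1) {
         float x, y, z;
         if (tag[0] == 'v' && tag[1] == '\0') {        // vertex line
             std::fscanf(fp, "%f %f %f", &x, &y, &z);
             vertices.push_back(glm::vec3(x, y, z));
         } else if (tag[0] == 'v' && tag[1] == 'n') {  // normal line
             std::fscanf(fp, "%f %f %f", &x, &y, &z);
             normals.push_back(glm::vec3(x, y, z));
         }
         // Skip the rest of the line (optional colors, face lines, comments).
         int c;
         while ((c = std::fgetc(fp)) != '\n' && c != EOF) {}
     }
     std::fclose(fp);
 }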

Use your parser to load the vertices from the following OBJ files and treat them as point clouds:

Rendering 3D Points

Next you need to write code to display the point cloud, WITHOUT USING OPENGL. This is important: in this project we're not going to use OpenGL to render, but instead we'll render into a block of memory which the starter code displays (using OpenGL).
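
As a rough illustration of what rendering into a block of memory looks like, the sketch below transforms one vertex to window coordinates by hand and writes a color into an RGB float buffer. The buffer name, size, and layout here are assumptions; use whatever the starter code actually exposes:

 #include <vector>
 #include <glm/glm.hpp>
 
 // Assumed framebuffer: width x height RGB floats, row major, displayed by the starter code.
 const int width = 512, height = 512;
 std::vector<float> pixels(width * height * 3, 0.0f);
 
 void drawPoint(int x, int y, float r, float g, float b)
 {
     if (x < 0 || x >= width || y < 0 || y >= height) return;   // clip to the window
     int offset = (y * width + x) * 3;
     pixels[offset + 0] = r;
     pixels[offset + 1] = g;
     pixels[offset + 2] = b;
 }
 
 void rasterizePoint(const glm::vec3& v, const glm::mat4& projection,
                     const glm::mat4& modelView)
 {
     // Transform to clip space and do the perspective divide ourselves.
     glm::vec4 p = projection * modelView * glm::vec4(v, 1.0f);
     if (p.w <= 0.0f) return;                                   // behind the camera
     p /= p.w;
     // Viewport transform: normalized device coordinates [-1,1] to window coordinates.
     int x = (int)((p.x * 0.5f + 0.5f) * width);
     int y = (int)((p.y * 0.5f + 0.5f) * height);
     drawPoint(x, y, 1.0f, 1.0f, 1.0f);                         // white point
 }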

Add support for keyboard commands to switch between the point models.
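
One way to do this with GLFW is a key callback that changes which model is drawn; a minimal sketch (the model count and index variable are made-up placeholders for whatever your code uses):

 #include <GLFW/glfw3.h>
 
 int currentModel = 0;        // index of the point cloud currently being shown
 const int numModels = 3;     // assumed number of loaded OBJ files
 
 void keyCallback(GLFWwindow* window, int key, int scancode, int action, int mods)
 {
     if (action != GLFW_PRESS) return;
     if (key == GLFW_KEY_1) currentModel = 0;
     if (key == GLFW_KEY_2) currentModel = 1;
     if (key == GLFW_KEY_3) currentModel = 2;
 }
 
 // Register the callback once, right after creating the window:
 //   glfwSetKeyCallback(window, keyCallback);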