Real-Time Meshing of Dynamic Point Clouds

From Immersive Visualization Lab Wiki

Surface Reconstruction

Surface reconstruction from point clouds is generally a two-stage process. First, a real-world object is scanned with a 3D acquisition device, producing a set of 3D points that approximates its surface. In a second stage, the points are connected into a triangle mesh to obtain a 3D model of the object. A difficulty is that data collection and reconstruction happen separately: if the initial scan was incomplete, holes in the geometry only become visible after reconstruction. A system in which scanning and reconstruction occur simultaneously in real time assists the scanning process, because holes in the geometry are revealed during scanning rather than afterward.

Real-time Meshing

The challenge then lies in triangulating, in real time, a point cloud that grows over time as the object is scanned. Most reconstruction algorithms assume the entire point cloud is available and process it in a single batch. To achieve real-time performance, our algorithm should instead re-triangulate only the portions of the cloud that have changed.
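One way to track which portions of a growing cloud need re-triangulation is a uniform grid of "dirty" cells. The sketch below is an illustrative assumption, not the project's actual data structure: when new points arrive, their containing cells and the immediate neighbors are marked stale (triangles near a cell boundary depend on points in adjacent cells), and each frame only the stale cells are re-meshed.

```python
# Sketch of incremental re-triangulation bookkeeping: a uniform grid
# tracks which cells received new points, so only those cells (and
# their neighbors) are re-meshed each frame. The cell size and the
# one-cell neighbor radius are illustrative assumptions.

class DirtyGrid:
    def __init__(self, cell_size=0.05):
        self.cell_size = cell_size
        self.dirty = set()  # cells whose triangulation is stale

    def _cell_of(self, p):
        return tuple(int(c // self.cell_size) for c in p)

    def add_points(self, points):
        """Insert newly scanned points and mark affected cells dirty."""
        for p in points:
            cx, cy, cz = self._cell_of(p)
            # Mark the containing cell and its 26 neighbors: triangles
            # near a cell boundary depend on points in adjacent cells.
            for dx in (-1, 0, 1):
                for dy in (-1, 0, 1):
                    for dz in (-1, 0, 1):
                        self.dirty.add((cx + dx, cy + dy, cz + dz))

    def cells_to_remesh(self):
        """Return and clear the set of stale cells for this frame."""
        cells, self.dirty = self.dirty, set()
        return cells

grid = DirtyGrid(cell_size=1.0)
grid.add_points([(0.5, 0.5, 0.5)])
stale = grid.cells_to_remesh()  # 27 cells: the containing cell + 26 neighbors
```

Marking neighbors as well as the containing cell trades a little redundant work for correctness at cell boundaries; the set clears itself each frame so unchanged regions cost nothing.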

Marching Cubes

We are currently using a CUDA implementation of the highly parallelizable Marching Cubes algorithm. A simple grid data structure isolates the areas where the point cloud has changed and the triangulation needs to be updated. With this approach we achieve real-time performance on 3D point cloud data obtained from the Kinect depth camera.
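The core per-cell step of Marching Cubes can be sketched briefly. Each grid cell is classified by an 8-bit case index built from the sign of the scalar field at its eight corners; that index selects the cell's triangulation from a precomputed 256-entry table (omitted here). This is a minimal illustrative sketch, not the project's CUDA code, and the corner ordering is one conventional choice that a real implementation must match to its triangle table.

```python
# Minimal sketch of the Marching Cubes classification step: sample the
# scalar field at a cell's 8 corners and compare against the isovalue
# to build the 8-bit case index that would index the triangle table
# (the 256-entry table itself is omitted).

# Corner offsets of a unit cell; corner i contributes bit i of the index.
CORNERS = [(0, 0, 0), (1, 0, 0), (1, 1, 0), (0, 1, 0),
           (0, 0, 1), (1, 0, 1), (1, 1, 1), (0, 1, 1)]

def cell_case_index(field, x, y, z, isovalue=0.0):
    """Classify one cell: set bit i if corner i lies inside the surface."""
    index = 0
    for i, (dx, dy, dz) in enumerate(CORNERS):
        if field(x + dx, y + dy, z + dz) < isovalue:
            index |= 1 << i
    return index

# Example: a field that is negative (inside) only at the origin corner.
f = lambda x, y, z: -1.0 if (x, y, z) == (0, 0, 0) else 1.0
cell_case_index(f, 0, 0, 0)  # -> 1: only corner 0 is inside
```

Because every cell is classified independently, this step maps naturally onto the GPU: one CUDA thread per cell, which is what makes the algorithm so parallelizable.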

[Image: Marching Cubes reconstruction (Marchingcubeslarge.png)]

Future Work

To extract the isosurface from the point cloud, we use a simple nearest-tangent-plane approximation to the surface. However, we have found that for noisy data the estimated tangent plane becomes ambiguous in some cases. Possible future work involves using a more robust method for extracting the isosurface.
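A common way to realize the tangent-plane approximation is to fit a plane to a point's k nearest neighbors by PCA and take the signed distance to that plane as the scalar field value. The sketch below is an assumption about how such an estimate might look, not the project's implementation; the neighbor count `k` and the brute-force neighbor search are illustrative choices. It also makes the noise ambiguity concrete: when the two smallest covariance eigenvalues are close, the plane normal is poorly determined.

```python
# Sketch of tangent-plane estimation for the scalar field: fit a plane
# to the k nearest neighbors of a query point via PCA, then return the
# signed distance from the query to that plane. With noisy data the
# smallest covariance eigenvalue can be nearly tied with the next one,
# which is exactly the tangent-plane ambiguity described above.
import numpy as np

def signed_distance(query, cloud, k=8):
    """Signed distance from `query` to the tangent plane of its k-NN."""
    # Brute-force k nearest neighbors (a k-d tree would be used in practice).
    d2 = np.sum((cloud - query) ** 2, axis=1)
    nbrs = cloud[np.argsort(d2)[:k]]

    centroid = nbrs.mean(axis=0)
    # PCA: the eigenvector of the smallest covariance eigenvalue is the
    # plane normal; a small eigenvalue gap means the normal is unstable.
    cov = np.cov((nbrs - centroid).T)
    eigvals, eigvecs = np.linalg.eigh(cov)  # eigenvalues in ascending order
    normal = eigvecs[:, 0]

    return float(np.dot(query - centroid, normal))

# Example: points on the z = 0 plane; a query one unit above it gets
# distance +/-1 (the PCA normal's sign is arbitrary without a separate
# orientation step, another known difficulty of this approach).
pts = np.array([[x, y, 0.0] for x in range(4) for y in range(4)])
d = signed_distance(np.array([1.5, 1.5, 1.0]), pts)
```

The unresolved normal sign and the instability under noise are why more robust isosurface extraction is listed as future work.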