Large-Scale Multi-resolution Surface Reconstruction from RGB-D Sequences

Frank Steinbrucker, Christian Kerl, Daniel Cremers; Proceedings of the IEEE International Conference on Computer Vision (ICCV), 2013, pp. 3264-3271

Abstract


We propose a method to generate highly detailed, textured 3D models of large environments from RGB-D sequences. Our system runs in real-time on a standard desktop PC with a state-of-the-art graphics card. To reduce the memory consumption, we fuse the acquired depth maps and colors in a multi-scale octree representation of a signed distance function. To estimate the camera poses, we construct a pose graph and use dense image alignment to determine the relative pose between pairs of frames. We add edges between nodes when we detect loop closures and optimize the pose graph to correct for long-term drift. Our implementation is highly parallelized on graphics hardware to achieve real-time performance. More specifically, we can reconstruct, store, and continuously update a colored 3D model of an entire corridor of nine rooms at high levels of detail in real-time on a single GPU with 2.5 GB of memory.
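
The core data structure is a truncated signed distance function (TSDF) fused from the incoming depth and color frames. The Python sketch below illustrates that fusion step in a heavily simplified form: it uses a flat sparse voxel dictionary in place of the paper's multi-scale octree and plain per-pixel loops in place of the GPU-parallel implementation. The class name, parameter values, and update rule are illustrative assumptions, not the authors' code.

import numpy as np

class SparseTSDF:
    """Sparse truncated signed distance function keyed by integer voxel index.

    Simplified, unoptimized sketch: the paper stores the SDF in a multi-scale
    octree and runs the integration in parallel on the GPU.
    """
    def __init__(self, voxel_size=0.02, truncation=0.08):
        self.voxel_size = voxel_size
        self.truncation = truncation
        self.sdf = {}      # (i, j, k) -> weighted running signed distance
        self.weight = {}   # (i, j, k) -> accumulated integration weight
        self.color = {}    # (i, j, k) -> weighted running RGB average

    def fuse(self, depth, rgb, K, T_wc):
        """Integrate one depth/RGB frame with intrinsics K and camera-to-world pose T_wc."""
        fx, fy, cx, cy = K[0, 0], K[1, 1], K[0, 2], K[1, 2]
        cam_center = T_wc[:3, 3]

        # Back-project every valid depth pixel to a 3D point in world coordinates.
        v, u = np.nonzero(depth > 0)
        z = depth[v, u]
        pts_c = np.stack([(u - cx) * z / fx, (v - cy) * z / fy, z], axis=1)
        pts_w = pts_c @ T_wc[:3, :3].T + cam_center

        # For each observed surface point, update the voxels in a +-truncation
        # band along the viewing ray (instead of raycasting the whole volume).
        for p_w, col in zip(pts_w, rgb[v, u]):
            ray = p_w - cam_center
            ray = ray / np.linalg.norm(ray)
            for offset in np.arange(-self.truncation, self.truncation, self.voxel_size):
                q = p_w + offset * ray
                key = tuple(np.floor(q / self.voxel_size).astype(int))
                # Signed distance along the ray, positive in front of the surface.
                d = float(np.clip(-offset, -self.truncation, self.truncation))
                w_old = self.weight.get(key, 0.0)
                self.sdf[key] = (self.sdf.get(key, 0.0) * w_old + d) / (w_old + 1.0)
                self.color[key] = (self.color.get(key, np.zeros(3)) * w_old + col) / (w_old + 1.0)
                self.weight[key] = w_old + 1.0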

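The camera poses come from a pose graph whose edges carry relative poses estimated by dense image alignment, with extra edges added at detected loop closures before the graph is optimized to remove accumulated drift. The toy example below sketches only that last optimization step under strong simplifying assumptions: it optimizes camera translations by linear least squares, whereas the paper optimizes full 6-DoF poses; all names and measurement values are made up for illustration.

import numpy as np

def optimize_translations(num_poses, edges, fix_first=True):
    """Least-squares pose-graph optimization over translations only.

    edges: list of (i, j, delta) constraints meaning t_j - t_i ~ delta,
    where delta would come from pairwise dense image alignment.
    """
    dim = 3
    rows, rhs = [], []
    for i, j, delta in edges:
        row = np.zeros((dim, num_poses * dim))
        row[:, j * dim:(j + 1) * dim] = np.eye(dim)
        row[:, i * dim:(i + 1) * dim] = -np.eye(dim)
        rows.append(row)
        rhs.append(np.asarray(delta, dtype=float))
    if fix_first:
        # Gauge constraint: anchor the first pose at the origin.
        row = np.zeros((dim, num_poses * dim))
        row[:, :dim] = np.eye(dim) * 1e3
        rows.append(row)
        rhs.append(np.zeros(dim))
    A = np.vstack(rows)
    b = np.concatenate(rhs)
    t, *_ = np.linalg.lstsq(A, b, rcond=None)
    return t.reshape(num_poses, dim)

# Sequential odometry edges plus one loop-closure edge (0, 3) whose
# measurement is slightly inconsistent with the composed odometry.
edges = [(0, 1, [1.0, 0, 0]), (1, 2, [0, 1.0, 0]),
         (2, 3, [-1.0, 0, 0]), (0, 3, [0, 1.05, 0])]
print(optimize_translations(4, edges))

Running the example shows the loop-closure edge distributing the small inconsistency over the whole trajectory rather than letting it accumulate at the final pose, which is the drift-correction effect described in the abstract.
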
Related Material


[pdf]
[bibtex]
@InProceedings{Steinbrucker_2013_ICCV,
author = {Steinbrucker, Frank and Kerl, Christian and Cremers, Daniel},
title = {Large-Scale Multi-resolution Surface Reconstruction from RGB-D Sequences},
booktitle = {Proceedings of the IEEE International Conference on Computer Vision (ICCV)},
month = {December},
year = {2013}
}