Understanding Everyday Hands in Action From RGB-D Images

Gregory Rogez, James S. Supancic III, Deva Ramanan; Proceedings of the IEEE International Conference on Computer Vision (ICCV), 2015, pp. 3889-3897

Abstract

We analyze functional manipulations of handheld objects, formalizing the problem as one of fine-grained grasp classification. To do so, we make use of a recently developed fine-grained taxonomy of human-object grasps. We introduce a large dataset of 12,000 RGB-D images covering 71 everyday grasps in natural interactions. Our dataset differs from past work (typically addressed from a robotics perspective) in its scale, diversity, and combination of RGB and depth data. From a computer-vision perspective, our dataset allows for exploration of contact and force prediction (crucial concepts in functional grasp analysis) from perceptual cues. We present extensive experimental results with state-of-the-art baselines, illustrating the role of segmentation, object context, and 3D understanding in functional grasp analysis. We demonstrate a nearly 2X improvement over prior work and a naive deep baseline, while pointing out important directions for improvement.
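
As an illustration of the classification task described above, the following is a minimal sketch (not the authors' method) of a "naive deep baseline" for 71-way grasp classification on RGB-D crops. It assumes PyTorch and torchvision are available; the network choice, input size, and 4-channel stem are illustrative assumptions, not details taken from the paper.

import torch
import torch.nn as nn
from torchvision import models

NUM_GRASPS = 71  # everyday grasp classes in the taxonomy used by the paper

class RGBDGraspNet(nn.Module):
    """Hypothetical baseline: an off-the-shelf CNN adapted to 4-channel RGB-D input."""
    def __init__(self, num_classes=NUM_GRASPS):
        super().__init__()
        backbone = models.resnet18()  # randomly initialized; RGB-pretrained weights could also be adapted
        # Swap the stem so it accepts 4-channel RGB-D crops instead of 3-channel RGB.
        backbone.conv1 = nn.Conv2d(4, 64, kernel_size=7, stride=2, padding=3, bias=False)
        # Replace the classifier head with a 71-way grasp output.
        backbone.fc = nn.Linear(backbone.fc.in_features, num_classes)
        self.backbone = backbone

    def forward(self, x):          # x: (batch, 4, H, W) hand-centered RGB-D crop
        return self.backbone(x)    # logits over the 71 grasp classes

if __name__ == "__main__":
    model = RGBDGraspNet()
    dummy = torch.randn(2, 4, 224, 224)   # two synthetic RGB-D crops
    print(model(dummy).shape)             # torch.Size([2, 71])

Such a per-crop classifier ignores the segmentation, object context, and 3D cues that the paper highlights, which is why it serves only as a naive point of comparison.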

Related Material

[pdf]
[bibtex]
@InProceedings{Rogez_2015_ICCV,
author = {Rogez, Gregory and Supancic, III, James S. and Ramanan, Deva},
title = {Understanding Everyday Hands in Action From RGB-D Images},
booktitle = {Proceedings of the IEEE International Conference on Computer Vision (ICCV)},
month = {December},
year = {2015}
}