Articulated Pose Estimation Using Discriminative Armlet Classifiers

Georgia Gkioxari, Pablo Arbelaez, Lubomir Bourdev, Jitendra Malik; The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2013, pp. 3342-3349


We propose a novel approach for human pose estimation in real-world cluttered scenes, focusing on the challenging problem of predicting the pose of both arms for each person in the image. For this purpose, we build on the notion of poselets [4] and train highly discriminative classifiers to differentiate among arm configurations, which we call armlets. We propose a rich representation that, in addition to standard HOG features, integrates strong contour, skin color, and contextual cues in a principled manner. Unlike existing methods, we evaluate our approach on a large subset of images from the PASCAL VOC detection dataset, where critical visual phenomena, such as occlusion, truncation, multiple instances, and clutter, are the norm. On this new pose estimation dataset, our approach outperforms the state-of-the-art technique of Yang and Ramanan [26], improving PCP accuracy on the arm keypoint prediction task from 29.0% to 37.5%.
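The abstract reports results as PCP (Percentage of Correct Parts) accuracy. The sketch below is not the authors' evaluation code; it is a minimal illustration of the commonly used strict PCP criterion, under which a limb counts as correctly localized when both of its predicted endpoints lie within half the ground-truth limb length of the corresponding ground-truth endpoints. The function name `pcp` and the array layout are assumptions for illustration.

```python
import numpy as np

def pcp(pred, gt, alpha=0.5):
    """Strict PCP: a limb is correct when both predicted endpoints lie
    within alpha * (ground-truth limb length) of the true endpoints.

    pred, gt: arrays of shape (n_limbs, 2, 2) -- (limb, endpoint, xy).
    Returns the fraction of correctly localized limbs.
    """
    pred = np.asarray(pred, dtype=float)
    gt = np.asarray(gt, dtype=float)
    # Ground-truth length of each limb (distance between its endpoints).
    limb_len = np.linalg.norm(gt[:, 0] - gt[:, 1], axis=1)
    # Euclidean error at each endpoint of each limb.
    err = np.linalg.norm(pred - gt, axis=2)
    # A limb is correct only if BOTH endpoints are within the threshold.
    correct = (err <= alpha * limb_len[:, None]).all(axis=1)
    return correct.mean()
```

For example, a single limb of ground-truth length 2 tolerates endpoint errors up to 1 pixel at `alpha=0.5`; variants of PCP differ in the threshold and in whether the average or per-endpoint error is used.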

Related Material

@InProceedings{Gkioxari_2013_CVPR,
  author    = {Gkioxari, Georgia and Arbelaez, Pablo and Bourdev, Lubomir and Malik, Jitendra},
  title     = {Articulated Pose Estimation Using Discriminative Armlet Classifiers},
  booktitle = {The IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
  month     = {June},
  year      = {2013}
}