Watch-n-Patch: Unsupervised Understanding of Actions and Relations

Chenxia Wu, Jiemi Zhang, Silvio Savarese, Ashutosh Saxena; Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2015, pp. 4362-4370

Abstract


We focus on modeling human activities comprising multiple actions in a completely unsupervised setting. Our model learns the high-level action co-occurrence and the temporal relations between actions in an activity video. We consider the video as a sequence of short-term action clips, called action-words, and an activity as a set of action-topics indicating which actions are present in the video. We then propose a new probabilistic model relating the action-words and the action-topics. It allows us to model long-range action relations that commonly exist in complex activities, which is challenging to capture in previous works. We apply our model to unsupervised action segmentation and recognition, and also to a novel application that detects forgotten actions, which we call action patching. For evaluation, we also contribute a new challenging RGB-D activity video dataset recorded by the new Kinect v2, which contains several daily human activities composed of multiple actions interacting with different objects. Extensive experiments show the effectiveness of our model.
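The abstract describes representing a video as a sequence of action-words and relating them to latent action-topics. As a rough illustration of this bag-of-action-words idea only, here is a minimal collapsed Gibbs sampler for a plain topic model; this is a generic LDA-style sketch, not the paper's actual model, which additionally captures co-occurrence and long-range temporal relations between actions. All names (`lda_gibbs`, the toy `docs`) are hypothetical.

```python
import random

def lda_gibbs(docs, n_topics, vocab_size, n_iters=200, alpha=0.1, beta=0.1, seed=0):
    """Assign a latent topic (action-topic) to every token (action-word).

    docs: list of documents, each a list of integer word ids.
    Returns the per-token topic assignments and per-document topic counts.
    """
    rng = random.Random(seed)
    ndk = [[0] * n_topics for _ in docs]              # doc-topic counts
    nkw = [[0] * vocab_size for _ in range(n_topics)] # topic-word counts
    nk = [0] * n_topics                               # tokens per topic
    z = []
    # Random initialization of topic assignments.
    for d, doc in enumerate(docs):
        zs = []
        for w in doc:
            k = rng.randrange(n_topics)
            zs.append(k)
            ndk[d][k] += 1; nkw[k][w] += 1; nk[k] += 1
        z.append(zs)
    # Collapsed Gibbs sweeps: resample each token's topic from its
    # conditional given all other assignments.
    for _ in range(n_iters):
        for d, doc in enumerate(docs):
            for i, w in enumerate(doc):
                k = z[d][i]
                ndk[d][k] -= 1; nkw[k][w] -= 1; nk[k] -= 1
                weights = [(ndk[d][t] + alpha) * (nkw[t][w] + beta)
                           / (nk[t] + vocab_size * beta)
                           for t in range(n_topics)]
                r = rng.random() * sum(weights)
                for t, wt in enumerate(weights):
                    r -= wt
                    if r <= 0:
                        k = t
                        break
                z[d][i] = k
                ndk[d][k] += 1; nkw[k][w] += 1; nk[k] += 1
    return z, ndk

# Toy data: three "activities", each a sequence of action-word ids.
# The first uses words {0, 1}, the second {2, 3}, the third mixes both.
docs = [[0, 0, 1, 0, 1, 0], [2, 3, 2, 2, 3, 2], [0, 1, 0, 2, 3, 2]]
z, ndk = lda_gibbs(docs, n_topics=2, vocab_size=4)
```

On this toy input, the sampler typically separates the two word groups into distinct topics, so the first two documents end up dominated by different action-topics.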

Related Material


[bibtex]
@InProceedings{Wu_2015_CVPR,
author = {Wu, Chenxia and Zhang, Jiemi and Savarese, Silvio and Saxena, Ashutosh},
title = {Watch-n-Patch: Unsupervised Understanding of Actions and Relations},
booktitle = {Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
month = {June},
year = {2015}
}