Detection of manipulation action consequences (MAC)

Yezhou Yang, Cornelia Fermuller, Yiannis Aloimonos

Research output: Contribution to journal › Conference article

46 Scopus citations

Abstract

The problem of human action and activity recognition has been an active research area in Computer Vision and Robotics. While full-body motions can be characterized by movement and change of posture, no invariant characterization has yet been proposed for the description of manipulation actions. We propose that a fundamental concept in understanding such actions is the consequences of actions. There is a small set of fundamental primitive action consequences that provides a systematic high-level classification of manipulation actions. In this paper a technique is developed to recognize these action consequences. At the heart of the technique lies a novel active tracking and segmentation method that monitors the changes in appearance and topological structure of the manipulated object. These are then used in a visual semantic graph (VSG) based procedure applied to the time sequence of the monitored object to recognize the action consequence. We provide a new dataset, called Manipulation Action Consequences (MAC 1.0), which can serve as a test bed for other studies on this topic. Several experiments on this dataset demonstrate that our method can robustly track objects and detect their deformations and division during the manipulation. Quantitative tests prove the effectiveness and efficiency of the method.
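The abstract's core idea of monitoring topological structure over time can be illustrated with a minimal sketch. The function names (`count_components`, `detect_consequence`) and the simple component-count comparison are hypothetical illustrations, not the paper's actual VSG procedure, which additionally tracks appearance changes; this sketch only shows how a "divide" consequence (e.g. cutting an object in two) could be read off consecutive segmentation masks.

```python
from collections import deque

def count_components(mask):
    """Count 4-connected foreground components in a binary mask (list of lists)."""
    rows, cols = len(mask), len(mask[0])
    seen = [[False] * cols for _ in range(rows)]
    count = 0
    for r in range(rows):
        for c in range(cols):
            if mask[r][c] and not seen[r][c]:
                count += 1
                # Flood-fill this component so it is counted exactly once.
                queue = deque([(r, c)])
                seen[r][c] = True
                while queue:
                    y, x = queue.popleft()
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < rows and 0 <= nx < cols
                                and mask[ny][nx] and not seen[ny][nx]):
                            seen[ny][nx] = True
                            queue.append((ny, nx))
    return count

def detect_consequence(prev_mask, curr_mask):
    """Label the topological change between two frames of the tracked object.

    A rise in component count suggests division (e.g. cutting), a drop
    suggests merging; equal counts leave the topology unchanged.
    """
    prev_n = count_components(prev_mask)
    curr_n = count_components(curr_mask)
    if curr_n > prev_n:
        return "divide"
    if curr_n < prev_n:
        return "merge"
    return "no-change"

# Example: one solid object is cut into two pieces between frames.
frame_t0 = [[1, 1, 1],
            [1, 1, 1]]
frame_t1 = [[1, 0, 1],
            [1, 0, 1]]
print(detect_consequence(frame_t0, frame_t1))  # -> divide
```

In the paper's setting the masks would come from the active tracking and segmentation stage, and the per-frame labels would feed the VSG-based classification over the whole time sequence.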

Original language: English (US)
Article number: 6619175
Pages (from-to): 2563-2570
Number of pages: 8
Journal: Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition
State: Published - Nov 15 2013
Externally published: Yes
Event: 26th IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2013 - Portland, OR, United States
Duration: Jun 23 2013 - Jun 28 2013


ASJC Scopus subject areas

  • Software
  • Computer Vision and Pattern Recognition
