Gesture segmentation in complex motion sequences

Kanav Kahol, Priyamvada Tripathi, Sethuraman Panchanathan, Thanassis Rikakis

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

51 Scopus citations

Abstract

Complex human motion sequences (such as dances) are typically analyzed by segmenting them into shorter motion sequences, called gestures. However, this segmentation process is subjective and varies considerably from one human observer to another. In this paper, we propose an algorithm called Hierarchical Activity Segmentation. This algorithm employs a dynamic hierarchical layered structure to represent the human anatomy, and uses low-level motion parameters to characterize motion in the various layers of this hierarchy, which correspond to different segments of the human body. This characterization is used with a naïve Bayesian classifier to derive creator profiles from empirical data. Those profiles are then used to predict how creators will segment gestures in other motion sequences. When the predictions were tested against a library of 3D motion capture sequences segmented by two choreographers, they were found to be reasonably accurate.
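To make the abstract's classification step concrete, the following is an illustrative sketch (not the authors' implementation) of a Gaussian naïve Bayes classifier that labels motion-capture frames as gesture boundaries from low-level motion features. The feature names, the toy data, and the convention that low-velocity "pause" frames mark boundaries are all hypothetical assumptions for illustration.

```python
# Hypothetical sketch: Gaussian naive Bayes over per-frame motion features.
# Features and labels are invented for illustration, not from the paper.
import math

def fit_gaussian_nb(X, y):
    """Estimate class priors and per-class feature means/variances."""
    model = {}
    for c in set(y):
        rows = [x for x, label in zip(X, y) if label == c]
        n = len(rows)
        means = [sum(col) / n for col in zip(*rows)]
        variances = [
            max(sum((v - m) ** 2 for v in col) / n, 1e-6)  # clamp to avoid /0
            for col, m in zip(zip(*rows), means)
        ]
        model[c] = (n / len(y), means, variances)
    return model

def predict(model, x):
    """Return the class with the highest Gaussian log-posterior for x."""
    def log_post(prior, means, variances):
        lp = math.log(prior)
        for v, m, var in zip(x, means, variances):
            lp += -0.5 * math.log(2 * math.pi * var) - (v - m) ** 2 / (2 * var)
        return lp
    return max(model, key=lambda c: log_post(*model[c]))

# Toy training data: [limb_velocity, limb_acceleration] per frame.
# Hypothetical convention: low-motion "pause" frames are labeled 1 (boundary).
X = [[0.9, 0.8], [1.0, 0.7], [0.1, 0.1], [0.2, 0.05], [0.15, 0.12], [0.95, 0.9]]
y = [0, 0, 1, 1, 1, 0]
nb = fit_gaussian_nb(X, y)
print(predict(nb, [0.12, 0.08]))  # low motion -> predicted boundary (1)
```

In the paper's framework, such a classifier would be trained per creator (the "creator profile") and per layer of the body hierarchy; this sketch collapses that into a single feature vector for brevity.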

Original language: English (US)
Title of host publication: IEEE International Conference on Image Processing
Pages: 105-108
Number of pages: 4
Volume: 2
State: Published - 2003
Event: Proceedings: 2003 International Conference on Image Processing, ICIP-2003 - Barcelona, Spain
Duration: Sep 14 2003 - Sep 17 2003



ASJC Scopus subject areas

  • Hardware and Architecture
  • Computer Vision and Pattern Recognition
  • Electrical and Electronic Engineering

Cite this

Kahol, K., Tripathi, P., Panchanathan, S., & Rikakis, T. (2003). Gesture segmentation in complex motion sequences. In IEEE International Conference on Image Processing (Vol. 2, pp. 105-108).