Gesture segmentation in complex motion sequences

Kanav Kahol, Priyamvada Tripathi, Sethuraman Panchanathan, Thanassis Rikakis

Research output: Chapter in Book/Report/Conference proceeding (Conference contribution)

52 Scopus citations


Complex human motion sequences (such as dances) are typically analyzed by segmenting them into shorter motion sequences, called gestures. However, this segmentation process is subjective and varies considerably from one human observer to another. In this paper, we propose an algorithm called Hierarchical Activity Segmentation. This algorithm employs a dynamic hierarchical layered structure to represent the human anatomy, and uses low-level motion parameters to characterize motion in the various layers of this hierarchy, which correspond to different segments of the human body. This characterization is used with a naïve Bayesian classifier to derive creator profiles from empirical data. Those profiles are then used to predict how creators will segment gestures in other motion sequences. When the predictions were tested against a library of 3D motion-capture sequences segmented by two choreographers, they were found to be reasonably accurate.
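To illustrate the classification step described above, the following is a minimal sketch of a Gaussian naïve Bayes classifier over per-frame motion features. All specifics here are assumptions for illustration: the feature set (e.g., torso speed and limb acceleration per frame), the binary boundary/interior labels, and the function names are hypothetical, not the paper's actual implementation.

```python
import math

def fit(X, y):
    """Estimate per-class feature means, variances, and priors (Gaussian naive Bayes)."""
    model = {}
    for c in set(y):
        rows = [x for x, label in zip(X, y) if label == c]
        means = [sum(col) / len(rows) for col in zip(*rows)]
        variances = [
            sum((v - m) ** 2 for v in col) / len(rows) + 1e-6  # small smoothing term
            for col, m in zip(zip(*rows), means)
        ]
        model[c] = (means, variances, len(rows) / len(X))
    return model

def predict(model, x):
    """Return the class with the highest log-posterior for frame features x."""
    best, best_score = None, float("-inf")
    for c, (means, variances, prior) in model.items():
        score = math.log(prior)
        for v, m, var in zip(x, means, variances):
            score += -0.5 * math.log(2 * math.pi * var) - (v - m) ** 2 / (2 * var)
        if score > best_score:
            best, best_score = c, score
    return best

# Toy training data (hypothetical): features are [torso speed, limb acceleration];
# label 1 marks frames a choreographer tagged as gesture boundaries (low motion),
# label 0 marks gesture-interior frames (high motion).
X = [[0.1, 0.05], [0.2, 0.1], [1.5, 0.9], [1.8, 1.2], [0.15, 0.08], [1.6, 1.0]]
y = [1, 1, 0, 0, 1, 0]
model = fit(X, y)
print(predict(model, [0.12, 0.07]))  # low-motion frame classified as a boundary -> 1
```

In the paper's framework these features would come from the layers of the anatomical hierarchy, so each body segment contributes its own motion parameters to the per-frame feature vector.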

Original language: English (US)
Title of host publication: IEEE International Conference on Image Processing
Number of pages: 4
State: Published - 2003
Event: 2003 International Conference on Image Processing, ICIP-2003 - Barcelona, Spain
Duration: Sep 14, 2003 - Sep 17, 2003


ASJC Scopus subject areas

  • Hardware and Architecture
  • Computer Vision and Pattern Recognition
  • Electrical and Electronic Engineering
