Gesture segmentation in complex motion sequences

Kanav Kahol, Priyamvada Tripathi, Sethuraman Panchanathan, Thanassis Rikakis

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

50 Citations (Scopus)

Abstract

Complex human motion sequences (such as dances) are typically analyzed by segmenting them into shorter motion sequences, called gestures. However, this segmentation process is subjective, and varies considerably from one human observer to another. In this paper, we propose an algorithm called Hierarchical Activity Segmentation. This algorithm employs a dynamic hierarchical layered structure to represent the human anatomy, and uses low-level motion parameters to characterize motion in the various layers of this hierarchy, which correspond to different segments of the human body. This characterization is used with a naïve Bayesian classifier to derive creator profiles from empirical data. Those profiles are then used to predict how creators will segment gestures in other motion sequences. When the predictions were tested with a library of 3D motion capture sequences, which were segmented by two choreographers, they were found to be reasonably accurate.
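The classification step described in the abstract can be illustrated with a minimal sketch. Here, each frame of a motion sequence is described by low-level motion parameters, and a Gaussian naïve Bayes classifier, trained on one observer's hand-labelled boundary frames, predicts whether that observer would mark a gesture boundary at a new frame. All feature choices, the synthetic data, and the two-class framing are assumptions for illustration, not the authors' implementation:

```python
import numpy as np

# Synthetic stand-in for per-frame low-level motion features:
# columns = [segment speed, segment acceleration] (illustrative only).
rng = np.random.default_rng(0)
mid_gesture = rng.normal([1.0, 0.5], 0.2, size=(100, 2))   # frames inside a gesture
boundaries = rng.normal([0.1, 0.05], 0.05, size=(20, 2))   # near-pause frames
X = np.vstack([mid_gesture, boundaries])
y = np.array([0] * 100 + [1] * 20)  # 1 = observer marked a gesture boundary here

def fit_gaussian_nb(X, y):
    """Estimate per-class feature means, variances, and log-priors."""
    params = {}
    for c in np.unique(y):
        Xc = X[y == c]
        params[c] = (Xc.mean(0), Xc.var(0) + 1e-9, np.log(len(Xc) / len(X)))
    return params

def predict(params, X):
    """Assign each frame the class with the highest Gaussian log-posterior."""
    scores = []
    for c, (mu, var, log_prior) in sorted(params.items()):
        log_lik = -0.5 * (np.log(2 * np.pi * var) + (X - mu) ** 2 / var).sum(1)
        scores.append(log_prior + log_lik)
    return np.argmax(np.stack(scores, axis=1), axis=1)

params = fit_gaussian_nb(X, y)
# A slow, low-acceleration frame vs. a fast mid-gesture frame:
pred = predict(params, np.array([[0.08, 0.04], [1.1, 0.6]]))
print(pred)
```

Training one such model per observer yields a per-creator profile, which is the rough analogue of the paper's idea that segmentation is subjective and must be predicted per creator.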

Original language: English (US)
Title of host publication: IEEE International Conference on Image Processing
Pages: 105-108
Number of pages: 4
Volume: 2
State: Published - 2003
Event: Proceedings: 2003 International Conference on Image Processing, ICIP-2003 - Barcelona, Spain
Duration: Sep 14 2003 - Sep 17 2003

Other

Other: Proceedings: 2003 International Conference on Image Processing, ICIP-2003
Country: Spain
City: Barcelona
Period: 9/14/03 - 9/17/03

ASJC Scopus subject areas

  • Hardware and Architecture
  • Computer Vision and Pattern Recognition
  • Electrical and Electronic Engineering

Cite this

Kahol, K., Tripathi, P., Panchanathan, S., & Rikakis, T. (2003). Gesture segmentation in complex motion sequences. In IEEE International Conference on Image Processing (Vol. 2, pp. 105-108).

Gesture segmentation in complex motion sequences. / Kahol, Kanav; Tripathi, Priyamvada; Panchanathan, Sethuraman; Rikakis, Thanassis.

IEEE International Conference on Image Processing. Vol. 2 2003. p. 105-108.

Kahol, K, Tripathi, P, Panchanathan, S & Rikakis, T 2003, Gesture segmentation in complex motion sequences. in IEEE International Conference on Image Processing. vol. 2, pp. 105-108, Proceedings: 2003 International Conference on Image Processing, ICIP-2003, Barcelona, Spain, 9/14/03.
Kahol K, Tripathi P, Panchanathan S, Rikakis T. Gesture segmentation in complex motion sequences. In IEEE International Conference on Image Processing. Vol. 2. 2003. p. 105-108.
Kahol, Kanav ; Tripathi, Priyamvada ; Panchanathan, Sethuraman ; Rikakis, Thanassis. / Gesture segmentation in complex motion sequences. IEEE International Conference on Image Processing. Vol. 2 2003. pp. 105-108
@inproceedings{ae1ac5bee97c48d89ce334b4742b8947,
title = "Gesture segmentation in complex motion sequences",
abstract = "Complex human motion sequences (such as dances) are typically analyzed by segmenting them into shorter motion sequences, called gestures. However, this segmentation process is subjective, and varies considerably from one human observer to another. In this paper, we propose an algorithm called Hierarchical Activity Segmentation. This algorithm employs a dynamic hierarchical layered structure to represent the human anatomy, and uses low-level motion parameters to characterize motion in the various layers of this hierarchy, which correspond to different segments of the human body. This characterization is used with a na{\"i}ve Bayesian classifier to derive creator profiles from empirical data. Those profiles are then used to predict how creators will segment gestures in other motion sequences. When the predictions were tested with a library of 3D motion capture sequences, which were segmented by two choreographers, they were found to be reasonably accurate.",
author = "Kanav Kahol and Priyamvada Tripathi and Sethuraman Panchanathan and Thanassis Rikakis",
year = "2003",
language = "English (US)",
volume = "2",
pages = "105--108",
booktitle = "IEEE International Conference on Image Processing",

}

TY - GEN

T1 - Gesture segmentation in complex motion sequences

AU - Kahol, Kanav

AU - Tripathi, Priyamvada

AU - Panchanathan, Sethuraman

AU - Rikakis, Thanassis

PY - 2003

Y1 - 2003

N2 - Complex human motion sequences (such as dances) are typically analyzed by segmenting them into shorter motion sequences, called gestures. However, this segmentation process is subjective, and varies considerably from one human observer to another. In this paper, we propose an algorithm called Hierarchical Activity Segmentation. This algorithm employs a dynamic hierarchical layered structure to represent the human anatomy, and uses low-level motion parameters to characterize motion in the various layers of this hierarchy, which correspond to different segments of the human body. This characterization is used with a naïve Bayesian classifier to derive creator profiles from empirical data. Those profiles are then used to predict how creators will segment gestures in other motion sequences. When the predictions were tested with a library of 3D motion capture sequences, which were segmented by two choreographers, they were found to be reasonably accurate.

AB - Complex human motion sequences (such as dances) are typically analyzed by segmenting them into shorter motion sequences, called gestures. However, this segmentation process is subjective, and varies considerably from one human observer to another. In this paper, we propose an algorithm called Hierarchical Activity Segmentation. This algorithm employs a dynamic hierarchical layered structure to represent the human anatomy, and uses low-level motion parameters to characterize motion in the various layers of this hierarchy, which correspond to different segments of the human body. This characterization is used with a naïve Bayesian classifier to derive creator profiles from empirical data. Those profiles are then used to predict how creators will segment gestures in other motion sequences. When the predictions were tested with a library of 3D motion capture sequences, which were segmented by two choreographers, they were found to be reasonably accurate.

UR - http://www.scopus.com/inward/record.url?scp=0344704039&partnerID=8YFLogxK

UR - http://www.scopus.com/inward/citedby.url?scp=0344704039&partnerID=8YFLogxK

M3 - Conference contribution

AN - SCOPUS:0344704039

VL - 2

SP - 105

EP - 108

BT - IEEE International Conference on Image Processing

ER -