Automated gesture segmentation from dance sequences

Kanav Kahol, Priyamvada Tripathi, Sethuraman Panchanathan

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

75 Citations (Scopus)

Abstract

Complex human motion (e.g., dance) sequences are typically analyzed by segmenting them into shorter motion sequences, called gestures. However, this segmentation process is subjective and varies considerably from one choreographer to another. Dance sequences also exhibit a large vocabulary of gestures. In this paper, we propose an algorithm called Hierarchical Activity Segmentation. This algorithm employs a dynamic hierarchical layered structure to represent human anatomy, and uses low-level motion parameters to characterize motion in the various layers of this hierarchy, which correspond to different segments of the human body. This characterization is used with a naïve Bayesian classifier to derive choreographer profiles from empirical data that are used to predict how particular choreographers will segment gestures in other motion sequences. When the predictions were tested with a library of 45 3D motion capture sequences (with 185 distinct gestures) created by 5 different choreographers, they were found to be 93.3% accurate.
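Only the abstract is reproduced in this record, so the sketch below is a minimal, hypothetical Python rendering of the general idea rather than the authors' implementation: per-frame low-level motion parameters (speed and acceleration magnitude) are computed for each segment of an assumed body hierarchy, and a Gaussian naïve Bayes classifier from scikit-learn, standing in for the paper's classifier, is fit to one choreographer's gesture-boundary labels to form a "profile" that predicts boundaries in an unseen sequence. The segment names, features, and data here are all invented for illustration.

# Illustrative sketch only -- not the authors' code. Assumes a toy body
# hierarchy, synthetic trajectories, and scikit-learn's GaussianNB as a
# stand-in for the paper's naive Bayesian classifier.
import numpy as np
from sklearn.naive_bayes import GaussianNB

# Hypothetical segment hierarchy (whole body down to individual limbs).
SEGMENTS = ["full_body", "upper_body", "lower_body",
            "left_arm", "right_arm", "left_leg", "right_leg"]

def motion_features(trajectories):
    """Low-level motion parameters per frame: speed and acceleration
    magnitude for every segment. `trajectories` maps a segment name to a
    (T, 3) array of 3D positions over T frames."""
    columns = []
    for seg in SEGMENTS:
        pos = trajectories[seg]
        vel = np.gradient(pos, axis=0)      # frame-to-frame velocity
        acc = np.gradient(vel, axis=0)      # frame-to-frame acceleration
        columns.append(np.linalg.norm(vel, axis=1))
        columns.append(np.linalg.norm(acc, axis=1))
    return np.stack(columns, axis=1)        # shape (T, 2 * num segments)

# Synthetic stand-in for motion-capture data, plus one choreographer's
# hand-marked gesture boundaries (1 = boundary frame, 0 = within-gesture).
rng = np.random.default_rng(0)
T = 600
trajectories = {s: np.cumsum(rng.normal(size=(T, 3)), axis=0) for s in SEGMENTS}
boundary_labels = (rng.random(T) < 0.05).astype(int)

X = motion_features(trajectories)

# "Choreographer profile": a classifier fit to that choreographer's past
# segmentation decisions, then applied to an unseen stretch of frames.
profile = GaussianNB().fit(X[:500], boundary_labels[:500])
predicted = profile.predict(X[500:])
print("Predicted boundary frames:", 500 + np.flatnonzero(predicted))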

Original language: English (US)
Title of host publication: Proceedings - Sixth IEEE International Conference on Automatic Face and Gesture Recognition FGR 2004
Pages: 883-888
Number of pages: 6
DOIs: 10.1109/AFGR.2004.1301645
State: Published - 2004
Event: Proceedings - Sixth IEEE International Conference on Automatic Face and Gesture Recognition FGR 2004 - Seoul, Korea, Republic of
Duration: May 17, 2004 – May 19, 2004

Other

Other: Proceedings - Sixth IEEE International Conference on Automatic Face and Gesture Recognition FGR 2004
Country: Korea, Republic of
City: Seoul
Period: 5/17/04 – 5/19/04

ASJC Scopus subject areas

  • Engineering (all)

Cite this

APA

Kahol, K., Tripathi, P., & Panchanathan, S. (2004). Automated gesture segmentation from dance sequences. In Proceedings - Sixth IEEE International Conference on Automatic Face and Gesture Recognition FGR 2004 (pp. 883-888). https://doi.org/10.1109/AFGR.2004.1301645

Standard

Automated gesture segmentation from dance sequences. / Kahol, Kanav; Tripathi, Priyamvada; Panchanathan, Sethuraman.

Proceedings - Sixth IEEE International Conference on Automatic Face and Gesture Recognition FGR 2004. 2004. p. 883-888.

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Harvard

Kahol, K, Tripathi, P & Panchanathan, S 2004, Automated gesture segmentation from dance sequences. in Proceedings - Sixth IEEE International Conference on Automatic Face and Gesture Recognition FGR 2004. pp. 883-888, Proceedings - Sixth IEEE International Conference on Automatic Face and Gesture Recognition FGR 2004, Seoul, Korea, Republic of, 5/17/04. https://doi.org/10.1109/AFGR.2004.1301645

Vancouver

Kahol K, Tripathi P, Panchanathan S. Automated gesture segmentation from dance sequences. In Proceedings - Sixth IEEE International Conference on Automatic Face and Gesture Recognition FGR 2004. 2004. p. 883-888. https://doi.org/10.1109/AFGR.2004.1301645

Author

Kahol, Kanav ; Tripathi, Priyamvada ; Panchanathan, Sethuraman. / Automated gesture segmentation from dance sequences. Proceedings - Sixth IEEE International Conference on Automatic Face and Gesture Recognition FGR 2004. 2004. pp. 883-888

BIBTEX
@inproceedings{908f03b89f0e423497d9a28170582296,
title = "Automated gesture segmentation from dance sequences",
abstract = "Complex human motion (e.g., dance) sequences are typically analyzed by segmenting them into shorter motion sequences, called gestures. However, this segmentation process is subjective and varies considerably from one choreographer to another. Dance sequences also exhibit a large vocabulary of gestures. In this paper, we propose an algorithm called Hierarchical Activity Segmentation. This algorithm employs a dynamic hierarchical layered structure to represent human anatomy, and uses low-level motion parameters to characterize motion in the various layers of this hierarchy, which correspond to different segments of the human body. This characterization is used with a na{\"i}ve Bayesian classifier to derive choreographer profiles from empirical data that are used to predict how particular choreographers will segment gestures in other motion sequences. When the predictions were tested with a library of 45 3D motion capture sequences (with 185 distinct gestures) created by 5 different choreographers, they were found to be 93.3{\%} accurate.",
author = "Kanav Kahol and Priyamvada Tripathi and Sethuraman Panchanathan",
year = "2004",
doi = "10.1109/AFGR.2004.1301645",
language = "English (US)",
isbn = "0769521223",
pages = "883--888",
booktitle = "Proceedings - Sixth IEEE International Conference on Automatic Face and Gesture Recognition FGR 2004",

}

RIS

TY  - GEN
T1  - Automated gesture segmentation from dance sequences
AU  - Kahol, Kanav
AU  - Tripathi, Priyamvada
AU  - Panchanathan, Sethuraman
PY  - 2004
Y1  - 2004
N2  - Complex human motion (e.g., dance) sequences are typically analyzed by segmenting them into shorter motion sequences, called gestures. However, this segmentation process is subjective and varies considerably from one choreographer to another. Dance sequences also exhibit a large vocabulary of gestures. In this paper, we propose an algorithm called Hierarchical Activity Segmentation. This algorithm employs a dynamic hierarchical layered structure to represent human anatomy, and uses low-level motion parameters to characterize motion in the various layers of this hierarchy, which correspond to different segments of the human body. This characterization is used with a naïve Bayesian classifier to derive choreographer profiles from empirical data that are used to predict how particular choreographers will segment gestures in other motion sequences. When the predictions were tested with a library of 45 3D motion capture sequences (with 185 distinct gestures) created by 5 different choreographers, they were found to be 93.3% accurate.
AB  - Complex human motion (e.g., dance) sequences are typically analyzed by segmenting them into shorter motion sequences, called gestures. However, this segmentation process is subjective and varies considerably from one choreographer to another. Dance sequences also exhibit a large vocabulary of gestures. In this paper, we propose an algorithm called Hierarchical Activity Segmentation. This algorithm employs a dynamic hierarchical layered structure to represent human anatomy, and uses low-level motion parameters to characterize motion in the various layers of this hierarchy, which correspond to different segments of the human body. This characterization is used with a naïve Bayesian classifier to derive choreographer profiles from empirical data that are used to predict how particular choreographers will segment gestures in other motion sequences. When the predictions were tested with a library of 45 3D motion capture sequences (with 185 distinct gestures) created by 5 different choreographers, they were found to be 93.3% accurate.
UR  - http://www.scopus.com/inward/record.url?scp=4544385653&partnerID=8YFLogxK
UR  - http://www.scopus.com/inward/citedby.url?scp=4544385653&partnerID=8YFLogxK
U2  - 10.1109/AFGR.2004.1301645
DO  - 10.1109/AFGR.2004.1301645
M3  - Conference contribution
AN  - SCOPUS:4544385653
SN  - 0769521223
SN  - 9780769521220
SP  - 883
EP  - 888
BT  - Proceedings - Sixth IEEE International Conference on Automatic Face and Gesture Recognition FGR 2004
ER  -