Abstract
Complex human motion sequences (such as dances) are typically analyzed by segmenting them into shorter motion sequences, called gestures. However, this segmentation process is subjective and varies considerably from one human observer to another. In this paper, we propose an algorithm called Hierarchical Activity Segmentation. This algorithm employs a dynamic hierarchical layered structure to represent the human anatomy, and uses low-level motion parameters to characterize motion in the various layers of this hierarchy, which correspond to different segments of the human body. This characterization is used with a naïve Bayesian classifier to derive creator profiles from empirical data. These profiles are then used to predict how creators will segment gestures in other motion sequences. When the predictions were tested against a library of 3D motion capture sequences segmented by two choreographers, they were found to be reasonably accurate.
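The core classification step described in the abstract, per-frame low-level motion features scored by a naïve Bayesian classifier trained on a choreographer's segmentations, can be sketched as follows. This is a minimal Gaussian naïve Bayes illustration with made-up feature values and labels, not the authors' implementation; the feature names and data are assumptions.

```python
import math

# Hypothetical training data: per-frame low-level motion features
# (e.g., speed and acceleration magnitude of a body segment), labeled
# by whether a choreographer marked a gesture boundary at that frame.
# The numbers are illustrative only.
train = [
    ([0.9, 0.1], "boundary"),   # slowing down -> likely gesture boundary
    ([0.8, 0.2], "boundary"),
    ([0.1, 0.9], "interior"),   # fast motion mid-gesture
    ([0.2, 0.8], "interior"),
]

def fit_gaussian_nb(data):
    """Estimate per-class feature means/variances and log class priors."""
    by_class = {}
    for x, y in data:
        by_class.setdefault(y, []).append(x)
    model = {}
    n = len(data)
    for y, xs in by_class.items():
        dims = list(zip(*xs))
        means = [sum(d) / len(d) for d in dims]
        varis = [max(sum((v - m) ** 2 for v in d) / len(d), 1e-6)
                 for d, m in zip(dims, means)]
        model[y] = (math.log(len(xs) / n), means, varis)
    return model

def log_gauss(x, m, v):
    """Log-density of x under a 1D Gaussian with mean m, variance v."""
    return -0.5 * (math.log(2 * math.pi * v) + (x - m) ** 2 / v)

def predict(model, x):
    """Return the class with the highest posterior for feature vector x."""
    return max(model, key=lambda y: model[y][0] + sum(
        log_gauss(xi, m, v)
        for xi, m, v in zip(x, model[y][1], model[y][2])))

model = fit_gaussian_nb(train)
print(predict(model, [0.85, 0.15]))  # close to the "boundary" profile
```

Training one such model per choreographer yields the per-creator profiles the abstract mentions; the same features can then be scored frame-by-frame on a new sequence to predict where that creator would place gesture boundaries.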
Original language | English (US) |
---|---|
Title of host publication | IEEE International Conference on Image Processing |
Pages | 105-108 |
Number of pages | 4 |
Volume | 2 |
State | Published - 2003 |
Event | Proceedings: 2003 International Conference on Image Processing, ICIP-2003 - Barcelona, Spain |
Duration | Sep 14 2003 → Sep 17 2003 |
Other
Other | Proceedings: 2003 International Conference on Image Processing, ICIP-2003 |
---|---|
Country/Territory | Spain |
City | Barcelona |
Period | 9/14/03 → 9/17/03 |
ASJC Scopus subject areas
- Hardware and Architecture
- Computer Vision and Pattern Recognition
- Electrical and Electronic Engineering