8 Citations (Scopus)

Abstract

Humans perform various gestures in everyday life. While some of these gestures are typically well understood within a community (such as "hello" and "goodbye"), many gestures and movements are typical of an individual's style, body language, or mannerisms. Examples of such gestures include the manner in which a person laughs, the hand gestures used to converse, or the manner in which a person performs a dance sequence. Individuals possess a large vocabulary of mannerism gestures. Conventional modeling of gestures as a series of poses for the purpose of automatically recognizing gestures is inadequate for modeling mannerism gestures. In this paper, we propose a novel method to model mannerism gestures. Gestures are modeled as a sequence of events that take place within the segments and the joints of the human body. Each gesture is then represented in an event-driven coupled hidden Markov model (HMM) as a sequence of events occurring in the various segments and joints. The inherent advantage of using an event-driven coupled HMM (instead of a pose-driven HMM) is that there is no need to add states to represent more complex gestures or to increase the number of states when another individual is added. When this model was tested on a library of 185 gestures, created by 7 subjects, the algorithm achieved an average recognition accuracy of 90.2%.
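The abstract describes modeling each gesture as a sequence of discrete events in body segments and joints, classified by likelihood under an event-driven coupled HMM. The sketch below is an illustration only, not the authors' implementation: it uses plain single-chain discrete HMMs (not the paper's coupled formulation), and every event name and probability in it is a hypothetical placeholder. It shows the core idea of scoring an event sequence with the forward algorithm and picking the gesture model with the highest likelihood.

```python
import math

def forward_loglik(events, start, trans, emit, eps=1e-12):
    """Log-likelihood of a discrete event sequence under an HMM,
    computed with the standard forward algorithm. `start[s]` is the
    initial probability of state s, `trans[s][t]` the transition
    probability s -> t, and `emit[s]` a dict of event probabilities.
    Unseen events get a small floor `eps` instead of zero."""
    n = len(start)
    alpha = [start[s] * emit[s].get(events[0], eps) for s in range(n)]
    for e in events[1:]:
        alpha = [
            sum(alpha[s] * trans[s][t] for s in range(n)) * emit[t].get(e, eps)
            for t in range(n)
        ]
    return math.log(max(sum(alpha), eps))

def classify(events, models):
    """Return the name of the gesture model that assigns the observed
    event sequence the highest likelihood."""
    return max(models, key=lambda name: forward_loglik(events, *models[name]))

# Two toy 2-state gesture models over invented single-joint event symbols.
wave = (
    [1.0, 0.0],                      # start probabilities
    [[0.5, 0.5], [0.5, 0.5]],        # state transitions
    [{"raise": 0.7, "still": 0.3},   # emissions per state
     {"lower": 0.7, "still": 0.3}],
)
nod = (
    [1.0, 0.0],
    [[0.5, 0.5], [0.5, 0.5]],
    [{"tilt": 0.8, "still": 0.2},
     {"still": 0.8, "tilt": 0.2}],
)
models = {"wave": wave, "nod": nod}

print(classify(["raise", "lower", "raise", "lower"], models))  # prints: wave
```

In the paper's setting each limb segment or joint would contribute its own event stream, and a coupled HMM would tie the chains together; the single-chain version above only demonstrates the event-driven (rather than pose-driven) scoring step.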

Original language: English (US)
Title of host publication: Proceedings of the 17th International Conference on Pattern Recognition, ICPR 2004
Editors: J. Kittler, M. Petrou, M. Nixon
Pages: 946-949
Number of pages: 4
Volume: 3
DOIs: 10.1109/ICPR.2004.1334685
State: Published - 2004
Event: Proceedings of the 17th International Conference on Pattern Recognition, ICPR 2004 - Cambridge, United Kingdom
Duration: Aug 23, 2004 - Aug 26, 2004

Other

Other: Proceedings of the 17th International Conference on Pattern Recognition, ICPR 2004
Country: United Kingdom
City: Cambridge
Period: 8/23/04 - 8/26/04

Fingerprint

Hidden Markov models

ASJC Scopus subject areas

  • Electrical and Electronic Engineering
  • Computer Vision and Pattern Recognition
  • Hardware and Architecture

Cite this

Kahol, K., Tripathi, P., & Panchanathan, S. (2004). Computational analysis of mannerism gestures. In J. Kittler, M. Petrou, & M. Nixon (Eds.), Proceedings of the 17th International Conference on Pattern Recognition, ICPR 2004 (Vol. 3, pp. 946-949) https://doi.org/10.1109/ICPR.2004.1334685

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

@inproceedings{9e0e48e2ba1d4bed88ce187a3567e3fe,
title = "Computational analysis of mannerism gestures",
abstract = "Humans perform various gestures in everyday life. While some of these gestures are typically well understood within a community (such as {"}hello{"} and {"}goodbye{"}), many gestures and movements are typical of an individual's style, body language, or mannerisms. Examples of such gestures include the manner in which a person laughs, the hand gestures used to converse, or the manner in which a person performs a dance sequence. Individuals possess a large vocabulary of mannerism gestures. Conventional modeling of gestures as a series of poses for the purpose of automatically recognizing gestures is inadequate for modeling mannerism gestures. In this paper, we propose a novel method to model mannerism gestures. Gestures are modeled as a sequence of events that take place within the segments and the joints of the human body. Each gesture is then represented in an event-driven coupled hidden Markov model (HMM) as a sequence of events occurring in the various segments and joints. The inherent advantage of using an event-driven coupled HMM (instead of a pose-driven HMM) is that there is no need to add states to represent more complex gestures or to increase the number of states when another individual is added. When this model was tested on a library of 185 gestures, created by 7 subjects, the algorithm achieved an average recognition accuracy of 90.2{\%}.",
author = "Kanav Kahol and Priyamvada Tripathi and Sethuraman Panchanathan",
year = "2004",
doi = "10.1109/ICPR.2004.1334685",
language = "English (US)",
isbn = "0769521282",
volume = "3",
pages = "946--949",
editor = "J. Kittler and M. Petrou and M. Nixon",
booktitle = "Proceedings of the 17th International Conference on Pattern Recognition, ICPR 2004",

}

TY - GEN

T1 - Computational analysis of mannerism gestures

AU - Kahol, Kanav

AU - Tripathi, Priyamvada

AU - Panchanathan, Sethuraman

PY - 2004

Y1 - 2004

N2 - Humans perform various gestures in everyday life. While some of these gestures are typically well understood within a community (such as "hello" and "goodbye"), many gestures and movements are typical of an individual's style, body language, or mannerisms. Examples of such gestures include the manner in which a person laughs, the hand gestures used to converse, or the manner in which a person performs a dance sequence. Individuals possess a large vocabulary of mannerism gestures. Conventional modeling of gestures as a series of poses for the purpose of automatically recognizing gestures is inadequate for modeling mannerism gestures. In this paper, we propose a novel method to model mannerism gestures. Gestures are modeled as a sequence of events that take place within the segments and the joints of the human body. Each gesture is then represented in an event-driven coupled hidden Markov model (HMM) as a sequence of events occurring in the various segments and joints. The inherent advantage of using an event-driven coupled HMM (instead of a pose-driven HMM) is that there is no need to add states to represent more complex gestures or to increase the number of states when another individual is added. When this model was tested on a library of 185 gestures, created by 7 subjects, the algorithm achieved an average recognition accuracy of 90.2%.

AB - Humans perform various gestures in everyday life. While some of these gestures are typically well understood within a community (such as "hello" and "goodbye"), many gestures and movements are typical of an individual's style, body language, or mannerisms. Examples of such gestures include the manner in which a person laughs, the hand gestures used to converse, or the manner in which a person performs a dance sequence. Individuals possess a large vocabulary of mannerism gestures. Conventional modeling of gestures as a series of poses for the purpose of automatically recognizing gestures is inadequate for modeling mannerism gestures. In this paper, we propose a novel method to model mannerism gestures. Gestures are modeled as a sequence of events that take place within the segments and the joints of the human body. Each gesture is then represented in an event-driven coupled hidden Markov model (HMM) as a sequence of events occurring in the various segments and joints. The inherent advantage of using an event-driven coupled HMM (instead of a pose-driven HMM) is that there is no need to add states to represent more complex gestures or to increase the number of states when another individual is added. When this model was tested on a library of 185 gestures, created by 7 subjects, the algorithm achieved an average recognition accuracy of 90.2%.

UR - http://www.scopus.com/inward/record.url?scp=10044294108&partnerID=8YFLogxK

UR - http://www.scopus.com/inward/citedby.url?scp=10044294108&partnerID=8YFLogxK

U2 - 10.1109/ICPR.2004.1334685

DO - 10.1109/ICPR.2004.1334685

M3 - Conference contribution

AN - SCOPUS:10044294108

SN - 0769521282

VL - 3

SP - 946

EP - 949

BT - Proceedings of the 17th International Conference on Pattern Recognition, ICPR 2004

A2 - Kittler, J.

A2 - Petrou, M.

A2 - Nixon, M.

ER -