Humans perform various gestures in everyday life. While some of these gestures are widely understood within a community (such as "hello" and "goodbye"), many gestures and movements are characteristic of an individual's style, body language, or mannerisms. Examples of such gestures include the manner in which a person laughs, the hand gestures used while conversing, or the manner in which a person performs a dance sequence. Individuals possess a large vocabulary of mannerism gestures. Conventional approaches that model gestures as a series of poses for the purpose of automatic recognition are inadequate for modeling mannerism gestures. In this paper we propose a novel method for modeling mannerism gestures. Gestures are modeled as a sequence of events that take place within the segments and joints of the human body. Each gesture is then represented in an event-driven coupled hidden Markov model (HMM) as a sequence of events occurring in the various segments and joints. The inherent advantage of using an event-driven coupled HMM (instead of a pose-driven HMM) is that there is no need to add states to represent more complex gestures or to increase the number of states when another individual is added. When this model was tested on a library of 185 gestures created by 7 subjects, the algorithm achieved an average recognition accuracy of 90.2%.
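To make the underlying recognition idea concrete, the following is a minimal sketch of scoring a discrete event sequence against per-gesture HMMs with a hand-rolled forward pass. The event alphabet, gesture names, and all model parameters here are hypothetical illustrations only; the coupled, event-driven model proposed in the paper is more elaborate than this single-chain example.

```python
import math

def forward_log_likelihood(obs, start, trans, emit):
    """Log-likelihood of a discrete event sequence under an HMM (forward algorithm)."""
    n_states = len(start)
    # Initialise with the first observed event.
    alpha = [start[s] * emit[s][obs[0]] for s in range(n_states)]
    # Propagate forward probabilities through the rest of the sequence.
    for o in obs[1:]:
        alpha = [
            sum(alpha[p] * trans[p][s] for p in range(n_states)) * emit[s][o]
            for s in range(n_states)
        ]
    return math.log(sum(alpha))

# Two hypothetical 2-state gesture models over an illustrative event alphabet
# {0: "elbow-flex", 1: "wrist-rotate", 2: "shoulder-raise"}.
wave = dict(start=[0.8, 0.2],
            trans=[[0.6, 0.4], [0.4, 0.6]],
            emit=[[0.7, 0.2, 0.1], [0.2, 0.7, 0.1]])
shrug = dict(start=[0.5, 0.5],
             trans=[[0.7, 0.3], [0.3, 0.7]],
             emit=[[0.1, 0.1, 0.8], [0.2, 0.2, 0.6]])

# Classify an observed joint-event sequence by maximum likelihood.
events = [0, 1, 0, 1, 1]
scores = {name: forward_log_likelihood(events, **m)
          for name, m in [("wave", wave), ("shrug", shrug)]}
best = max(scores, key=scores.get)
print(best)  # → wave
```

In the same spirit, each mannerism gesture in the paper's library would have its own model, and a new event sequence is assigned to the gesture whose model scores it highest.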