Capturing expressive and indicative qualities of conducting gesture: An application of temporal expectancy models

Dilip Swaminathan, Harvey Thornburg, Todd Ingalls, Stjepan Rajko, Jodi James, Ellen Campana, Kathleya Afanador, Randal Leistikow

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract

Many event sequences in everyday human movement exhibit temporal structure: for instance, footsteps in walking, the striking of balls in a tennis match, the movements of a dancer set to rhythmic music, and the gestures of an orchestra conductor. These events generate prior expectancies regarding the occurrence of future events. Moreover, these expectancies play a critical role in conveying expressive qualities and communicative intent through the movement; thus they are of considerable interest in musical control contexts. To this end, we introduce a novel Bayesian framework, which we call the temporal expectancy model, and use it to develop an analysis tool for capturing expressive and indicative qualities of the conducting gesture based on temporal expectancies. The temporal expectancy model is a general dynamic Bayesian network (DBN) that can be used to encode prior knowledge regarding temporal structure to improve event segmentation. The conducting analysis tool infers beat and tempo, which are indicative, and articulation, which is expressive, as well as temporal expectancies regarding beat (ictus and preparation instances) from the conducting gesture. Experimental results using our analysis framework reveal a very strong correlation between articulation (staccato vs. legato) and how significantly the preparation expectancy builds up, which bolsters the case for temporal expectancy as a cognitive model for event anticipation and as a key factor in the communication of expressive qualities of conducting gesture. Our system operates on data obtained from a marker-based motion capture system, but can easily be adapted to more affordable technologies such as video camera arrays.
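To make the abstract's idea of a temporal expectancy concrete, the sketch below shows a minimal Bayesian filter over beat phase and period, where "expectancy" is the posterior probability that the next ictus falls within a short horizon. This is an illustrative assumption-laden toy, not the authors' DBN: the class name `TempoPhaseFilter`, the particle-filter formulation, and all parameter values are invented here for illustration only.

```python
"""Illustrative sketch (not the paper's implementation): a particle filter over
beat phase and period, with an 'expectancy' readout in the spirit of the
temporal expectancy model. All names and parameters are assumptions."""
import numpy as np


class TempoPhaseFilter:
    def __init__(self, n_particles=500, period_range=(0.4, 1.2), rng=None):
        self.rng = rng or np.random.default_rng(0)
        self.n = n_particles
        # State per particle: beat phase in [0, 1) and beat period in seconds.
        self.phase = self.rng.uniform(0.0, 1.0, self.n)
        self.period = self.rng.uniform(period_range[0], period_range[1], self.n)
        self.weights = np.full(self.n, 1.0 / self.n)

    def predict(self, dt):
        # Advance phase by elapsed time; small random walk on period (tempo drift prior).
        self.period = np.clip(self.period + self.rng.normal(0.0, 0.005, self.n), 0.2, 2.0)
        self.phase = (self.phase + dt / self.period) % 1.0

    def update(self, ictus_observed, sigma=0.08):
        # Likelihood: an observed ictus is most probable near phase 0 (the beat point).
        dist = np.minimum(self.phase, 1.0 - self.phase)  # circular distance to phase 0
        on_beat = np.exp(-0.5 * (dist / sigma) ** 2)
        like = on_beat if ictus_observed else 1.0 - 0.9 * on_beat
        self.weights *= like
        self.weights /= self.weights.sum()
        self._resample_if_needed()

    def _resample_if_needed(self):
        # Resample when the effective sample size collapses.
        ess = 1.0 / np.sum(self.weights ** 2)
        if ess < self.n / 2:
            idx = self.rng.choice(self.n, self.n, p=self.weights)
            self.phase, self.period = self.phase[idx], self.period[idx]
            self.weights = np.full(self.n, 1.0 / self.n)

    def expectancy(self, horizon=0.1):
        # Expectancy: posterior probability that the next ictus occurs within
        # `horizon` seconds, given the current beliefs about phase and period.
        time_to_beat = (1.0 - self.phase) * self.period
        return float(np.sum(self.weights[time_to_beat < horizon]))


if __name__ == "__main__":
    # Simulate ictus events every 0.6 s and watch expectancy build before each beat.
    f = TempoPhaseFilter()
    dt, t, next_ictus = 0.02, 0.0, 0.6
    for step in range(300):
        t += dt
        hit = t >= next_ictus
        if hit:
            next_ictus += 0.6
        f.predict(dt)
        f.update(hit)
        if step % 10 == 0:
            print(f"t={t:4.2f}s  expectancy={f.expectancy():.2f}")
```

In such a sketch, the expectancy rises sharply just before each predicted beat; comparing how steeply it builds under different articulation styles is loosely analogous to the staccato-versus-legato comparison reported in the abstract, though the paper's actual DBN and features differ.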

Original language: English (US)
Title of host publication: Computer Music Modeling and Retrieval
Subtitle of host publication: Sense of Sounds - 4th International Symposium, CMMR 2007, Revised Papers
Pages: 34-55
Number of pages: 22
DOIs
State: Published - 2008
Event: 4th International Symposium on Computer Music Modeling and Retrieval: Sense of Sounds, CMMR 2007 - Copenhagen, Denmark
Duration: Aug 27 2007 - Aug 31 2007

Publication series

Name: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
Volume: 4969 LNCS
ISSN (Print): 0302-9743
ISSN (Electronic): 1611-3349

Conference

Conference: 4th International Symposium on Computer Music Modeling and Retrieval: Sense of Sounds, CMMR 2007
Country/Territory: Denmark
City: Copenhagen
Period: 8/27/07 - 8/31/07

ASJC Scopus subject areas

  • Theoretical Computer Science
  • General Computer Science
