Capturing expressive and indicative qualities of conducting gesture: An application of temporal expectancy models

Dilip Swaminathan, Harvey Thornburg, Todd Ingalls, Stjepan Rajko, Jodi James, Ellen Campana, Kathleya Afanador, Randal Leistikow

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract

Many event sequences in everyday human movement exhibit temporal structure: for instance, footsteps in walking, the striking of balls in a tennis match, the movements of a dancer set to rhythmic music, and the gestures of an orchestra conductor. These events generate prior expectancies regarding the occurrence of future events. Moreover, these expectancies play a critical role in conveying expressive qualities and communicative intent through the movement; thus they are of considerable interest in musical control contexts. To this end, we introduce a novel Bayesian framework, which we call the temporal expectancy model, and use it to develop an analysis tool for capturing expressive and indicative qualities of conducting gesture based on temporal expectancies. The temporal expectancy model is a general dynamic Bayesian network (DBN) that can be used to encode prior knowledge regarding temporal structure to improve event segmentation. The conducting analysis tool infers beat and tempo, which are indicative, and articulation, which is expressive, as well as temporal expectancies regarding beat (ictus and preparation instances) from conducting gesture. Experimental results using our analysis framework reveal a very strong correlation between articulation (staccato vs. legato) and how significantly the preparation expectancy builds up, which bolsters the case for temporal expectancy as a cognitive model for event anticipation and as a key factor in the communication of expressive qualities of conducting gesture. Our system operates on data obtained from a marker-based motion capture system but can be easily adapted to more affordable technologies such as video camera arrays.
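To make the modeling idea concrete, the following is a minimal sketch, in Python, of temporal-expectancy-style filtering; it is not the authors' implementation. The sketch assumes a simplified discrete DBN in which a counter tracks frames since the last beat, a discretized Gaussian prior over the inter-beat interval induces a hazard rate, and forward filtering yields both a per-frame beat posterior and a building expectancy. All names (duration_prior, hazard, filter_expectancy), the Gaussian duration prior, and the likelihood-ratio observation model are illustrative assumptions.

    # A minimal sketch of temporal-expectancy-based event inference.
    # Hypothetical names and priors; not the paper's actual system.
    import numpy as np

    def duration_prior(max_lag, mean, std):
        """Discretized Gaussian prior over the inter-beat interval (frames)."""
        lags = np.arange(1, max_lag + 1)
        p = np.exp(-0.5 * ((lags - mean) / std) ** 2)
        return p / p.sum()

    def hazard(prior):
        """Hazard rate: P(event now | no event since the last one)."""
        survival = 1.0 - np.concatenate(([0.0], np.cumsum(prior)[:-1]))
        return np.clip(prior / np.maximum(survival, 1e-12), 0.0, 1.0)

    def filter_expectancy(obs_lik, prior):
        """Forward filtering over 'frames since last beat'.

        obs_lik[t] is the likelihood ratio of the frame-t observation
        under 'beat now' vs. 'no beat' (e.g., from hand-speed minima).
        Returns the per-frame beat posterior and the expectancy curve.
        """
        h = hazard(prior)
        belief = np.zeros(len(prior))      # belief over frames-since-beat
        belief[0] = 1.0
        beat_post, expectancy = [], []
        for lik in obs_lik:
            # Expectancy: prior probability that the beat occurs this frame.
            e = float(np.sum(belief * h))
            # Posterior after weighing in the observation.
            num = e * lik
            p_beat = num / (num + (1.0 - e))
            # Transition: reset the counter (beat) or advance it (no beat).
            advanced = np.roll(belief * (1.0 - h), 1)
            advanced[0] = 0.0
            belief = (1.0 - p_beat) * advanced / max(advanced.sum(), 1e-12)
            belief[0] += p_beat
            belief /= belief.sum()
            beat_post.append(p_beat)
            expectancy.append(e)
        return np.array(beat_post), np.array(expectancy)

    # Toy usage: ~1 s beat period at 60 fps, synthetic likelihood ratios.
    prior = duration_prior(max_lag=120, mean=60.0, std=10.0)
    obs = np.ones(300)
    obs[59::60] = 8.0                      # stronger evidence near each beat
    post, exp_curve = filter_expectancy(obs, prior)

In the paper, the observations come from conducting-gesture features captured by motion capture; here obs is just a toy likelihood-ratio stream, and exp_curve stands in for the expectancy build-up that the authors compare across staccato and legato articulation.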

Original language: English (US)
Title of host publication: Computer Music Modeling and Retrieval
Subtitle of host publication: Sense of Sounds - 4th International Symposium, CMMR 2007, Revised Papers
Pages: 34-55
Number of pages: 22
DOIs: https://doi.org/10.1007/978-3-540-85035-9_3
State: Published - Sep 2 2008
Event: 4th International Symposium on Computer Music Modeling and Retrieval: Sense of Sounds, CMMR 2007 - Copenhagen, Denmark
Duration: Aug 27 2007 - Aug 31 2007

Publication series

Name: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
Volume: 4969 LNCS
ISSN (Print): 0302-9743
ISSN (Electronic): 1611-3349

Conference

Conference: 4th International Symposium on Computer Music Modeling and Retrieval: Sense of Sounds, CMMR 2007
Country: Denmark
City: Copenhagen
Period: 8/27/07 - 8/31/07

Fingerprint

Gesture
Beat
Conveying
Bayesian networks
Video cameras
Preparation
Dynamic Bayesian Networks
Cognitive Models
Anticipation
Model
Motion Capture
Conductor
Music
Prior Knowledge
Communication
Ball
Segmentation
Camera
Experimental Results
Movement

ASJC Scopus subject areas

  • Theoretical Computer Science
  • Computer Science (all)

Cite this

Swaminathan, D., Thornburg, H., Ingalls, T., Rajko, S., James, J., Campana, E., ... Leistikow, R. (2008). Capturing expressive and indicative qualities of conducting gesture: An application of temporal expectancy models. In Computer Music Modeling and Retrieval: Sense of Sounds - 4th International Symposium, CMMR 2007, Revised Papers (pp. 34-55). (Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics); Vol. 4969 LNCS). https://doi.org/10.1007/978-3-540-85035-9_3

Capturing expressive and indicative qualities of conducting gesture: An application of temporal expectancy models. / Swaminathan, Dilip; Thornburg, Harvey; Ingalls, Todd; Rajko, Stjepan; James, Jodi; Campana, Ellen; Afanador, Kathleya; Leistikow, Randal.

Computer Music Modeling and Retrieval: Sense of Sounds - 4th International Symposium, CMMR 2007, Revised Papers. 2008. p. 34-55 (Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics); Vol. 4969 LNCS).

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Swaminathan, D, Thornburg, H, Ingalls, T, Rajko, S, James, J, Campana, E, Afanador, K & Leistikow, R 2008, Capturing expressive and indicative qualities of conducting gesture: An application of temporal expectancy models. in Computer Music Modeling and Retrieval: Sense of Sounds - 4th International Symposium, CMMR 2007, Revised Papers. Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), vol. 4969 LNCS, pp. 34-55, 4th International Symposium on Computer Music Modeling and Retrieval: Sense of Sounds, CMMR 2007, Copenhagen, Denmark, 8/27/07. https://doi.org/10.1007/978-3-540-85035-9_3
Swaminathan D, Thornburg H, Ingalls T, Rajko S, James J, Campana E et al. Capturing expressive and indicative qualities of conducting gesture: An application of temporal expectancy models. In Computer Music Modeling and Retrieval: Sense of Sounds - 4th International Symposium, CMMR 2007, Revised Papers. 2008. p. 34-55. (Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)). https://doi.org/10.1007/978-3-540-85035-9_3
Swaminathan, Dilip ; Thornburg, Harvey ; Ingalls, Todd ; Rajko, Stjepan ; James, Jodi ; Campana, Ellen ; Afanador, Kathleya ; Leistikow, Randal. / Capturing expressive and indicative qualities of conducting gesture: An application of temporal expectancy models. Computer Music Modeling and Retrieval: Sense of Sounds - 4th International Symposium, CMMR 2007, Revised Papers. 2008. pp. 34-55 (Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)).
@inproceedings{93386b50794d41c3baca5e85b9acd476,
title = "Capturing expressive and indicative qualities of conducting gesture: An application of temporal expectancy models",
abstract = "Many event sequences in everyday human movement exhibit temporal structure: for instance, footsteps in walking, the striking of balls in a tennis match, the movements of a dancer set to rhythmic music, and the gestures of an orchestra conductor. These events generate prior expectancies regarding the occurrence of future events. Moreover, these expectancies play a critical role in conveying expressive qualities and communicative intent through the movement; thus they are of considerable interest in musical control contexts. To this end, we introduce a novel Bayesian framework, which we call the temporal expectancy model, and use it to develop an analysis tool for capturing expressive and indicative qualities of conducting gesture based on temporal expectancies. The temporal expectancy model is a general dynamic Bayesian network (DBN) that can be used to encode prior knowledge regarding temporal structure to improve event segmentation. The conducting analysis tool infers beat and tempo, which are indicative, and articulation, which is expressive, as well as temporal expectancies regarding beat (ictus and preparation instances) from conducting gesture. Experimental results using our analysis framework reveal a very strong correlation between articulation (staccato vs. legato) and how significantly the preparation expectancy builds up, which bolsters the case for temporal expectancy as a cognitive model for event anticipation and as a key factor in the communication of expressive qualities of conducting gesture. Our system operates on data obtained from a marker-based motion capture system but can be easily adapted to more affordable technologies such as video camera arrays.",
author = "Dilip Swaminathan and Harvey Thornburg and Todd Ingalls and Stjepan Rajko and Jodi James and Ellen Campana and Kathleya Afanador and Randal Leistikow",
year = "2008",
month = "9",
day = "2",
doi = "10.1007/978-3-540-85035-9_3",
language = "English (US)",
isbn = "3540850341",
series = "Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)",
pages = "34--55",
booktitle = "Computer Music Modeling and Retrieval",

}

TY - GEN

T1 - Capturing expressive and indicative qualities of conducting gesture

T2 - An application of temporal expectancy models

AU - Swaminathan, Dilip

AU - Thornburg, Harvey

AU - Ingalls, Todd

AU - Rajko, Stjepan

AU - James, Jodi

AU - Campana, Ellen

AU - Afanador, Kathleya

AU - Leistikow, Randal

PY - 2008/9/2

Y1 - 2008/9/2

N2 - Many event sequences in everyday human movement exhibit temporal structure: for instance, footsteps in walking, the striking of balls in a tennis match, the movements of a dancer set to rhythmic music, and the gestures of an orchestra conductor. These events generate prior expectancies regarding the occurrence of future events. Moreover, these expectancies play a critical role in conveying expressive qualities and communicative intent through the movement; thus they are of considerable interest in musical control contexts. To this end, we introduce a novel Bayesian framework, which we call the temporal expectancy model, and use it to develop an analysis tool for capturing expressive and indicative qualities of conducting gesture based on temporal expectancies. The temporal expectancy model is a general dynamic Bayesian network (DBN) that can be used to encode prior knowledge regarding temporal structure to improve event segmentation. The conducting analysis tool infers beat and tempo, which are indicative, and articulation, which is expressive, as well as temporal expectancies regarding beat (ictus and preparation instances) from conducting gesture. Experimental results using our analysis framework reveal a very strong correlation between articulation (staccato vs. legato) and how significantly the preparation expectancy builds up, which bolsters the case for temporal expectancy as a cognitive model for event anticipation and as a key factor in the communication of expressive qualities of conducting gesture. Our system operates on data obtained from a marker-based motion capture system but can be easily adapted to more affordable technologies such as video camera arrays.

AB - Many event sequences in everyday human movement exhibit temporal structure: for instance, footsteps in walking, the striking of balls in a tennis match, the movements of a dancer set to rhythmic music, and the gestures of an orchestra conductor. These events generate prior expectancies regarding the occurrence of future events. Moreover, these expectancies play a critical role in conveying expressive qualities and communicative intent through the movement; thus they are of considerable interest in musical control contexts. To this end, we introduce a novel Bayesian framework, which we call the temporal expectancy model, and use it to develop an analysis tool for capturing expressive and indicative qualities of conducting gesture based on temporal expectancies. The temporal expectancy model is a general dynamic Bayesian network (DBN) that can be used to encode prior knowledge regarding temporal structure to improve event segmentation. The conducting analysis tool infers beat and tempo, which are indicative, and articulation, which is expressive, as well as temporal expectancies regarding beat (ictus and preparation instances) from conducting gesture. Experimental results using our analysis framework reveal a very strong correlation between articulation (staccato vs. legato) and how significantly the preparation expectancy builds up, which bolsters the case for temporal expectancy as a cognitive model for event anticipation and as a key factor in the communication of expressive qualities of conducting gesture. Our system operates on data obtained from a marker-based motion capture system but can be easily adapted to more affordable technologies such as video camera arrays.

UR - http://www.scopus.com/inward/record.url?scp=50349095384&partnerID=8YFLogxK

UR - http://www.scopus.com/inward/citedby.url?scp=50349095384&partnerID=8YFLogxK

U2 - 10.1007/978-3-540-85035-9_3

DO - 10.1007/978-3-540-85035-9_3

M3 - Conference contribution

AN - SCOPUS:50349095384

SN - 3540850341

SN - 9783540850342

T3 - Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)

SP - 34

EP - 55

BT - Computer Music Modeling and Retrieval

ER -