A new gestural control paradigm for musical expression: Real-time conducting analysis via temporal expectancy models

Dilip Swaminathan, Harvey Thornburg, Todd Ingalls, Jodi James, Stjepan Rajko, Kathleya Afanador

Research output: Contribution to conference › Paper

Abstract

Most event sequences in everyday human movement exhibit temporal structure: for instance, footsteps in walking, the striking of balls in a tennis match, the movements of a dancer set to rhythmic music, and the gestures of an orchestra conductor. These events generate prior expectancies regarding the occurrence of future events. Moreover, these expectancies play a critical role in conveying expressive qualities and communicative intent through the movement; thus they are of considerable interest in expressive musical control contexts. To this end, we introduce a novel gestural control paradigm for musical expression based on temporal expectancies induced by human movement via a general Bayesian framework called the temporal expectancy network. We realize this paradigm in the form of a conducting analysis tool which infers beat, tempo, and articulation jointly with temporal expectancies regarding beat (ictus and preparation instances) from conducting gesture. Our system operates on data obtained from a marker-based motion capture system, but can be easily adapted for more affordable technologies combining video cameras and inertial sensors. Using our analysis framework, we observe a significant effect on the patterns of temporal expectancies generated through varying expressive qualities of the gesture (e.g., staccato vs. legato articulation), which at least partially confirms the role of temporal expectancies in musical expression.
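The core idea — maintaining a probabilistic expectancy about when the next beat (ictus) will occur, and updating it as beats are observed — can be illustrated with a minimal sketch. Note this is a hypothetical simplification for intuition only, not the paper's temporal expectancy network: it models inter-beat intervals with a single Gaussian tempo belief updated in Kalman-filter style, whereas the paper performs joint Bayesian inference over beat, tempo, and articulation from motion-capture features.

```python
class BeatExpectancy:
    """Hypothetical sketch: a Gaussian expectancy over the next ictus time.

    Not the authors' model. Tracks a tempo (inter-beat interval) estimate
    and its uncertainty, refined with each observed beat event.
    """

    def __init__(self, initial_period=0.5, period_var=0.01):
        self.period = initial_period   # expected inter-beat interval (seconds)
        self.period_var = period_var   # variance of that belief
        self.last_beat = None          # time of the most recent observed ictus

    def observe_beat(self, t, obs_var=0.005):
        """Update the tempo belief from an observed ictus at time t (seconds)."""
        if self.last_beat is not None:
            interval = t - self.last_beat
            # Standard Gaussian (Kalman-style) fusion of the observed
            # interval with the prior period estimate.
            k = self.period_var / (self.period_var + obs_var)
            self.period += k * (interval - self.period)
            self.period_var *= (1.0 - k)
        self.last_beat = t

    def expected_next(self):
        """Mean time and variance of the expectancy for the next ictus."""
        return self.last_beat + self.period, self.period_var
```

In this toy model, an expressive quality such as staccato vs. legato would manifest as differently shaped expectancy distributions (e.g., sharper vs. broader variance) around the predicted ictus, which loosely mirrors the effect the paper reports observing in its framework.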

Original language: English (US)
Pages: 348-355
Number of pages: 8
State: Published - Jan 1 2007
Event: International Computer Music Conference, ICMC 2007 - Copenhagen, Denmark
Duration: Aug 27, 2007 - Aug 31, 2007



ASJC Scopus subject areas

  • Media Technology
  • Computer Science Applications
  • Music

Cite this

Swaminathan, D., Thornburg, H., Ingalls, T., James, J., Rajko, S., & Afanador, K. (2007). A new gestural control paradigm for musical expression: Real-time conducting analysis via temporal expectancy models. 348-355. Paper presented at International Computer Music Conference, ICMC 2007, Copenhagen, Denmark.
