Computer-Generating emotional music: The design of an affective music algorithm

Isaac Wallis, Todd Ingalls, Ellen Campana

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

31 Scopus citations

Abstract

This paper explores one way to use music in the context of affective design. We have built a real-time music generator designed around valence and arousal, two components of certain dimensional models of emotion. When set to a desired valence and arousal, the algorithm plays music corresponding to the intersection of these two parameters. We designed the algorithm using psychological theories of emotion and parameterized musical features whose affective impact has been tested. The result is a modular algorithm design whose parameters can be reused in other affective music algorithms. We describe our implementation of these parameters and our strategy for manipulating them to generate musical emotion. Finally, we discuss possible applications of these techniques in the arts, medical systems, and research. We believe that further work will yield a music generator that can produce, on command, music with any of a wide variety of commonly perceived emotional connotations.
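The valence/arousal-driven design described in the abstract can be illustrated with a minimal sketch. The specific musical features below (tempo, loudness, mode, articulation) and the linear mappings onto the valence and arousal axes are illustrative assumptions for the sketch, not the authors' published parameterization.

```python
# Minimal sketch of a valence/arousal-to-music parameter mapping.
# Feature choices and mapping ranges are assumptions for illustration,
# not the exact parameterization used in the paper.

from dataclasses import dataclass


@dataclass
class MusicParams:
    tempo_bpm: float      # faster tempo commonly associated with higher arousal
    loudness: float       # 0.0 (quiet) to 1.0 (loud), also tied to arousal
    mode: str             # major vs. minor, commonly tied to valence
    articulation: float   # 0.0 (legato) to 1.0 (staccato)


def map_affect_to_music(valence: float, arousal: float) -> MusicParams:
    """Map a (valence, arousal) point, each in [-1, 1], to musical parameters."""
    # Clamp inputs so out-of-range requests still yield sensible settings.
    valence = max(-1.0, min(1.0, valence))
    arousal = max(-1.0, min(1.0, arousal))

    return MusicParams(
        tempo_bpm=90.0 + 40.0 * arousal,       # 50-130 BPM across the arousal axis
        loudness=0.5 + 0.4 * arousal,          # louder when arousal is high
        mode="major" if valence >= 0.0 else "minor",
        articulation=0.5 + 0.5 * arousal,      # more detached playing at high arousal
    )


if __name__ == "__main__":
    # Low valence, high arousal -- roughly the "tense/angry" quadrant.
    print(map_affect_to_music(valence=-0.8, arousal=0.7))
```

In a real-time generator, a mapping like this would be re-evaluated whenever the desired valence and arousal change, with the resulting parameters fed to the synthesis or sequencing layer.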

Original language: English (US)
Title of host publication: Proceedings - 11th International Conference on Digital Audio Effects, DAFx 2008
Pages: 7-12
Number of pages: 6
State: Published - 2008
Event: 11th International Conference on Digital Audio Effects, DAFx 2008 - Espoo, Finland
Duration: Sep 1 2008 - Sep 4 2008

Publication series

Name: Proceedings of the International Conference on Digital Audio Effects, DAFx
ISSN (Print): 2413-6700
ISSN (Electronic): 2413-6689

Conference

Conference: 11th International Conference on Digital Audio Effects, DAFx 2008
Country/Territory: Finland
City: Espoo
Period: 9/1/08 - 9/4/08

ASJC Scopus subject areas

  • Signal Processing
