Real-time vocal emotion recognition in artistic installations and interactive storytelling: Experiences and lessons learnt from CALLAS and IRIS

Hdl Handle:
http://hdl.handle.net/10149/100195
Title:
Real-time vocal emotion recognition in artistic installations and interactive storytelling: Experiences and lessons learnt from CALLAS and IRIS
Book Title:
Proceedings - 2009 3rd international conference on affective computing and intelligent interaction and workshops, ACII 2009
Authors:
Vogt, T. (Thurid); André, E. (Elisabeth); Wagner, J. (Johannes); Gilroy, S. W. (Stephen); Charles, F. (Fred); Cavazza, M. O. (Marc)
Affiliation:
University of Teesside. School of Computing.
Citation:
Vogt, T. et al. (2009) 'Real-time vocal emotion recognition in artistic installations and interactive storytelling: Experiences and lessons learnt from CALLAS and IRIS', 2009 3rd international conference on affective computing and intelligent interaction and workshops, ACII 2009, Amsterdam, September 10 - 12, in Proceedings - 2009 3rd international conference on affective computing and intelligent interaction and workshops. IEEE, pp. 1-8.
Publisher:
IEEE
Conference:
2009 3rd international conference on affective computing and intelligent interaction and workshops, ACII 2009, Amsterdam, September 10 - 12, 2009
Issue Date:
2009
URI:
http://hdl.handle.net/10149/100195
DOI:
10.1109/ACII.2009.5349501
Abstract:
Most emotion recognition systems still rely exclusively on prototypical emotional vocal expressions that may be uniquely assigned to a particular class. In realistic applications, however, there is no guarantee that emotions are expressed in a prototypical manner. In this paper, we report on challenges that arise when coping with non-prototypical emotions in the context of the CALLAS project and the IRIS network. CALLAS aims to develop interactive art installations that respond in real time to the multimodal emotional input of performers and spectators. IRIS is concerned with the development of novel technologies for interactive storytelling. Both research initiatives represent an extreme case of non-prototypicality, since neither the stimuli nor the emotional responses to stimuli may be considered prototypical.
Type:
Meetings and Proceedings; Book Chapter
Language:
en
Keywords:
emotion recognition; extreme case; interactive arts; interactive storytelling; multi-modal; realistic applications; research initiatives; vocal expression
ISBN:
9781424447992
Rights:
Author can archive publisher's version/PDF. For full details see http://www.sherpa.ac.uk/romeo/ [Accessed 03/06/2010]
Citation Count:
0 [Scopus, 03/06/2010]

All Items in TeesRep are protected by copyright, with all rights reserved, unless otherwise indicated.