User interaction in mixed reality interactive storytelling

Handle:
http://hdl.handle.net/10149/100381
Title:
User interaction in mixed reality interactive storytelling
Book Title:
Proceedings of the 2nd IEEE/ACM international symposium on mixed and augmented reality
Authors:
Cavazza, M. O. (Marc); Martin, O. (Olivier); Charles, F. (Fred); Marichal, X. (Xavier); Mead, S. J. (Steven)
Affiliation:
University of Teesside. School of Computing and Mathematics.
Citation:
Cavazza, M. O. et al. (2003) 'User interaction in mixed reality interactive storytelling', in Proceedings of the 2nd IEEE/ACM international symposium on mixed and augmented reality. Washington: IEEE, p. 304.
Publisher:
IEEE
Conference:
2nd IEEE/ACM international symposium on mixed and augmented reality
Issue Date:
2003
URI:
http://hdl.handle.net/10149/100381
Additional Links:
http://portal.acm.org/citation.cfm?id=946847
Abstract:
One promising application of Mixed Reality to entertainment has been the development of “interactive theatre” systems involving synthetic actors. However, these systems do not make use of the most recent advances in interactive storytelling technologies, which use Artificial Intelligence techniques to generate real-time narratives featuring synthetic characters and supporting user intervention in the plot. In this paper, the authors describe a Mixed Reality system based on a “magic mirror” model (Figure 1), in which the user’s image is captured in real time by a video camera, extracted from his/her background, and mixed with a 3D graphic model of a virtual stage that includes the synthetic characters taking part in the story. The resulting image is projected on a large screen facing the user, who sees his/her own image embedded in the virtual stage with the synthetic actors. The graphic component of the Mixed Reality world is based on a game engine, Unreal Tournament 2003™. This engine not only performs graphic rendering and character animation but also incorporates a new version of our previously described storytelling engine [1]. A single 2D camera facing the user analyses the image in real time by segmenting the user’s contours [2]. The objective behind segmentation is twofold. Firstly, it extracts the image silhouette of the user so that it can be injected into the virtual setting on the projection screen. Secondly, it recognises user behaviour, including symbolic gestures, in real time, thereby supporting a new interaction channel.
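The segment-and-composite step of the “magic mirror” described above can be sketched in a few lines. The paper’s actual contour-segmentation method [2] is not reproduced here; this illustration substitutes naive per-pixel background differencing, and all function names are hypothetical:

```python
import numpy as np

def extract_silhouette(frame, background, threshold=30):
    """Segment the user from a known static background by per-pixel
    differencing (a simplified stand-in for the contour segmentation of [2])."""
    diff = np.abs(frame.astype(np.int16) - background.astype(np.int16))
    # A pixel belongs to the user if any colour channel differs strongly enough.
    return diff.max(axis=-1) > threshold  # boolean H x W mask

def composite(frame, mask, virtual_stage):
    """Inject the user's silhouette into the rendered virtual stage,
    producing the mixed image projected back at the user."""
    out = virtual_stage.copy()
    out[mask] = frame[mask]  # user pixels replace the rendered background
    return out

# Toy 4x4 RGB example: black background, "user" in the top-left corner.
bg = np.zeros((4, 4, 3), dtype=np.uint8)
frame = bg.copy()
frame[:2, :2] = 200                              # bright user pixels
stage = np.full((4, 4, 3), 50, dtype=np.uint8)   # rendered virtual stage

mask = extract_silhouette(frame, bg)
mixed = composite(frame, mask, stage)
```

In the real system the “virtual stage” frame would come from the game engine’s renderer each tick, and the same mask could feed a separate gesture-recognition stage, since the silhouette’s shape over time is what the second interaction channel analyses.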
Type:
Meetings and Proceedings; Book Chapter
Language:
en
Keywords:
mixed reality; interactive storytelling; artificial intelligence; real time
ISBN:
0769520065
Rights:
Author can archive publisher's version/PDF. For full details see http://www.sherpa.ac.uk/romeo/ [Accessed 07/06/2010]
Citation Count:
0 [Web of Science, 07/06/2010]

All Items in TeesRep are protected by copyright, with all rights reserved, unless otherwise indicated.