Integrating Audio and Visual Information for Modelling Communicative Behaviours Perceived as Different

Michelina Savino, Laura Scivetti, Mario Refice


Abstract
In human face-to-face interaction, participants can rely on a range of audio-visual cues for interpreting their interlocutors’ communicative intentions, and such information strongly contributes to the success of communication. Modelling these typically human abilities is a major objective in human communication research, including technological applications such as human-machine interaction. In this pilot study we explore the possibility of using audio-visual parameters for describing/measuring the differences perceived in an interlocutor’s communicative behaviours. Preliminary results derived from the multimodal analysis of a single subject seem to indicate that measuring the distribution of temporally co-occurring prosodic and hand gesture events helps account for such perceived differences. Moreover, as far as gesture events are concerned, we observed that the relevant information is not simply to be found in the occurrences of single gestures, but mainly in certain gesture modalities (for example, ’single stroke’ vs ’multiple stroke’ gestures, one-hand vs both-hands gestures, etc.). In this paper we also introduce and describe ViSuite, a software package we developed for multimodal processing and used for the work described in this paper.
Anthology ID:
L08-1464
Volume:
Proceedings of the Sixth International Conference on Language Resources and Evaluation (LREC'08)
Month:
May
Year:
2008
Address:
Marrakech, Morocco
Editors:
Nicoletta Calzolari, Khalid Choukri, Bente Maegaard, Joseph Mariani, Jan Odijk, Stelios Piperidis, Daniel Tapias
Venue:
LREC
Publisher:
European Language Resources Association (ELRA)
URL:
http://www.lrec-conf.org/proceedings/lrec2008/pdf/448_paper.pdf
Cite (ACL):
Michelina Savino, Laura Scivetti, and Mario Refice. 2008. Integrating Audio and Visual Information for Modelling Communicative Behaviours Perceived as Different. In Proceedings of the Sixth International Conference on Language Resources and Evaluation (LREC'08), Marrakech, Morocco. European Language Resources Association (ELRA).
Cite (Informal):
Integrating Audio and Visual Information for Modelling Communicative Behaviours Perceived as Different (Savino et al., LREC 2008)