Sentiment Analysis using Imperfect Views from Spoken Language and Acoustic Modalities

Imran Sheikh, Sri Harsha Dumpala, Rupayan Chakraborty, Sunil Kumar Kopparapu


Abstract
Multimodal sentiment classification in practical applications may have to rely on erroneous and imperfect views, namely (a) language transcriptions from a speech recognizer and (b) under-performing acoustic views. This work focuses on improving the representations of these views by performing deep canonical correlation analysis with the representations of the better-performing manual transcription view. Enhanced representations of the imperfect views can be obtained even in the absence of the perfect view, and they yield improved performance at test time. Evaluations on the CMU-MOSI and CMU-MOSEI datasets demonstrate the effectiveness of the proposed approach.
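The core operation behind the abstract, deep canonical correlation analysis (DCCA; Andrew et al., 2013), trains view-specific encoders so that the total canonical correlation between the imperfect-view representations and the manual-transcription representations is maximized. Below is a minimal sketch of that correlation objective, assuming PyTorch; the function name and the ridge parameter `eps` are illustrative choices, not taken from the paper's implementation.

```python
import torch

def cca_correlation(h1: torch.Tensor, h2: torch.Tensor, eps: float = 1e-4) -> torch.Tensor:
    """Total canonical correlation between two minibatch views.

    h1, h2: (N, d) encoder outputs for the same N utterances,
    e.g. the ASR-transcript view and the manual-transcript view.
    Training would minimize the negative of this value.
    """
    n = h1.shape[0]
    h1 = h1 - h1.mean(dim=0, keepdim=True)  # center each view
    h2 = h2 - h2.mean(dim=0, keepdim=True)

    # Cross- and auto-covariance matrices, with a small ridge for stability.
    s12 = h1.t() @ h2 / (n - 1)
    s11 = h1.t() @ h1 / (n - 1) + eps * torch.eye(h1.shape[1])
    s22 = h2.t() @ h2 / (n - 1) + eps * torch.eye(h2.shape[1])

    def inv_sqrt(s: torch.Tensor) -> torch.Tensor:
        # Inverse matrix square root via eigendecomposition (s is symmetric PSD).
        w, v = torch.linalg.eigh(s)
        return v @ torch.diag(w.clamp_min(eps).rsqrt()) @ v.t()

    # Canonical correlations are the singular values of the whitened cross-covariance.
    t = inv_sqrt(s11) @ s12 @ inv_sqrt(s22)
    return torch.linalg.svdvals(t).sum()
```

In this setup, the manual-transcription encoder is needed only during training; at test time the trained encoders for the imperfect ASR and acoustic views are applied on their own, which matches the abstract's claim that enhanced representations are available without the perfect view.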
Anthology ID:
W18-3305
Volume:
Proceedings of Grand Challenge and Workshop on Human Multimodal Language (Challenge-HML)
Month:
July
Year:
2018
Address:
Melbourne, Australia
Editors:
Amir Zadeh, Paul Pu Liang, Louis-Philippe Morency, Soujanya Poria, Erik Cambria, Stefan Scherer
Venue:
ACL
Publisher:
Association for Computational Linguistics
Pages:
35–39
URL:
https://aclanthology.org/W18-3305
DOI:
10.18653/v1/W18-3305
Cite (ACL):
Imran Sheikh, Sri Harsha Dumpala, Rupayan Chakraborty, and Sunil Kumar Kopparapu. 2018. Sentiment Analysis using Imperfect Views from Spoken Language and Acoustic Modalities. In Proceedings of Grand Challenge and Workshop on Human Multimodal Language (Challenge-HML), pages 35–39, Melbourne, Australia. Association for Computational Linguistics.
Cite (Informal):
Sentiment Analysis using Imperfect Views from Spoken Language and Acoustic Modalities (Sheikh et al., ACL 2018)
PDF:
https://aclanthology.org/W18-3305.pdf
Data
Multimodal Opinion-level Sentiment Intensity