Real-Time Speech Emotion and Sentiment Recognition for Interactive Dialogue Systems

Dario Bertero1, Farhad Bin Siddique1, Chien-Sheng Wu2, Yan Wan1, Ricky Ho Yin Chan1, Pascale Fung1
1Human Language Technology Center, The Hong Kong University of Science and Technology, 2National Taiwan University


Abstract

In this paper, we describe our approach to enabling an interactive dialogue system to recognize user emotion and sentiment in real time. These modules allow an otherwise conventional dialogue system to show “empathy” and respond to the user with an awareness of their emotion and intent. Emotion recognition from speech has previously consisted of a feature engineering stage followed by machine learning, where the former introduces a delay at decoding time. We describe a convolutional neural network (CNN) model that extracts emotion from raw speech input without feature engineering. This approach achieves an average accuracy of 65.7% on six emotion categories, a 4.5% improvement over conventional feature-based SVM classification. A separate CNN-based sentiment analysis module recognizes sentiment from speech recognition results, achieving a 74.8 F-measure on human-machine dialogues when trained on out-of-domain data.
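To illustrate the core idea of the emotion module, here is a minimal sketch (not the paper's exact architecture: the layer shapes, kernel sizes, and the 16 kHz sampling assumption are all hypothetical) of a CNN in PyTorch that classifies emotion directly from raw waveform samples, with no hand-engineered acoustic features:

```python
import torch
import torch.nn as nn

class RawSpeechEmotionCNN(nn.Module):
    """Toy CNN mapping raw waveform windows to emotion logits."""
    def __init__(self, num_emotions=6):
        super().__init__()
        # The first convolution over raw samples acts as a learned
        # feature extractor, replacing hand-engineered acoustic features.
        self.conv = nn.Sequential(
            # ~25 ms receptive field, 10 ms hop, assuming 16 kHz audio
            nn.Conv1d(1, 64, kernel_size=400, stride=160),
            nn.ReLU(),
            nn.Conv1d(64, 128, kernel_size=8, stride=2),
            nn.ReLU(),
        )
        self.classifier = nn.Linear(128, num_emotions)

    def forward(self, waveform):
        # waveform: (batch, num_samples) of raw PCM values
        x = self.conv(waveform.unsqueeze(1))  # (batch, 128, time)
        x = x.mean(dim=2)                     # average-pool over time
        return self.classifier(x)             # (batch, num_emotions)

model = RawSpeechEmotionCNN()
logits = model(torch.randn(4, 16000))  # four 1-second clips at 16 kHz
print(logits.shape)                    # torch.Size([4, 6])
```

Because the convolution consumes the waveform directly, there is no separate feature-extraction pass at decoding time, which is what makes this style of model attractive for real-time dialogue systems.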