Neural-based Natural Language Generation in Dialogue using RNN Encoder-Decoder with Semantic Aggregation

Van-Khanh Tran, Le-Minh Nguyen, Satoshi Tojo


Abstract
Natural language generation (NLG) is an important component in spoken dialogue systems. This paper presents a model called Encoder-Aggregator-Decoder, which is an extension of a Recurrent Neural Network-based Encoder-Decoder architecture. The proposed Semantic Aggregator consists of two components: an Aligner and a Refiner. The Aligner is a conventional attention mechanism computed over the encoded input information, while the Refiner is a further attention or gating mechanism stacked over the attentive Aligner to further select and aggregate the semantic elements. The proposed model can be trained to jointly perform sentence planning and surface realization to produce natural language utterances. The model was extensively assessed on four different NLG domains; the experimental results show that the proposed generator consistently outperforms previous methods on all of them.
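
The following is a minimal sketch of the Aligner-then-Refiner idea described in the abstract, assuming standard additive attention for the Aligner and a sigmoid gate for the Refiner; the variable names and dimensions are illustrative only and are not taken from the paper.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

# Toy dimensions: T encoded semantic elements of size d, and a decoder state of size d.
T, d = 5, 8
rng = np.random.default_rng(0)
H = rng.normal(size=(T, d))      # encoder outputs (one vector per semantic element)
s = rng.normal(size=d)           # current decoder hidden state

# --- Aligner: conventional additive attention over the encoded input ---
Wa, Ua, va = rng.normal(size=(d, d)), rng.normal(size=(d, d)), rng.normal(size=d)
scores = np.array([va @ np.tanh(Wa @ h + Ua @ s) for h in H])
alpha = softmax(scores)          # attention weights over semantic elements
context = alpha @ H              # attentive aggregation of the input

# --- Refiner: a gating layer stacked on the attentive context ---
Wg = rng.normal(size=(d, d))
gate = 1.0 / (1.0 + np.exp(-(Wg @ context)))   # element-wise sigmoid gate
refined = gate * context         # further selects and aggregates semantic information

print(refined.shape)             # (d,) vector passed to the decoder at this time step
```

In this sketch the Refiner simply rescales the attentive context element-wise; the paper also considers an attention-style Refiner, which would replace the gate with a second weighting over the encoded elements.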
Anthology ID:
W17-5528
Volume:
Proceedings of the 18th Annual SIGdial Meeting on Discourse and Dialogue
Month:
August
Year:
2017
Address:
Saarbrücken, Germany
Editors:
Kristiina Jokinen, Manfred Stede, David DeVault, Annie Louis
Venue:
SIGDIAL
SIG:
SIGDIAL
Publisher:
Association for Computational Linguistics
Pages:
231–240
URL:
https://aclanthology.org/W17-5528
DOI:
10.18653/v1/W17-5528
Cite (ACL):
Van-Khanh Tran, Le-Minh Nguyen, and Satoshi Tojo. 2017. Neural-based Natural Language Generation in Dialogue using RNN Encoder-Decoder with Semantic Aggregation. In Proceedings of the 18th Annual SIGdial Meeting on Discourse and Dialogue, pages 231–240, Saarbrücken, Germany. Association for Computational Linguistics.
Cite (Informal):
Neural-based Natural Language Generation in Dialogue using RNN Encoder-Decoder with Semantic Aggregation (Tran et al., SIGDIAL 2017)
PDF:
https://aclanthology.org/W17-5528.pdf