Conditional Generation and Snapshot Learning in Neural Dialogue Systems

Tsung-Hsien Wen, Milica Gasic, Nikola Mrkšić, Lina M. Rojas Barahona, Pei-Hao Su, Stefan Ultes, David Vandyke, Steve Young
University of Cambridge


Abstract

Recently, a variety of LSTM-based conditional language models (LMs) have been applied across a range of language generation tasks. In this work we study various model architectures and different ways to represent and aggregate the source information in an end-to-end neural dialogue system framework. A method called snapshot learning is also proposed to facilitate learning from supervised sequential signals by applying a companion cross-entropy objective function to the conditioning vector. The experimental and analytical results demonstrate, firstly, that competition occurs between the conditioning vector and the LM, with the differing architectures providing different trade-offs between the two. Secondly, the discriminative power and transparency of the conditioning vector are key to both model interpretability and better performance. Thirdly, snapshot learning leads to consistent performance improvements independent of which architecture is used.
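
To make the companion objective concrete, the following is a minimal PyTorch-style sketch of a conditional LSTM language model trained with both the usual next-word cross-entropy and an auxiliary (companion) cross-entropy on a conditioning vector predicted from the decoder hidden state. The class name, feature dimensions, and the construction of the step-wise snapshot targets are illustrative assumptions, not the authors' implementation.

```python
# Sketch: conditional LSTM LM with a companion cross-entropy objective on
# the conditioning vector (snapshot learning). Names and dimensions are
# hypothetical; only the overall training signal mirrors the idea above.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SnapshotConditionalLM(nn.Module):
    def __init__(self, vocab_size, embed_dim, hidden_dim, cond_dim):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        # The decoder LSTM consumes the word embedding concatenated with the
        # conditioning (source) vector at every step.
        self.lstm = nn.LSTMCell(embed_dim + cond_dim, hidden_dim)
        self.word_out = nn.Linear(hidden_dim, vocab_size)
        # Predicts the conditioning features from the hidden state; used only
        # by the companion objective during training.
        self.cond_out = nn.Linear(hidden_dim, cond_dim)

    def forward(self, words, cond):
        # words: (T, B) token ids; cond: (B, cond_dim) source features
        T, B = words.shape
        h = cond.new_zeros(B, self.lstm.hidden_size)
        c = torch.zeros_like(h)
        word_logits, cond_logits = [], []
        for t in range(T):
            x = torch.cat([self.embed(words[t]), cond], dim=-1)
            h, c = self.lstm(x, (h, c))
            word_logits.append(self.word_out(h))
            cond_logits.append(self.cond_out(h))
        return torch.stack(word_logits), torch.stack(cond_logits)

def training_loss(word_logits, cond_logits, targets, snapshot_targets, alpha=0.5):
    # Standard LM cross-entropy on the next-word distribution ...
    lm_loss = F.cross_entropy(word_logits.view(-1, word_logits.size(-1)),
                              targets.view(-1))
    # ... plus a companion cross-entropy asking each hidden state to predict
    # which conditioning features are still active at that step.
    companion = F.binary_cross_entropy_with_logits(cond_logits, snapshot_targets)
    return lm_loss + alpha * companion
```

In this sketch the weight alpha balancing the two objectives is an assumed hyperparameter; the key point is that the supervised sequential signal on the conditioning vector is applied alongside, rather than instead of, the usual language-model loss.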