Improving Context Modelling in Multimodal Dialogue Generation

In this work, we investigate the task of textual response generation in a multimodal task-oriented dialogue system. Our work is based on the recently released Multimodal Dialogue (MMD) dataset (Saha et al., 2017) in the fashion domain. We introduce a multimodal extension to the Hierarchical Recurrent Encoder-Decoder (HRED) model and show that this extension outperforms strong baselines in terms of text-based similarity metrics. We also showcase the shortcomings of current vision and language models by performing an error analysis on our system’s output.


Introduction
This work aims to learn strategies for textual response generation in a multimodal conversation directly from data. Conversational AI has great potential for online retail: it greatly enhances user experience and in turn directly affects user retention (Chai et al., 2000), especially if the interaction is multimodal in nature. So far, most conversational agents are uni-modal, ranging from open-domain conversation (Ram et al., 2018; Papaioannou et al., 2017; Fang et al., 2017) to task-oriented dialogue systems (Rieser and Lemon, 2010, 2011; Young et al., 2013; Singh et al., 2000; Wen et al., 2016). While recent progress in deep learning has unified research at the intersection of vision and language, the availability of open-source multimodal dialogue datasets still remains a bottleneck.
This research makes use of a recently released Multimodal Dialogue (MMD) dataset (Saha et al., 2017), which contains multiple dialogue sessions in the fashion domain. The MMD dataset provides an interesting new challenge, combining recent efforts on task-oriented dialogue systems, as well as visually grounded dialogue. In contrast to simple QA tasks in visually grounded dialogue, e.g. (Antol et al., 2015), it contains conversations with a clear end-goal. However, in contrast to previous slot-filling dialogue systems, e.g. (Rieser and Lemon, 2011; Young et al., 2013), it heavily relies on the extra visual modality to drive the conversation forward (see Figure 1).
In the following, we propose a fully data-driven response generation model for this task. Our model grounds the system's textual response in both language and images by learning the semantic correspondence between them, while modelling long-term dialogue context.

Figure 1: Example of a user-agent interaction in the fashion domain. In this work, we are interested in textual response generation for a user query. Both the user query and the agent response can be multimodal in nature.

Model: Multimodal HRED over multiple images
Our model is an extension of the recently introduced Hierarchical Recurrent Encoder-Decoder (HRED) architecture (Serban et al., 2016, 2017; Lu et al., 2016). In contrast to standard sequence-to-sequence models (Cho et al., 2014; Sutskever et al., 2014; Bahdanau et al., 2015), HREDs model the dialogue context by introducing a context Recurrent Neural Network (RNN) over the encoder RNN, thus forming a hierarchical encoder. We build on top of the HRED architecture to include multimodality over multiple images. A simple HRED consists of three RNN modules: encoder, context and decoder. In the multimodal HRED, we combine the output representations from the utterance encoder with concatenated multiple image representations and pass them as input to the context encoder (see Figure 2). A dialogue is modelled as a sequence of utterances (turns), which in turn are modelled as sequences of words and images. Formally, a dialogue is generated according to:

$$P_\theta(t_1, \ldots, t_N) = \prod_{n=1}^{N} P_\theta(t_n \mid t_1, \ldots, t_{n-1}),$$

where $t_n$ is the $n$-th utterance in a dialogue. For each $m = 1, \ldots, M_n$, the hidden states of each module are defined as:

$$h^{text}_{n,m} = f^{text}_\theta\big(h^{text}_{n,m-1}, w_{n,m}\big),$$
$$h^{img}_{n} = l^{img}\big(\big[\, g^{enc}_\theta(img_1);\, \ldots;\, g^{enc}_\theta(img_K) \,\big]\big),$$
$$h^{cxt}_{n} = f^{cxt}_\theta\big(h^{cxt}_{n-1}, \big[\, h^{text}_{n,M_n};\, h^{img}_{n} \,\big]\big),$$
$$h^{dec}_{n,m} = f^{dec}_\theta\big(h^{dec}_{n,m-1}, w_{n,m}\big),$$

where $f^{text}_\theta$, $f^{cxt}_\theta$ and $f^{dec}_\theta$ are GRU cells (Cho et al., 2014), $\theta$ represents the model parameters, $w_{n,m}$ is the $m$-th word in the $n$-th utterance, and $g^{enc}_\theta$ is a Convolutional Neural Network (CNN); here we use VGGNet (Simonyan and Zisserman, 2014). We pass the multiple images in a context through the CNN to obtain encoded image representations $g^{enc}_\theta(img_k)$. These are then combined and passed through a linear layer $l^{img}$ to obtain the aggregated image representation for one turn of context, denoted by $h^{img}_n$ above. The textual representation $h^{text}_{n,M_n}$ is given by the encoder RNN $f^{text}_\theta$. Both $h^{text}_{n,M_n}$ and $h^{img}_n$ are subsequently concatenated and passed as input to the context RNN. Its final hidden state, $h^{cxt}_N$, acts as the initial hidden state of the decoder RNN.
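The forward pass above can be sketched in PyTorch. This is a minimal illustrative re-implementation, not the authors' released code: all dimensions, module names and the tanh over the image projection are assumptions, and pre-extracted image features stand in for the CNN.

```python
import torch
import torch.nn as nn

class MultimodalHRED(nn.Module):
    """Illustrative sketch of the multimodal HRED step (hypothetical sizes)."""

    def __init__(self, vocab_size=1000, emb_dim=64, hid_dim=64, img_feat_dim=4096):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        # f_text: utterance-level encoder RNN
        self.enc_rnn = nn.GRU(emb_dim, hid_dim, batch_first=True)
        # l_img: aggregates the combined per-image CNN features of one turn
        self.img_linear = nn.Linear(img_feat_dim, hid_dim)
        # f_cxt: context RNN over the concatenated [h_text ; h_img] per turn
        self.cxt_rnn = nn.GRU(2 * hid_dim, hid_dim, batch_first=True)
        # f_dec: decoder RNN initialised from the final context hidden state
        self.dec_rnn = nn.GRU(emb_dim, hid_dim, batch_first=True)
        self.out = nn.Linear(hid_dim, vocab_size)

    def forward(self, turns, img_feats, target):
        # turns:     (batch, n_turns, seq_len) word ids
        # img_feats: (batch, n_turns, img_feat_dim) pre-extracted VGG features
        #            already aggregated over the images of each turn
        b, n_turns, _ = turns.shape
        turn_reprs = []
        for n in range(n_turns):
            _, h_text = self.enc_rnn(self.embed(turns[:, n]))     # (1, b, hid)
            h_img = torch.tanh(self.img_linear(img_feats[:, n]))  # (b, hid)
            turn_reprs.append(torch.cat([h_text.squeeze(0), h_img], dim=-1))
        cxt_in = torch.stack(turn_reprs, dim=1)                   # (b, n_turns, 2*hid)
        _, h_cxt = self.cxt_rnn(cxt_in)                           # (1, b, hid)
        dec_out, _ = self.dec_rnn(self.embed(target), h_cxt)      # decoder init = h_cxt
        return self.out(dec_out)                                  # word logits

model = MultimodalHRED()
turns = torch.randint(0, 1000, (2, 3, 6))  # 2 dialogues, 3 turns, 6 tokens each
imgs = torch.randn(2, 3, 4096)
tgt = torch.randint(0, 1000, (2, 5))
logits = model(turns, imgs, tgt)
print(logits.shape)  # torch.Size([2, 5, 1000])
```

Note how the per-turn image vector enters the context RNN rather than the utterance encoder, which is what lets the context encoder see text and images jointly at every turn.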
Finally, the output is generated by passing $h^{dec}_{n,m}$ through an affine transformation followed by a softmax activation. The model is trained using cross entropy on next-word prediction. During generation, the decoder conditions on the previously generated token. Saha et al. (2017) propose a similar baseline model for the MMD dataset, extending HREDs to include the visual modality. However, for simplicity's sake, they 'unroll' multiple images in a single utterance so that each utterance contains at most one image. While computationally leaner, this approach defeats the objective of capturing multimodality over a context of multiple images and text. In contrast, we combine all image representations in an utterance using a linear layer. We argue that modelling all images is necessary to answer questions that refer back to previous agent responses. For example, in Figure 3, when the user asks "what about the 4th image?", it is impossible to give a correct response without reasoning over all images in the previous response. In the following, we show empirically that our extension leads to better results in terms of text-based similarity measures, as well as the quality of generated dialogues.
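The generation step described above (conditioning the decoder on its own previous output) can be sketched as greedy decoding. This is a hypothetical illustration: the modules, vocabulary size and special-token ids are assumptions standing in for the trained model.

```python
import torch

# Hypothetical greedy decoder: at each step the decoder RNN conditions on
# the previously generated token; sizes and ids are illustrative only.
vocab_size, emb_dim, hid_dim = 30, 8, 8
embed = torch.nn.Embedding(vocab_size, emb_dim)
dec_rnn = torch.nn.GRUCell(emb_dim, hid_dim)
out = torch.nn.Linear(hid_dim, vocab_size)

def greedy_decode(h_cxt, bos_id=1, eos_id=2, max_len=10):
    token = torch.tensor([bos_id])   # start-of-sequence token
    h, generated = h_cxt, []
    for _ in range(max_len):
        h = dec_rnn(embed(token), h)   # condition on the previous token
        token = out(h).argmax(dim=-1)  # affine projection + (arg)max over vocab
        if token.item() == eos_id:
            break
        generated.append(token.item())
    return generated

tokens = greedy_decode(torch.zeros(1, hid_dim))  # h_cxt from the context RNN
print(len(tokens) <= 10)  # True
```

At training time the same projection is instead scored with cross entropy against the gold next word (teacher forcing); greedy search here could be swapped for beam search without changing the model.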
Figure 3: Example contexts for a given system utterance; note the difference between our approach and that of Saha et al. (2017) when extracting the training data from the original chat logs. For simplicity, this illustration uses a context size of 2 previous utterances; '|' separates the turns of a given context.

Our version of the dataset:
Text Context: Sorry i don't think i have any 100 % acrylic but i can show you in knit | Show me something similar to the 4th image but with the material different
Image Context: [Img 1, Img 2, Img 3, Img 4, Img 5] | [0, 0, 0, 0, 0]
Target Response: The similar looking ones are

Saha et al. (2017):
Text Context: |
Image Context: Img 4 | Img 5
Target Response: The similar looking ones are

We concatenate the representation vectors of all images in one turn of a dialogue to form the image context. If an utterance contains no image, we use a zero vector of size 4096 instead. In this work, we focus only on the textual response of the agent.
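The extraction scheme in Figure 3 can be sketched as follows. This is a hypothetical reconstruction of the preprocessing, not the released scripts: the `make_pair` helper, the dict layout and the 4-dimensional stand-in for the 4096-dimensional VGG features are all assumptions.

```python
# Hypothetical sketch of building one (context, target) training pair from
# raw chat-log turns under our scheme: all image feature vectors of a turn
# are concatenated, and turns without images get a zero vector.
IMG_DIM = 4  # stands in for the 4096-dim VGG FC6 features

def make_pair(turns, context_size=2):
    """turns: list of dicts with 'text' (str) and 'images' (list of feature
    vectors); the last turn is the target response."""
    *context, target = turns[-(context_size + 1):]
    text_ctx = [t["text"] for t in context]
    image_ctx = []
    for t in context:
        if t["images"]:
            # concatenate all image vectors of the turn into one context vector
            vec = [x for img in t["images"] for x in img]
        else:
            vec = [0.0] * IMG_DIM  # zero vector when the turn has no images
        image_ctx.append(vec)
    return text_ctx, image_ctx, target["text"]

turns = [
    {"text": "Sorry i don't think i have any 100% acrylic "
             "but i can show you in knit",
     "images": [[0.1] * IMG_DIM] * 5},  # agent turn with 5 product images
    {"text": "Show me something similar to the 4th image "
             "but with the material different",
     "images": []},                      # text-only user turn
    {"text": "The similar looking ones are", "images": []},  # target response
]
text_ctx, image_ctx, target = make_pair(turns)
print(len(image_ctx[0]), image_ctx[1], target)
# 20 [0.0, 0.0, 0.0, 0.0] The similar looking ones are
```

Under Saha et al.'s unrolling, by contrast, each of the five images would yield a separate single-image context, losing the "4th image" reference.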

Dataset
The MMD dataset (Saha et al., 2017) consists of 100k/11k/11k train/validation/test chat sessions, comprising 3.5M context-response pairs for the model. Each session contains an average of 40 dialogue turns (on average 8 words per textual response and 4 images per image response). The data contains complex user queries, which pose new challenges for multimodal, task-based dialogue, such as quantitative inference (sorting, counting and filtering), e.g. "Show me more images of the 3rd product in some different directions"; inference using domain knowledge and long-term context, e.g. "Will the 5th result go well with a large sized messenger bag?"; inference over an aggregate of images, e.g. "List more in the upper material of the 5th image and style as the 3rd and the 5th"; and co-reference resolution. Note that we started from the raw transcripts of the dialogue sessions to create our own version of the dataset for the model. We do so because the original authors treat each image as a separate context, whereas we treat all images in a single turn as one concatenated context (cf. Figure 3).

Implementation
We use the PyTorch framework (Paszke et al., 2017) for our implementation. We use a word embedding size of 512 and the same hidden dimension for all RNNs, which use GRU cells (Cho et al., 2014), with tied embeddings for the (bidirectional) encoder and the decoder. The decoder uses a Luong-style attention mechanism (Luong et al., 2015) with input feeding. We train our model with the Adam optimizer (Kingma and Ba, 2015), with a learning rate of 0.0004 and clipping the gradient norm at 5. We perform early stopping by monitoring the validation loss. For image representations, we use the FC6-layer features of VGG-19 (Simonyan and Zisserman, 2014), pre-trained on ImageNet.
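The optimizer settings above can be sketched as a minimal training-loop skeleton. The model, the placeholder loss and the data are stand-ins; only the Adam learning rate and the gradient-norm clipping threshold come from the text.

```python
import torch

# Skeleton matching the reported settings: Adam with lr = 0.0004 and
# gradient-norm clipping at 5. A plain GRU and a placeholder loss stand in
# for the full M-HRED model and its cross-entropy objective.
model = torch.nn.GRU(512, 512)
optimizer = torch.optim.Adam(model.parameters(), lr=0.0004)

def train_step(inputs):
    optimizer.zero_grad()
    out, _ = model(inputs)
    loss = out.pow(2).mean()  # placeholder; the real model uses cross entropy
    loss.backward()
    # clip the total gradient norm at 5 before the parameter update
    torch.nn.utils.clip_grad_norm_(model.parameters(), max_norm=5)
    optimizer.step()
    return float(loss)

loss = train_step(torch.randn(7, 2, 512))  # (seq_len, batch, dim)
print(loss >= 0.0)  # True
```

Early stopping would wrap this step in an epoch loop that halts once the validation loss stops improving.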

Analysis and Results
We report sentence-level BLEU-4 (Papineni et al., 2002), METEOR (Lavie and Agarwal, 2007) and ROUGE-L (Lin and Och, 2004), using the evaluation scripts provided by Sharma et al. (2017).

Table 1: Sentence-level BLEU-4, METEOR and ROUGE-L results for the response generation task on the MMD corpus. "Cxt" represents the context size considered by the model. Our best performing model is M-HRED-attn over a context of 5 turns. *Saha et al. (2017) was trained on a different version of the dataset.

Table 1 provides results for different configurations of our model ("T" stands for text-only input to the encoder, "M" for multimodal input, and "attn" for using attention in the decoder). We experimented with different context sizes and found that output quality improved with increased context size (models with a 5-turn context perform better than those with a 2-turn context), confirming the observation of Serban et al. (2016, 2017). Using attention clearly helps: even T-HRED-attn outperforms M-HRED (without attention) for the same context size. We also tested whether multimodal input has an impact on the generated outputs; however, there was only a slight increase in BLEU score (M-HRED-attn vs. T-HRED-attn).
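For intuition about the headline metric, a compact sentence-level BLEU-4 can be written in a few lines. This simplified version (uniform n-gram weights, brevity penalty, epsilon smoothing for zero counts) is only illustrative; the reported numbers come from the Sharma et al. (2017) evaluation scripts, whose smoothing may differ.

```python
import math
from collections import Counter

def sentence_bleu4(reference, hypothesis, eps=1e-9):
    """Simplified sentence-level BLEU-4 on whitespace-tokenised strings."""
    ref, hyp = reference.split(), hypothesis.split()
    log_prec = 0.0
    for n in range(1, 5):
        hyp_ngrams = Counter(tuple(hyp[i:i + n]) for i in range(len(hyp) - n + 1))
        ref_ngrams = Counter(tuple(ref[i:i + n]) for i in range(len(ref) - n + 1))
        overlap = sum((hyp_ngrams & ref_ngrams).values())  # clipped matches
        total = max(sum(hyp_ngrams.values()), 1)
        log_prec += math.log(max(overlap / total, eps)) / 4
    # brevity penalty: punish hypotheses shorter than the reference
    bp = min(1.0, math.exp(1 - len(ref) / max(len(hyp), 1)))
    return bp * math.exp(log_prec)

print(round(sentence_bleu4("the similar looking ones are here",
                           "the similar looking ones are here"), 2))  # 1.0
```

Because responses in MMD are short (8 words on average), higher-order n-gram matches are sparse, which is why smoothing matters for sentence-level scores.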
To summarize, our best performing model (M-HRED-attn) outperforms the model of Saha et al. (2017) by 7 BLEU points. This can primarily be attributed to the way we created the input for our model from the raw chat logs, as well as to incorporating more information during decoding via attention. Figure 4 provides example output utterances using M-HRED-attn with a context size of 5. Our model is able to accurately map the response to the previous textual context turns, as shown in (a) and (c). In (c), it captures that the user is asking about the style of the 1st and 2nd image. (d) shows an example where our model relates the corresponding product to 'jeans' from visual features, while in (b) it fails to model fine-grained details: the style is 'casual fit', but the model resorts to 'woven'.

Conclusion and Future Work
In this research, we address the novel task of response generation in search-based multimodal dialogue by learning from the recently released Multimodal Dialogue (MMD) dataset (Saha et al., 2017). We introduce a novel extension to the Hierarchical Recurrent Encoder-Decoder (HRED) model (Serban et al., 2016) and show that our implementation significantly outperforms the model of Saha et al. (2017) by modelling the full multimodal context. Contrary to their results, our generation outputs improved by adding attention and increasing the context size. However, we also show that the multimodal HRED does not improve significantly over the text-only HRED, similar to observations by Agrawal et al. (2016) and Qian et al. (2018). Our model learns to handle the textual correspondence between questions and answers, while mostly ignoring the visual context. This indicates that we need better visual models to encode the image representations when we have multiple similar-looking images, e.g., the black hats in Figure 3. We believe that the results should improve with a jointly trained or fine-tuned CNN for generating the image representations, which we plan to implement in future work.