Image-Chat: Engaging Grounded Conversations

To achieve the long-term goal of machines being able to engage humans in conversation, our models should captivate the interest of their speaking partners. Communication grounded in images, whereby a dialogue is conducted based on a given photo, is a setup naturally appealing to humans (Hu et al., 2014). In this work we study large-scale architectures and datasets for this goal. We test a set of neural architectures using state-of-the-art image and text representations, considering various ways to fuse the components. To test such models, we collect a dataset of grounded human-human conversations, where speakers are asked to play roles given a provided emotional mood or style, as the use of such traits is also a key factor in engagingness (Guo et al., 2019). Our dataset, Image-Chat, consists of 202k dialogues over 202k images using 215 possible style traits. Automatic metrics and human evaluations of engagingness show the efficacy of our approach; in particular, we obtain state-of-the-art performance on the existing IGC task, and our best performing model is almost on par with humans on the Image-Chat test set (preferred 47.7% of the time).


Introduction
A key way for machines to exhibit intelligence is for them to be able to perceive the world around them, and to be able to communicate with humans in natural language about that world. To speak naturally with humans it is necessary to understand the natural things that humans say about the world they live in, and to respond in kind. This involves understanding what they perceive, e.g. the images they see, what those images mean semantically for humans, and how mood and style shape the language and conversations derived from these observations.
In this work we take a step towards these goals by considering grounded dialogue involving open-ended discussion of a given image, a setting that is naturally fun for humans (Hu et al., 2014), and study neural conversational models for this task. In particular, we explore both generative and retrieval models that handle multimodal dialogue by fusing Transformer architectures (Vaswani et al., 2017) for encoding dialogue history and responses with ResNet architectures (He et al., 2016) for encoding images. We propose ways to fuse these modalities together and perform a detailed study including automatic evaluations, ablations and human evaluations of our models using crowdworkers.
To train and evaluate such models, we collect a large set of human-human crowdworker conversations, with the aim of training a model to engage a human in a similar fashion, consisting of 202k diverse images and 401k utterances over the images, with 215 different style traits (e.g., optimistic, skeptical or frivolous) to promote engaging conversation. The dataset is made publicly available in ParlAI (Miller et al., 2017).
Our results show that there is a significant gap between state-of-the-art retrieval and generative models on this task. Our best fused retrieval models set a strong baseline, being preferred to human conversationalists 47.7% of the time. We show that both large-scale image and text pre-training, and utilization of style traits, are critical for best results. We then consider transfer to the existing Image Grounded Conversations (IGC) task of Mostafazadeh et al. (2017), where we obtain state-of-the-art results.

Related Work
The majority of work in dialogue is not grounded in perception, e.g. much recent work explores sequence-to-sequence models or retrieval models for goal-directed (Henderson et al., 2014) or chit-chat tasks (Vinyals and Le, 2015; Zhang et al., 2018). While these tasks are text-based only, many of the techniques developed can likely be transferred for use in multimodal systems, for example using state-of-the-art Transformer representations for text (Mazare et al., 2018) as a sub-component.
In the area of language and vision, one of the most widely studied areas is image captioning, whereby a single utterance is output given an input image. This typically involves producing a factual, descriptive sentence describing the image, in contrast to producing a conversational utterance as in dialogue. Popular datasets include COCO (Chen et al., 2015) and Flickr30k (Young et al., 2014). Again, a variety of sequence-to-sequence (Vinyals et al., 2015; Xu et al., 2015; Anderson et al., 2018) and retrieval models (Gu et al., 2018; Faghri et al., 2018; Nam et al., 2016) have been applied. These tasks measure the ability of models to understand the content of an image, but not to carry out an engaging conversation grounded in perception. Some works have extended image captioning from being purely factual towards more engaging captions by incorporating style while still being single turn, e.g. (Mathews et al., 2016, 2018; Gan et al., 2017; Guo et al., 2019; Shuster et al., 2019). Our work also applies a style component, but concentrates on image-grounded dialogue, rather than image captioning.
Visual question answering (Antol et al., 2015) and visual dialogue (Das et al., 2017) are another set of tasks which employ vision and language. They require the machine to answer factual questions about the contents of the image, either in single turn or dialogue form. They do not attempt to model natural conversation, but rather assess whether the machine can perform basic perception over the image via a series of questions.
There are some works which directly address dialogue grounded with vision. The work of Pasunuru and Bansal (2018) assesses the ability to execute dialogue given video of computer soccer games. The work of Huber et al. (2018) investigates the use of sentiment-based visual features and facial expressions for emotional image-based dialogue. Perhaps the most related work to ours is Mostafazadeh et al. (2017). Their work considers (visual context, textual context, question, response) tuples, and builds validation and test sets based on 4k eventful images, called Image Grounded Conversations (IGC). No training data is provided; instead the authors use Twitter for that in their experiments. In contrast, we provide training, validation and testing sets over 202k images for our task (that do not overlap with IGC), and consider a general set of images and dialogues, not just events and questions plus responses.
In our experiments we also show strong transfer ability of our models to the IGC task.
While there are many ways to measure dialogue quality, human engagement is a popular metric. Engagement itself can be measured in many ways (Bohus and Horvitz, 2009; Yu et al., 2016) but here we adopt the common approach of simply asking humans which speaker they find more engaging, following other works (Li et al., 2019; Dinan et al., 2020).

Image-Chat
The IMAGE-CHAT dataset is a large collection of (image, style trait for speaker A, style trait for speaker B, dialogue between A & B) tuples that we collected using crowdworkers. Each dialogue consists of consecutive turns by speakers A and B.
No particular constraints are placed on the kinds of utterance, only that we ask the speakers to both use the provided style trait, and to respond to the given image and dialogue history in an engaging way. The goal is not just to build a diagnostic dataset but a basis for training models that humans actually want to engage with.
Style Traits A number of works have shown that style traits for image captioning help provide creative captions (Mathews et al., 2016, 2018; Gan et al., 2017; Shuster et al., 2019). We apply that same principle to image grounded dialogue, considering a set of 215 possible style traits, using an existing set from Shuster et al. (2019). The traits are categorized into three classes: positive (e.g., sweet, happy, eloquent, humble, witty), neutral (e.g., old-fashioned, skeptical, solemn, questioning) and negative (e.g., anxious, childish, critical, fickle, frivolous). We apply these to both speakers A and B, who are assigned different style traits for each given conversation.
Images The images used in our task are randomly selected from the YFCC100M dataset (Thomee et al., 2016).
Dialogue For each image, we pick at random two style traits, one for speaker A and one for speaker B, and collect the dialogue using crowdworkers who are asked to both assume those roles, and to be engaging to the other speaker while doing so. It was emphasized in the data collection instructions that the style trait describes a trait of the speaker, not properties of the content of the image they are discussing. Some examples from the training set are given in Figure 1.

Figure 1: Some samples from the IMAGE-CHAT training set. For each sample we asked humans to engage in a conversation about the given image, where the two speakers, A and B, each have a given provided style.
Data Quality During data collection crowdworkers were manually monitored to ensure they were following the instructions. Poor performers were banned, and their comments discarded.
A verification process was also conducted on a subset of the data: separate annotators were asked to choose whether each utterance fit the image, the style, or both. They found that 92.8% of the time the utterance clearly fit the image, 83.1% of the time the style, and 80.5% of the time both. Note that, since not all utterances should directly reference an image property or invoke the style, we do not expect 100%.
Overall Dataset The overall dataset statistics are given in Table 1. This is a fairly large dialogue dataset compared to other existing publicly available datasets. For example, PersonaChat (Zhang et al., 2018) (which is not grounded in images) consists of 162k utterances, while IGC (Mostafazadeh et al., 2017) provides only validation and test sets, over 4k images, with no training set.

Models
We consider two major types of dialogue model: retrieval and generative. Both approaches use the same components as building blocks. We use three sub-networks for the three modalities of input: (i) an image encoder, (ii) a dialogue history encoder, and (iii) a style encoder. In the retrieval model these are fed into a combiner module that combines the three modalities. Finally, there is a response encoder for encoding candidate responses, which is scored against the combined input representation. An overview of the retrieval architecture is shown in Figure 2. For the generative model, the three encoders are used as input, and a further Transformer decoder outputs a token sequence; beam search is applied.

Image Encoder
We build our models on top of pretrained image features, and compare the performance of two types of image encoders. The first is a residual network with 152 layers described in He et al. (2016), trained on ImageNet (Russakovsky et al., 2015) to classify images among 1000 classes, which we refer to in the rest of the paper as ResNet152 features. We used the implementation provided in the torchvision project (Marcel and Rodriguez, 2010). The second is a ResNeXt 32×48d (Xie et al., 2017) trained on 3.5 billion Instagram pictures following the procedure described by Mahajan et al. (2018), which we refer to in the rest of the paper as ResNeXt-IG-3.5B. The representation r_I of an image I is obtained by using the 2048-dimensional output of the image encoder as input to a feed-forward network: a multi-layer perceptron with ReLU activation units and a final layer of 500 dimensions in the retrieval case, and a linear layer in the generative case.
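As a rough sketch of this projection step (our own illustration, not the authors' code; the single hidden layer and its width are assumptions, since the text only specifies a ReLU MLP with a 500-dimensional output):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical MLP weights: 2048-d frozen image feature -> 500-d joint space.
W1 = rng.normal(scale=0.02, size=(2048, 1024))
W2 = rng.normal(scale=0.02, size=(1024, 500))

def project_image(feat):
    # feat: (batch, 2048) pooled output of the frozen ResNet152 / ResNeXt encoder
    hidden = np.maximum(feat @ W1, 0.0)  # ReLU activation units
    return hidden @ W2                   # final 500-d layer (retrieval case)

r_I = project_image(rng.normal(size=(4, 2048)))
```

In the generative case the MLP would be replaced by a single linear layer, as stated above.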
Style Encoder To condition on a given style trait, we embed each trait to an N-dimensional vector to obtain its representation r_S. We used N = 500 for retrieval and N = 300 for generation.

Dialogue Encoder
The entire dialogue history D is encoded into a fixed size vector r_D using a Transformer architecture (Vaswani et al., 2017), followed by a linear layer. Such Transformers have been shown to perform strongly on a variety of dialogue tasks previously (Yang et al., 2018; Mazare et al., 2018). We use a Transformer with 4 layers, 300 hidden units, and 6 attention heads. The outputs are pooled (mean) to give a final vectorial encoding.
We pretrain the entire encoder following the setup described in Mazare et al. (2018): we train two encoders on a next-utterance retrieval task on a Reddit dataset of dialogues containing 1.7 billion pairs of utterances, where one encodes the context and another the candidates for the next utterance; their dot product indicates the degree of match, and they are trained with negative log-likelihood and k-negative sampling. We then initialize our system using the weights of the candidate encoder only, and train on our task in either generative or retrieval mode.
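The pooling-plus-linear step above can be sketched as follows (our own illustration; the Transformer itself is stubbed out, and the weight matrix is a hypothetical stand-in for the trained linear layer):

```python
import numpy as np

def pool_dialogue(token_states, W):
    # token_states: (seq_len, 300) per-token Transformer outputs for the history
    pooled = token_states.mean(axis=0)  # mean pooling over tokens
    return pooled @ W                   # linear layer into the joint space

rng = np.random.default_rng(1)
# 12 history tokens, 300 hidden units, projected to the 500-d retrieval space
r_D = pool_dialogue(rng.normal(size=(12, 300)), rng.normal(size=(300, 500)))
```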

Retrieval Models
Multimodal combiner module We consider two possible combiner modules for the inputs:

Multimodal sum combiner (MM-sum): Given an input image, style trait and dialogue (I, S, D), together with a candidate response C, the score of the final combination is computed as s(I, S, D, C) = (r_I + r_S + r_D) · r_C, where r_C is the vector encoding of the candidate C.

Multimodal attention combiner (MM-att): A more sophisticated approach is to use an attention mechanism to choose which modalities are most relevant for each example by stacking Transformers. We concatenate the three representation vectors r_I, r_S and r_D and feed them to a second Transformer (4 attention heads, 2 layers, 500 hidden units) which performs self-attention over them. The three modalities are thus reweighted by the corresponding attention weights to give the final input representation vector r_T, which is used to compute the score for a given candidate as r_T · r_C.
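The two scoring paths can be sketched as follows (our own illustration; the `weights` argument is a hypothetical stand-in for the attention weights the second Transformer would produce):

```python
import numpy as np

def mm_sum_score(r_I, r_S, r_D, r_C):
    # MM-sum: sum the three modality vectors, then dot with the candidate
    return (r_I + r_S + r_D) @ r_C

def mm_att_score(r_I, r_S, r_D, r_C, weights):
    # MM-att (sketch): weights over the three modalities, summing to 1,
    # reweight the modality vectors to form r_T before scoring
    r_T = weights[0] * r_I + weights[1] * r_S + weights[2] * r_D
    return r_T @ r_C
```

With uniform weights the attention combiner reduces to a scaled version of the sum combiner, which is why the extra machinery only pays off when one modality should dominate.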

Response Encoder
We employ the same Transformer architecture as in the dialogue encoder for encoding candidate responses.We tried two variants: either sharing or not sharing the weights with the input dialogue encoder.
Training and Inference Given a tuple (I, S, D) and a set of candidates (c_1, ..., c_N), at inference time the predicted utterance is the candidate c_i that maximizes the score s(I, S, D, c_i). At training time we pass a set of scores through a softmax and train to maximize the log-likelihood of the correct responses. We use mini-batches of 500 training examples; for each example, we use the gold responses of the other examples of the batch as negatives. During final human evaluation all candidates from the training set are considered to produce a response (356k candidates in our experiments).
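The in-batch negatives objective above can be written compactly: scores between every input and every candidate in the batch form a square matrix, and the loss is the negative log-likelihood of the diagonal (the gold pairings) under a row-wise softmax. A sketch, assuming the encodings are already computed:

```python
import numpy as np

def batch_nll(inputs, responses):
    # inputs, responses: (batch, dim) combined-input and candidate encodings;
    # row i of each corresponds to the same training example
    scores = inputs @ responses.T                  # (batch, batch) score matrix
    scores = scores - scores.max(axis=1, keepdims=True)  # numerical stability
    log_probs = scores - np.log(np.exp(scores).sum(axis=1, keepdims=True))
    # gold response for example i is candidate i: take the diagonal
    return -np.mean(np.diag(log_probs))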

Generative Models
Dialogue Decoder The encoding from the image encoder is passed through a final linear layer of dimension 2048 × 300, projecting it to the same size as the token encodings of the dialogue decoder. We then add it as an extra token at the end of the Transformer encoder's output. For style, we simply prepend the style trait to the beginning of the dialogue history, so it is encoded by the dialogue encoder. We then treat this as a standard seq2seq Transformer in order to generate dialogue responses.
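A sketch of this input construction (our own illustration; `build_encoder_output` and `W_img` are hypothetical names, and in the real model the style is prepended at the token level rather than as a separate embedding):

```python
import numpy as np

def build_encoder_output(style_emb, history_states, image_feat, W_img):
    # style_emb: (300,) style embedding, standing in for the prepended style token
    # history_states: (T, 300) encoder outputs for the dialogue history
    # image_feat: (2048,) frozen image feature; W_img: (2048, 300) linear projection
    image_token = image_feat @ W_img  # project image to token size (300)
    # style first, history in the middle, image appended as one extra "token"
    return np.vstack([style_emb[None, :], history_states, image_token[None, :]])

rng = np.random.default_rng(2)
seq = build_encoder_output(
    rng.normal(size=(300,)),       # style
    rng.normal(size=(5, 300)),     # 5 history tokens
    rng.normal(size=(2048,)),      # image feature
    rng.normal(size=(2048, 300)),  # 2048 -> 300 projection
)
```

The decoder then attends over this (1 + T + 1)-length sequence exactly as in a standard seq2seq Transformer.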

Training and Inference
We train with a batch size of 32 and a learning rate of 0.0001 using Adam, and apply beam search with a beam of size 2 and trigram blocking at inference time. Hyperparameters are chosen on the validation set.
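Trigram blocking is the standard repetition heuristic: during beam search, an extension is rejected if it would repeat a trigram already present in the hypothesis. A minimal sketch of the check (the beam search loop itself is omitted):

```python
def violates_trigram_block(tokens, next_token):
    # tokens: hypothesis so far; next_token: proposed extension
    candidate = tokens + [next_token]
    if len(candidate) < 3:
        return False
    new_trigram = tuple(candidate[-3:])
    # all trigrams that already occur earlier in the candidate
    seen = {tuple(candidate[i:i + 3]) for i in range(len(candidate) - 3)}
    return new_trigram in seen
```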

Experiments
We test our models on the IMAGE-CHAT and IGC datasets using automatic metrics and human evaluations. We analyze the performance of the different module and architecture choices, and perform ablation studies to determine the importance of each of the model's inputs.

Automatic Evaluation on IMAGE-CHAT

Module Choices We first compare various module configurations of our TRANSRESNET RET model, and additionally show the results for a simple information retrieval baseline, in which the candidates are ranked according to their weighted word overlap to the input message. We measure recall at 1 and 5 (R@1/100 and R@5/100) retrieval metrics, where for each sample there are 100 candidates to rank: 99 random candidates chosen from the test set, and the true label. Note that in human evaluations we use all the train set candidates.
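The R@k metric used here is straightforward to compute; as a concrete sketch (our own illustration, not the evaluation code):

```python
import numpy as np

def recall_at_k(scores, gold_index, k):
    # scores: (num_candidates,) model scores over the 100 candidates; higher is better
    # gold_index: position of the true label among the candidates
    top_k = np.argsort(-scores)[:k]  # indices of the k highest-scoring candidates
    return float(gold_index in top_k)
```

R@1/100 is then the mean of `recall_at_k(scores, gold, 1)` over test examples, each with 99 sampled distractors plus the true label.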
The results are shown in Table 2. We report the average metrics for the total task, as well as the breakdown of the performance on each turn of dialogue (turns 1, 2 and 3). The average metrics indicate that using the ResNeXt-IG-3.5B image encoder features improves performance significantly across the whole task, as we obtain 50.3% R@1 for our best ResNeXt-IG-3.5B model and only 40.6% for our best ResNet152 model. When broken down by turn, it appears that the ResNeXt-IG-3.5B features are particularly important in the first round of dialogue, in which only the image and style are considered, as the difference between their best models increases from 9.7% in the full task to 19.5% in the first turn. Our baseline multimodal sum combiner (MM-Sum) outperforms the more sophisticated self-attention (MM-Att) combiner, with the latter scoring 49.3% on the full task. Having separate candidate and dialogue history text encoders also works better than sharing weights.
In subsequent experiments we use the best performing system for our retrieval model. As ResNeXt-IG-3.5B performs best, we use those features for our generative model going forward as well.
Full & Ablation Study We now perform experiments for both retrieval and generative models for the full system, and additionally we remove modalities (image, style, and dialogue history). For the generative models we report the ROUGE-L metric. The results are shown in Table 3, which we now analyze.
Turn 1: In the first round of dialogue the models produce utterances given the image and style only, as there is no dialogue history yet. For both models, the image is more important than the style, but using both together helps.
Turn 2: In the second turn, in which a model produces a response to a first utterance, the models perform similarly when using only the image or only the dialogue history, while performing poorly with just the style. Any combination of two modalities improves the results, with the style + dialogue combination performing slightly better than the other two. Using all modalities works best.
Turn 3: By the third turn of dialogue, the conversation history proves to be by far the most important modality in isolation. Conditioning on style + dialogue is the most effective combination of two modalities. Again, using all modalities still proves best.

Human Evaluations on IMAGE-CHAT
We test our final models using human evaluation.
Evaluation Setup We use a set of 500 images from YFCC-100M that are not present in IMAGE-CHAT to build a set of three-round dialogues pairing humans with models in conversation. We then conduct evaluations at each round of dialogue for each example in the evaluation set; we have a separate set of human evaluators look at the provided conversation turns, and ask them to compare two possible utterances for the next turn of conversation, given the image, dialogue history and relevant style (which is the same for both human author and model, so there is no advantage). We ask the evaluators in a blind test to choose the "more engaging" of the two possible utterances: one from a human, and the other from a model.

Table 3: Ablations on IMAGE-CHAT. We compare variants of our best TRANSRESNET generative and retrieval models (ResNeXt-IG-3.5B image encoder, and MM-Sum + separate text encoders for retrieval) where we remove modalities: image, dialogue history and style conditioning, reporting R@1/100 for retrieval and ROUGE-L for generation for dialogue turns 1, 2 and 3 independently, as well as the average over all turns.

Human annotation vs. TRANSRESNET model
We compare human-authored utterances to those produced by our models. The human conversations are collected in the same fashion as in IMAGE-CHAT but on test images. Like the humans, the models condition on the image, style and previous dialogue history.
TRANSRESNET GEN simply generates a response, whereas TRANSRESNET RET retrieves candidate utterances from the IMAGE-CHAT training set. The latter is given a separate set of candidates corresponding to the round of dialogue, e.g. when producing a response to turn 1, the model retrieves from all possible round 1 utterances in the train set (in that case 186,858 possible choices).
The results are shown in Fig. 4, comparing all models on the first round (left): TRANSRESNET GEN and TRANSRESNET RET using ResNeXt-IG-3.5B, and TRANSRESNET RET using ResNet152 features. As in the automatic evaluations, ResNet152 features performed more poorly. The retrieval model outperformed the generative model, a result that has been observed in other (text-only) dialogue tasks (Dinan et al., 2019; Zhang et al., 2018). In turn 1, TRANSRESNET RET (ResNeXt-IG-3.5B) has a win rate against humans of 49.4% (difference not significant using a binomial two-tailed test, p > 0.5), while both other models are significantly outperformed by humans (p < 2 × 10^-7 compared to ResNet152 features), showing the importance of our retrieval architecture and image feature choices. We thus compare only TRANSRESNET RET (ResNeXt-IG-3.5B) to humans in all three turns (Fig. 4, right). That model performs well, with an overall win rate against humans of 47.7% (difference is significant, p < 7 × 10^-5). Example predictions of TRANSRESNET RET (ResNeXt-IG-3.5B) are given in Figure 3.

Transfer to the IGC Task
To test the strength of our task and models we consider transfer to the IGC task of Mostafazadeh et al. (2017). In particular, we focus on their response task, which provides an image and a dialogue history of two utterances: a context utterance, followed by a question. The task is to then produce a response. This is clearly related to our task, except it focuses on answering questions, which our task does not. Our task is more varied as it was collected in an unconstrained way, unlike in IGC where annotators were asked to write a question. Nevertheless, assuming a question contains a question mark or starts with who, what, when, where, why or how, our dataset contains 40,076 training utterances that are questions (11.3% of the data), and so it could be possible to produce responses to them. Without any fine-tuning at all, we thus simply took exactly the same best trained models and used them for their question response task as well.
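The question heuristic described above is a simple string check; as a sketch (our own rendering of the stated rule, not the authors' code):

```python
# An utterance counts as a question if it contains "?" or begins with a
# wh-word, per the rule used above to identify question utterances.
WH_WORDS = ("who", "what", "when", "where", "why", "how")

def is_question(utterance):
    text = utterance.strip().lower()
    return "?" in text or text.startswith(WH_WORDS)
```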
Unfortunately, after contacting the authors of Mostafazadeh et al. (2017), we learned that they no longer have the predictions of their model available, nor have they made the code for their human evaluation setup available. However, the test set is available. We therefore attempted to reproduce the same setup as in their experiments, which we will also make publicly available upon acceptance.
Automatic Evaluation We measure our best TRANSRESNET GEN model's performance on the IGC test set in terms of BLEU-4. The results are shown in Fig. 5 (right). We find that our model outperforms the model from Mostafazadeh et al. (2017), achieving a score of 2.30 compared to 1.49.

Human Evaluation
We compare the provided human response (from the test set) with 7 variants of our TRANSRESNET RET model (mimicking their setup), whereby our model conditions on 7 styles for which it performed well in the evaluations of Section 5.2. Annotators rated the quality of responses on a scale from 1 to 3, where 3 is the highest, and we report the mean over ∼2k questions. We then scale that by the score of the human-authored responses to give a percentage. The results are shown in Fig. 5 (left). Our model narrows the gap between human and model performance, yielding a higher percentage of the human score (62.9% vs. 54.2%). More detailed results and example predictions of our model can be found in Appendices E and F, including examples of highly rated and poorly rated outputs from our model.

Conclusion
This paper presents an approach for improving the way machines can generate grounded conversations that humans find engaging. Focusing on the case of chit-chatting about a given image, a naturally useful application for end-users of social dialogue agents, this work shows that our best proposed model can generate grounded dialogues that humans prefer to dialogues with fellow humans almost half of the time (47.7%). This result is made possible by the creation of a new dataset, IMAGE-CHAT.
Our work shows that we are close to having models that humans can relate to in chit-chat conversations, which could set new ground for social dialogue agents. However, our retrieval models outperformed their generative versions; closing that gap is an important challenge for the community. While our human evaluations were on short conversations, initial investigations indicate the model as is can extend to longer chats (see Appendix G), which should be studied in future work. The next challenge will also be to combine this engagingness with other skills, such as world knowledge (Antol et al., 2015), relation to personal interests (Zhang et al., 2018), and task proficiency.

A More Details of IGC Evaluations
In this section we describe a few choices we made and implementation details regarding the IGC human evaluation in the section regarding Transfer to the IGC Task.
Multiple Traits In the IGC human evaluation setup from Mostafazadeh et al. (2017), human annotators were shown eight choices when rating the quality of responses to questions: seven responses from various models, and one human response. To mirror this setup as closely as possible, we chose seven of our highest performing style traits to condition on, displayed in addition to the human response. We show the results for each trait in Table 4.
Automatic Evaluation In Mostafazadeh et al. (2017), the authors provide BLEU scores for their models in an attempt to evaluate their effectiveness via automated metrics. The authors note that the scores are very low, "as is characteristic for tasks with intrinsically diverse outputs." Additionally, it has been shown in Shuster et al. (2019) that BLEU scores for image captioning retrieval models are generally far lower than those of generative models (as retrieval models do not optimize for such a metric), and yet human evaluations can show the complete opposite results. In fact, in that work retrieval models were shown to be superior to generative models in human evaluations, which is why we adopted them here. For these reasons we omit BLEU scores of our retrieval models on the IGC test set as uninformative. We do, however, compare BLEU scores with our generative model in the main paper.
Test Set Size The IGC test set provides the urls to all 2591 images for which (context, question, response) tuples were collected. We were only able to recover 2195 images from this initial set, as some of the urls provided are no longer associated with the corresponding images. Thus, our human evaluations are conducted on this subset.

C IMAGE-CHAT Human Evaluation Setup

D IGC Human Evaluation Setup

Figure 2: The TRANSRESNET RET multimodal architecture for grounded dialogue. There are several options: different image encoders (ResNet152 or ResNeXt-IG-3.5B), text encoders (shared or separate Transformers for history and response), and different multimodal combiners (sum or attention-based).

Figure 3: Example predictions from our TRANSRESNET RET (MM-Sum) model on the evaluation set using all candidates for turns 1-3. Two speakers A & B with given style traits discuss a photo. The dialogue context before the model prediction is completed by humans, followed by one or more possible model responses, given different style conditioning. The model clearly uses the image, given style and dialogue history in formulating its response.

Figure 4: Human evaluations on IMAGE-CHAT. Engagingness win rates of pairwise comparisons between human utterances and TRANSRESNET RET (ResNet152 or ResNeXt-IG-3.5B) or TRANSRESNET GEN, comparing over the rounds of dialogue.

Figure 5: IGC Evaluations. The best model from Mostafazadeh et al. (2017) is compared to our best TRANSRESNET RET and TRANSRESNET GEN models. On the left, annotators' ratings of responses from the models are shown as a percentage of the annotators' ratings of human responses. On the right, BLEU-4 scores on the response task are shown.

Figure 6: Instructions pane for crowdworkers when collecting the second round of dialogue.

Figure 7: Instructions pane for crowdworkers when collecting the third round of dialogue.

Figure 8: Instructions pane for crowdworkers when collecting the IMAGE-CHAT Evaluations.

Figure 9: Instructions pane for crowdworkers when collecting the IGC Evaluations.

Table 2: Module choices on IMAGE-CHAT. We compare different module variations for TRANSRESNET RET.

Table 4: IGC Human Evaluation on responses from our TRANSRESNET MM-Sum model conditioned on various personalities. Responses were rated on a quality scale from 1 to 3, where 3 is the highest.