Visually grounded generation of entailments from premises

Somayeh Jafaritazehjani, Albert Gatt, Marc Tanti

Abstract
Natural Language Inference (NLI) is the task of determining the semantic relationship between a premise and a hypothesis. In this paper, we focus on the generation of hypotheses from premises in a multimodal setting: given an image and/or its description (the premise) as input, the task is to generate a sentence (the hypothesis). The main goals of this paper are (a) to investigate whether it is reasonable to frame NLI as a generation task; and (b) to consider the degree to which grounding textual premises in visual information is beneficial to generation. We compare different neural architectures, showing through automatic and human evaluation that entailments can indeed be generated successfully. We also show that multimodal models outperform unimodal models in this task, albeit marginally.
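The abstract frames NLI as conditional generation: encode the premise (text and/or image), decode the hypothesis. The sketch below is a minimal illustration of that framing, not the paper's actual architecture; the class name `MultimodalEntailmentGenerator`, the additive fusion of a projected image vector into the encoder state, and the use of pre-extracted CNN image features are all assumptions made for this example.

```python
import torch
import torch.nn as nn

class MultimodalEntailmentGenerator(nn.Module):
    """Hypothetical sketch: encode a textual premise, ground it in an
    image feature vector, and decode a hypothesis token by token."""

    def __init__(self, vocab_size, embed_dim=256, hidden_dim=512, image_dim=2048):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.encoder = nn.GRU(embed_dim, hidden_dim, batch_first=True)
        # Assumed: image features come pre-extracted from a CNN (e.g. a
        # 2048-dim vector); we project them into the encoder's hidden space.
        self.img_proj = nn.Linear(image_dim, hidden_dim)
        self.decoder = nn.GRU(embed_dim, hidden_dim, batch_first=True)
        self.out = nn.Linear(hidden_dim, vocab_size)

    def forward(self, premise_ids, image_feats, hypothesis_ids):
        # Encode the premise; keep only the final hidden state.
        _, h = self.encoder(self.embed(premise_ids))          # (1, batch, hidden)
        # Ground the text encoding by adding the projected image vector.
        h = h + self.img_proj(image_feats).unsqueeze(0)
        # Teacher-forced decoding of the hypothesis.
        dec_out, _ = self.decoder(self.embed(hypothesis_ids), h)
        return self.out(dec_out)  # logits over the vocabulary at each step
```

Under this sketch, training would minimise cross-entropy between these logits and the gold hypothesis tokens; a unimodal text-only baseline corresponds to dropping the `img_proj` term, which is one way the paper's multimodal-vs-unimodal comparison could be realised.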
Anthology ID:
W19-8625
Volume:
Proceedings of the 12th International Conference on Natural Language Generation
Month:
October–November
Year:
2019
Address:
Tokyo, Japan
Editors:
Kees van Deemter, Chenghua Lin, Hiroya Takamura
Venue:
INLG
SIG:
SIGGEN
Publisher:
Association for Computational Linguistics
Pages:
178–188
URL:
https://aclanthology.org/W19-8625
DOI:
10.18653/v1/W19-8625
Cite (ACL):
Somayeh Jafaritazehjani, Albert Gatt, and Marc Tanti. 2019. Visually grounded generation of entailments from premises. In Proceedings of the 12th International Conference on Natural Language Generation, pages 178–188, Tokyo, Japan. Association for Computational Linguistics.
Cite (Informal):
Visually grounded generation of entailments from premises (Jafaritazehjani et al., INLG 2019)
PDF:
https://aclanthology.org/W19-8625.pdf