Imagination Improves Multimodal Translation

Desmond Elliott, Ákos Kádár


Abstract
We decompose multimodal translation into two sub-tasks: learning to translate and learning visually grounded representations. In a multitask learning framework, translations are learned in an attention-based encoder-decoder, and grounded representations are learned through image representation prediction. Our approach improves translation performance compared to the state of the art on the Multi30K dataset. Furthermore, it is equally effective if we train the image prediction task on the external MS COCO dataset, and we find improvements if we train the translation model on the external News Commentary parallel text.
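The abstract describes the approach at a high level: a source-language encoder is shared between an attention-based translation decoder and an auxiliary task that predicts the image feature vector, and the two tasks are trained jointly. The PyTorch sketch below shows one way such a multitask setup could be wired together; it is an illustration, not the authors' implementation. The class and parameter names (ImaginationModel, img_dim, the 0.1 margin) are assumptions, a mean-pooled context stands in for the paper's attention mechanism, and the margin-based ranking loss is one plausible reading of the image-prediction objective.

import torch
import torch.nn as nn
import torch.nn.functional as F

class ImaginationModel(nn.Module):
    """Sketch: a shared source encoder feeds both a translation decoder
    and an image-vector prediction head (the auxiliary 'imagination' task)."""

    def __init__(self, src_vocab, tgt_vocab, emb_dim=256, hid_dim=512, img_dim=2048):
        super().__init__()
        self.src_emb = nn.Embedding(src_vocab, emb_dim)
        # Shared bidirectional GRU encoder over the source sentence.
        self.encoder = nn.GRU(emb_dim, hid_dim, bidirectional=True, batch_first=True)
        # Translation decoder; a mean-pooled context stands in for attention here.
        self.tgt_emb = nn.Embedding(tgt_vocab, emb_dim)
        self.decoder = nn.GRU(emb_dim + 2 * hid_dim, hid_dim, batch_first=True)
        self.out = nn.Linear(hid_dim, tgt_vocab)
        # Image-prediction head: map pooled encoder states into image-feature space.
        self.img_proj = nn.Linear(2 * hid_dim, img_dim)

    def encode(self, src):
        states, _ = self.encoder(self.src_emb(src))   # (B, T_src, 2*hid_dim)
        return states

    def translation_loss(self, src, tgt_in, tgt_out):
        states = self.encode(src)
        context = states.mean(dim=1, keepdim=True)    # (B, 1, 2*hid_dim)
        dec_in = torch.cat(
            [self.tgt_emb(tgt_in), context.expand(-1, tgt_in.size(1), -1)], dim=-1)
        dec_states, _ = self.decoder(dec_in)
        logits = self.out(dec_states)                 # (B, T_tgt, tgt_vocab)
        return F.cross_entropy(logits.transpose(1, 2), tgt_out)

    def imagination_loss(self, src, img_feats, margin=0.1):
        # Margin-based ranking loss: the predicted vector should be more
        # similar to its own image than to the other images in the batch.
        pred = F.normalize(self.img_proj(self.encode(src).mean(dim=1)), dim=-1)
        img = F.normalize(img_feats, dim=-1)
        scores = pred @ img.t()                       # (B, B) cosine similarities
        pos = scores.diag().unsqueeze(1)              # similarity to the paired image
        hinge = (margin - pos + scores).clamp(min=0)
        mask = torch.eye(scores.size(0), dtype=torch.bool, device=scores.device)
        return hinge.masked_fill(mask, 0.0).mean()

In training, one would typically interleave updates on the two losses (or minimise a weighted sum), and as the abstract notes, the image-prediction examples need not come from the translation corpus: they can be drawn from an external dataset such as MS COCO.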
Anthology ID:
I17-1014
Volume:
Proceedings of the Eighth International Joint Conference on Natural Language Processing (Volume 1: Long Papers)
Month:
November
Year:
2017
Address:
Taipei, Taiwan
Editors:
Greg Kondrak, Taro Watanabe
Venue:
IJCNLP
Publisher:
Asian Federation of Natural Language Processing
Pages:
130–141
URL:
https://aclanthology.org/I17-1014
Cite (ACL):
Desmond Elliott and Ákos Kádár. 2017. Imagination Improves Multimodal Translation. In Proceedings of the Eighth International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 130–141, Taipei, Taiwan. Asian Federation of Natural Language Processing.
Cite (Informal):
Imagination Improves Multimodal Translation (Elliott & Kádár, IJCNLP 2017)
PDF:
https://aclanthology.org/I17-1014.pdf
Data
Multi30K