Sparse and Constrained Attention for Neural Machine Translation

Chaitanya Malaviya, Pedro Ferreira, André F. T. Martins


Abstract
In neural machine translation, words are sometimes dropped from the source or generated repeatedly in the translation. We explore novel strategies to address this coverage problem that change only the attention transformation. Our approach allocates fertilities to source words, which are used to bound the attention each word can receive. We experiment with various sparse and constrained attention transformations and propose a new one, constrained sparsemax, which is shown to be differentiable and sparse. Empirical evaluation is provided on three language pairs.
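The constrained sparsemax the abstract describes can be read as a Euclidean projection of the attention scores onto the probability simplex intersected with a box of per-word upper bounds (the fertilities). Below is a minimal NumPy sketch of that projection under the standard KKT characterization, p_i = clip(z_i - tau, 0, u_i) with tau chosen by bisection so the weights sum to one; the function name, fertility values, and iteration count are illustrative assumptions, not taken from the paper.

import numpy as np

def constrained_sparsemax(z, u, n_iter=60):
    # Sketch: project scores z onto {p : sum(p) = 1, 0 <= p <= u}.
    # KKT conditions give p_i = clip(z_i - tau, 0, u_i); bisect on tau
    # so the weights sum to one. Assumes sum(u) >= 1 (feasible set non-empty).
    z = np.asarray(z, dtype=float)
    u = np.asarray(u, dtype=float)
    lo, hi = z.min() - 1.0, z.max()  # clipped sum is >= 1 at lo, <= 1 at hi
    for _ in range(n_iter):
        tau = 0.5 * (lo + hi)
        if np.clip(z - tau, 0.0, u).sum() > 1.0:
            lo = tau
        else:
            hi = tau
    return np.clip(z - 0.5 * (lo + hi), 0.0, u)

# Hypothetical example: the 0.4 cap on the first source word pushes attention
# mass onto other words, and the low-scoring word receives exactly zero.
scores = np.array([2.0, 1.0, 0.5, -1.0])
fertility = np.array([0.4, 1.0, 1.0, 1.0])
print(constrained_sparsemax(scores, fertility))  # ~ [0.4, 0.55, 0.05, 0.0]

Note how the output is both sparse (the last word gets weight exactly zero, unlike softmax) and bounded (the first word is held at its fertility cap), the two properties the abstract claims for the transformation.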
Anthology ID: P18-2059
Volume: Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers)
Month: July
Year: 2018
Address: Melbourne, Australia
Editors: Iryna Gurevych, Yusuke Miyao
Venue: ACL
Publisher: Association for Computational Linguistics
Pages: 370–376
URL: https://aclanthology.org/P18-2059
DOI: 10.18653/v1/P18-2059
Cite (ACL): Chaitanya Malaviya, Pedro Ferreira, and André F. T. Martins. 2018. Sparse and Constrained Attention for Neural Machine Translation. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 370–376, Melbourne, Australia. Association for Computational Linguistics.
Cite (Informal): Sparse and Constrained Attention for Neural Machine Translation (Malaviya et al., ACL 2018)
PDF: https://aclanthology.org/P18-2059.pdf
Note: P18-2059.Notes.pdf
Presentation: P18-2059.Presentation.pdf
Video: https://aclanthology.org/P18-2059.mp4