Adaptive Attention Span in Transformers

Sainbayar Sukhbaatar, Edouard Grave, Piotr Bojanowski, Armand Joulin


Abstract
We propose a novel self-attention mechanism that can learn its optimal attention span. This allows us to significantly extend the maximum context size used in Transformers while maintaining control over their memory footprint and computational time. We show the effectiveness of our approach on the task of character-level language modeling, where we achieve state-of-the-art performance on text8 and enwik8 by using a maximum context of 8k characters.
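As a rough illustration of the mechanism described in the abstract, the sketch below implements the soft span mask from the paper: each attention head learns a span parameter z, and the attention weight for a key at distance x from the query is scaled by m_z(x) = clamp((R + z - x) / R, 0, 1), where R is a hyperparameter controlling the softness of the ramp; an L1 penalty on the learned spans keeps them as short as the task allows. The class and parameter names here (AdaptiveSpanMask, span_frac, ramp_size) are illustrative assumptions and do not mirror the exact API of facebookresearch/adaptive-span.

```python
import torch
import torch.nn as nn


class AdaptiveSpanMask(nn.Module):
    """Sketch of the adaptive-span soft mask (names are illustrative).

    Each head learns a span fraction in [0, 1]; the effective span is
    fraction * max_span, and attention weights are damped by the ramp
    m_z(x) = clamp((R + z - x) / R, 0, 1) as a function of the key's
    distance x from the query.
    """

    def __init__(self, n_heads: int, max_span: int, ramp_size: int):
        super().__init__()
        self.max_span = max_span
        self.ramp_size = ramp_size
        # One learnable span fraction per head, shaped for broadcasting
        # over (n_heads, n_queries, n_keys) attention tensors.
        self.span_frac = nn.Parameter(torch.zeros(n_heads, 1, 1))

    def forward(self, attn: torch.Tensor) -> torch.Tensor:
        # attn: (n_heads, n_queries, n_keys), keys ordered oldest -> newest.
        n_keys = attn.size(-1)
        # Distance of each key from the current query position.
        distance = torch.arange(
            n_keys - 1, -1, -1, device=attn.device, dtype=attn.dtype
        )
        z = self.span_frac.clamp(0, 1) * self.max_span
        mask = ((self.ramp_size + z - distance) / self.ramp_size).clamp(0, 1)
        attn = attn * mask
        # Renormalize so each query's weights still sum to one.
        return attn / (attn.sum(dim=-1, keepdim=True) + 1e-8)

    def span_penalty(self) -> torch.Tensor:
        # L1 regularizer that pushes heads toward shorter spans unless a
        # longer context actually reduces the loss.
        return self.max_span * self.span_frac.clamp(0, 1).sum()
```

In training, span_penalty() would be added to the language-modeling loss with a small coefficient, which is what keeps most heads' spans small while letting a few heads grow toward the 8k-character maximum context.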
Anthology ID:
P19-1032
Volume:
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics
Month:
July
Year:
2019
Address:
Florence, Italy
Editors:
Anna Korhonen, David Traum, Lluís Màrquez
Venue:
ACL
Publisher:
Association for Computational Linguistics
Pages:
331–335
URL:
https://aclanthology.org/P19-1032
DOI:
10.18653/v1/P19-1032
Cite (ACL):
Sainbayar Sukhbaatar, Edouard Grave, Piotr Bojanowski, and Armand Joulin. 2019. Adaptive Attention Span in Transformers. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 331–335, Florence, Italy. Association for Computational Linguistics.
Cite (Informal):
Adaptive Attention Span in Transformers (Sukhbaatar et al., ACL 2019)
PDF:
https://aclanthology.org/P19-1032.pdf
Video:
https://aclanthology.org/P19-1032.mp4
Code
facebookresearch/adaptive-span + additional community code
Data
Text8