Adaptive Transformers for Learning Multimodal Representations

Prajjwal Bhargava


Abstract
The use of transformers has grown from learning language semantics to forming meaningful visiolinguistic representations. These architectures are often over-parametrized, requiring large amounts of computation. In this work, we extend adaptive approaches to learn more about model interpretability and computational efficiency. Specifically, we study adaptive attention spans, sparse attention, and structured dropout methods to help understand how the attention mechanism extends to vision-and-language tasks. We further show that these approaches can help us learn more about how the network perceives the complexity of input sequences, sparsity preferences for different modalities, and other related phenomena.
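One of the mechanisms the abstract names, the adaptive attention span (as formulated by Sukhbaatar et al., 2019), learns how far back each attention head should look via a soft mask m_z(x) = clamp((R + z − x)/R, 0, 1), where x is the query–key distance, z a learned span, and R a ramp hyperparameter. Below is a minimal PyTorch sketch of that mask, written from the published formulation rather than from this paper's released code; the class and parameter names (`AdaptiveSpanMask`, `span_frac`, `ramp_size`) are illustrative, not the repo's API.

```python
import torch
import torch.nn as nn


class AdaptiveSpanMask(nn.Module):
    """Soft span mask m_z(x) = clamp((R + z - x) / R, 0, 1), where x is
    the query-key distance, z is a learned span, and R softens the ramp."""

    def __init__(self, max_span: int, ramp_size: int = 32):
        super().__init__()
        self.max_span = max_span
        self.ramp_size = ramp_size
        # z is parametrized as a fraction of max_span and learned jointly
        # with the rest of the network (one parameter per mask instance).
        self.span_frac = nn.Parameter(torch.zeros(1))

    def forward(self, attn: torch.Tensor) -> torch.Tensor:
        # attn: attention weights over past positions, shape (..., n),
        # with the most recent position last.
        span = self.span_frac.clamp(0, 1) * self.max_span
        n = attn.size(-1)
        # Distance of each attended position from the current query:
        # oldest position has distance n - 1, most recent has distance 0.
        distance = torch.arange(n - 1, -1, -1, device=attn.device)
        mask = ((self.ramp_size + span - distance) / self.ramp_size).clamp(0, 1)
        masked = attn * mask
        # Renormalize so the masked weights still sum to one.
        return masked / (masked.sum(dim=-1, keepdim=True) + 1e-8)
```

In the original formulation, an L1 penalty on the learned span parameter is added to the loss, so heads shrink their spans (and the attention computation) unless longer context measurably helps.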
Anthology ID: 2020.acl-srw.1
Volume: Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics: Student Research Workshop
Month: July
Year: 2020
Address: Online
Editors: Shruti Rijhwani, Jiangming Liu, Yizhong Wang, Rotem Dror
Venue: ACL
Publisher: Association for Computational Linguistics
Pages: 1–7
URL: https://aclanthology.org/2020.acl-srw.1
DOI: 10.18653/v1/2020.acl-srw.1
Cite (ACL): Prajjwal Bhargava. 2020. Adaptive Transformers for Learning Multimodal Representations. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics: Student Research Workshop, pages 1–7, Online. Association for Computational Linguistics.
Cite (Informal): Adaptive Transformers for Learning Multimodal Representations (Bhargava, ACL 2020)
PDF: https://aclanthology.org/2020.acl-srw.1.pdf
Video: http://slideslive.com/38928637
Code: prajjwal1/adaptive_transformer (+ additional community code)
Data: Visual Question Answering