Adaptive Fusion Techniques for Multimodal Data

Gaurav Sahu, Olga Vechtomova


Abstract
Effective fusion of data from multiple modalities, such as video, speech, and text, is challenging due to the heterogeneous nature of multimodal data. In this paper, we propose adaptive fusion techniques that aim to model context from different modalities effectively. Instead of imposing a deterministic fusion operation, such as concatenation, on the network, we let the network decide “how” best to combine a given set of multimodal features. We propose two networks: 1) Auto-Fusion, which learns to compress information from different modalities while preserving context, and 2) GAN-Fusion, which regularizes the learned latent space given context from complementary modalities. A quantitative evaluation on the tasks of multimodal machine translation and emotion recognition suggests that our lightweight, adaptive networks model context from other modalities better than existing methods, many of which employ massive transformer-based networks.
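
To make the first fusion scheme concrete, below is a minimal PyTorch sketch of the Auto-Fusion idea as described in the abstract: concatenated multimodal features are compressed into a fused latent, and a reconstruction loss encourages that latent to preserve the information in the original concatenation. The layer sizes, activation choices, and all class and variable names are illustrative assumptions, not the authors' released code.

```python
# Hypothetical sketch of Auto-Fusion; dimensions and architecture are assumptions.
import torch
import torch.nn as nn

class AutoFusion(nn.Module):
    """Compress concatenated multimodal features into a fused latent,
    with a reconstruction objective that pushes the latent to preserve
    the information in the original concatenation."""

    def __init__(self, input_dim: int, latent_dim: int):
        super().__init__()
        # Encoder: compress the concatenated features into the fused latent.
        self.encoder = nn.Sequential(nn.Linear(input_dim, latent_dim), nn.Tanh())
        # Decoder: reconstruct the concatenation from the latent.
        self.decoder = nn.Linear(latent_dim, input_dim)
        self.criterion = nn.MSELoss()

    def forward(self, text_feat, speech_feat, video_feat):
        concat = torch.cat([text_feat, speech_feat, video_feat], dim=-1)
        fused = self.encoder(concat)
        recon = self.decoder(fused)
        # The reconstruction loss would be added to the downstream task loss.
        recon_loss = self.criterion(recon, concat)
        return fused, recon_loss

# Example: 512-d text, 128-d speech, 256-d video -> 256-d fused latent.
fusion = AutoFusion(input_dim=512 + 128 + 256, latent_dim=256)
t, s, v = torch.randn(4, 512), torch.randn(4, 128), torch.randn(4, 256)
fused, loss = fusion(t, s, v)
print(fused.shape, loss.item())  # torch.Size([4, 256])
```

GAN-Fusion can be sketched along similar lines: for a chosen target modality, a generator fuses the complementary modalities into a latent, and a discriminator adversarially pushes that latent toward the target modality's region of the latent space. Again, the architecture and training loop below are assumptions made for illustration only.

```python
# Hypothetical sketch of GAN-Fusion for one target modality (text);
# layer sizes and the training loop are illustrative assumptions.
import torch
import torch.nn as nn

latent_dim = 256

# Generator: fuse the complementary modalities (speech + video) into a
# latent intended to mimic the target (text) representation.
generator = nn.Sequential(nn.Linear(128 + 256, latent_dim), nn.Tanh())
# Discriminator: distinguish true text latents from generated ones.
discriminator = nn.Sequential(nn.Linear(latent_dim, 1), nn.Sigmoid())

bce = nn.BCELoss()
g_opt = torch.optim.Adam(generator.parameters(), lr=1e-4)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=1e-4)

text_latent = torch.randn(4, latent_dim)  # stand-in for encoded text features
speech, video = torch.randn(4, 128), torch.randn(4, 256)

for _ in range(100):
    fused = generator(torch.cat([speech, video], dim=-1))

    # Discriminator step: real text latents vs. detached fused latents.
    d_loss = bce(discriminator(text_latent), torch.ones(4, 1)) + \
             bce(discriminator(fused.detach()), torch.zeros(4, 1))
    d_opt.zero_grad()
    d_loss.backward()
    d_opt.step()

    # Generator step: fool the discriminator, so the fused latent
    # lands in the target modality's region of the latent space.
    g_loss = bce(discriminator(fused), torch.ones(4, 1))
    g_opt.zero_grad()
    g_loss.backward()
    g_opt.step()
```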
Anthology ID:
2021.eacl-main.275
Volume:
Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume
Month:
April
Year:
2021
Address:
Online
Editors:
Paola Merlo, Jörg Tiedemann, Reut Tsarfaty
Venue:
EACL
Publisher:
Association for Computational Linguistics
Pages:
3156–3166
URL:
https://aclanthology.org/2021.eacl-main.275
DOI:
10.18653/v1/2021.eacl-main.275
Cite (ACL):
Gaurav Sahu and Olga Vechtomova. 2021. Adaptive Fusion Techniques for Multimodal Data. In Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume, pages 3156–3166, Online. Association for Computational Linguistics.
Cite (Informal):
Adaptive Fusion Techniques for Multimodal Data (Sahu & Vechtomova, EACL 2021)
PDF:
https://aclanthology.org/2021.eacl-main.275.pdf
Data
How2, IEMOCAP