Latent Alignment of Procedural Concepts in Multimodal Recipes

Hossein Rajaby Faghihi, Roshanak Mirzaee, Sudarshan Paliwal, Parisa Kordjamshidi


Abstract
We propose a novel alignment mechanism to address procedural reasoning on a newly released multimodal QA dataset, RecipeQA. Our model solves the textual cloze task, a reading-comprehension task over a recipe containing images and instructions. We exploit the power of attention networks, cross-modal representations, and a latent alignment space between instructions and candidate answers to solve the problem. We introduce constrained max-pooling, which refines the max-pooling operation on the alignment matrix to impose disjointness constraints among the outputs of the model. Our evaluation results indicate a 19% improvement over the baselines.
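
To make the constrained max-pooling idea concrete, here is a minimal, hypothetical NumPy sketch. It assumes the alignment matrix has one row per candidate answer and one column per instruction step, interprets the disjointness constraint as "no two candidates may pool their score from the same step", and enforces it greedily; the function name, greedy strategy, and example values are illustrative and are not taken from the authors' released code.

import numpy as np

def constrained_max_pool(align):
    """Greedy constrained max-pooling over an alignment matrix.

    align: (n_candidates, n_steps) array of alignment scores.
    Returns the pooled score and the selected step index for each
    candidate, under the constraint that no two candidates pool
    from the same step. Hypothetical sketch, not necessarily the
    paper's exact procedure.
    """
    scores = align.astype(float).copy()
    n_cand, n_steps = scores.shape
    pooled = np.full(n_cand, -np.inf)
    chosen = np.full(n_cand, -1)
    # Repeatedly take the globally best (candidate, step) pair, then
    # mask that candidate's row and that step's column so neither can
    # be reused -- the masking is what makes the pooling "constrained".
    for _ in range(min(n_cand, n_steps)):
        i, j = np.unravel_index(np.argmax(scores), scores.shape)
        if not np.isfinite(scores[i, j]):
            break  # nothing assignable remains
        pooled[i], chosen[i] = align[i, j], j
        scores[i, :] = -np.inf
        scores[:, j] = -np.inf
    return pooled, chosen

# Example: 3 candidate answers, 4 instruction steps. Plain row-wise
# max-pooling would let candidates 0 and 1 both claim step 0; the
# constrained version assigns steps 0, 1, 2 to candidates 0, 1, 2.
A = np.array([[0.9, 0.2, 0.1, 0.3],
              [0.8, 0.7, 0.2, 0.1],
              [0.1, 0.6, 0.5, 0.4]])
print(constrained_max_pool(A))

A plain max over each row would let several candidates claim the same instruction step; masking the chosen column after each selection is what realizes the disjointness constraint described in the abstract.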
Anthology ID:
2020.alvr-1.5
Volume:
Proceedings of the First Workshop on Advances in Language and Vision Research
Month:
July
Year:
2020
Address:
Online
Editors:
Xin Wang, Jesse Thomason, Ronghang Hu, Xinlei Chen, Peter Anderson, Qi Wu, Asli Celikyilmaz, Jason Baldridge, William Yang Wang
Venue:
ALVR
Publisher:
Association for Computational Linguistics
Pages:
26–31
URL:
https://aclanthology.org/2020.alvr-1.5
DOI:
10.18653/v1/2020.alvr-1.5
Cite (ACL):
Hossein Rajaby Faghihi, Roshanak Mirzaee, Sudarshan Paliwal, and Parisa Kordjamshidi. 2020. Latent Alignment of Procedural Concepts in Multimodal Recipes. In Proceedings of the First Workshop on Advances in Language and Vision Research, pages 26–31, Online. Association for Computational Linguistics.
Cite (Informal):
Latent Alignment of Procedural Concepts in Multimodal Recipes (Rajaby Faghihi et al., ALVR 2020)
PDF:
https://aclanthology.org/2020.alvr-1.5.pdf
Video:
http://slideslive.com/38929759
Code:
HLR/LatentAlignmentProcedural
Data:
RecipeQA