Sequential Attention: A Context-Aware Alignment Function for Machine Reading

Sebastian Brarda, Philip Yeres, Samuel Bowman


Abstract
In this paper we propose a neural network model with a novel Sequential Attention layer that extends soft attention by assigning each word in an input sequence a weight that reflects not only how well that word matches a query, but also how well the surrounding words match. We evaluate this approach on the task of reading comprehension (on the Who did What and CNN datasets) and show that it dramatically improves a strong baseline, the Stanford Reader, and is competitive with the state of the art.
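
As a rough, illustrative sketch of the idea described in the abstract (not the authors' exact formulation, which is given in the PDF linked below), the snippet contrasts standard soft attention, where each word's weight depends only on its own match score against the query, with a context-aware variant in which neighboring match scores are mixed in before the softmax. The fixed averaging window used here is an assumption made for brevity; the paper's Sequential Attention layer combines context differently.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def soft_attention(H, q):
    """Standard soft attention: each word is weighted by its own query match."""
    scores = H @ q                          # per-word match scores, shape (n,)
    return softmax(scores)

def context_aware_attention(H, q, window=2):
    """Illustrative context-aware variant (assumption, not the paper's layer):
    each word's pre-softmax score also averages the match scores of its
    neighbors, so a word surrounded by good matches receives more weight."""
    scores = H @ q
    kernel = np.ones(2 * window + 1) / (2 * window + 1)
    smoothed = np.convolve(scores, kernel, mode="same")
    return softmax(smoothed)

# Toy example: 6 "words" with random 4-d encodings and a 4-d query vector.
rng = np.random.default_rng(0)
H = rng.normal(size=(6, 4))                 # one row per word in the passage
q = rng.normal(size=4)                      # query vector
print(soft_attention(H, q))
print(context_aware_attention(H, q))
```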
Anthology ID: W17-2610
Volume: Proceedings of the 2nd Workshop on Representation Learning for NLP
Month: August
Year: 2017
Address: Vancouver, Canada
Editors: Phil Blunsom, Antoine Bordes, Kyunghyun Cho, Shay Cohen, Chris Dyer, Edward Grefenstette, Karl Moritz Hermann, Laura Rimell, Jason Weston, Scott Yih
Venue: RepL4NLP
SIG: SIGREP
Publisher: Association for Computational Linguistics
Pages: 75–80
URL: https://aclanthology.org/W17-2610
DOI: 10.18653/v1/W17-2610
Cite (ACL): Sebastian Brarda, Philip Yeres, and Samuel Bowman. 2017. Sequential Attention: A Context-Aware Alignment Function for Machine Reading. In Proceedings of the 2nd Workshop on Representation Learning for NLP, pages 75–80, Vancouver, Canada. Association for Computational Linguistics.
Cite (Informal): Sequential Attention: A Context-Aware Alignment Function for Machine Reading (Brarda et al., RepL4NLP 2017)
PDF: https://aclanthology.org/W17-2610.pdf