Unsupervised Natural Question Answering with a Small Model

Martin Andrews, Sam Witteveen


Abstract
The recent demonstration that huge language models such as GPT-2 can memorise the answers to factoid questions raises the question of how much knowledge is embedded directly within these large models. This short paper describes an architecture through which much smaller models can also answer such questions, by making use of ‘raw’ external knowledge. The contribution of this work is that the methods presented here rely on unsupervised learning techniques, complementing the unsupervised training of the Language Model. The goal of this line of research is to be able to add knowledge explicitly, without extensive training.
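The abstract names the architecture only at a high level. As a purely illustrative aid, the sketch below shows one plausible shape of the general idea: a small pretrained language model answering a factoid question by conditioning on ‘raw’ retrieved text rather than on parametric memory. This is not the authors' actual method; the prompt format, the choice of GPT-2 small, and the `answer` helper are assumptions made for illustration.

```python
# Minimal sketch (NOT the paper's architecture): a small LM conditioned on
# raw external text to answer a factoid question. Requires `transformers`
# and `torch`.
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")          # 124M-param "small" model
model = GPT2LMHeadModel.from_pretrained("gpt2").eval()

def answer(question: str, passages: list, max_new_tokens: int = 10) -> str:
    """Condition the small LM on retrieved raw text, then greedily decode.

    `passages` stands in for whatever retrieval step supplies the external
    knowledge; the Q/A prompt format is an illustrative assumption.
    """
    context = "\n".join(passages)
    prompt = f"{context}\nQ: {question}\nA:"
    ids = tokenizer(prompt, return_tensors="pt").input_ids
    with torch.no_grad():
        out = model.generate(
            ids,
            max_new_tokens=max_new_tokens,
            do_sample=False,                      # greedy decoding
            pad_token_id=tokenizer.eos_token_id,  # silence pad warning
        )
    # Return only the newly generated tokens after the prompt.
    return tokenizer.decode(out[0, ids.shape[1]:], skip_special_tokens=True).strip()

# Hypothetical usage, with a passage as if retrieved from a raw-text corpus:
passages = ["Canberra is the capital city of Australia."]
print(answer("What is the capital of Australia?", passages))
```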
Anthology ID:
D19-6606
Volume:
Proceedings of the Second Workshop on Fact Extraction and VERification (FEVER)
Month:
November
Year:
2019
Address:
Hong Kong, China
Editors:
James Thorne, Andreas Vlachos, Oana Cocarascu, Christos Christodoulopoulos, Arpit Mittal
Venue:
WS
Publisher:
Association for Computational Linguistics
Pages:
34–38
URL:
https://aclanthology.org/D19-6606
DOI:
10.18653/v1/D19-6606
Cite (ACL):
Martin Andrews and Sam Witteveen. 2019. Unsupervised Natural Question Answering with a Small Model. In Proceedings of the Second Workshop on Fact Extraction and VERification (FEVER), pages 34–38, Hong Kong, China. Association for Computational Linguistics.
Cite (Informal):
Unsupervised Natural Question Answering with a Small Model (Andrews & Witteveen, 2019)
PDF:
https://aclanthology.org/D19-6606.pdf
Data
Natural Questions, SQuAD