Transfer Learning for Neural Semantic Parsing

Xing Fan, Emilio Monti, Lambert Mathias, Markus Dreyer


Abstract
The goal of semantic parsing is to map natural language to a machine-interpretable meaning representation language (MRL). One of the constraints that limits full exploration of deep learning technologies for semantic parsing is the lack of sufficient annotated training data. In this paper, we propose using sequence-to-sequence models in a multi-task setup for semantic parsing, with a focus on transfer learning. We explore three multi-task architectures for the sequence-to-sequence model and compare their performance with that of an independently trained model. Our experiments show that the multi-task setup aids transfer learning from an auxiliary task with large labeled data to a target task with smaller labeled data. We see an absolute accuracy gain ranging from 1.0% to 4.4% on our in-house dataset, and we also see good gains ranging from 2.5% to 7.0% on the ATIS semantic parsing tasks with syntactic and semantic auxiliary tasks.
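To make the multi-task transfer setup concrete, below is a minimal sketch in PyTorch of one common arrangement: a shared encoder with task-specific decoders, trained by alternating minibatches between an auxiliary task and the target task. All class names, dimensions, and the training snippet are illustrative assumptions; the paper compares three multi-task architectures, and this sketch illustrates the general idea rather than reproducing any one of them.

# Hypothetical sketch of a shared-encoder multi-task seq2seq model.
# Encoder parameters are shared across tasks, so gradients from the
# large auxiliary task can transfer to the small target task.
import torch
import torch.nn as nn

class SharedEncoderSeq2Seq(nn.Module):
    def __init__(self, src_vocab, tgt_vocabs, emb=64, hid=128):
        super().__init__()
        self.src_emb = nn.Embedding(src_vocab, emb)       # shared
        self.encoder = nn.LSTM(emb, hid, batch_first=True)  # shared
        # One embedding, decoder, and output layer per task.
        self.tgt_embs = nn.ModuleList(nn.Embedding(v, emb) for v in tgt_vocabs)
        self.decoders = nn.ModuleList(nn.LSTM(emb, hid, batch_first=True)
                                      for _ in tgt_vocabs)
        self.outs = nn.ModuleList(nn.Linear(hid, v) for v in tgt_vocabs)

    def forward(self, src, tgt_in, task):
        _, state = self.encoder(self.src_emb(src))        # shared encoding
        dec_out, _ = self.decoders[task](self.tgt_embs[task](tgt_in), state)
        return self.outs[task](dec_out)                   # per-task logits

# Toy usage: task 0 is the (large) auxiliary task, task 1 the target task;
# in training one would alternate batches between them.
model = SharedEncoderSeq2Seq(src_vocab=1000, tgt_vocabs=[500, 300])
loss_fn = nn.CrossEntropyLoss()
src = torch.randint(0, 1000, (8, 12))
tgt_in = torch.randint(0, 500, (8, 10))
tgt_out = torch.randint(0, 500, (8, 10))
logits = model(src, tgt_in, task=0)
loss = loss_fn(logits.reshape(-1, 500), tgt_out.reshape(-1))
loss.backward()  # updates flow into the shared encoder

The key design choice in such a setup is which parameters to share: sharing only the encoder keeps the tasks' output vocabularies independent, while sharing more (or less) of the network trades off transfer against task interference.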
Anthology ID:
W17-2607
Volume:
Proceedings of the 2nd Workshop on Representation Learning for NLP
Month:
August
Year:
2017
Address:
Vancouver, Canada
Editors:
Phil Blunsom, Antoine Bordes, Kyunghyun Cho, Shay Cohen, Chris Dyer, Edward Grefenstette, Karl Moritz Hermann, Laura Rimell, Jason Weston, Scott Yih
Venue:
RepL4NLP
SIG:
SIGREP
Publisher:
Association for Computational Linguistics
Pages:
48–56
URL:
https://aclanthology.org/W17-2607
DOI:
10.18653/v1/W17-2607
Cite (ACL):
Xing Fan, Emilio Monti, Lambert Mathias, and Markus Dreyer. 2017. Transfer Learning for Neural Semantic Parsing. In Proceedings of the 2nd Workshop on Representation Learning for NLP, pages 48–56, Vancouver, Canada. Association for Computational Linguistics.
Cite (Informal):
Transfer Learning for Neural Semantic Parsing (Fan et al., RepL4NLP 2017)
PDF:
https://aclanthology.org/W17-2607.pdf