Learning Structured Natural Language Representations for Semantic Parsing

Jianpeng Cheng, Siva Reddy, Vijay Saraswat, Mirella Lapata


Abstract
We introduce a neural semantic parser which is interpretable and scalable. Our model converts natural language utterances to intermediate, domain-general natural language representations in the form of predicate-argument structures, which are induced with a transition system and subsequently mapped to target domains. The semantic parser is trained end-to-end using annotated logical forms or their denotations. We achieve the state of the art on SPADES and GRAPHQUESTIONS and obtain competitive results on GEOQUERY and WEBQUESTIONS. The induced predicate-argument structures shed light on the types of representations useful for semantic parsing and how these are different from linguistically motivated ones.
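The transition system mentioned in the abstract builds the intermediate predicate-argument structure incrementally, in the spirit of RNNG-style tree generation. The sketch below is purely illustrative: the action names (NT, TER, RED) and the example derivation are assumptions chosen to show the idea, not the authors' released SCANNER implementation.

```python
# Toy sketch of a transition system that builds a predicate-argument tree.
# Action names (NT, TER, RED) and the sample derivation are illustrative
# assumptions; this is not the released cheng6076/scanner code.

class Node:
    def __init__(self, label):
        self.label = label
        self.children = []

    def __repr__(self):
        if not self.children:
            return self.label
        return f"({self.label} {' '.join(map(repr, self.children))})"


def run_transitions(actions):
    """Replay a sequence of (action, label) pairs and return the finished tree."""
    stack = []
    for action, label in actions:
        if action == "NT":      # open a predicate (non-terminal) node
            stack.append(Node(label))
        elif action == "TER":   # attach an argument (terminal) to the open node
            stack[-1].children.append(Node(label))
        elif action == "RED":   # close the current subtree and attach it upward
            finished = stack.pop()
            if not stack:
                return finished
            stack[-1].children.append(finished)
    return stack[0] if stack else None


if __name__ == "__main__":
    # Hypothetical utterance "Who directed Inception?"
    # -> (answer (directed.by inception))
    derivation = [
        ("NT", "answer"),
        ("NT", "directed.by"),
        ("TER", "inception"),
        ("RED", None),
        ("RED", None),
    ]
    print(run_transitions(derivation))
```

In the paper, the choice among such actions at each step is scored by a neural network, and the resulting domain-general structure is subsequently grounded in the target knowledge base.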
Anthology ID:
P17-1005
Volume:
Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
Month:
July
Year:
2017
Address:
Vancouver, Canada
Editors:
Regina Barzilay, Min-Yen Kan
Venue:
ACL
Publisher:
Association for Computational Linguistics
Pages:
44–55
URL:
https://aclanthology.org/P17-1005
DOI:
10.18653/v1/P17-1005
Cite (ACL):
Jianpeng Cheng, Siva Reddy, Vijay Saraswat, and Mirella Lapata. 2017. Learning Structured Natural Language Representations for Semantic Parsing. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 44–55, Vancouver, Canada. Association for Computational Linguistics.
Cite (Informal):
Learning Structured Natural Language Representations for Semantic Parsing (Cheng et al., ACL 2017)
PDF:
https://aclanthology.org/P17-1005.pdf
Video:
https://aclanthology.org/P17-1005.mp4
Code:
cheng6076/scanner