AutoQA: From Databases To QA Semantic Parsers With Only Synthetic Training Data

Silei Xu, Sina Semnani, Giovanni Campagna, Monica Lam


Abstract
We propose AutoQA, a methodology and toolkit to generate semantic parsers that answer questions on databases, with no manual effort. Given a database schema and its data, AutoQA automatically generates a large set of high-quality questions for training that covers different database operations. It uses automatic paraphrasing combined with template-based parsing to find alternative expressions of an attribute in different parts of speech. It also uses a novel filtered auto-paraphraser to generate correct paraphrases of entire sentences. We apply AutoQA to the Schema2QA dataset and obtain an average logical form accuracy of 62.9% when tested on natural questions, which is only 6.4% lower than a model trained with expert natural language annotations and paraphrase data collected from crowdworkers. To demonstrate the generality of AutoQA, we also apply it to the Overnight dataset. AutoQA achieves 69.8% answer accuracy, 16.4% higher than the state-of-the-art zero-shot models and only 5.2% lower than the same model trained with human data.
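The "filtered auto-paraphraser" mentioned in the abstract can be illustrated with a minimal sketch: generate candidate paraphrases of a synthetic sentence, then keep only those that a semantic parser maps back to the same logical form as the original. The `paraphrase_model` and `parse` functions below are hypothetical stand-in stubs for illustration, not the paper's actual models.

```python
def paraphrase_model(sentence):
    """Stub paraphraser: returns candidate rewrites.

    A real system would use a neural paraphrase model; this stub is a
    hypothetical stand-in for illustration only.
    """
    candidates = {
        "show restaurants with rating at least 4": [
            "list restaurants rated 4 or higher",  # meaning-preserving
            "list restaurants rated exactly 4",    # meaning-changing, should be filtered out
        ],
    }
    return candidates.get(sentence, [])


def parse(sentence):
    """Stub semantic parser: maps a sentence to a logical-form string."""
    table = {
        "show restaurants with rating at least 4": "filter(Restaurant, rating >= 4)",
        "list restaurants rated 4 or higher": "filter(Restaurant, rating >= 4)",
        "list restaurants rated exactly 4": "filter(Restaurant, rating == 4)",
    }
    return table.get(sentence)


def filtered_paraphrases(sentence):
    """Keep only paraphrases whose parse matches the original logical form."""
    target = parse(sentence)
    return [p for p in paraphrase_model(sentence) if parse(p) == target]


print(filtered_paraphrases("show restaurants with rating at least 4"))
```

Here the incorrect paraphrase ("exactly 4") is discarded because it parses to a different logical form, so only semantically faithful paraphrases enter the training set.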
Anthology ID:
2020.emnlp-main.31
Volume:
Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)
Month:
November
Year:
2020
Address:
Online
Editors:
Bonnie Webber, Trevor Cohn, Yulan He, Yang Liu
Venue:
EMNLP
Publisher:
Association for Computational Linguistics
Pages:
422–434
URL:
https://aclanthology.org/2020.emnlp-main.31
DOI:
10.18653/v1/2020.emnlp-main.31
Cite (ACL):
Silei Xu, Sina Semnani, Giovanni Campagna, and Monica Lam. 2020. AutoQA: From Databases To QA Semantic Parsers With Only Synthetic Training Data. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 422–434, Online. Association for Computational Linguistics.
Cite (Informal):
AutoQA: From Databases To QA Semantic Parsers With Only Synthetic Training Data (Xu et al., EMNLP 2020)
PDF:
https://aclanthology.org/2020.emnlp-main.31.pdf
Video:
https://slideslive.com/38939351
Code
stanford-oval/genie-toolkit + additional community code
Data
Stanford Schema2QA Dataset