Improving Semantic Parsing via Answer Type Inference

Semih Yavuz1, Izzeddin Gur2, Yu Su2, Mudhakar Srivatsa3, Xifeng Yan2
1,2University of California, Santa Barbara, 3IBM Research


Abstract

In this work, we show that the answer type of a factoid question can be inferred before the question is solved, and that this type information can be leveraged to improve semantic parsing. By replacing the topic entity in a question with its type, we generate an abstract form of the question whose answer corresponds to the answer type of the original question. A bidirectional LSTM model is trained on the abstract form of questions to infer their answer types. We also observe that converting a question into statement form further improves the accuracy of the LSTM model. Using the predicted type information to rerank the logical forms returned by AgendaIL, one of the leading semantic parsers, we improve the F1 score from 49.7 to 52.6 on the WEBQUESTIONS data.
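
The following is a minimal, illustrative sketch (not the authors' implementation) of the two ideas summarized above: building the abstract form of a question by replacing its topic entity with the entity's type, and using a predicted answer type to rerank candidate logical forms. The entity span, type labels, candidate fields, and the linear scoring weight are hypothetical placeholders.

```python
def abstract_form(question: str, topic_entity: str, entity_type: str) -> str:
    """Replace the topic entity mention with its type to obtain the abstract question."""
    return question.replace(topic_entity, entity_type)


def rerank(candidates, predicted_type, type_weight=0.5):
    """Rerank logical-form candidates by adding a bonus to the base parser score
    when a candidate's answer type matches the predicted answer type
    (a hypothetical linear combination, for illustration only)."""
    return sorted(
        candidates,
        key=lambda c: c["parser_score"]
        + type_weight * (c["answer_type"] == predicted_type),
        reverse=True,
    )


if __name__ == "__main__":
    q = "who plays meg in family guy"
    print(abstract_form(q, "family guy", "tv program"))
    # -> "who plays meg in tv program"; the answer of this abstract question
    #    corresponds to the answer type of the original question (e.g., person).

    candidates = [
        {"logical_form": "cast.actor", "answer_type": "person", "parser_score": 0.61},
        {"logical_form": "program.genre", "answer_type": "tv genre", "parser_score": 0.64},
    ]
    best = rerank(candidates, predicted_type="person")[0]
    print(best["logical_form"])  # type evidence promotes the type-consistent candidate
```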