Semantic Parsing with Semi-Supervised Sequential Autoencoders

Tomáš Kočiský1, Gábor Melis1, Edward Grefenstette1, Chris Dyer1, Wang Ling1, Phil Blunsom2, Karl Moritz Hermann1
1Google DeepMind, 2University of Oxford


Abstract

We present a novel semi-supervised approach to sequence transduction and apply it to semantic parsing. The unsupervised component is based on a generative model in which latent sentences generate the unpaired logical forms. We apply this method to a number of semantic parsing tasks, focusing on domains with limited access to labelled training data, and extend those datasets with synthetically generated logical forms.