Proceedings of the IWCS Shared Task on Semantic Parsing

Lasha Abzianidze, Rik van Noord, Hessel Haagsma, Johan Bos (Editors)


Anthology ID:
W19-12
Month:
May
Year:
2019
Address:
Gothenburg, Sweden
Venue:
IWCS
SIG:
SIGSEM
Publisher:
Association for Computational Linguistics
URL:
https://aclanthology.org/W19-12
PDF:
https://aclanthology.org/W19-12.pdf

Proceedings of the IWCS Shared Task on Semantic Parsing
Lasha Abzianidze | Rik van Noord | Hessel Haagsma | Johan Bos

The First Shared Task on Discourse Representation Structure Parsing
Lasha Abzianidze | Rik van Noord | Hessel Haagsma | Johan Bos

The paper presents the IWCS 2019 shared task on semantic parsing, where the goal is to produce Discourse Representation Structures (DRSs) for English sentences. DRSs originate from Discourse Representation Theory and are scoped meaning representations that capture the semantics of negation, modals, quantification, and presupposition triggers. In addition, concepts and event participants in DRSs are described with WordNet synsets and thematic roles from VerbNet. To measure the similarity between two DRSs, they are represented in a clausal form, i.e., as a set of tuples, and participating systems were expected to produce DRSs in this clausal form. The rich lexical information, explicit scope marking, high number of variables shared among clauses, and highly constrained format of valid DRSs together make DRS parsing a challenging NLP task. The results of the shared task showed improvements over the existing state-of-the-art parser.
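
For illustration, a much-simplified clausal form for the sentence "A man is not walking" might look as follows; the variable names are arbitrary, and real PMB annotations contain further clauses, e.g. for tense:

b1 REF x1          % discourse referent for "a man"
b1 man "n.01" x1   % concept disambiguated with a WordNet synset
b1 NOT b2          % negation takes scope over box b2
b2 REF e1          % discourse referent for the walking event
b2 walk "v.01" e1  % event concept with its WordNet synset
b2 Agent e1 x1     % VerbNet-style thematic role linking event and agent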

Transition-based DRS Parsing Using Stack-LSTMs
Kilian Evang

We present our submission to the IWCS 2019 shared task on semantic parsing: a transition-based parser that uses explicit word-meaning pairings but no explicit representation of syntax. Parsing decisions are made based on vector representations of parser states, encoded via stack-LSTMs (Ballesteros et al., 2017), as well as some heuristic rules. Our system reaches an F-score of 70.88% in the competition.
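
As a hedged, schematic sketch of how such a transition-based parser operates (function names and state fields are illustrative assumptions, not Evang's actual code), parsing amounts to repeatedly scoring candidate transitions from a vector encoding of the current parser state:

# Minimal sketch of a greedy transition-based parsing loop. In the real
# system the state encoding comes from stack-LSTMs and some decisions are
# made by heuristic rules; everything named here is hypothetical.
def parse(sentence, ts, encode_state, score):
    state = ts.initial_state(sentence)           # words on buffer, empty stack
    while not ts.is_final(state):
        h = encode_state(state)                  # vector for the current state
        legal = ts.legal_transitions(state)      # transitions valid right now
        best = max(legal, key=lambda t: score(h, t))
        state = ts.apply(state, best)            # e.g. shift a word, emit a clause
    return state.output                          # the predicted DRS clauses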

Discourse Representation Structure Parsing with Recurrent Neural Networks and the Transformer Model
Jiangming Liu | Shay B. Cohen | Mirella Lapata

We describe the systems we developed for Discourse Representation Structure (DRS) parsing as part of the IWCS 2019 Shared Task on DRS Parsing. Our systems are based on sequence-to-sequence modeling. To implement our models, we use OpenNMT-py, an open-source neural machine translation system implemented in PyTorch. We experimented with a variety of encoder-decoder models based on recurrent neural networks and the Transformer model, conducting our experiments on the standard benchmark of the Parallel Meaning Bank (PMB 2.2). Our best system achieves a score of 84.8% F1 in the DRS parsing shared task.
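
As a hedged illustration of the general recipe (the exact preprocessing in the paper may differ), DRS parsing can be cast as machine translation by flattening each clause set into a single target token sequence; the separator token and helper names below are assumptions:

# Hypothetical sketch: linearizing the clausal form so that (sentence,
# clause-sequence) pairs can be fed to an NMT toolkit such as OpenNMT-py.
def linearize(clauses):
    # [('b1', 'REF', 'x1'), ('b1', 'NOT', 'b2'), ...] -> one string,
    # with '***' as an assumed clause-separator token
    return " *** ".join(" ".join(clause) for clause in clauses)

def delinearize(sequence):
    # invert linearize(): recover the clauses from model output
    return [tuple(part.split()) for part in sequence.split(" *** ")]

source = "A man is not walking ."
target = linearize([("b1", "REF", "x1"), ("b1", "man", '"n.01"', "x1"),
                    ("b1", "NOT", "b2")])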

Neural Boxer at the IWCS Shared Task on DRS Parsing
Rik van Noord

This paper describes our participation in the shared task on Discourse Representation Structure parsing. It follows the work of van Noord et al. (2018), who employed a neural sequence-to-sequence model to produce DRSs, also exploiting linguistic information with multiple encoders. We provide a detailed look at the performance of this model and show that (i) the benefit of the linguistic features is evident across a number of experiments that vary the amount of training data, and (ii) the model can be improved by applying a number of postprocessing methods to fix ill-formed output. Our model ended up in second place in the competition, with an F-score of 84.5%.
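
As one hedged example of the kind of postprocessing described here (the paper's actual repair methods may differ), a simple validity check can drop clauses that use a discourse variable never introduced by a REF clause:

# Hypothetical sketch of one repair step for ill-formed clausal output:
# keep only clauses whose x/e/s/t variables were introduced via REF.
def drop_unbound(clauses):
    introduced = {c[-1] for c in clauses if c[1] == "REF"}
    def bound(clause):
        variables = [a for a in clause[2:]
                     if a[0] in "xest" and a[1:].isdigit()]
        return all(v in introduced for v in variables)
    return [c for c in clauses if bound(c)]

clauses = [("b1", "REF", "x1"), ("b1", "REF", "e1"),
           ("b1", "walk", '"v.01"', "e1"),
           ("b1", "Agent", "e1", "x9")]   # x9 was never introduced
print(drop_unbound(clauses))              # the Agent clause is dropped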