Discourse Representation Structure Parsing with Recurrent Neural Networks and the Transformer Model

Jiangming Liu, Shay B. Cohen, Mirella Lapata


Abstract
We describe the systems we developed for Discourse Representation Structure (DRS) parsing as part of the IWCS-2019 Shared Task on DRS Parsing. Our systems are based on sequence-to-sequence modeling and are implemented with OpenNMT-py, an open-source neural machine translation toolkit built on PyTorch. We experimented with a variety of encoder-decoder models based on recurrent neural networks and the Transformer architecture. We conduct experiments on the standard benchmark of the Parallel Meaning Bank (PMB 2.2). Our best system achieves an F1 score of 84.8% in the DRS parsing shared task.
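
For readers unfamiliar with the sequence-to-sequence setup the abstract describes, the following is a minimal PyTorch sketch of a Transformer encoder-decoder over token IDs. This is not the authors' code (they used OpenNMT-py, not a hand-rolled model); the class name, dimensions, and vocabulary sizes are hypothetical placeholders, and the sketch assumes a recent PyTorch (1.9 or later for batch_first support).

import torch
import torch.nn as nn

class Seq2SeqTransformer(nn.Module):
    """Minimal Transformer encoder-decoder over batch-first token IDs."""
    def __init__(self, src_vocab, tgt_vocab, d_model=512, nhead=8,
                 num_layers=6, max_len=512):
        super().__init__()
        self.src_emb = nn.Embedding(src_vocab, d_model)
        self.tgt_emb = nn.Embedding(tgt_vocab, d_model)
        self.pos_emb = nn.Embedding(max_len, d_model)  # learned positions
        self.transformer = nn.Transformer(
            d_model=d_model, nhead=nhead,
            num_encoder_layers=num_layers, num_decoder_layers=num_layers,
            batch_first=True)
        self.out = nn.Linear(d_model, tgt_vocab)

    def embed(self, tokens, emb):
        # Add a learned positional embedding to each token embedding.
        pos = torch.arange(tokens.size(1), device=tokens.device)
        return emb(tokens) + self.pos_emb(pos)

    def forward(self, src, tgt):
        # Causal mask: each target position attends only to earlier positions.
        tgt_mask = self.transformer.generate_square_subsequent_mask(
            tgt.size(1)).to(src.device)
        h = self.transformer(self.embed(src, self.src_emb),
                             self.embed(tgt, self.tgt_emb),
                             tgt_mask=tgt_mask)
        return self.out(h)  # (batch, tgt_len, tgt_vocab) logits

# Toy usage; vocabulary sizes and sequence lengths are placeholders.
model = Seq2SeqTransformer(src_vocab=8000, tgt_vocab=4000)
src = torch.randint(0, 8000, (2, 15))   # input word-token IDs
tgt = torch.randint(0, 4000, (2, 25))   # linearized DRS clause-token IDs
logits = model(src, tgt)                # shape: (2, 25, 4000)

Swapping the Transformer for the recurrent variant the abstract also mentions would amount to replacing nn.Transformer with an LSTM encoder and an attentional LSTM decoder; in OpenNMT-py this is a configuration choice rather than a code change.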
Anthology ID: W19-1203
Volume: Proceedings of the IWCS Shared Task on Semantic Parsing
Month: May
Year: 2019
Address: Gothenburg, Sweden
Editors: Lasha Abzianidze, Rik van Noord, Hessel Haagsma, Johan Bos
Venue: IWCS
SIG: SIGSEM
Publisher: Association for Computational Linguistics
URL: https://aclanthology.org/W19-1203
DOI: 10.18653/v1/W19-1203
Cite (ACL):
Jiangming Liu, Shay B. Cohen, and Mirella Lapata. 2019. Discourse Representation Structure Parsing with Recurrent Neural Networks and the Transformer Model. In Proceedings of the IWCS Shared Task on Semantic Parsing, Gothenburg, Sweden. Association for Computational Linguistics.
Cite (Informal):
Discourse Representation Structure Parsing with Recurrent Neural Networks and the Transformer Model (Liu et al., IWCS 2019)
PDF: https://aclanthology.org/W19-1203.pdf