DRTS Parsing with Structure-Aware Encoding and Decoding

Qiankun Fu, Yue Zhang, Jiangming Liu, Meishan Zhang


Abstract
Discourse representation tree structure (DRTS) parsing is a novel semantic parsing task that has recently attracted attention. State-of-the-art performance can be achieved by a neural sequence-to-sequence model that treats tree construction as an incremental sequence-generation problem. However, such models ignore structural information, such as the syntax of the input and the intermediate skeleton of the partial output, which could be useful for DRTS parsing. In this work, we propose a structure-aware model that integrates this structural information at both the encoding and decoding phases, using a graph attention network (GAT) for effective modeling. Experimental results on a benchmark dataset show that our proposed model is effective and obtains the best performance in the literature.
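The abstract's structure-aware components are built on graph attention. As a rough illustration only (not the authors' implementation, and with all variable names invented here), a single GAT layer computes edge-restricted attention coefficients and aggregates neighbor features:

```python
import numpy as np

def leaky_relu(x, alpha=0.2):
    return np.where(x > 0, x, alpha * x)

def masked_softmax(scores, mask):
    # Softmax over each row, restricted to edges; non-edges get zero weight.
    scores = np.where(mask, scores, -1e9)
    e = np.exp(scores - scores.max(axis=-1, keepdims=True))
    e = np.where(mask, e, 0.0)
    return e / e.sum(axis=-1, keepdims=True)

def gat_layer(h, adj, W, a):
    """One graph-attention layer (in the style of Velickovic et al., 2018).

    h:   (n, d_in)  node features
    adj: (n, n)     boolean adjacency matrix, self-loops included
    W:   (d_in, d_out) shared linear projection
    a:   (2 * d_out,)  attention vector, split between source and target
    """
    z = h @ W                      # project node features: (n, d_out)
    d = z.shape[1]
    # e_ij = LeakyReLU(a^T [z_i || z_j]) decomposes into a source term
    # and a target term, so it can be computed without materializing pairs.
    src = z @ a[:d]                # (n,)
    dst = z @ a[d:]                # (n,)
    e = leaky_relu(src[:, None] + dst[None, :])
    alpha = masked_softmax(e, adj) # attention limited to graph edges
    return alpha @ z               # attention-weighted neighbor aggregation
```

In the paper's setting, the graph would come from the input dependency syntax (encoder side) or the partially built DRTS skeleton (decoder side); the sketch above just shows the attention mechanism itself on an arbitrary adjacency matrix.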
Anthology ID:
2020.acl-main.609
Volume:
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics
Month:
July
Year:
2020
Address:
Online
Editors:
Dan Jurafsky, Joyce Chai, Natalie Schluter, Joel Tetreault
Venue:
ACL
Publisher:
Association for Computational Linguistics
Pages:
6818–6828
URL:
https://aclanthology.org/2020.acl-main.609
DOI:
10.18653/v1/2020.acl-main.609
Bibkey:
Cite (ACL):
Qiankun Fu, Yue Zhang, Jiangming Liu, and Meishan Zhang. 2020. DRTS Parsing with Structure-Aware Encoding and Decoding. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 6818–6828, Online. Association for Computational Linguistics.
Cite (Informal):
DRTS Parsing with Structure-Aware Encoding and Decoding (Fu et al., ACL 2020)
PDF:
https://aclanthology.org/2020.acl-main.609.pdf
Video:
http://slideslive.com/38929443