Neural Data-to-Text Generation via Jointly Learning the Segmentation and Correspondence

Xiaoyu Shen, Ernie Chang, Hui Su, Cheng Niu, Dietrich Klakow


Abstract
The neural attention model has achieved great success in data-to-text generation tasks. Though usually excelling at producing fluent text, it suffers from problems of missing information, repetition, and “hallucination”. Due to the black-box nature of the neural attention architecture, avoiding these problems in a systematic way is non-trivial. To address this concern, we propose to explicitly segment the target text into fragment units and align them with their corresponding data records. The segmentation and correspondence are jointly learned as latent variables without any human annotations. We further impose a soft statistical constraint to regularize the segmental granularity. The resulting architecture maintains the same expressive power as neural attention models, while generating fully interpretable outputs at several times lower computational cost. On both the E2E and WebNLG benchmarks, we show that the proposed model consistently outperforms its neural attention counterparts.
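To make the abstract's core idea concrete, below is a minimal sketch (not the authors' released code) of how a latent segmentation and its record correspondence can be marginalized out with a semi-Markov-style forward pass. The `seg_score` hook, the function names, and the toy scores are all hypothetical placeholders; in the paper, fragment scores would come from the neural decoder.

```python
import math

def logsumexp(xs):
    """Numerically stable log(sum(exp(x) for x in xs))."""
    m = max(xs)
    return m + math.log(sum(math.exp(x - m) for x in xs))

def marginal_log_likelihood(seg_score, n_tokens, n_records, max_len):
    """Sum over all latent segmentations and record alignments.

    seg_score(i, j, r): hypothetical hook returning the log-score of
    emitting target tokens i..j-1 as one fragment aligned to record r.
    Runs in O(n_tokens * max_len * n_records) time.
    """
    NEG_INF = float("-inf")
    # alpha[j] = log total score of all analyses of the prefix 0..j-1
    alpha = [NEG_INF] * (n_tokens + 1)
    alpha[0] = 0.0  # the empty prefix has exactly one (empty) analysis
    for j in range(1, n_tokens + 1):
        candidates = []
        for i in range(max(0, j - max_len), j):  # last fragment = tokens i..j-1
            for r in range(n_records):           # ...aligned to record r
                candidates.append(alpha[i] + seg_score(i, j, r))
        alpha[j] = logsumexp(candidates)
    return alpha[n_tokens]

# Toy usage: a uniform per-token cost, 3 target tokens, 2 data records.
# The DP sums over the 4 ways to segment 3 tokens, times the record
# choice made independently for each fragment.
toy_score = lambda i, j, r: -1.0 * (j - i)
print(marginal_log_likelihood(toy_score, n_tokens=3, n_records=2, max_len=3))
```

Training would maximize this marginal likelihood, so no segmentation or alignment annotations are needed; the soft statistical constraint on segmental granularity mentioned in the abstract would enter as an extra regularization term, which this sketch omits.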
Anthology ID: 2020.acl-main.641
Volume: Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics
Month: July
Year: 2020
Address: Online
Editors: Dan Jurafsky, Joyce Chai, Natalie Schluter, Joel Tetreault
Venue: ACL
Publisher: Association for Computational Linguistics
Pages: 7155–7165
URL: https://aclanthology.org/2020.acl-main.641
DOI: 10.18653/v1/2020.acl-main.641
Cite (ACL): Xiaoyu Shen, Ernie Chang, Hui Su, Cheng Niu, and Dietrich Klakow. 2020. Neural Data-to-Text Generation via Jointly Learning the Segmentation and Correspondence. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 7155–7165, Online. Association for Computational Linguistics.
Cite (Informal): Neural Data-to-Text Generation via Jointly Learning the Segmentation and Correspondence (Shen et al., ACL 2020)
PDF: https://aclanthology.org/2020.acl-main.641.pdf
Video: http://slideslive.com/38929190