Semantic Noise Matters for Neural Natural Language Generation

Ondřej Dušek, David M. Howcroft, Verena Rieser


Abstract
Neural natural language generation (NNLG) systems are known for their pathological outputs, i.e., generating text that is unrelated to the input specification. In this paper, we show the impact of semantic noise on state-of-the-art NNLG models that implement different semantic control mechanisms. We find that cleaned data can improve semantic correctness by up to 97%, while maintaining fluency. We also find that the most common error is omitting information rather than hallucination.
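The abstract's distinction between omissions (input slots missing from the output) and hallucinations (output content absent from the input) can be made concrete with a small slot-matching check. The Python sketch below is illustrative only, not the authors' released evaluation code (that lives in the tuetschek/e2e-cleaning repository linked below); the helper names parse_mr and count_omissions are hypothetical, and the naive substring match stands in for the hand-crafted per-slot patterns real slot-error scripts use.

    import re

    def parse_mr(mr):
        # Parse an E2E-style MR like "name[The Eagle], eatType[coffee shop]"
        # into {"name": "The Eagle", "eatType": "coffee shop"}.
        return dict(re.findall(r"(\w+)\[([^\]]*)\]", mr))

    def count_omissions(mr, text):
        # A slot counts as omitted if its value string never appears in the
        # generated text (crude: ignores paraphrases such as "by the river").
        text_lc = text.lower()
        return [slot for slot, value in parse_mr(mr).items()
                if value.lower() not in text_lc]

    if __name__ == "__main__":
        mr = "name[The Eagle], eatType[coffee shop], area[riverside]"
        out = "The Eagle is a coffee shop."
        print(count_omissions(mr, out))  # -> ['area'] ("riverside" omitted)

Checking the reverse direction (hallucinated values appearing in the text but not in the MR) requires a lexicon of possible slot values, which is why pattern-based scripts are used in practice.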
Anthology ID:
W19-8652
Volume:
Proceedings of the 12th International Conference on Natural Language Generation
Month:
October–November
Year:
2019
Address:
Tokyo, Japan
Editors:
Kees van Deemter, Chenghua Lin, Hiroya Takamura
Venue:
INLG
SIG:
SIGGEN
Publisher:
Association for Computational Linguistics
Pages:
421–426
URL:
https://aclanthology.org/W19-8652
DOI:
10.18653/v1/W19-8652
Cite (ACL):
Ondřej Dušek, David M. Howcroft, and Verena Rieser. 2019. Semantic Noise Matters for Neural Natural Language Generation. In Proceedings of the 12th International Conference on Natural Language Generation, pages 421–426, Tokyo, Japan. Association for Computational Linguistics.
Cite (Informal):
Semantic Noise Matters for Neural Natural Language Generation (Dušek et al., INLG 2019)
PDF:
https://aclanthology.org/W19-8652.pdf
Supplementary attachment:
W19-8652.Supplementary_Attachment.pdf
Code:
tuetschek/e2e-cleaning
Data:
E2E