Improving Adversarial Text Generation by Modeling the Distant Future

Ruiyi Zhang, Changyou Chen, Zhe Gan, Wenlin Wang, Dinghan Shen, Guoyin Wang, Zheng Wen, Lawrence Carin


Abstract
Abstract
Auto-regressive text generation models typically focus on local fluency, which can lead to inconsistent semantic meaning when generating long text. Further, automatically generating words with similar semantics is challenging, and hand-crafted linguistic rules are difficult to apply. We consider a text-planning scheme and present a model-based imitation-learning approach to alleviate these issues. Specifically, we propose a novel guider network that focuses on the generative process over a longer horizon, assisting next-word prediction and providing intermediate rewards for generator optimization. Extensive experiments demonstrate that the proposed method leads to improved performance.
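The abstract describes a guider network that supplies intermediate rewards for the generator. One common way to realize such a reward (a hedged sketch, not the paper's exact formulation) is to score each generation step by how well the guider's predicted feature-transition direction matches the transition the generator actually took, e.g. via cosine similarity. All names below (`intermediate_rewards`, the feature vectors) are illustrative assumptions:

```python
import numpy as np

def cosine(u, v):
    # Cosine similarity with a small epsilon to avoid division by zero.
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-8))

def intermediate_rewards(states, guider_predictions):
    """Sketch of step-wise rewards from a guider network.

    states[t]            : feature vector of the partial sequence at step t
    guider_predictions[t]: guider's predicted feature transition at step t
    Reward at step t is the cosine similarity between the predicted
    transition and the transition actually realized by the generator.
    """
    return [cosine(guider_predictions[t], states[t + 1] - states[t])
            for t in range(len(states) - 1)]

# Toy example: the generator moves along the first axis, and the guider
# correctly predicts that direction, so each step earns reward close to 1.
states = [np.zeros(3), np.array([1.0, 0, 0]), np.array([2.0, 0, 0])]
preds = [np.array([1.0, 0, 0]), np.array([1.0, 0, 0])]
rewards = intermediate_rewards(states, preds)
```

Such dense per-step rewards are what allow policy-gradient training of the generator without waiting for a sparse end-of-sequence signal.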
Anthology ID:
2020.acl-main.227
Volume:
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics
Month:
July
Year:
2020
Address:
Online
Editors:
Dan Jurafsky, Joyce Chai, Natalie Schluter, Joel Tetreault
Venue:
ACL
Publisher:
Association for Computational Linguistics
Pages:
2516–2531
URL:
https://aclanthology.org/2020.acl-main.227
DOI:
10.18653/v1/2020.acl-main.227
Cite (ACL):
Ruiyi Zhang, Changyou Chen, Zhe Gan, Wenlin Wang, Dinghan Shen, Guoyin Wang, Zheng Wen, and Lawrence Carin. 2020. Improving Adversarial Text Generation by Modeling the Distant Future. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 2516–2531, Online. Association for Computational Linguistics.
Cite (Informal):
Improving Adversarial Text Generation by Modeling the Distant Future (Zhang et al., ACL 2020)
PDF:
https://aclanthology.org/2020.acl-main.227.pdf
Video:
http://slideslive.com/38928708