DGST: a Dual-Generator Network for Text Style Transfer

Xiao Li, Guanyi Chen, Chenghua Lin, Ruizhe Li


Abstract
We propose DGST, a novel and simple Dual-Generator network architecture for text Style Transfer. Our model employs only two generators, and does not rely on any discriminators or parallel corpora for training. Both quantitative and qualitative experiments on the Yelp and IMDb datasets show that our model gives competitive performance compared to several strong baselines with more complicated architecture designs.
Anthology ID:
2020.emnlp-main.578
Volume:
Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)
Month:
November
Year:
2020
Address:
Online
Editors:
Bonnie Webber, Trevor Cohn, Yulan He, Yang Liu
Venue:
EMNLP
Publisher:
Association for Computational Linguistics
Pages:
7131–7136
URL:
https://aclanthology.org/2020.emnlp-main.578
DOI:
10.18653/v1/2020.emnlp-main.578
Cite (ACL):
Xiao Li, Guanyi Chen, Chenghua Lin, and Ruizhe Li. 2020. DGST: a Dual-Generator Network for Text Style Transfer. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 7131–7136, Online. Association for Computational Linguistics.
Cite (Informal):
DGST: a Dual-Generator Network for Text Style Transfer (Li et al., EMNLP 2020)
PDF:
https://aclanthology.org/2020.emnlp-main.578.pdf
Video:
https://slideslive.com/38939386