Sentence Simplification with Deep Reinforcement Learning

Xingxing Zhang, Mirella Lapata


Abstract
Sentence simplification aims to make sentences easier to read and understand. Most recent approaches draw on insights from machine translation to learn simplification rewrites from monolingual corpora of complex and simple sentences. We address the simplification problem with an encoder-decoder model coupled with a deep reinforcement learning framework. Our model, which we call DRESS (as shorthand for Deep REinforcement Sentence Simplification), explores the space of possible simplifications while learning to optimize a reward function that encourages outputs which are simple, fluent, and preserve the meaning of the input. Experiments on three datasets demonstrate that our model outperforms competitive simplification systems.
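The reward described in the abstract combines simplicity, meaning preservation, and fluency, and the model is trained to maximize it with policy-gradient (REINFORCE-style) updates. The sketch below is a toy illustration of that idea, not the paper's implementation: the component scorers here (a compression ratio, word-overlap, and a constant language-model stand-in) and the weights `lam_s`, `lam_r`, `lam_f` are all assumptions for illustration, whereas the paper scores simplicity with SARI, relevance with sentence-embedding similarity, and fluency with an LSTM language model.

```python
import math

def simplicity(src, cand):
    """Toy proxy for a simplicity score: source/candidate length ratio
    (the actual system uses SARI)."""
    return len(src.split()) / max(1, len(cand.split()))

def relevance(src, cand):
    """Toy proxy for meaning preservation: Jaccard word overlap
    (the actual system uses cosine similarity of sentence encodings)."""
    a, b = set(src.split()), set(cand.split())
    return len(a & b) / max(1, len(a | b))

def fluency(cand, lm_logprob_per_word=-2.0):
    """Toy proxy for fluency: constant stand-in for an LM probability."""
    return math.exp(lm_logprob_per_word)

def reward(src, cand, lam_s=1.0, lam_r=0.25, lam_f=0.5):
    """Weighted sum of the three components (weights are illustrative)."""
    return (lam_s * simplicity(src, cand)
            + lam_r * relevance(src, cand)
            + lam_f * fluency(cand))

def reinforce_scale(r, baseline):
    """REINFORCE: the gradient of -log p(candidate | source) is scaled
    by (reward - baseline); a higher-than-baseline reward reinforces
    the sampled simplification."""
    return r - baseline

src = "the cat that the dog chased ran away quickly"
print(reward(src, "the cat ran away"))  # shorter paraphrase: higher reward
print(reward(src, src))                 # verbatim copy: lower reward
```

Under this reward, a candidate that compresses the source while keeping its content words outscores a verbatim copy, which is the trade-off the exploration in training is meant to discover.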
Anthology ID:
D17-1062
Volume:
Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing
Month:
September
Year:
2017
Address:
Copenhagen, Denmark
Editors:
Martha Palmer, Rebecca Hwa, Sebastian Riedel
Venue:
EMNLP
SIG:
SIGDAT
Publisher:
Association for Computational Linguistics
Pages:
584–594
URL:
https://aclanthology.org/D17-1062
DOI:
10.18653/v1/D17-1062
Cite (ACL):
Xingxing Zhang and Mirella Lapata. 2017. Sentence Simplification with Deep Reinforcement Learning. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 584–594, Copenhagen, Denmark. Association for Computational Linguistics.
Cite (Informal):
Sentence Simplification with Deep Reinforcement Learning (Zhang & Lapata, EMNLP 2017)
PDF:
https://aclanthology.org/D17-1062.pdf
Video:
https://aclanthology.org/D17-1062.mp4
Data:
WikiLarge, ASSET, Newsela, TurkCorpus