ParaSCI: A Large Scientific Paraphrase Dataset for Longer Paraphrase Generation

Qingxiu Dong, Xiaojun Wan, Yue Cao


Abstract
We propose ParaSCI, the first large-scale paraphrase dataset in the scientific field, including 33,981 paraphrase pairs from ACL (ParaSCI-ACL) and 316,063 pairs from arXiv (ParaSCI-arXiv). Digging into the characteristics and common patterns of scientific papers, we construct this dataset through intra-paper and inter-paper methods, such as collecting citations to the same paper or aggregating definitions by scientific terms. To take advantage of partially paraphrased sentences, we propose PDBERT as a general paraphrase discovery method. The major advantages of paraphrases in ParaSCI lie in their prominent length and textual diversity, which makes the dataset complementary to existing paraphrase datasets. ParaSCI obtains satisfactory results on human evaluation and downstream tasks, especially long paraphrase generation.
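For illustration only, the sketch below shows one way the inter-paper strategy mentioned in the abstract (pairing citation sentences that refer to the same paper) could be realized; it is not the authors' released code, and the function names and input format are hypothetical. Candidate pairs produced this way would still need filtering, e.g. with a model such as PDBERT.

```python
# Minimal sketch (not the authors' implementation): citation sentences that
# cite the same target paper are grouped and paired as paraphrase candidates.
from collections import defaultdict
from itertools import combinations

def candidate_pairs(citation_sentences):
    """citation_sentences: iterable of (cited_paper_id, sentence) tuples."""
    groups = defaultdict(list)
    for cited_id, sentence in citation_sentences:
        groups[cited_id].append(sentence)
    # Every pair of sentences citing the same paper is a paraphrase candidate;
    # a downstream filter would keep only genuinely paraphrastic pairs.
    pairs = []
    for sentences in groups.values():
        pairs.extend(combinations(sentences, 2))
    return pairs

if __name__ == "__main__":
    demo = [
        ("P1", "Smith et al. (2018) introduced a transformer-based summarizer."),
        ("P1", "A transformer-based summarization model was proposed by Smith et al. (2018)."),
        ("P2", "BERT achieves strong results on GLUE."),
    ]
    for a, b in candidate_pairs(demo):
        print(a, "<->", b)
```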
Anthology ID:
2021.eacl-main.33
Volume:
Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume
Month:
April
Year:
2021
Address:
Online
Editors:
Paola Merlo, Jörg Tiedemann, Reut Tsarfaty
Venue:
EACL
Publisher:
Association for Computational Linguistics
Pages:
424–434
URL:
https://aclanthology.org/2021.eacl-main.33
DOI:
10.18653/v1/2021.eacl-main.33
Cite (ACL):
Qingxiu Dong, Xiaojun Wan, and Yue Cao. 2021. ParaSCI: A Large Scientific Paraphrase Dataset for Longer Paraphrase Generation. In Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume, pages 424–434, Online. Association for Computational Linguistics.
Cite (Informal):
ParaSCI: A Large Scientific Paraphrase Dataset for Longer Paraphrase Generation (Dong et al., EACL 2021)
PDF:
https://aclanthology.org/2021.eacl-main.33.pdf
Code
 dqxiu/ParaSCI
Data
MS COCO, PIT, S2ORC