Neural Word Segmentation with Rich Pretraining

Jie Yang, Yue Zhang, Fei Dong


Abstract
Neural word segmentation research has benefited from large-scale raw texts by leveraging them for pretraining character and word embeddings. On the other hand, statistical segmentation research has exploited richer sources of external information, such as punctuation, automatic segmentation, and POS tags. We investigate the effectiveness of a range of external training sources for neural word segmentation by building a modular segmentation model and pretraining its most important submodule using rich external sources. Results show that such pretraining significantly improves the model, leading to accuracies competitive with the best methods on six benchmarks.
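To make the embedding-pretraining idea mentioned in the abstract concrete, the sketch below shows one common way character embeddings can be pretrained from raw text with a skip-gram model. This is a minimal illustration, not the paper's actual model or the released code; the toy corpus and all hyperparameters are assumptions, and it assumes gensim 4.x is installed.

```python
# Minimal sketch: pretraining character embeddings from raw text via skip-gram.
# NOT the paper's model; it only illustrates pretraining on raw (unsegmented) text.
# Assumes gensim >= 4.0.
from gensim.models import Word2Vec

# Toy raw corpus: each sentence is treated as a sequence of characters,
# so the skip-gram model learns character (not word) embeddings.
raw_sentences = [
    "神经分词研究受益于大规模生语料",
    "统计分词研究利用了更丰富的外部信息",
]
char_sequences = [list(s) for s in raw_sentences]

model = Word2Vec(
    sentences=char_sequences,
    vector_size=50,   # embedding dimension (assumed value)
    window=2,         # context window over neighbouring characters
    min_count=1,      # keep every character in this toy corpus
    sg=1,             # skip-gram rather than CBOW
    epochs=20,
)

# The learned vectors could then initialise the character-embedding layer
# of a downstream neural segmenter.
print(model.wv["分"].shape)  # -> (50,)
```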
Anthology ID:
P17-1078
Volume:
Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
Month:
July
Year:
2017
Address:
Vancouver, Canada
Editors:
Regina Barzilay, Min-Yen Kan
Venue:
ACL
Publisher:
Association for Computational Linguistics
Pages:
839–849
URL:
https://aclanthology.org/P17-1078
DOI:
10.18653/v1/P17-1078
Cite (ACL):
Jie Yang, Yue Zhang, and Fei Dong. 2017. Neural Word Segmentation with Rich Pretraining. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 839–849, Vancouver, Canada. Association for Computational Linguistics.
Cite (Informal):
Neural Word Segmentation with Rich Pretraining (Yang et al., ACL 2017)
PDF:
https://aclanthology.org/P17-1078.pdf
Video:
https://aclanthology.org/P17-1078.mp4
Code:
jiesutd/RichWordSegmentor