2kenize: Tying Subword Sequences for Chinese Script Conversion

Pranav A, Isabelle Augenstein


Abstract
Simplified Chinese to Traditional Chinese character conversion is a common preprocessing step in Chinese NLP. Despite this, current approaches have insufficient performance because they do not take into account that a simplified Chinese character can correspond to multiple traditional characters. Here, we propose a model that can disambiguate between mappings and convert between the two scripts. The model is based on subword segmentation, two language models, and a method for mapping between subword sequences. We further construct benchmark datasets for topic classification and script conversion. Our proposed method outperforms previous Chinese character conversion approaches by 6 points in accuracy. These results are further confirmed in a downstream application, where 2kenize is used to convert a pretraining dataset for topic classification. An error analysis reveals that our method’s particular strengths are in dealing with code mixing and named entities.
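
The abstract describes resolving one-to-many Simplified-to-Traditional character mappings by scoring candidate conversions with language models over subword sequences. The sketch below illustrates only that general disambiguation idea; it is not the authors' 2kenize implementation, and the mapping table and scoring function are hypothetical placeholders.

# Conceptual sketch (not the 2kenize code): enumerate candidate Traditional-script
# conversions for ambiguous Simplified characters, then pick the candidate a
# Traditional-Chinese language model scores as most fluent.
from itertools import product

# Illustrative ambiguous mappings: one simplified character, several traditional candidates.
S2T = {
    "发": ["發", "髮"],   # "to emit" vs. "hair"
    "后": ["後", "后"],   # "after" vs. "empress"
}

def candidates(simplified: str):
    """Enumerate all candidate traditional-script conversions of the input."""
    options = [S2T.get(ch, [ch]) for ch in simplified]
    for combo in product(*options):
        yield "".join(combo)

def lm_score(text: str) -> float:
    """Placeholder for a Traditional-Chinese language model score (higher = more fluent).
    A real system would use a trained subword language model here."""
    return 1.0 if "之後" in text else 0.0  # toy heuristic favouring the common word 之後

def convert(simplified: str) -> str:
    """Return the candidate conversion the language model finds most fluent."""
    return max(candidates(simplified), key=lm_score)

print(convert("之后"))  # expected output: 之後 rather than 之后
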
Anthology ID:
2020.acl-main.648
Volume:
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics
Month:
July
Year:
2020
Address:
Online
Editors:
Dan Jurafsky, Joyce Chai, Natalie Schluter, Joel Tetreault
Venue:
ACL
Publisher:
Association for Computational Linguistics
Pages:
7257–7272
URL:
https://aclanthology.org/2020.acl-main.648
DOI:
10.18653/v1/2020.acl-main.648
Cite (ACL):
Pranav A and Isabelle Augenstein. 2020. 2kenize: Tying Subword Sequences for Chinese Script Conversion. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 7257–7272, Online. Association for Computational Linguistics.
Cite (Informal):
2kenize: Tying Subword Sequences for Chinese Script Conversion (A & Augenstein, ACL 2020)
PDF:
https://aclanthology.org/2020.acl-main.648.pdf
Video:
http://slideslive.com/38928985
Code:
pranav-ust/2kenize