SiBert: Enhanced Chinese Pre-trained Language Model with Sentence Insertion

Jiahao Chen, Chenjie Cao, Xiuyan Jiang


Abstract
Pre-trained models have achieved great success in learning unsupervised language representations through self-supervised tasks on large-scale corpora. Recent studies mainly focus on fine-tuning a general pre-trained model on different downstream tasks. However, some studies show that self-supervised tasks customized for a particular type of downstream task can effectively help the pre-trained model capture more of the corresponding knowledge and semantic information. Hence, a new pre-training task called Sentence Insertion (SI) is proposed in this paper for Chinese NLP tasks over query-passage pairs, including answer span prediction, retrieval question answering, and sentence-level cloze tests. The experimental results indicate that the proposed SI task significantly improves the performance of Chinese pre-trained models. Moreover, a segmentation method called SentencePiece is utilized to further enhance the performance of Chinese BERT on tasks with long texts. The complete source code is available at https://github.com/ewrfcas/SiBert_tensorflow.
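The abstract does not spell out how SI training examples are built, but the task as described suggests the following shape: remove one sentence from a passage and train the model to predict the position at which it belongs. Below is a minimal Python sketch of that data construction; the function name, input formatting, and labeling scheme are illustrative assumptions, not the authors' exact procedure.

import random

def make_si_example(sentences):
    # Hypothetical SI example builder: remove one sentence at random
    # from a passage and return (removed_sentence, remaining_context,
    # insertion_position). The model would then be trained to classify
    # the position at which the removed sentence should be re-inserted.
    assert len(sentences) >= 2, "need at least two sentences"
    pos = random.randrange(len(sentences))
    inserted = sentences[pos]
    context = sentences[:pos] + sentences[pos + 1:]
    return inserted, context, pos

# Hypothetical usage: the encoder input could be formed as
# "[CLS] inserted [SEP] s1 [SEP] s2 ... [SEP]" with `pos` as the label.
query, passage, label = make_si_example(["句子一。", "句子二。", "句子三。"])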
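SentencePiece itself is a standard open-source library, so its usual Python API can illustrate how subword segmentation would be applied to Chinese text before pre-training; the corpus path and hyperparameters below are placeholders, not the paper's settings.

import sentencepiece as spm

# Train a subword model on a raw Chinese corpus (one sentence per line).
spm.SentencePieceTrainer.train(
    input="corpus_zh.txt",       # placeholder corpus path
    model_prefix="zh_sp",
    vocab_size=8000,             # placeholder vocabulary size
    character_coverage=0.9995,   # a common setting for Chinese
)

# Segment text into subword pieces; multi-character pieces shorten
# input sequences, which helps on long-text tasks.
sp = spm.SentencePieceProcessor(model_file="zh_sp.model")
print(sp.encode("这是一个比较长的中文文本示例。", out_type=str))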
Anthology ID:
2020.lrec-1.293
Volume:
Proceedings of the Twelfth Language Resources and Evaluation Conference
Month:
May
Year:
2020
Address:
Marseille, France
Editors:
Nicoletta Calzolari, Frédéric Béchet, Philippe Blache, Khalid Choukri, Christopher Cieri, Thierry Declerck, Sara Goggi, Hitoshi Isahara, Bente Maegaard, Joseph Mariani, Hélène Mazo, Asuncion Moreno, Jan Odijk, Stelios Piperidis
Venue:
LREC
Publisher:
European Language Resources Association
Pages:
2405–2412
Language:
English
URL:
https://aclanthology.org/2020.lrec-1.293
Cite (ACL):
Jiahao Chen, Chenjie Cao, and Xiuyan Jiang. 2020. SiBert: Enhanced Chinese Pre-trained Language Model with Sentence Insertion. In Proceedings of the Twelfth Language Resources and Evaluation Conference, pages 2405–2412, Marseille, France. European Language Resources Association.
Cite (Informal):
SiBert: Enhanced Chinese Pre-trained Language Model with Sentence Insertion (Chen et al., LREC 2020)
PDF:
https://aclanthology.org/2020.lrec-1.293.pdf
Code:
ewrfcas/SiBert