Large SMT data-sets extracted from Wikipedia

Dan Tufiş


Abstract
The article presents experiments on mining Wikipedia to extract sentence pairs useful for SMT in three language pairs. Each extracted sentence pair is associated with a cross-lingual lexical similarity score; based on this score, several evaluations were conducted to estimate the similarity thresholds that yield the most useful data for training SMT systems for the three language pairs. The experiments showed that, for a similarity score higher than 0.7, all sentence pairs in the three language pairs were fully parallel. However, including less-parallel sentence pairs (i.e., pairs with a lower similarity score) in the training sets produced significant improvements in translation quality (BLEU-based evaluations). The optimized SMT systems were evaluated on unseen test sets, also extracted from Wikipedia. As one of the main goals of our work was to help Wikipedia contributors translate (with as little post-editing as possible) new articles from major languages into less-resourced languages and vice versa, we call this type of translation experiment “in-genre” translation. As in the case of “in-domain” translation, our evaluations showed that using only “in-genre” training data for translating new texts of the same genre is better than mixing the training data with “out-of-genre” texts, even parallel ones.
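The threshold-based selection described in the abstract can be sketched as follows. This is an illustrative sketch only, not the paper's implementation: the function name, the example sentence pairs, and the score values are hypothetical placeholders; only the idea of filtering scored sentence pairs against a similarity threshold comes from the abstract.

```python
def filter_pairs(scored_pairs, threshold):
    """Keep (source, target) pairs whose cross-lingual lexical
    similarity score meets or exceeds the threshold.

    scored_pairs: iterable of (source_sentence, target_sentence, score).
    """
    return [(src, tgt) for src, tgt, score in scored_pairs if score >= threshold]


# Hypothetical Romanian-English pairs with made-up similarity scores.
pairs = [
    ("Casa este mare.", "The house is big.", 0.85),            # fully parallel
    ("Vremea e frumoasă azi.", "The weather is nice.", 0.55),  # partly parallel
    ("Text fără legătură.", "An unrelated sentence.", 0.10),   # comparable only
]

# A strict threshold (> 0.7 in the paper) keeps only fully parallel pairs;
# lowering it admits less-parallel pairs that the abstract reports as
# still improving BLEU.
strict = filter_pairs(pairs, threshold=0.7)
relaxed = filter_pairs(pairs, threshold=0.5)
print(len(strict), len(relaxed))  # strict keeps fewer pairs than relaxed
```

Lowering the threshold trades precision (parallelness) for volume of training data, which is the trade-off the paper's evaluations quantify.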
Anthology ID:
L14-1024
Volume:
Proceedings of the Ninth International Conference on Language Resources and Evaluation (LREC'14)
Month:
May
Year:
2014
Address:
Reykjavik, Iceland
Editors:
Nicoletta Calzolari, Khalid Choukri, Thierry Declerck, Hrafn Loftsson, Bente Maegaard, Joseph Mariani, Asuncion Moreno, Jan Odijk, Stelios Piperidis
Venue:
LREC
Publisher:
European Language Resources Association (ELRA)
Pages:
656–663
URL:
http://www.lrec-conf.org/proceedings/lrec2014/pdf/103_Paper.pdf
Cite (ACL):
Dan Tufiş. 2014. Large SMT data-sets extracted from Wikipedia. In Proceedings of the Ninth International Conference on Language Resources and Evaluation (LREC'14), pages 656–663, Reykjavik, Iceland. European Language Resources Association (ELRA).
Cite (Informal):
Large SMT data-sets extracted from Wikipedia (Tufiş, LREC 2014)