Deep Neural Networks at the Service of Multilingual Parallel Sentence Extraction

Ahmad Aghaebrahimian


Abstract
Wikipedia provides an invaluable source of parallel multilingual data, which is in high demand for various kinds of linguistic inquiry, both theoretical and practical. We introduce a novel end-to-end neural model for large-scale parallel data harvesting from Wikipedia. Our model is language-independent, robust, and highly scalable. We use our system to collect parallel German-English, French-English, and Persian-English sentences. Human evaluation shows the strong performance of this model in collecting high-quality parallel data. We also propose a statistical framework that extends the results of our human evaluation to other language pairs. Our model also obtained a state-of-the-art result on the German-English dataset of the BUCC 2017 shared task on parallel sentence extraction from comparable corpora.
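The abstract describes mining parallel sentences from comparable corpora. A common baseline for this task (not necessarily the architecture used in the paper) is to embed candidate source and target sentences in a shared vector space and keep pairs whose similarity exceeds a threshold. The sketch below illustrates that filtering step with toy precomputed embeddings; the function names, the pair format, and the threshold value are illustrative assumptions, not details from the paper.

```python
import math

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

def extract_parallel(candidates, threshold=0.8):
    """Keep candidate sentence pairs whose embedding similarity
    exceeds the threshold. Each candidate is a tuple:
    (source_sentence, target_sentence, source_vec, target_vec).
    Vectors would normally come from a multilingual encoder;
    here they are toy values for illustration."""
    return [
        (src, tgt)
        for (src, tgt, u, v) in candidates
        if cosine(u, v) >= threshold
    ]

# Toy example: one plausible translation pair, one mismatch.
candidates = [
    ("Guten Tag", "Good day", [1.0, 0.0], [0.9, 0.1]),
    ("Katze", "airplane", [1.0, 0.0], [0.0, 1.0]),
]
print(extract_parallel(candidates))
```

In practice, mining at Wikipedia scale also requires candidate generation (e.g. restricting comparisons to linked article pairs) so that the similarity check is not run over all sentence combinations.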
Anthology ID:
C18-1116
Volume:
Proceedings of the 27th International Conference on Computational Linguistics
Month:
August
Year:
2018
Address:
Santa Fe, New Mexico, USA
Editors:
Emily M. Bender, Leon Derczynski, Pierre Isabelle
Venue:
COLING
Publisher:
Association for Computational Linguistics
Pages:
1372–1383
URL:
https://aclanthology.org/C18-1116
Cite (ACL):
Ahmad Aghaebrahimian. 2018. Deep Neural Networks at the Service of Multilingual Parallel Sentence Extraction. In Proceedings of the 27th International Conference on Computational Linguistics, pages 1372–1383, Santa Fe, New Mexico, USA. Association for Computational Linguistics.
Cite (Informal):
Deep Neural Networks at the Service of Multilingual Parallel Sentence Extraction (Aghaebrahimian, COLING 2018)
PDF:
https://aclanthology.org/C18-1116.pdf
Data
BUCC