The Speechmatics Parallel Corpus Filtering System for WMT18

Tom Ash, Remi Francis, Will Williams


Abstract
Our entry to the parallel corpus filtering task uses a two-step strategy. The first step uses a series of pragmatic hard ‘rules’ to remove the worst sentence pairs, reducing the effective corpus size from the initial 1 billion tokens to 160 million. The second step uses four different heuristics, weighted to produce a single score that is then used to filter further down to 100 million or 10 million tokens. Our final system produces competitive results without requiring excessive fine-tuning to the exact task or language pair. The first step in isolation provides a very fast filter that gives most of the gains of the final system.
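
The abstract describes the pipeline only at a high level. The Python sketch below illustrates one way such a two-step filter could be structured; the paper's actual hard rules, its four heuristics, and their weights are not stated in the abstract, so the thresholds, the heuristic choices, and the names passes_hard_rules, heuristic_score, and filter_corpus are hypothetical placeholders rather than the authors' implementation.

    """Sketch of a two-step parallel-corpus filter; rules and heuristics are illustrative only."""
    from typing import Iterable, List, Tuple


    def passes_hard_rules(src: str, tgt: str) -> bool:
        """Step 1: cheap hard rules that discard the worst sentence pairs outright."""
        src_toks, tgt_toks = src.split(), tgt.split()
        if not src_toks or not tgt_toks:
            return False                    # one side is empty
        if src.strip() == tgt.strip():
            return False                    # untranslated copy of the source
        ratio = len(src_toks) / len(tgt_toks)
        return 1 / 3 <= ratio <= 3          # reject extreme length mismatches


    def heuristic_score(src: str, tgt: str,
                        weights: Tuple[float, float, float, float] = (0.25, 0.25, 0.25, 0.25)) -> float:
        """Step 2: combine four heuristics (each scaled to [0, 1]) into one weighted score."""
        src_toks, tgt_toks = src.split(), tgt.split()
        length_balance = min(len(src_toks), len(tgt_toks)) / max(len(src_toks), len(tgt_toks))
        copy_penalty = 1.0 - len(set(src_toks) & set(tgt_toks)) / max(len(src_toks), len(tgt_toks))
        alpha_frac = sum(c.isalpha() or c.isspace() for c in tgt) / max(len(tgt), 1)
        length_prior = min(len(tgt_toks), 30) / 30.0
        feats = (length_balance, copy_penalty, alpha_frac, length_prior)
        return sum(w * f for w, f in zip(weights, feats))


    def filter_corpus(pairs: Iterable[Tuple[str, str]],
                      token_budget: int) -> List[Tuple[str, str]]:
        """Apply the hard rules, then keep the best-scoring pairs up to a token
        budget (e.g. 100M or 10M tokens, counted here on the target side)."""
        scored = [(heuristic_score(s, t), s, t)
                  for s, t in pairs if passes_hard_rules(s, t)]
        scored.sort(key=lambda x: x[0], reverse=True)
        kept, used = [], 0
        for _, s, t in scored:
            n = len(t.split())
            if used + n > token_budget:
                break
            kept.append((s, t))
            used += n
        return kept


    if __name__ == "__main__":
        pairs = [
            ("Das ist ein kurzer Testsatz .", "This is a short test sentence ."),
            ("Hallo Welt", "Hallo Welt"),   # rejected by the copy rule
            ("Guten Morgen !", "Good morning !"),
        ]
        for src, tgt in filter_corpus(pairs, token_budget=20):
            print(src, "|||", tgt)
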
Anthology ID: W18-6472
Volume: Proceedings of the Third Conference on Machine Translation: Shared Task Papers
Month: October
Year: 2018
Address: Brussels, Belgium
Editors: Ondřej Bojar, Rajen Chatterjee, Christian Federmann, Mark Fishel, Yvette Graham, Barry Haddow, Matthias Huck, Antonio Jimeno Yepes, Philipp Koehn, Christof Monz, Matteo Negri, Aurélie Névéol, Mariana Neves, Matt Post, Lucia Specia, Marco Turchi, Karin Verspoor
Venue: WMT
SIG: SIGMT
Publisher: Association for Computational Linguistics
Pages: 853–859
URL: https://aclanthology.org/W18-6472
DOI: 10.18653/v1/W18-6472
Cite (ACL): Tom Ash, Remi Francis, and Will Williams. 2018. The Speechmatics Parallel Corpus Filtering System for WMT18. In Proceedings of the Third Conference on Machine Translation: Shared Task Papers, pages 853–859, Brussels, Belgium. Association for Computational Linguistics.
Cite (Informal): The Speechmatics Parallel Corpus Filtering System for WMT18 (Ash et al., WMT 2018)
PDF: https://aclanthology.org/W18-6472.pdf