KIT’s IWSLT 2020 SLT Translation System

Ngoc-Quan Pham, Felix Schneider, Tuan-Nam Nguyen, Thanh-Le Ha, Thai Son Nguyen, Maximilian Awiszus, Sebastian Stüker, Alexander Waibel


Abstract
This paper describes KIT’s submissions to the IWSLT 2020 Speech Translation evaluation campaign. We first participated in the simultaneous translation task, for which our Transformer-based simultaneous models can be trained efficiently to achieve low latency with minimal compromise in translation quality. For the offline speech translation task, we applied our new Speech Transformer architecture to end-to-end speech translation. The resulting model delivers translation quality competitive with a complex cascade, although the cascade still has the upper hand thanks to its ability to transparently access the transcription and to resegment the inputs to avoid fragmentation.
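(The abstract does not spell out the simultaneous decoding policy; purely as an illustration of how a low-latency simultaneous translation loop can be organized, the sketch below shows a generic wait-k style READ/WRITE schedule. The names model.encode, model.decode_step, and the waiting parameter k are hypothetical and are not taken from the KIT system.)

# Minimal, generic wait-k style simultaneous decoding loop (illustrative only;
# not the authors' architecture or API).
def simultaneous_translate(model, source_stream, k=3, max_len=200, eos=2):
    """Emit one target token per newly read source token, once k source
    tokens have been consumed (a classic wait-k policy)."""
    src_tokens = []                       # source prefix read so far
    tgt_tokens = []                       # target tokens emitted so far

    for src_tok in source_stream:         # READ: consume one source token
        src_tokens.append(src_tok)
        if len(src_tokens) < k:           # wait until k source tokens are available
            continue
        enc = model.encode(src_tokens)    # re-encode the current source prefix
        tok = model.decode_step(enc, tgt_tokens)   # WRITE: emit one target token
        if tok == eos or len(tgt_tokens) >= max_len:
            return tgt_tokens
        tgt_tokens.append(tok)

    # Source exhausted: finish with ordinary greedy decoding on the full input.
    enc = model.encode(src_tokens)
    while len(tgt_tokens) < max_len:
        tok = model.decode_step(enc, tgt_tokens)
        if tok == eos:
            break
        tgt_tokens.append(tok)
    return tgt_tokens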
Anthology ID:
2020.iwslt-1.4
Volume:
Proceedings of the 17th International Conference on Spoken Language Translation
Month:
July
Year:
2020
Address:
Online
Editors:
Marcello Federico, Alex Waibel, Kevin Knight, Satoshi Nakamura, Hermann Ney, Jan Niehues, Sebastian Stüker, Dekai Wu, Joseph Mariani, Francois Yvon
Venue:
IWSLT
SIG:
SIGSLT
Publisher:
Association for Computational Linguistics
Pages:
55–61
URL:
https://aclanthology.org/2020.iwslt-1.4
DOI:
10.18653/v1/2020.iwslt-1.4
Cite (ACL):
Ngoc-Quan Pham, Felix Schneider, Tuan-Nam Nguyen, Thanh-Le Ha, Thai Son Nguyen, Maximilian Awiszus, Sebastian Stüker, and Alexander Waibel. 2020. KIT’s IWSLT 2020 SLT Translation System. In Proceedings of the 17th International Conference on Spoken Language Translation, pages 55–61, Online. Association for Computational Linguistics.
Cite (Informal):
KIT’s IWSLT 2020 SLT Translation System (Pham et al., IWSLT 2020)
PDF:
https://aclanthology.org/2020.iwslt-1.4.pdf
Video:
http://slideslive.com/38929605