From Speech-to-Speech Translation to Automatic Dubbing

Marcello Federico, Robert Enyedi, Roberto Barra-Chicote, Ritwik Giri, Umut Isik, Arvindh Krishnaswamy, Hassan Sawaf


Abstract
We present enhancements to a speech-to-speech translation pipeline in order to perform automatic dubbing. Our architecture features neural machine translation generating output of preferred length, prosodic alignment of the translation with the original speech segments, neural text-to-speech with fine-tuning of the duration of each utterance, and, finally, audio rendering that enriches text-to-speech output with background noise and reverberation extracted from the original audio. We report and discuss results of a first subjective evaluation of automatic dubbing of excerpts of TED Talks from English into Italian, which measures the perceived naturalness of automatic dubbing and the relative importance of each proposed enhancement.
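The abstract describes a four-stage pipeline: length-controlled machine translation, prosodic alignment to the original speech segments, duration-aware text-to-speech, and audio rendering. The sketch below illustrates how such stages could be chained; all function names, data structures, and stub behaviors are hypothetical illustrations, not the authors' implementation.

```python
# Illustrative sketch of the four dubbing stages described in the abstract.
# Every name and stub here is an assumption for illustration only.

from dataclasses import dataclass


@dataclass
class Segment:
    text: str        # source-language transcript of one speech segment
    start: float     # start time in the original audio (seconds)
    duration: float  # duration of the original speech (seconds)


def translate_with_length_control(seg: Segment) -> str:
    """Stage 1: MT whose output length tracks the source (stub: identity)."""
    return seg.text  # a real system would bias decoding toward a length ratio


def prosodic_alignment(translation: str, seg: Segment):
    """Stage 2: distribute the translation over the original timing (stub)."""
    return [(translation, seg.start, seg.duration)]


def tts_with_duration(phrase: str, duration: float) -> dict:
    """Stage 3: synthesize speech fine-tuned to fit `duration` (stub)."""
    return {"audio": f"<speech:{phrase}>", "duration": duration}


def render_audio(chunks, background="noise+reverb"):
    """Stage 4: mix TTS output with background noise/reverberation
    extracted from the original audio (stub: tag each chunk)."""
    return [{**c, "background": background} for c in chunks]


def dub(segments):
    """Run all four stages over a list of source speech segments."""
    out = []
    for seg in segments:
        translation = translate_with_length_control(seg)
        for phrase, _start, dur in prosodic_alignment(translation, seg):
            out.extend(render_audio([tts_with_duration(phrase, dur)]))
    return out
```

The key design point the paper evaluates is that each stage preserves the timing of the original speech, so the dubbed audio stays synchronized with the video.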
Anthology ID:
2020.iwslt-1.31
Volume:
Proceedings of the 17th International Conference on Spoken Language Translation
Month:
July
Year:
2020
Address:
Online
Editors:
Marcello Federico, Alex Waibel, Kevin Knight, Satoshi Nakamura, Hermann Ney, Jan Niehues, Sebastian Stüker, Dekai Wu, Joseph Mariani, Francois Yvon
Venue:
IWSLT
SIG:
SIGSLT
Publisher:
Association for Computational Linguistics
Pages:
257–264
URL:
https://aclanthology.org/2020.iwslt-1.31
DOI:
10.18653/v1/2020.iwslt-1.31
Cite (ACL):
Marcello Federico, Robert Enyedi, Roberto Barra-Chicote, Ritwik Giri, Umut Isik, Arvindh Krishnaswamy, and Hassan Sawaf. 2020. From Speech-to-Speech Translation to Automatic Dubbing. In Proceedings of the 17th International Conference on Spoken Language Translation, pages 257–264, Online. Association for Computational Linguistics.
Cite (Informal):
From Speech-to-Speech Translation to Automatic Dubbing (Federico et al., IWSLT 2020)
PDF:
https://aclanthology.org/2020.iwslt-1.31.pdf
Video:
 http://slideslive.com/38929599