Minimally Supervised Number Normalization

Kyle Gorman, Richard Sproat


Abstract
We propose two models for verbalizing numbers, a key component in speech recognition and synthesis systems. The first model uses an end-to-end recurrent neural network. The second model, drawing inspiration from the linguistics literature, uses finite-state transducers constructed with a minimal amount of training data. While both models achieve near-perfect performance, the latter model can be trained using several orders of magnitude less data than the former, making it particularly useful for low-resource languages.
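To make the task concrete: number verbalization maps a digit string such as "97" to its spoken form "ninety-seven". The sketch below is a toy, pure-Python illustration of that input-output mapping, written for this page; it implements neither the paper's recurrent network nor its finite-state transducers, and every name in it (verbalize, ONES, TENS) is hypothetical.

ONES = ["zero", "one", "two", "three", "four", "five", "six", "seven",
        "eight", "nine", "ten", "eleven", "twelve", "thirteen",
        "fourteen", "fifteen", "sixteen", "seventeen", "eighteen",
        "nineteen"]
TENS = ["", "", "twenty", "thirty", "forty", "fifty", "sixty",
        "seventy", "eighty", "ninety"]

def verbalize(n: int) -> str:
    """Return the English number name for an integer in [0, 999]."""
    if n < 20:                       # 0-19 have unique names
        return ONES[n]
    if n < 100:                      # e.g. 97 -> "ninety-seven"
        tens, ones = divmod(n, 10)
        return TENS[tens] + ("-" + ONES[ones] if ones else "")
    hundreds, rest = divmod(n, 100)  # e.g. 347 -> "three hundred" + rest
    head = ONES[hundreds] + " hundred"
    return head + (" " + verbalize(rest) if rest else "")

if __name__ == "__main__":
    for digits in ["5", "17", "42", "97", "300", "347"]:
        print(digits, "->", verbalize(int(digits)))

Run as a script, this prints lines such as "97 -> ninety-seven" and "347 -> three hundred forty-seven".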
Anthology ID: Q16-1036
Volume: Transactions of the Association for Computational Linguistics, Volume 4
Year: 2016
Address: Cambridge, MA
Editors: Lillian Lee, Mark Johnson, Kristina Toutanova
Venue: TACL
Publisher: MIT Press
Pages: 507–519
URL: https://aclanthology.org/Q16-1036
DOI: 10.1162/tacl_a_00114
Cite (ACL): Kyle Gorman and Richard Sproat. 2016. Minimally Supervised Number Normalization. Transactions of the Association for Computational Linguistics, 4:507–519.
Cite (Informal): Minimally Supervised Number Normalization (Gorman & Sproat, TACL 2016)
PDF: https://aclanthology.org/Q16-1036.pdf
Video: https://aclanthology.org/Q16-1036.mp4