Decoding with Finite-State Transducers on GPUs

Arturo Argueta, David Chiang


Abstract
Weighted finite automata and transducers (including hidden Markov models and conditional random fields) are widely used in natural language processing (NLP) to perform tasks such as morphological analysis, part-of-speech tagging, chunking, named entity recognition, and speech recognition. Parallelizing finite-state algorithms on graphics processing units (GPUs) would therefore benefit many areas of NLP. Although researchers have implemented GPU versions of basic graph algorithms, to our knowledge no work has been done on GPU algorithms for weighted finite automata. We introduce a GPU implementation of the Viterbi and forward-backward algorithms, achieving speedups of up to 4x over our serial implementations running on different computer architectures and 3335x over widely used tools such as OpenFST.
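For readers unfamiliar with the decoding problem the paper parallelizes, the following is a minimal serial sketch of Viterbi decoding for a hidden Markov model, written in NumPy. It is not the paper's GPU implementation; all function and variable names here are our own illustrative choices, and the GPU version would replace the per-time-step max/argmax with parallel reductions over transitions.

```python
import numpy as np

def viterbi(log_init, log_trans, log_emit, observations):
    """Serial Viterbi decoding in log space (illustrative sketch).

    log_init[i]     : log P(initial state i)
    log_trans[i, j] : log P(state j | state i)
    log_emit[i, o]  : log P(observation o | state i)
    Returns the most probable state sequence for the observations.
    """
    T = len(observations)
    n_states = log_trans.shape[0]
    score = np.full((T, n_states), -np.inf)  # best log score ending in each state
    back = np.zeros((T, n_states), dtype=int)  # backpointers to predecessors
    score[0] = log_init + log_emit[:, observations[0]]
    for t in range(1, T):
        # cand[i, j] = score of reaching state j at time t via state i at t-1.
        cand = score[t - 1][:, None] + log_trans + log_emit[:, observations[t]][None, :]
        back[t] = cand.argmax(axis=0)
        score[t] = cand.max(axis=0)
    # Recover the best path by following backpointers from the best final state.
    path = [int(score[-1].argmax())]
    for t in range(T - 1, 0, -1):
        path.append(int(back[t, path[-1]]))
    return path[::-1]
```

For example, with two states that each prefer a distinct observation symbol, the decoded path tracks the observations: `viterbi(np.log([0.5, 0.5]), np.log([[0.6, 0.4], [0.4, 0.6]]), np.log([[0.9, 0.1], [0.1, 0.9]]), [0, 1, 0])` returns `[0, 1, 0]`.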
Anthology ID:
E17-1098
Volume:
Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics: Volume 1, Long Papers
Month:
April
Year:
2017
Address:
Valencia, Spain
Editors:
Mirella Lapata, Phil Blunsom, Alexander Koller
Venue:
EACL
Publisher:
Association for Computational Linguistics
Pages:
1044–1052
URL:
https://aclanthology.org/E17-1098
Cite (ACL):
Arturo Argueta and David Chiang. 2017. Decoding with Finite-State Transducers on GPUs. In Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics: Volume 1, Long Papers, pages 1044–1052, Valencia, Spain. Association for Computational Linguistics.
Cite (Informal):
Decoding with Finite-State Transducers on GPUs (Argueta & Chiang, EACL 2017)
PDF:
https://aclanthology.org/E17-1098.pdf