High Performance Natural Language Processing

Gabriel Ilharco, Cesar Ilharco, Iulia Turc, Tim Dettmers, Felipe Ferreira, Kenton Lee


Abstract
Scale has played a central role in the rapid progress natural language processing has enjoyed in recent years. While benchmarks are dominated by ever-larger models, efficient hardware use is critical for their widespread adoption and further progress in the field. In this cutting-edge tutorial, we will recapitulate the state of the art in natural language processing with scale in perspective. After establishing these foundations, we will cover a wide range of techniques for improving efficiency, including knowledge distillation, quantization, pruning, and more efficient architectures, along with case studies and practical implementation tricks.
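
The abstract names knowledge distillation as one of the efficiency techniques covered. As a concrete illustration (not code from the tutorial itself), below is a minimal PyTorch sketch of the standard distillation objective, which mixes a temperature-softened KL term against a teacher model with ordinary cross-entropy on gold labels; the temperature and alpha hyperparameters are assumed values for illustration.

import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels,
                      temperature=2.0, alpha=0.5):
    # Soft targets: match the teacher's temperature-softened distribution.
    soft_teacher = F.softmax(teacher_logits / temperature, dim=-1)
    soft_student = F.log_softmax(student_logits / temperature, dim=-1)
    # KL divergence, scaled by T^2 so gradient magnitudes stay comparable
    # across temperatures (as in Hinton et al., 2015).
    soft_loss = F.kl_div(soft_student, soft_teacher,
                         reduction="batchmean") * temperature ** 2
    # Hard targets: standard cross-entropy on the gold labels.
    hard_loss = F.cross_entropy(student_logits, labels)
    return alpha * soft_loss + (1.0 - alpha) * hard_loss

The student is then trained on this combined loss while the (larger, frozen) teacher only provides logits, which is what lets the smaller student model recover much of the teacher's accuracy at a fraction of the inference cost.
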
Anthology ID: 2020.emnlp-tutorials.4
Volume: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: Tutorial Abstracts
Month: November
Year: 2020
Address: Online
Editors: Aline Villavicencio, Benjamin Van Durme
Venue: EMNLP
Publisher: Association for Computational Linguistics
Pages: 24–27
URL: https://aclanthology.org/2020.emnlp-tutorials.4
DOI: 10.18653/v1/2020.emnlp-tutorials.4
Cite (ACL): Gabriel Ilharco, Cesar Ilharco, Iulia Turc, Tim Dettmers, Felipe Ferreira, and Kenton Lee. 2020. High Performance Natural Language Processing. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: Tutorial Abstracts, pages 24–27, Online. Association for Computational Linguistics.
Cite (Informal): High Performance Natural Language Processing (Ilharco et al., EMNLP 2020)
PDF: https://aclanthology.org/2020.emnlp-tutorials.4.pdf