Is the Best Better? Bayesian Statistical Model Comparison for Natural Language Processing

Piotr Szymański, Kyle Gorman


Abstract
Recent work raises concerns about the use of standard splits to compare natural language processing models. We propose a Bayesian statistical model comparison technique which uses k-fold cross-validation across multiple data sets to estimate the likelihood that one model will outperform the other, or that the two will produce practically equivalent results. We use this technique to rank six English part-of-speech taggers across two data sets and three evaluation metrics.
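The comparison the abstract describes can be illustrated with a short sketch. This is not the authors' released code: it follows the general recipe of a correlated Bayesian t-test over paired k-fold cross-validation scores with a region of practical equivalence (ROPE), as popularized by Benavoli et al. (2017). The fold scores, the ROPE width, and the function name are all illustrative assumptions, and the Student-t posterior is approximated by Monte Carlo sampling so the example needs only the standard library.

```python
# Hedged sketch (illustrative, not the paper's implementation): compare two
# models from paired k-fold cross-validation scores, reporting posterior
# probabilities that model A is worse, practically equivalent, or better.
import random
import statistics


def bayesian_compare(scores_a, scores_b, rope=0.01, n_samples=100_000, seed=0):
    """Return (P(A worse), P(equivalent), P(A better)) under a correlated
    t posterior over the mean per-fold score difference.

    Fold-wise differences are correlated because training sets overlap;
    the rho = 1/(k-1) term is the usual correction for k-fold CV.
    """
    diffs = [a - b for a, b in zip(scores_a, scores_b)]
    k = len(diffs)
    mean = statistics.mean(diffs)
    var = statistics.variance(diffs)   # sample variance of fold differences
    rho = 1.0 / (k - 1)                # correlation correction for k-fold CV
    scale = ((1.0 / k + rho) * var) ** 0.5
    df = k - 1
    rng = random.Random(seed)
    worse = equivalent = better = 0
    for _ in range(n_samples):
        # Sample Student-t(df) as Z / sqrt(V/df), with V ~ chi-square(df).
        z = rng.gauss(0.0, 1.0)
        v = sum(rng.gauss(0.0, 1.0) ** 2 for _ in range(df))
        d = mean + scale * (z / (v / df) ** 0.5)
        if d < -rope:
            worse += 1        # posterior mass where A trails B beyond the ROPE
        elif d > rope:
            better += 1       # posterior mass where A beats B beyond the ROPE
        else:
            equivalent += 1   # posterior mass inside the ROPE
    return worse / n_samples, equivalent / n_samples, better / n_samples


# Made-up per-fold accuracies for two hypothetical taggers (5-fold CV).
fold_scores_a = [0.90, 0.91, 0.89, 0.92, 0.90]
fold_scores_b = [0.95, 0.96, 0.94, 0.95, 0.96]
p_worse, p_equiv, p_better = bayesian_compare(fold_scores_a, fold_scores_b)
```

With these invented scores, nearly all posterior mass falls on "A worse than B", which is the kind of three-way verdict (worse / practically equivalent / better) the abstract refers to. The ROPE width would in practice be chosen per metric to define what counts as a practically negligible difference.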
Anthology ID:
2020.emnlp-main.172
Volume:
Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)
Month:
November
Year:
2020
Address:
Online
Editors:
Bonnie Webber, Trevor Cohn, Yulan He, Yang Liu
Venue:
EMNLP
Publisher:
Association for Computational Linguistics
Pages:
2203–2212
URL:
https://aclanthology.org/2020.emnlp-main.172
DOI:
10.18653/v1/2020.emnlp-main.172
Cite (ACL):
Piotr Szymański and Kyle Gorman. 2020. Is the Best Better? Bayesian Statistical Model Comparison for Natural Language Processing. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 2203–2212, Online. Association for Computational Linguistics.
Cite (Informal):
Is the Best Better? Bayesian Statistical Model Comparison for Natural Language Processing (Szymański & Gorman, EMNLP 2020)
PDF:
https://aclanthology.org/2020.emnlp-main.172.pdf
Optional supplementary material:
 2020.emnlp-main.172.OptionalSupplementaryMaterial.zip
Video:
 https://slideslive.com/38939124
Data
Penn Treebank