Is Your Classifier Actually Biased? Measuring Fairness under Uncertainty with Bernstein Bounds

Kawin Ethayarajh


Abstract
Most NLP datasets are not annotated with protected attributes such as gender, making it difficult to measure classification bias using standard measures of fairness (e.g., equal opportunity). The obvious remedy, manually annotating a large dataset with a protected attribute, is slow and expensive. Instead of annotating all the examples, can we annotate a subset of them and use that sample to estimate the bias? While this is possible, the smaller the annotated sample, the less certain we are that the estimate is close to the true bias. In this work, we propose using Bernstein bounds to represent this uncertainty about the bias estimate as a confidence interval. We provide empirical evidence that a 95% confidence interval derived this way consistently bounds the true bias. By quantifying this uncertainty, our method, which we call Bernstein-bounded unfairness, helps prevent classifiers from being deemed biased or unbiased when there is insufficient evidence for either claim. Our findings suggest that the datasets currently used to measure specific biases are too small to conclusively identify bias except in the most egregious cases. For example, consider a co-reference resolution system that is 5% more accurate on gender-stereotypical sentences: to claim it is biased with 95% confidence, we would need a bias-specific dataset 3.8 times larger than WinoBias, the largest such dataset available.
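The recipe the abstract describes, annotating a small sample and using Bernstein's inequality to put a confidence interval around the resulting bias estimate, can be sketched in a few lines. The sketch below is a hedged illustration under assumptions made here, not the paper's exact formulation: the function name bernstein_interval, the plug-in sample variance, and the synthetic groupwise accuracy gaps are all hypothetical.

```python
import numpy as np

def bernstein_interval(samples, c, delta=0.05):
    """Two-sided Bernstein confidence interval for the mean of bounded samples.

    samples : per-example bias contributions from the annotated subset
    c       : bound on |sample - true mean| (here, the range width, to be safe)
    delta   : 1 - confidence level (0.05 gives a 95% interval)

    Solves 2 * exp(-n*t^2 / (2*var + (2/3)*c*t)) = delta for the half-width t.
    """
    x = np.asarray(samples, dtype=float)
    n = len(x)
    var = x.var()  # plug-in variance estimate (an assumption of this sketch)
    L = np.log(2.0 / delta)
    # Positive root of the quadratic n*t^2 - (2c/3)*L*t - 2*var*L = 0.
    b = (2.0 * c / 3.0) * L
    t = (b + np.sqrt(b * b + 8.0 * n * var * L)) / (2.0 * n)
    return x.mean(), t

# Hypothetical usage: signed accuracy gaps between paired stereotypical and
# anti-stereotypical examples, each gap taking a value in {-1, 0, 1}.
rng = np.random.default_rng(0)
gaps = (rng.choice([0.0, 1.0], size=500, p=[0.90, 0.10])
        - rng.choice([0.0, 1.0], size=500, p=[0.95, 0.05]))
estimate, half_width = bernstein_interval(gaps, c=2.0)  # c = range width of gaps
print(f"estimated bias: {estimate:.3f} +/- {half_width:.3f}")
# Only if the interval [estimate - t, estimate + t] excludes 0 can we call the
# classifier biased at the chosen confidence level; otherwise we cannot claim
# bias or its absence, which is the paper's central point.
```

Because the half-width t shrinks roughly like 1/sqrt(n), this also shows why small bias-specific datasets rarely yield conclusive intervals: halving the interval requires roughly quadrupling the annotated sample.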
Anthology ID:
2020.acl-main.262
Volume:
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics
Month:
July
Year:
2020
Address:
Online
Editors:
Dan Jurafsky, Joyce Chai, Natalie Schluter, Joel Tetreault
Venue:
ACL
Publisher:
Association for Computational Linguistics
Pages:
2914–2919
URL:
https://aclanthology.org/2020.acl-main.262
DOI:
10.18653/v1/2020.acl-main.262
Cite (ACL):
Kawin Ethayarajh. 2020. Is Your Classifier Actually Biased? Measuring Fairness under Uncertainty with Bernstein Bounds. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 2914–2919, Online. Association for Computational Linguistics.
Cite (Informal):
Is Your Classifier Actually Biased? Measuring Fairness under Uncertainty with Bernstein Bounds (Ethayarajh, ACL 2020)
PDF:
https://aclanthology.org/2020.acl-main.262.pdf
Video:
http://slideslive.com/38928838
Data
MultiNLI, WinoBias