Defining and Evaluating Fair Natural Language Generation

Catherine Yeo, Alyssa Chen
Abstract
Our work focuses on the biases that emerge in the natural language generation (NLG) task of sentence completion. In this paper, we introduce a mathematical framework of fairness for NLG followed by an evaluation of gender biases in two state-of-the-art language models. Our analysis provides a theoretical formulation for biases in NLG and empirical evidence that existing language generation models embed gender bias.
Anthology ID: 2020.winlp-1.27
Volume: Proceedings of the Fourth Widening Natural Language Processing Workshop
Month: July
Year: 2020
Address: Seattle, USA
Editors: Rossana Cunha, Samira Shaikh, Erika Varis, Ryan Georgi, Alicia Tsai, Antonios Anastasopoulos, Khyathi Raghavi Chandu
Venue: WiNLP
Publisher: Association for Computational Linguistics
Pages: 107–109
URL: https://aclanthology.org/2020.winlp-1.27
DOI: 10.18653/v1/2020.winlp-1.27
Cite (ACL):
Catherine Yeo and Alyssa Chen. 2020. Defining and Evaluating Fair Natural Language Generation. In Proceedings of the Fourth Widening Natural Language Processing Workshop, pages 107–109, Seattle, USA. Association for Computational Linguistics.
Cite (Informal):
Defining and Evaluating Fair Natural Language Generation (Yeo & Chen, WiNLP 2020)
Video: http://slideslive.com/38929566