Exploring Social Bias in Chatbots using Stereotype Knowledge

Nayeon Lee, Andrea Madotto, Pascale Fung


Abstract
Exploring social bias in chatbots is an important, yet relatively unexplored problem. In this paper, we propose an approach to understanding social bias in chatbots by leveraging stereotype knowledge. It enables an interesting comparison of bias between chatbots and humans, and provides an intuitive analysis of existing chatbots by borrowing the finer-grained concepts of sexism and racism.
Anthology ID:
W19-3655
Volume:
Proceedings of the 2019 Workshop on Widening NLP
Month:
August
Year:
2019
Address:
Florence, Italy
Editors:
Amittai Axelrod, Diyi Yang, Rossana Cunha, Samira Shaikh, Zeerak Waseem
Venue:
WiNLP
Publisher:
Association for Computational Linguistics
Pages:
177–180
URL:
https://aclanthology.org/W19-3655
Cite (ACL):
Nayeon Lee, Andrea Madotto, and Pascale Fung. 2019. Exploring Social Bias in Chatbots using Stereotype Knowledge. In Proceedings of the 2019 Workshop on Widening NLP, pages 177–180, Florence, Italy. Association for Computational Linguistics.
Cite (Informal):
Exploring Social Bias in Chatbots using Stereotype Knowledge (Lee et al., WiNLP 2019)