Fill the GAP: Exploiting BERT for Pronoun Resolution

Kai-Chou Yang, Timothy Niven, Tzu Hsuan Chou, Hung-Yu Kao


Abstract
In this paper, we describe our entry in the gendered pronoun resolution competition, which achieved fourth place without data augmentation. Our method is an ensemble of BERT models that resolves coreference in an interaction space. We report four insights from our work: BERT’s representations contain significant redundancy; modeling interaction effects, as in natural language inference models, is useful for this task; there is an optimal BERT layer from which to extract representations for pronoun resolution; and the difference between the attention weights from the pronoun to the two candidate entities is highly correlated with the correct label, with interesting implications for future work.
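The last insight, that the difference in attention weight from the pronoun to the two candidate entities correlates with the correct label, can be sketched as follows. This is a minimal illustration, not the authors' implementation: the attention matrix is a toy stand-in for a single BERT head, and the token positions and span boundaries are made up for the example.

```python
import numpy as np

def attention_difference(attn, pronoun_idx, a_span, b_span):
    """Sum the attention the pronoun token pays to each candidate
    entity span, then return (score_A - score_B). `attn` is a
    row-normalized (seq_len x seq_len) matrix; rows are queries."""
    score_a = attn[pronoun_idx, a_span[0]:a_span[1]].sum()
    score_b = attn[pronoun_idx, b_span[0]:b_span[1]].sum()
    return score_a - score_b

# Toy 8-token attention matrix standing in for one BERT head.
rng = np.random.default_rng(0)
attn = rng.random((8, 8))
attn = attn / attn.sum(axis=-1, keepdims=True)  # normalize each row

# Hypothetical positions: pronoun at token 5, candidate A at tokens
# 1-2, candidate B at tokens 3-4.
diff = attention_difference(attn, pronoun_idx=5, a_span=(1, 3), b_span=(3, 5))
print(diff)  # positive -> pronoun attends more to candidate A
```

In practice the attention matrix would come from a real BERT forward pass (e.g. one head of one layer), and the sign of the difference would be compared against the gold antecedent label.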
Anthology ID: W19-3815
Volume: Proceedings of the First Workshop on Gender Bias in Natural Language Processing
Month: August
Year: 2019
Address: Florence, Italy
Editors: Marta R. Costa-jussà, Christian Hardmeier, Will Radford, Kellie Webster
Venue: GeBNLP
Publisher: Association for Computational Linguistics
Pages: 102–106
URL: https://aclanthology.org/W19-3815
DOI: 10.18653/v1/W19-3815
Cite (ACL):
Kai-Chou Yang, Timothy Niven, Tzu Hsuan Chou, and Hung-Yu Kao. 2019. Fill the GAP: Exploiting BERT for Pronoun Resolution. In Proceedings of the First Workshop on Gender Bias in Natural Language Processing, pages 102–106, Florence, Italy. Association for Computational Linguistics.
Cite (Informal):
Fill the GAP: Exploiting BERT for Pronoun Resolution (Yang et al., GeBNLP 2019)
PDF: https://aclanthology.org/W19-3815.pdf
Code: zake7749/Fill-the-GAP
Data: GAP Coreference Dataset