Understanding by Understanding Not: Modeling Negation in Language Models

Arian Hosseini, Siva Reddy, Dzmitry Bahdanau, R Devon Hjelm, Alessandro Sordoni, Aaron Courville


Abstract
Negation is a core construction in natural language. Despite being very successful on many tasks, state-of-the-art pre-trained language models often handle negation incorrectly. To improve language models in this regard, we propose to augment the language modeling objective with an unlikelihood objective that is based on negated generic sentences from a raw text corpus. By training BERT with the resulting combined objective, we reduce the mean top-1 error rate to 4% on the negated LAMA dataset. We also see some improvements on the negated NLI benchmarks.
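The abstract describes combining the standard masked language modeling objective with an unlikelihood objective over negated sentences. As a rough illustration of how such a combined loss could be computed, the following is a minimal PyTorch sketch; the `combined_loss` function, the `alpha` weight, and the Welleck et al. (2020)-style unlikelihood term `-log(1 - p)` are illustrative assumptions, not the authors' exact implementation (see the linked code repository for that).

```python
import torch
import torch.nn.functional as F

def combined_loss(logits_pos, target_pos, logits_neg, target_neg, alpha=1.0):
    """Combine masked-LM likelihood with an unlikelihood term.

    logits_pos / target_pos: logits and gold token ids at masked positions of
        ordinary sentences (standard MLM cross-entropy).
    logits_neg / target_neg: logits at masked positions of negated sentences
        and the token ids the model should be discouraged from predicting there.
    alpha: weight of the unlikelihood term (an assumed hyperparameter).
    """
    # Standard MLM objective: maximize log p(target | context).
    mlm_loss = F.cross_entropy(logits_pos, target_pos)

    # Unlikelihood objective: penalize probability mass on the contradicted
    # token by maximizing log(1 - p(token | negated context)).
    probs_neg = F.softmax(logits_neg, dim=-1)
    p_wrong = probs_neg.gather(-1, target_neg.unsqueeze(-1)).squeeze(-1)
    ul_loss = -torch.log(torch.clamp(1.0 - p_wrong, min=1e-6)).mean()

    return mlm_loss + alpha * ul_loss


# Toy usage with random logits over a small vocabulary; in real training the
# logits would come from BERT's masked-token predictions.
vocab_size = 10
logits_pos = torch.randn(4, vocab_size)
target_pos = torch.randint(0, vocab_size, (4,))
logits_neg = torch.randn(4, vocab_size)
target_neg = torch.randint(0, vocab_size, (4,))
print(float(combined_loss(logits_pos, target_pos, logits_neg, target_neg)))
```

The intuition behind this kind of combination is that the unlikelihood term pushes probability away from completions that contradict the negated context, while the ordinary MLM term keeps the model's factual completions on non-negated text intact.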
Anthology ID: 2021.naacl-main.102
Volume: Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies
Month: June
Year: 2021
Address: Online
Editors: Kristina Toutanova, Anna Rumshisky, Luke Zettlemoyer, Dilek Hakkani-Tur, Iz Beltagy, Steven Bethard, Ryan Cotterell, Tanmoy Chakraborty, Yichao Zhou
Venue: NAACL
Publisher: Association for Computational Linguistics
Pages: 1301–1312
URL: https://aclanthology.org/2021.naacl-main.102
DOI: 10.18653/v1/2021.naacl-main.102
Cite (ACL): Arian Hosseini, Siva Reddy, Dzmitry Bahdanau, R Devon Hjelm, Alessandro Sordoni, and Aaron Courville. 2021. Understanding by Understanding Not: Modeling Negation in Language Models. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 1301–1312, Online. Association for Computational Linguistics.
Cite (Informal): Understanding by Understanding Not: Modeling Negation in Language Models (Hosseini et al., NAACL 2021)
PDF: https://aclanthology.org/2021.naacl-main.102.pdf
Video: https://aclanthology.org/2021.naacl-main.102.mp4
Code: arianhosseini/negation-learning
Data: LAMA, MultiNLI, SNLI, T-REx