An Analysis of the Utility of Explicit Negative Examples to Improve the Syntactic Abilities of Neural Language Models

Hiroshi Noji, Hiroya Takamura


Abstract
We explore the utility of explicit negative examples in training neural language models. Negative examples here are incorrect words in a sentence, such as barks in *The dogs barks. Neural language models are commonly trained only on positive examples, a set of sentences in the training data, but recent studies suggest that models trained in this way are not capable of robustly handling complex syntactic constructions, such as long-distance agreement. In this paper, we first demonstrate that appropriately using negative examples for particular constructions (e.g., subject-verb agreement) boosts the model's robustness on them in English, with a negligible loss of perplexity. The key to our success is an additional margin loss between the log-likelihoods of a correct word and an incorrect word. We then provide a detailed analysis of the trained models. One of our findings is the difficulty of object-relative clauses for RNNs. We find that, even with our direct learning signals, the models still struggle to resolve agreement across an object-relative clause. Augmenting the training data with sentences involving such constructions helps somewhat, but accuracy still does not reach the level attained on subject-relative clauses. Although not directly cognitively appealing, our method can serve as a tool for analyzing the true architectural limitations of neural models on challenging linguistic constructions.
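The margin loss described in the abstract can be sketched as follows. This is a minimal illustration in PyTorch, not the paper's actual implementation (which is available in the linked code repository); the function name, the margin value, and the way the term is combined with the standard cross-entropy loss are assumptions made here for exposition.

    import torch
    import torch.nn.functional as F

    def agreement_margin_loss(logits, correct_idx, incorrect_idx, margin=1.0):
        # logits: unnormalized LM scores over the vocabulary at the
        # position of the target word (e.g., the verb in "The dogs ___").
        log_probs = F.log_softmax(logits, dim=-1)
        # Hinge penalty: require log P(correct word, e.g., "bark") to
        # exceed log P(incorrect word, e.g., "barks") by at least `margin`.
        gap = log_probs[correct_idx] - log_probs[incorrect_idx]
        return torch.clamp(margin - gap, min=0.0)

    # Hypothetical usage: add the margin term to the usual LM loss at
    # positions where a negative example (incorrect word) is available.
    # total_loss = cross_entropy_loss + agreement_margin_loss(logits, i_correct, i_wrong)

Because the penalty is zero whenever the correct word already beats the incorrect one by the margin, this extra term leaves ordinary language-model training largely untouched, which is consistent with the negligible perplexity loss reported in the abstract.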
Anthology ID:
2020.acl-main.309
Volume:
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics
Month:
July
Year:
2020
Address:
Online
Editors:
Dan Jurafsky, Joyce Chai, Natalie Schluter, Joel Tetreault
Venue:
ACL
Publisher:
Association for Computational Linguistics
Pages:
3375–3385
URL:
https://aclanthology.org/2020.acl-main.309
DOI:
10.18653/v1/2020.acl-main.309
Cite (ACL):
Hiroshi Noji and Hiroya Takamura. 2020. An Analysis of the Utility of Explicit Negative Examples to Improve the Syntactic Abilities of Neural Language Models. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 3375–3385, Online. Association for Computational Linguistics.
Cite (Informal):
An Analysis of the Utility of Explicit Negative Examples to Improve the Syntactic Abilities of Neural Language Models (Noji & Takamura, ACL 2020)
PDF:
https://aclanthology.org/2020.acl-main.309.pdf
Video:
http://slideslive.com/38929448
Code:
aistairc/lm_syntax_negative