Probing Linguistic Systematicity

Emily Goodwin, Koustuv Sinha, Timothy J. O’Donnell


Abstract
Recently, there has been much interest in the question of whether deep natural language understanding (NLU) models exhibit systematicity, generalizing such that units like words make consistent contributions to the meaning of the sentences in which they appear. There is accumulating evidence that neural models do not learn systematically. We examine the notion of systematicity from a linguistic perspective, defining a set of probing tasks and a set of metrics to measure systematic behaviour. We also identify ways in which network architectures can generalize non-systematically, and discuss why such forms of generalization may be unsatisfying. As a case study, we perform a series of experiments in the setting of natural language inference (NLI). We provide evidence that current state-of-the-art NLU systems do not generalize systematically, despite overall high performance.
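The core idea of a systematicity probe can be illustrated with a minimal sketch. This is not the authors' released code, and the split construction, function names, and toy data below are illustrative assumptions only: hold out all NLI pairs containing a chosen probe word from training, then check how reliably a trained model labels that word when it appears in novel combinations.

from typing import Callable, List, Tuple

Example = Tuple[str, str, str]  # (premise, hypothesis, gold_label)

def make_probe_split(data: List[Example], probe_word: str):
    """Hold out every pair mentioning probe_word; train on the rest."""
    def mentions(ex: Example) -> bool:
        premise, hypothesis, _ = ex
        return probe_word in premise.split() or probe_word in hypothesis.split()
    probe = [ex for ex in data if mentions(ex)]
    train = [ex for ex in data if not mentions(ex)]
    return train, probe

def probe_accuracy(predict: Callable[[str, str], str], probe: List[Example]) -> float:
    """Accuracy on held-out combinations: a rough proxy for whether the
    probe word makes a consistent contribution to meaning in new contexts."""
    correct = sum(predict(p, h) == y for p, h, y in probe)
    return correct / max(len(probe), 1)

# Hypothetical usage:
# train, probe = make_probe_split(nli_pairs, probe_word="dog")
# score = probe_accuracy(model.predict, probe)

The paper's actual probing tasks and metrics are more elaborate than this held-out-combination sketch, which is intended only to convey the general idea of testing whether a word's contribution generalizes to unseen contexts.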
Anthology ID:
2020.acl-main.177
Volume:
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics
Month:
July
Year:
2020
Address:
Online
Editors:
Dan Jurafsky, Joyce Chai, Natalie Schluter, Joel Tetreault
Venue:
ACL
Publisher:
Association for Computational Linguistics
Pages:
1958–1969
URL:
https://aclanthology.org/2020.acl-main.177
DOI:
10.18653/v1/2020.acl-main.177
Cite (ACL):
Emily Goodwin, Koustuv Sinha, and Timothy J. O’Donnell. 2020. Probing Linguistic Systematicity. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 1958–1969, Online. Association for Computational Linguistics.
Cite (Informal):
Probing Linguistic Systematicity (Goodwin et al., ACL 2020)
PDF:
https://aclanthology.org/2020.acl-main.177.pdf
Video:
http://slideslive.com/38929277
Code:
emilygoodwin/systematicity