ListOps: A Diagnostic Dataset for Latent Tree Learning

Nikita Nangia, Samuel Bowman


Abstract
Latent tree learning models learn to parse a sentence without syntactic supervision, and use that parse to build the sentence representation. Existing work on such models has shown that, while they perform well on tasks like sentence classification, they do not learn grammars that conform to any plausible semantic or syntactic formalism (Williams et al., 2018a). Studying the parsing ability of such models on natural language is challenging because of its inherent complexities, such as a single sentence admitting several valid parses. In this paper we introduce ListOps, a toy dataset created to study the parsing ability of latent tree models. ListOps sequences are written in the style of prefix arithmetic, and the dataset is designed so that there is a single correct parsing strategy, which a system must learn to succeed at the task. We show that the current leading latent tree models are unable to learn to parse and succeed at ListOps; they achieve accuracies worse than those of purely sequential RNNs.
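To make the prefix-arithmetic format concrete, the following is a minimal sketch of how a ListOps-style expression can be evaluated. The operator set (MAX, MIN, MED for median, and SM for sum modulo 10 over single digits) follows the paper's description of the dataset; the tokenizer and recursive evaluator are illustrative assumptions, not the authors' code.

```python
# Sketch of a ListOps-style evaluator. Operators assumed from the paper:
# MAX, MIN, MED (median), SM (sum modulo 10), over single-digit operands.

def tokenize(expr):
    # Split brackets off as their own tokens.
    return expr.replace("[", " [ ").replace("]", " ] ").split()

def evaluate(tokens):
    """Recursively evaluate a token list; each '[OP ... ]' has exactly
    one correct parse, which is the property the dataset exploits."""
    def parse(i):
        tok = tokens[i]
        if tok == "[":
            op = tokens[i + 1]
            args = []
            i += 2
            while tokens[i] != "]":
                val, i = parse(i)
                args.append(val)
            i += 1  # step past the closing ']'
            if op == "MAX":
                return max(args), i
            if op == "MIN":
                return min(args), i
            if op == "MED":
                return sorted(args)[len(args) // 2], i
            if op == "SM":
                return sum(args) % 10, i
            raise ValueError(f"unknown operator: {op}")
        return int(tok), i
    value, _ = parse(0)
    return value

print(evaluate(tokenize("[MAX 2 9 [MIN 4 7] 0]")))  # 9
```

A sequential model sees only the flat token stream, so succeeding at the task requires recovering this nested structure, which is what makes the dataset a diagnostic for latent tree learning.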
Anthology ID:
N18-4013
Volume:
Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Student Research Workshop
Month:
June
Year:
2018
Address:
New Orleans, Louisiana, USA
Venue:
NAACL
Publisher:
Association for Computational Linguistics
Pages:
92–99
URL:
https://www.aclweb.org/anthology/N18-4013
DOI:
10.18653/v1/N18-4013
PDF:
https://www.aclweb.org/anthology/N18-4013.pdf