Language Modeling with Syntactic and Semantic Representation for Sentence Acceptability Predictions

Adam Ek, Jean-Philippe Bernardy, Shalom Lappin


Abstract
In this paper, we investigate the effect of enhancing lexical embeddings in LSTM language models (LMs) with syntactic and semantic representations. We train LSTM language models on sentences automatically annotated with universal syntactic dependency roles (Nivre, 2016), dependency depth, and universal semantic tags (Abzianidze et al., 2017), and we evaluate the models both on perplexity and on the task of predicting human sentence acceptability judgments. Our experiments indicate that syntactic tags lower perplexity while semantic tags increase it, and that neither kind of tag improves the models' performance on predicting sentence acceptability judgments.
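As a concrete illustration of the kind of architecture the abstract describes, here is a minimal PyTorch sketch of an LSTM language model whose lexical embeddings are enhanced with tag embeddings. It assumes the tag representations are simply concatenated with the word embeddings before the LSTM; the class name, dimensions, and combination strategy are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

class TagAugmentedLSTMLM(nn.Module):
    """LSTM LM over words plus auxiliary tags (e.g. dependency roles,
    dependency depth, or semantic tags). Hypothetical sketch only."""

    def __init__(self, vocab_size, tag_size, word_dim=300, tag_dim=32,
                 hidden_dim=512):
        super().__init__()
        self.word_emb = nn.Embedding(vocab_size, word_dim)
        self.tag_emb = nn.Embedding(tag_size, tag_dim)
        # Concatenate lexical and tag embeddings as the LSTM input
        # (one assumed way to "enhance" the lexical embeddings).
        self.lstm = nn.LSTM(word_dim + tag_dim, hidden_dim, batch_first=True)
        self.out = nn.Linear(hidden_dim, vocab_size)

    def forward(self, words, tags):
        # words, tags: (batch, seq_len) index tensors, aligned per token.
        x = torch.cat([self.word_emb(words), self.tag_emb(tags)], dim=-1)
        h, _ = self.lstm(x)
        return self.out(h)  # next-word logits at each position
```

Under such a model, perplexity is the exponential of the mean next-word cross-entropy, and acceptability predictions are typically derived from (length-normalized) sentence log probabilities.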
Anthology ID: W19-6108
Volume: Proceedings of the 22nd Nordic Conference on Computational Linguistics
Month: September–October
Year: 2019
Address: Turku, Finland
Venues: NoDaLiDa | WS
Publisher: Linköping University Electronic Press
Pages: 76–85
URL: https://www.aclweb.org/anthology/W19-6108
PDF: https://www.aclweb.org/anthology/W19-6108.pdf