Kristina Gulordava


2020

Probing for Referential Information in Language Models
Ionut-Teodor Sorodoc | Kristina Gulordava | Gemma Boleda
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics

Language models keep track of complex information about the preceding context – including, e.g., syntactic relations in a sentence. We investigate whether they also capture information beneficial for resolving pronominal anaphora in English. We analyze two state-of-the-art models with LSTM and Transformer architectures, via probe tasks and analysis on a coreference-annotated corpus. The Transformer outperforms the LSTM in all analyses. Our results suggest that language models are more successful at learning grammatical constraints than they are at learning truly referential information, in the sense of capturing the fact that we use language to refer to entities in the world. However, we find traces of the latter aspect, too.
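
A minimal sketch of the general probing setup this line of work builds on (the names, dimensions, and random data below are hypothetical stand-ins, not the paper's actual materials): a linear "diagnostic classifier" is trained on frozen LM hidden states to predict a referential property, and above-chance accuracy is taken as evidence that the property is encoded.

```python
# Hypothetical probing sketch: a logistic-regression classifier over frozen
# LM hidden states. Random vectors stand in for states extracted at pronoun
# positions in a coreference-annotated corpus.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_examples, hidden_dim = 1000, 650                 # e.g. an LSTM with 650 units
hidden_states = rng.normal(size=(n_examples, hidden_dim))
antecedent_number = rng.integers(0, 2, size=n_examples)  # 0 = singular, 1 = plural

X_tr, X_te, y_tr, y_te = train_test_split(
    hidden_states, antecedent_number, test_size=0.2, random_state=0)
probe = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
# Accuracy well above chance would suggest the LM encodes the probed property.
print(f"probe accuracy: {probe.score(X_te, y_te):.2f}")
```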

2019

Putting Words in Context: LSTM Language Models and Lexical Ambiguity
Laura Aina | Kristina Gulordava | Gemma Boleda
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics

In neural network models of language, words are commonly represented using context-invariant representations (word embeddings) which are then put in context in the hidden layers. Since words are often ambiguous, representing the contextually relevant information is not trivial. We investigate how an LSTM language model deals with lexical ambiguity in English, designing a method to probe its hidden representations for lexical and contextual information about words. We find that both types of information are represented to a large extent, but also that there is room for improvement for contextual information.
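
A sketch of how such in-context representations can be extracted for probing, assuming a PyTorch-style LSTM LM (the model below is randomly initialized and purely illustrative, not the paper's trained model):

```python
# Hypothetical extraction sketch: contrast a word's context-invariant
# embedding with the hidden state the LSTM assigns to it in a sentence;
# both vectors can then feed diagnostic classifiers probing for lexical
# vs. contextual information.
import torch
import torch.nn as nn

vocab_size, emb_dim, hid_dim = 10_000, 200, 650
embedding = nn.Embedding(vocab_size, emb_dim)
lstm = nn.LSTM(emb_dim, hid_dim, batch_first=True)

token_ids = torch.randint(0, vocab_size, (1, 8))   # one 8-token "sentence"
target_pos = 4                                     # position of the ambiguous word

with torch.no_grad():
    states, _ = lstm(embedding(token_ids))         # (1, 8, hid_dim)
contextual_vec = states[0, target_pos]             # in-context representation
lexical_vec = embedding.weight[token_ids[0, target_pos]]  # context-invariant one
```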

Towards Incremental Learning of Word Embeddings Using Context Informativeness
Alexandre Kabbach | Kristina Gulordava | Aurélie Herbelot
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics: Student Research Workshop

In this paper, we investigate the task of learning word embeddings from very sparse data in an incremental, cognitively plausible way. We focus on the notion of ‘informativeness’, that is, the idea that some content is more valuable to the learning process than other content. We further highlight the challenges of online learning and argue that previous systems fall short of implementing incrementality. Concretely, we incorporate informativeness in a previously proposed model of nonce learning, using it for context selection and learning rate modulation. We test our system on the task of learning new words from definitions, as well as on the task of learning new words from potentially uninformative contexts. We demonstrate that informativeness is crucial to obtaining state-of-the-art performance in a truly incremental setup.
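
One way to picture the two roles of informativeness named above is the toy sketch below, under strong simplifying assumptions: the scoring function and update rule are illustrative stand-ins, not the paper's model.

```python
# Toy sketch of informativeness-modulated incremental learning: a nonce
# word's vector is nudged toward the vectors of its context words, with
# the step size scaled by a (here, trivially hand-coded) informativeness
# score that also filters out uninformative contexts.
import numpy as np

rng = np.random.default_rng(0)
dim = 50
background = {w: rng.normal(size=dim) for w in
              ["a", "an", "the", "is", "with", "animal", "stripes", "africa"]}

def informativeness(word):
    """Toy proxy: frequent function words carry little information."""
    return 0.0 if word in {"a", "an", "the", "is"} else 1.0

def incremental_update(vec, context, base_lr=0.5):
    for word in context:
        w = informativeness(word)        # context selection + LR modulation
        if w > 0.0:
            vec = vec + base_lr * w * (background[word] - vec)
    return vec

# Learning a new word from a single definition-like context:
nonce_vec = incremental_update(np.zeros(dim),
                               ["an", "animal", "with", "stripes", "africa"])
```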

2018

How to represent a word and predict it, too: Improving tied architectures for language modelling
Kristina Gulordava | Laura Aina | Gemma Boleda
Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing

Recent state-of-the-art neural language models share the representations of words given by the input and output mappings. We propose a simple modification to these architectures that decouples the hidden state from the word embedding prediction. Our architecture achieves results comparable to or better than previous tied models and models without tying, with a much smaller number of parameters. We also extend our proposal to word2vec models, showing that tying is appropriate for general word prediction tasks.
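
A minimal PyTorch sketch of this kind of decoupling (dimensions and the class name are illustrative, not the paper's exact architecture): a learned projection maps the hidden state into the embedding space before words are scored with the tied input embedding matrix.

```python
# Hypothetical sketch of a tied output layer with a decoupling projection:
# the hidden state is first mapped into the embedding space, and only then
# multiplied with the (reused) input embedding matrix.
import torch
import torch.nn as nn

class TiedDecoupledLM(nn.Module):
    def __init__(self, vocab_size=10_000, emb_dim=400, hid_dim=650):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, emb_dim)
        self.lstm = nn.LSTM(emb_dim, hid_dim, batch_first=True)
        # Extra projection decouples the hidden state from the prediction space.
        self.proj = nn.Linear(hid_dim, emb_dim)

    def forward(self, token_ids):
        hidden, _ = self.lstm(self.embedding(token_ids))
        pred = self.proj(hidden)                   # map into embedding space
        # Tying: output scores reuse the input embedding matrix.
        return pred @ self.embedding.weight.t()    # (batch, seq, vocab)

logits = TiedDecoupledLM()(torch.randint(0, 10_000, (2, 5)))
```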

Colorless Green Recurrent Networks Dream Hierarchically
Kristina Gulordava | Piotr Bojanowski | Edouard Grave | Tal Linzen | Marco Baroni
Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers)

Recurrent neural networks (RNNs) have achieved impressive results in a variety of linguistic processing tasks, suggesting that they can induce non-trivial properties of language. We investigate to what extent RNNs learn to track abstract hierarchical syntactic structure. We test whether RNNs trained with a generic language modeling objective in four languages (Italian, English, Hebrew, Russian) can predict long-distance number agreement in various constructions. We include in our evaluation nonsensical sentences, where RNNs cannot rely on semantic or lexical cues (“The colorless green ideas I ate with the chair sleep furiously”), and, for Italian, we compare model performance to human intuitions. Our language-model-trained RNNs make reliable predictions about long-distance agreement and do not lag much behind human performance. We thus bring support to the hypothesis that RNNs are not just shallow-pattern extractors but also acquire deeper grammatical competence.
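
The evaluation paradigm can be sketched as a minimal-pair comparison (toy, untrained model below; the paper's LMs are trained on real corpora in the four languages): after feeding the sentence up to the verb, the LM should assign a higher score to the correctly agreeing form.

```python
# Hypothetical agreement-evaluation sketch: compare next-word scores for the
# correct and incorrect verb forms after the sentence prefix. TinyLM is an
# untrained stand-in for a trained LSTM language model.
import torch
import torch.nn as nn

vocab = {"the": 0, "ideas": 1, "i": 2, "ate": 3, "with": 4,
         "chair": 5, "sleep": 6, "sleeps": 7}

class TinyLM(nn.Module):
    def __init__(self, v=len(vocab), d=32):
        super().__init__()
        self.emb = nn.Embedding(v, d)
        self.lstm = nn.LSTM(d, d, batch_first=True)
        self.out = nn.Linear(d, v)

    def forward(self, ids):
        h, _ = self.lstm(self.emb(ids))
        return self.out(h)                         # (batch, seq, vocab) logits

def prefers_correct(lm, prefix_ids, correct_id, wrong_id):
    with torch.no_grad():
        logits = lm(prefix_ids.unsqueeze(0))[0, -1]  # distribution after prefix
    return bool(logits[correct_id] > logits[wrong_id])

# "the ideas i ate with the chair ___": plural "sleep" should outscore "sleeps".
prefix = torch.tensor([vocab[w] for w in
                       ["the", "ideas", "i", "ate", "with", "the", "chair"]])
print(prefers_correct(TinyLM(), prefix, vocab["sleep"], vocab["sleeps"]))
```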

2016

Multi-lingual Dependency Parsing Evaluation: a Large-scale Analysis of Word Order Properties using Artificial Data
Kristina Gulordava | Paola Merlo
Transactions of the Association for Computational Linguistics, Volume 4

The growing body of work in multi-lingual parsing faces the challenge of fair comparative evaluation and performance analysis across languages and their treebanks. The difficulty lies in teasing apart the properties of treebanks, such as their size or average sentence length, from those of the annotation scheme and from the linguistic properties of the languages themselves. We propose a method to evaluate the effects of a language's word order on dependency parsing performance, while controlling for confounding treebank properties. The method uses artificially generated treebanks that are minimal permutations of actual treebanks with respect to two word order properties: word order variation and dependency lengths. Based on these artificial data for twelve languages, we show that longer dependencies and higher word order variability degrade parsing performance. Our method also extends to minimal pairs of individual sentences, leading to a finer-grained understanding of parsing errors.
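
The two quantities being varied can be illustrated with a toy sketch (this is not the paper's controlled generation procedure; the random reordering below simply produces an artificial linearization of the same tree):

```python
# Toy sketch: total dependency length of a linearization, and an artificial
# variant obtained by randomly reordering each head's dependents while
# keeping subtrees contiguous. Trees use 0-based indices; heads[i] is the
# head of word i, with -1 marking the root.
import random

def linearize(heads, node, rng):
    """Recursively place `node` and its dependents' subtrees in random order."""
    children = [i for i, h in enumerate(heads) if h == node]
    blocks = [[node]] + [linearize(heads, c, rng) for c in children]
    rng.shuffle(blocks)
    return [w for block in blocks for w in block]

def total_dependency_length(heads, order):
    """Sum of linear distances between each word and its head."""
    pos = {w: i for i, w in enumerate(order)}
    return sum(abs(pos[i] - pos[h]) for i, h in enumerate(heads) if h != -1)

heads = [1, -1, 3, 1, 3]                   # a small illustrative tree
rng = random.Random(0)
original = list(range(len(heads)))
permuted = linearize(heads, heads.index(-1), rng)
print(total_dependency_length(heads, original),
      total_dependency_length(heads, permuted))
```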

Discontinuous Verb Phrases in Parsing and Machine Translation of English and German
Sharid Loáiciga | Kristina Gulordava
Proceedings of the Tenth International Conference on Language Resources and Evaluation (LREC'16)

In this paper, we focus on the verb-particle (V-Prt) split construction in English and German and its difficulty for parsing and Machine Translation (MT). For German, we use an existing test suite of V-Prt split constructions, while for English, we build a new and comparable test suite from raw data. These two data sets are then used to analyze the errors in dependency parsing, word-level alignment and MT that arise from the discontinuous order in V-Prt split constructions. In the automatic alignments of parallel corpora, most of the particles align to NULL. These misalignments, together with the inability of the phrase-based MT system to recover discontinuous phrases, result in low-quality translations of V-Prt split constructions in both English and German. However, our results show that the V-Prt split phrases are correctly parsed in 90% of cases, suggesting that syntax-based MT should perform better on these constructions. We evaluate a syntax-based MT system on German and compare its performance to the phrase-based system.

2015

Dependency length minimisation effects in short spans: a large-scale analysis of adjective placement in complex noun phrases
Kristina Gulordava | Paola Merlo | Benoit Crabbé
Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 2: Short Papers)

Diachronic Trends in Word Order Freedom and Dependency Length in Dependency-Annotated Corpora of Latin and Ancient Greek
Kristina Gulordava | Paola Merlo
Proceedings of the Third International Conference on Dependency Linguistics (Depling 2015)

Structural and lexical factors in adjective placement in complex noun phrases across Romance languages
Kristina Gulordava | Paola Merlo
Proceedings of the Nineteenth Conference on Computational Natural Language Learning

2011

A distributional similarity approach to the detection of semantic change in the Google Books Ngram corpus.
Kristina Gulordava | Marco Baroni
Proceedings of the GEMS 2011 Workshop on GEometrical Models of Natural Language Semantics