Olga Kovaleva


2022

Down and Across: Introducing Crossword-Solving as a New NLP Benchmark
Saurabh Kulshreshtha | Olga Kovaleva | Namrata Shivagunde | Anna Rumshisky
Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)

Solving crossword puzzles requires diverse reasoning capabilities, access to a vast amount of knowledge about language and the world, and the ability to satisfy the constraints imposed by the structure of the puzzle. In this work, we introduce solving crossword puzzles as a new natural language understanding task. We release a corpus of crossword puzzles collected from the New York Times daily crossword, spanning 25 years and comprising around nine thousand puzzles in total. These puzzles include a diverse set of clues: historic, factual, word meaning, synonyms/antonyms, fill-in-the-blank, abbreviations, prefixes/suffixes, wordplay, and cross-lingual, as well as clues that depend on the answers to other clues. We separately release the clue-answer pairs from these puzzles as an open-domain question answering dataset containing over half a million unique clue-answer pairs. For the question answering task, our baselines include several sequence-to-sequence and retrieval-based generative models. We also introduce a non-parametric constraint satisfaction baseline for solving the entire crossword puzzle. Finally, we propose an evaluation framework which consists of several complementary performance metrics.
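The constraint satisfaction baseline is described only at a high level in this abstract. As an illustration of how per-clue answer candidates might be combined with grid constraints, the following is a minimal backtracking sketch in Python; the slot, candidate, and crossing structures are illustrative assumptions, not the data format or baseline released with the paper.

from typing import Dict, List, Optional, Tuple

def solve_grid(slots: List[str],
               candidates: Dict[str, List[str]],
               crossings: List[Tuple[str, int, str, int]],
               assignment: Optional[Dict[str, str]] = None) -> Optional[Dict[str, str]]:
    # Backtracking search: fill one slot at a time with a candidate answer that
    # agrees with all already-filled crossing slots. Each crossing is a tuple
    # (slot_a, pos_a, slot_b, pos_b) meaning slot_a[pos_a] == slot_b[pos_b],
    # and is assumed to be listed in both directions.
    assignment = assignment or {}
    if len(assignment) == len(slots):
        return assignment
    # Choose the unfilled slot with the fewest candidates (MRV heuristic).
    slot = min((s for s in slots if s not in assignment),
               key=lambda s: len(candidates[s]))
    for word in candidates[slot]:
        if all(word[i] == assignment[other][j]
               for a, i, other, j in crossings
               if a == slot and other in assignment):
            assignment[slot] = word
            result = solve_grid(slots, candidates, crossings, assignment)
            if result is not None:
                return result
            del assignment[slot]
    return None

In practice the candidate lists would come from a clue-answering model's top-k outputs, and the search would need to tolerate missing gold answers; the paper's actual baseline and evaluation metrics are described in the full text.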

2021

BERT Busters: Outlier Dimensions that Disrupt Transformers
Olga Kovaleva | Saurabh Kulshreshtha | Anna Rogers | Anna Rumshisky
Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021

2020

Towards Visual Dialog for Radiology
Olga Kovaleva | Chaitanya Shivade | Satyananda Kashyap | Karina Kanjaria | Joy Wu | Deddeh Ballah | Adam Coy | Alexandros Karargyris | Yufan Guo | David Beymer | Anna Rumshisky | Vandana Mukherjee
Proceedings of the 19th SIGBioMed Workshop on Biomedical Language Processing

Current research in machine learning for radiology focuses mostly on images, and little work has investigated intelligent interactive systems for radiology. To address this limitation, we introduce a realistic and information-rich task of Visual Dialog in radiology, specific to chest X-ray images. Using MIMIC-CXR, an openly available database of chest X-ray images, we construct both a synthetic and a real-world dataset and provide baseline scores achieved by state-of-the-art models. We show that incorporating the patient's medical history leads to better question-answering performance than a conventional visual question answering model that looks only at the image. While our experiments show promising results, they indicate that the task is extremely challenging, with significant scope for improvement. We make both datasets (synthetic and gold standard) and the associated code publicly available to the research community.

A Primer in BERTology: What We Know About How BERT Works
Anna Rogers | Olga Kovaleva | Anna Rumshisky
Transactions of the Association for Computational Linguistics, Volume 8

Transformer-based models have pushed the state of the art in many areas of NLP, but our understanding of what is behind their success is still limited. This paper is the first survey of over 150 studies of the popular BERT model. We review the current state of knowledge about how BERT works, what kind of information it learns and how it is represented, common modifications to its training objectives and architecture, the overparameterization issue, and approaches to compression. We then outline directions for future research.

2019

Revealing the Dark Secrets of BERT
Olga Kovaleva | Alexey Romanov | Anna Rogers | Anna Rumshisky
Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)

BERT-based architectures currently give state-of-the-art performance on many NLP tasks, but little is known about the exact mechanisms that contribute to their success. In the current work, we focus on the interpretation of self-attention, one of the fundamental components of BERT. Using a subset of GLUE tasks and a set of handcrafted features-of-interest, we propose a methodology for, and carry out, a qualitative and quantitative analysis of the information encoded by BERT's individual heads. Our findings suggest that there is a limited set of attention patterns that are repeated across different heads, indicating overall model overparameterization. While different heads consistently use the same attention patterns, they have varying impact on performance across different tasks. We show that manually disabling attention in certain heads leads to a performance improvement over the regular fine-tuned BERT models.
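The head-disabling idea can be reproduced in spirit with the head_mask argument exposed by HuggingFace Transformers. The sketch below is illustrative only: which heads to disable is an assumption here, whereas the paper selects them from its own analysis.

import torch
from transformers import BertTokenizer, BertForSequenceClassification

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertForSequenceClassification.from_pretrained("bert-base-uncased")
model.eval()

num_layers = model.config.num_hidden_layers   # 12 for bert-base
num_heads = model.config.num_attention_heads  # 12 for bert-base

# head_mask has shape (num_layers, num_heads): 1.0 keeps a head, 0.0 disables it.
head_mask = torch.ones(num_layers, num_heads)
head_mask[0, 3] = 0.0  # hypothetical choice: disable head 3 in layer 0
head_mask[5, 7] = 0.0  # hypothetical choice: disable head 7 in layer 5

inputs = tokenizer("The cat sat on the mat.", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs, head_mask=head_mask).logits

In the paper's setting, the model would first be fine-tuned on a GLUE task and then evaluated with selected heads masked, to measure the effect of each head on task performance.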

Calls to Action on Social Media: Detection, Social Impact, and Censorship Potential
Anna Rogers | Olga Kovaleva | Anna Rumshisky
Proceedings of the Second Workshop on Natural Language Processing for Internet Freedom: Censorship, Disinformation, and Propaganda

Calls to action on social media are known to be effective means of mobilization in social movements, and a frequent target of censorship. We investigate the possibility of their automatic detection and their potential for predicting real-world protest events, using historical data from the Bolotnaya protests in Russia (2011-2013). We find that political calls to action can be annotated and detected with relatively high accuracy, and that, in our sample, their volume has a moderate positive correlation with rally attendance.

2018

Similarity-Based Reconstruction Loss for Meaning Representation
Olga Kovaleva | Anna Rumshisky | Alexey Romanov
Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing

This paper addresses the problem of representation learning. Using an autoencoder framework, we propose and evaluate several loss functions that can be used as an alternative to the commonly used cross-entropy reconstruction loss. The proposed loss functions use similarities between words in the embedding space and can be used to train any neural model for text generation. We show that the introduced loss functions amplify the semantic diversity of reconstructed sentences while preserving the original meaning of the input. We evaluate the resulting autoencoder-generated representations on paraphrase detection and language inference tasks and demonstrate a performance improvement over the traditional cross-entropy loss.
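One plausible instantiation of such a similarity-based loss replaces the one-hot reconstruction target with a soft distribution over the vocabulary derived from embedding similarity. The sketch below is illustrative; the temperature and normalization choices are assumptions rather than the exact formulations from the paper.

import torch
import torch.nn.functional as F

def similarity_soft_targets(target_ids, embeddings, temperature=0.1):
    # target_ids: (batch,) gold token ids; embeddings: (vocab, dim) embedding matrix.
    emb = F.normalize(embeddings, dim=-1)
    target_emb = emb[target_ids]                  # (batch, dim)
    sims = target_emb @ emb.t()                   # cosine similarities, (batch, vocab)
    return F.softmax(sims / temperature, dim=-1)  # soft target over the vocabulary

def similarity_reconstruction_loss(logits, target_ids, embeddings):
    # Cross-entropy between decoder logits and the similarity-based soft targets,
    # averaged over the batch.
    soft_targets = similarity_soft_targets(target_ids, embeddings)
    log_probs = F.log_softmax(logits, dim=-1)
    return -(soft_targets * log_probs).sum(dim=-1).mean()

Because the soft targets assign probability mass to near-synonyms of the gold word, a decoder trained with such a loss is penalized less for semantically close substitutions, which is consistent with the increased semantic diversity reported in the paper.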