Tommi Jaakkola


2021

Consistent Accelerated Inference via Confident Adaptive Transformers
Tal Schuster | Adam Fisch | Tommi Jaakkola | Regina Barzilay
Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing

We develop a novel approach for confidently accelerating inference in the large and expensive multilayer Transformers that are now ubiquitous in natural language processing (NLP). Amortized or approximate computational methods increase efficiency, but can come with unpredictable performance costs. In this work, we present CATs – Confident Adaptive Transformers – in which we simultaneously increase computational efficiency while guaranteeing a specifiable degree of consistency with the original model with high confidence. Our method trains additional prediction heads on top of intermediate layers, and dynamically decides when to stop allocating computational effort to each input using a meta consistency classifier. To calibrate our early prediction stopping rule, we formulate a unique extension of conformal prediction. We demonstrate the effectiveness of this approach on four classification and regression tasks.
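The decision rule itself is simple once the threshold is set. Below is a hedged Python sketch of the adaptive inference loop, not the authors' implementation; layers, heads, meta_classifier, and threshold are placeholder names, and the threshold is assumed to come from the conformal calibration step described above.

def cats_early_exit(layers, heads, meta_classifier, x, threshold):
    # Sketch of CAT-style adaptive inference (placeholder interfaces):
    # run Transformer layers one at a time, and after each layer ask a
    # small meta consistency classifier whether the intermediate head's
    # prediction is likely to agree with the full model. Exit early once
    # its confidence clears the conformally calibrated threshold.
    h = x
    logits = None
    for layer, head in zip(layers, heads):
        h = layer(h)                       # one more Transformer layer
        logits = head(h)                   # early prediction at this depth
        conf = meta_classifier(h, logits)  # estimated consistency score
        if conf >= threshold:              # calibrated stopping rule
            break                          # confident early exit
    return logits

The guarantee in the paper comes from how the threshold is chosen, not from the loop itself: conformal calibration ensures that early predictions agree with the full model at the specified rate.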

2020

Blank Language Models
Tianxiao Shen | Victor Quach | Regina Barzilay | Tommi Jaakkola
Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)

We propose the Blank Language Model (BLM), a model that generates sequences by dynamically creating and filling in blanks. The blanks control which part of the sequence to expand, making BLM ideal for a variety of text editing and rewriting tasks. The model can start from a single blank or partially completed text with blanks at specified locations. It iteratively determines which word to place in a blank and whether to insert new blanks, and stops generating when no blanks are left to fill. BLM can be efficiently trained using a lower bound of the marginal data likelihood. On the task of filling missing text snippets, BLM significantly outperforms all other baselines in terms of both accuracy and fluency. Experiments on style transfer and damaged ancient text restoration demonstrate the potential of this framework for a wide range of applications.
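The generation loop is easy to picture in code. The following is a minimal, hypothetical sketch of BLM-style decoding; choose_action stands in for the trained model, which in the paper scores blanks, words, and blank-insertion decisions jointly.

BLANK = "__"

def blm_generate(choose_action, seq=None, max_steps=100):
    # Hypothetical BLM-style decoding: repeatedly pick a blank, fill it
    # with a word, and optionally insert new blanks on either side.
    # Generation stops when no blanks remain.
    seq = list(seq) if seq else [BLANK]   # start from a single blank
    for _ in range(max_steps):
        blanks = [i for i, t in enumerate(seq) if t == BLANK]
        if not blanks:
            break                         # nothing left to fill
        i, word, left, right = choose_action(seq, blanks)
        repl = ([BLANK] if left else []) + [word] + ([BLANK] if right else [])
        seq[i:i + 1] = repl               # fill, possibly spawning new blanks
    return seq

# Toy stand-in policy: fill the first blank with a fixed word, never expand.
print(blm_generate(lambda s, b: (b[0], "hello", False, False)))  # ['hello']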

2019

Rethinking Cooperative Rationalization: Introspective Extraction and Complement Control
Mo Yu | Shiyu Chang | Yang Zhang | Tommi Jaakkola
Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)

Selective rationalization has become a common mechanism to ensure that predictive models reveal how they use any available features. The selection may be soft or hard, and identifies a subset of input features relevant for prediction. The setup can be viewed as a cooperative game between the selector (aka rationale generator) and the predictor making use of only the selected features. The cooperative setting may, however, be compromised for two reasons. First, the generator typically has no direct access to the outcome it aims to justify, resulting in poor performance. Second, there is typically no control exerted on the information left outside the selection. We revise the overall cooperative framework to address these challenges. We introduce an introspective model which explicitly predicts and incorporates the outcome into the selection process. Moreover, we explicitly control the rationale complement via an adversary so as not to leave any useful information out of the selection. We show that the two complementary mechanisms both maintain high predictive accuracy and lead to comprehensive rationales.
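As a rough illustration, the revised objective can be sketched as a single loss with three terms. This is an assumed formulation in PyTorch, not the paper's exact training procedure; generator, predictor, and adversary are placeholder modules, and x is assumed to be an embedded input that can be masked elementwise.

import torch.nn.functional as F

def introspective_rationalization_loss(generator, predictor, adversary,
                                       x, y, sparsity_weight=0.01):
    # Placeholder objective: the introspective generator produces a soft
    # mask over input positions (conditioned internally on its own outcome
    # prediction); the predictor reads only the selection, while an
    # adversary tries to predict y from the complement. Training the
    # generator against the adversary discourages leaving useful
    # information outside the rationale.
    mask = generator(x)                                 # soft selection in [0, 1]
    pred_loss = F.cross_entropy(predictor(x * mask), y)
    comp_loss = F.cross_entropy(adversary(x * (1 - mask)), y)
    sparsity = sparsity_weight * mask.mean()            # keep rationales short
    return pred_loss + sparsity - comp_loss             # the generator's view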

2018

Gromov-Wasserstein Alignment of Word Embedding Spaces
David Alvarez-Melis | Tommi Jaakkola
Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing

Cross-lingual or cross-domain correspondences play key roles in tasks ranging from machine translation to transfer learning. Recently, purely unsupervised methods operating on monolingual embeddings have become effective alignment tools. Current state-of-the-art methods, however, involve multiple steps, including heuristic post-hoc refinement strategies. In this paper, we cast the correspondence problem directly as an optimal transport (OT) problem, building on the idea that word embeddings arise from metric recovery algorithms. Indeed, we exploit the Gromov-Wasserstein distance that measures how similarities between pairs of words relate across languages. We show that our OT objective can be estimated efficiently, requires little or no tuning, and results in performance comparable with the state-of-the-art in various unsupervised word translation tasks.
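Concretely, the alignment only ever compares words within the same space. Here is a hedged sketch using the POT optimal-transport library's ot.gromov.gromov_wasserstein solver (using POT is an assumption here; the paper derives its own efficient estimator):

import numpy as np
import ot  # POT: Python Optimal Transport (pip install pot)

def align_embeddings(X, Y):
    # Compare intra-space similarity structures rather than the
    # (incomparable) embedding coordinates themselves, and return a
    # soft word-to-word correspondence between the two vocabularies.
    C1 = X @ X.T                       # source-space similarity matrix
    C2 = Y @ Y.T                       # target-space similarity matrix
    p = ot.unif(X.shape[0])            # uniform weights over source words
    q = ot.unif(Y.shape[0])            # uniform weights over target words
    return ot.gromov.gromov_wasserstein(C1, C2, p, q, loss_fun='square_loss')

# Toy usage: two random 5-word "vocabularies" in 3 dimensions.
rng = np.random.default_rng(0)
T = align_embeddings(rng.normal(size=(5, 3)), rng.normal(size=(5, 3)))
print(T.round(3))  # rows: source words, columns: target words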

2017

A causal framework for explaining the predictions of black-box sequence-to-sequence models
David Alvarez-Melis | Tommi Jaakkola
Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing

We interpret the predictions of any black-box structured input-structured output model around a specific input-output pair. Our method returns an “explanation” consisting of groups of input-output tokens that are causally related. These dependencies are inferred by querying the model with perturbed inputs, generating a graph over tokens from the responses, and solving a partitioning problem to select the most relevant components. We focus the general approach on sequence-to-sequence problems, adopting a variational autoencoder to yield meaningful input perturbations. We test our method across several NLP sequence generation tasks.
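The querying step can be caricatured with a simple co-occurrence estimator. This is a loose, hypothetical sketch, not the paper's method: model and perturb are placeholder callables (the paper uses a variational autoencoder for perturbation), and the scoring below is a crude frequency count rather than the causal inference and graph partitioning actually used.

from collections import Counter

def dependency_scores(model, perturb, x, n_samples=100):
    # Query the black-box model with perturbed inputs and count how often
    # altering an input token coincides with an output token changing.
    # The resulting scores act as edge weights of an input-output graph,
    # which the paper then partitions into explanation components.
    base_out = set(model(x))
    scores = Counter()
    for _ in range(n_samples):
        x_pert = perturb(x)                  # semantically close variant
        out = set(model(x_pert))
        changed_in = set(x) - set(x_pert)    # input tokens that were altered
        for o in base_out - out:             # output tokens that disappeared
            for i in changed_in:
                scores[(i, o)] += 1.0 / n_samples
    return scores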

Aspect-augmented Adversarial Networks for Domain Adaptation
Yuan Zhang | Regina Barzilay | Tommi Jaakkola
Transactions of the Association for Computational Linguistics, Volume 5

We introduce a neural method for transfer learning between two (source and target) classification tasks or aspects over the same domain. Rather than training on target labels, we use a few keywords pertaining to the source and target aspects that indicate sentence relevance rather than document class labels. Documents are encoded by learning to embed and softly select relevant sentences in an aspect-dependent manner. A shared classifier is trained on the source encoded documents and labels, and applied to target encoded documents. We ensure transfer through aspect-adversarial training so that encoded documents are, as sets, aspect-invariant. Experimental results demonstrate that our approach outperforms different baselines and model variants on two datasets, yielding an improvement of 27% on a pathology dataset and 5% on a review dataset.
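The adversarial component can be realized with the standard gradient-reversal pattern. A minimal PyTorch sketch under that assumption (the module names and the exact loss combination are placeholders, not the paper's code):

import torch
import torch.nn.functional as F

class GradReverse(torch.autograd.Function):
    # Identity on the forward pass, negated gradient on the backward pass,
    # so the encoder is pushed toward aspect-invariant encodings while the
    # discriminator still learns to tell aspects apart.
    @staticmethod
    def forward(ctx, x):
        return x.view_as(x)
    @staticmethod
    def backward(ctx, grad_output):
        return grad_output.neg()

def adversarial_step(encoder, classifier, discriminator, docs, labels, aspects):
    # Placeholder training signal: classify source documents from their
    # encodings while an aspect discriminator, fed reversed gradients,
    # enforces aspect invariance of the encoded documents.
    enc = encoder(docs)   # aspect-dependent sentence selection happens inside
    cls_loss = F.cross_entropy(classifier(enc), labels)
    adv_loss = F.cross_entropy(discriminator(GradReverse.apply(enc)), aspects)
    return cls_loss + adv_loss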

2016

Semi-supervised Question Retrieval with Gated Convolutions
Tao Lei | Hrishikesh Joshi | Regina Barzilay | Tommi Jaakkola | Kateryna Tymoshenko | Alessandro Moschitti | Lluís Màrquez
Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies

Ten Pairs to Tag – Multilingual POS Tagging via Coarse Mapping between Embeddings
Yuan Zhang | David Gaddy | Regina Barzilay | Tommi Jaakkola
Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies

Rationalizing Neural Predictions
Tao Lei | Regina Barzilay | Tommi Jaakkola
Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing

Learning to refine text based recommendations
Youyang Gu | Tao Lei | Regina Barzilay | Tommi Jaakkola
Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing

2015

Molding CNNs for text: non-linear, non-consecutive convolutions
Tao Lei | Regina Barzilay | Tommi Jaakkola
Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing

An Unsupervised Method for Uncovering Morphological Chains
Karthik Narasimhan | Regina Barzilay | Tommi Jaakkola
Transactions of the Association for Computational Linguistics, Volume 3

Most state-of-the-art systems today produce morphological analysis based only on orthographic patterns. In contrast, we propose a model for unsupervised morphological analysis that integrates orthographic and semantic views of words. We model word formation in terms of morphological chains, from base words to the observed words, breaking the chains into parent-child relations. We use log-linear models with morpheme and word-level features to predict possible parents, including their modifications, for each word. The limited set of candidate parents for each word renders contrastive estimation feasible. Our model consistently matches or outperforms five state-of-the-art systems on Arabic, English and Turkish.
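The parent-prediction step has a compact log-linear form. A toy sketch under assumed interfaces (the feature function and weights below are invented for illustration; the paper's feature set is much richer):

import math

def parent_scores(word, candidates, features, weights):
    # Log-linear scores over candidate parents of `word`, normalized over
    # the candidate set. Restricting each word to a small candidate set is
    # what makes contrastive estimation tractable.
    raw = {p: math.exp(sum(weights.get(f, 0.0) for f in features(word, p)))
           for p in candidates}
    z = sum(raw.values())
    return {p: s / z for p, s in raw.items()}

# Toy usage with a single hypothetical suffix feature.
feats = lambda w, p: [f"suffix:{w[len(p):]}"] if w.startswith(p) else ["other"]
print(parent_scores("playing", ["play", "plays", "playing"],
                    feats, {"suffix:ing": 2.0}))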

2014

Steps to Excellence: Simple Inference with Refined Scoring of Dependency Trees
Yuan Zhang | Tao Lei | Regina Barzilay | Tommi Jaakkola | Amir Globerson
Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)

Low-Rank Tensors for Scoring Dependency Structures
Tao Lei | Yu Xin | Yuan Zhang | Regina Barzilay | Tommi Jaakkola
Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)

Greed is Good if Randomized: New Inference for Dependency Parsing
Yuan Zhang | Tao Lei | Regina Barzilay | Tommi Jaakkola
Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP)

2010

On Dual Decomposition and Linear Programming Relaxations for Natural Language Processing
Alexander M. Rush | David Sontag | Michael Collins | Tommi Jaakkola
Proceedings of the 2010 Conference on Empirical Methods in Natural Language Processing

Dual Decomposition for Parsing with Non-Projective Head Automata
Terry Koo | Alexander M. Rush | Michael Collins | Tommi Jaakkola | David Sontag
Proceedings of the 2010 Conference on Empirical Methods in Natural Language Processing