Joel Tetreault

Also published as: Joel R. Tetreault


2023

Missing Information, Unresponsive Authors, Experimental Flaws: The Impossibility of Assessing the Reproducibility of Previous Human Evaluations in NLP
Anya Belz | Craig Thomson | Ehud Reiter | Gavin Abercrombie | Jose M. Alonso-Moral | Mohammad Arvan | Anouck Braggaar | Mark Cieliebak | Elizabeth Clark | Kees van Deemter | Tanvi Dinkar | Ondřej Dušek | Steffen Eger | Qixiang Fang | Mingqi Gao | Albert Gatt | Dimitra Gkatzia | Javier González-Corbelle | Dirk Hovy | Manuela Hürlimann | Takumi Ito | John D. Kelleher | Filip Klubicka | Emiel Krahmer | Huiyuan Lai | Chris van der Lee | Yiru Li | Saad Mahamood | Margot Mieskes | Emiel van Miltenburg | Pablo Mosteiro | Malvina Nissim | Natalie Parde | Ondřej Plátek | Verena Rieser | Jie Ruan | Joel Tetreault | Antonio Toral | Xiaojun Wan | Leo Wanner | Lewis Watson | Diyi Yang
Proceedings of the Fourth Workshop on Insights from Negative Results in NLP

We report our efforts to identify a set of previous human evaluations in NLP that would be suitable for a coordinated study examining what makes human evaluations in NLP more or less reproducible. We present our results and findings, which include that just 13% of papers had (i) sufficiently low barriers to reproduction and (ii) enough obtainable information to be considered for reproduction, and that all but one of the experiments we selected for reproduction were found to have flaws that made the meaningfulness of conducting a reproduction questionable. As a result, we had to change our coordinated study design from a reproduce approach to a standardise-then-reproduce-twice approach. Our overall (negative) finding, that the great majority of human evaluations in NLP are not repeatable, not reproducible, and/or too flawed to justify reproduction, paints a dire picture, but presents an opportunity for a rethink about how to design and report human evaluations in NLP.

Multi-View Source Ablation for Faithful Summarization
Shuyang Cao | Liang Ma | Di Lu | Robert L Logan IV | Joel Tetreault | Alejandro Jaimes
Findings of the Association for Computational Linguistics: EACL 2023

In this paper, we present MuFaSSa (Multi-view Faithfulness Scoring via Source Ablation), a metric for evaluating faithfulness of abstractive summaries, and for guiding training of more faithful summarizers. For evaluation, MuFaSSa employs different strategies (e.g., masking entity mentions) to first remove information from the source document to form multiple ablated views. Then, the faithfulness level of each token in a generated summary is measured by the difference between the token generation probabilities when given the original document and the ablated document as inputs to trained summarizers. For training, MuFaSSa uses a novel word truncation objective that drops unfaithful tokens located by MuFaSSa in both the decoder input and output. Alignments with human-annotated faithfulness labels on AggreFact show that MuFaSSa is comparable to or better than existing metrics built on classifiers or QA models pre-trained on other tasks. In experiments on summarization with XSum and CNN/DailyMail, models trained with word truncation using MuFaSSa outperform competitive methods according to both automatic faithfulness metrics and human assessments.
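
To make the core scoring idea concrete, here is a minimal sketch (not the authors’ implementation): each summary token is scored by how much its generation probability drops when the source is ablated. The summarizer checkpoint and the averaging over views are illustrative assumptions.

    # Sketch of MuFaSSa's per-token scoring idea; any trained seq2seq
    # summarizer works here, BART-CNN is only a stand-in.
    import torch
    from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

    tok = AutoTokenizer.from_pretrained("facebook/bart-large-cnn")
    model = AutoModelForSeq2SeqLM.from_pretrained("facebook/bart-large-cnn").eval()

    def token_logprobs(source: str, summary: str) -> torch.Tensor:
        """Log-probability of each summary token, conditioned on the source."""
        enc = tok(source, return_tensors="pt", truncation=True)
        labels = tok(summary, return_tensors="pt", truncation=True).input_ids
        with torch.no_grad():
            logits = model(**enc, labels=labels).logits  # (1, summary_len, vocab)
        return logits.log_softmax(-1)[0].gather(1, labels[0].unsqueeze(1)).squeeze(1)

    def faithfulness_scores(source, ablated_views, summary):
        """Per-token drop in log-probability under ablation, averaged over views."""
        base = token_logprobs(source, summary)
        return torch.stack([base - token_logprobs(v, summary) for v in ablated_views]).mean(0)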

A New Task and Dataset on Detecting Attacks on Human Rights Defenders
Shihao Ran | Di Lu | Aoife Cahill | Joel Tetreault | Alejandro Jaimes
Findings of the Association for Computational Linguistics: ACL 2023

The ability to conduct retrospective analyses of attacks on human rights defenders over time and by location is important for humanitarian organizations to better understand historical or ongoing human rights violations and thus better manage the global impact of such events. We hypothesize that NLP can support such efforts by quickly processing large collections of news articles to detect and summarize the characteristics of attacks on human rights defenders. To that end, we propose a new dataset for detecting Attacks on Human Rights Defenders (HRDsAttack) consisting of crowdsourced annotations on 500 online news articles. The annotations include fine-grained information about the type and location of the attacks, as well as information about the victim(s). We demonstrate the usefulness of the dataset by using it to train and evaluate baseline models on several sub-tasks to predict the annotated characteristics.

Harnessing the power of LLMs: Evaluating human-AI text co-creation through the lens of news headline generation
Zijian Ding | Alison Smith-Renner | Wenjuan Zhang | Joel Tetreault | Alejandro Jaimes
Findings of the Association for Computational Linguistics: EMNLP 2023

To explore how humans can best leverage LLMs for writing and how interacting with these models affects feelings of ownership and trust in the writing process, we compared common human-AI interaction types (e.g., guiding system, selecting from system outputs, post-editing outputs) in the context of LLM-assisted news headline generation. While LLMs alone can generate satisfactory news headlines, on average, human control is needed to fix undesirable model outputs. Of the interaction methods, guiding and selecting model output added the most benefit with the lowest cost (in time and effort). Further, AI assistance did not harm participants’ perception of control compared to freeform editing.

Defining a New NLP Playground
Sha Li | Chi Han | Pengfei Yu | Carl Edwards | Manling Li | Xingyao Wang | Yi Fung | Charles Yu | Joel Tetreault | Eduard Hovy | Heng Ji
Findings of the Association for Computational Linguistics: EMNLP 2023

The recent explosion in the performance of large language models (LLMs) has changed the field of Natural Language Processing (NLP) more abruptly and seismically than any other shift in the field’s 80-year history. This has resulted in concerns that the field will become homogenized and resource-intensive. This new status quo has put many academic researchers, especially PhD students, at a disadvantage. This paper aims to define a new NLP playground by proposing 20+ PhD-dissertation-worthy research directions, covering theoretical analysis, new and challenging problems, learning paradigms, and interdisciplinary applications.

Little Giants: Exploring the Potential of Small LLMs as Evaluation Metrics in Summarization in the Eval4NLP 2023 Shared Task
Neema Kotonya | Saran Krishnasamy | Joel Tetreault | Alejandro Jaimes
Proceedings of the 4th Workshop on Evaluation and Comparison of NLP Systems

This paper describes and analyzes our participation in the 2023 Eval4NLP shared task, which focuses on assessing the effectiveness of prompt-based techniques to empower Large Language Models to handle the task of quality estimation, particularly in the context of evaluating machine translations and summaries. We conducted systematic experiments with various prompting techniques, including standard prompting, prompts informed by annotator instructions, and innovative chain-of-thought prompting. In addition, we integrated these approaches with zero-shot and one-shot learning methods to maximize the efficacy of our evaluation procedures. Our work reveals that combining these approaches using a “small”, open source model (orca_mini_v3_7B) yields competitive results.
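
For illustration, the zero-shot variant of such prompt-based quality estimation can be sketched as below; the prompt wording is invented for this sketch, not the shared-task template, and the Hugging Face model id is an assumption.

    # Sketch of zero-shot, prompt-based quality estimation with a small open LLM.
    from transformers import pipeline

    generator = pipeline("text-generation", model="pankajmathur/orca_mini_v3_7b")

    PROMPT = (
        "Rate the quality of the following summary of the article on a scale "
        "from 1 (very poor) to 5 (excellent). Reply with a single number.\n\n"
        "Article: {src}\n\nSummary: {hyp}\n\nScore:"
    )

    def estimate_quality(src: str, hyp: str) -> str:
        out = generator(PROMPT.format(src=src, hyp=hyp), max_new_tokens=4,
                        do_sample=False, return_full_text=False)
        return out[0]["generated_text"].strip()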

BUMP: A Benchmark of Unfaithful Minimal Pairs for Meta-Evaluation of Faithfulness Metrics
Liang Ma | Shuyang Cao | Robert L Logan IV | Di Lu | Shihao Ran | Ke Zhang | Joel Tetreault | Alejandro Jaimes
Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)

The proliferation of automatic faithfulness metrics for summarization has produced a need for benchmarks to evaluate them. While existing benchmarks measure the correlation with human judgements of faithfulness on model-generated summaries, they are insufficient for diagnosing whether metrics are: 1) consistent, i.e., indicate lower faithfulness as errors are introduced into a summary, 2) effective on human-written texts, and 3) sensitive to different error types (as summaries can contain multiple errors). To address these needs, we present a benchmark of unfaithful minimal pairs (BUMP), a dataset of 889 human-written, minimally different summary pairs, where a single error is introduced to a summary from the CNN/DailyMail dataset to produce an unfaithful summary. We find BUMP complements existing benchmarks in a number of ways: 1) the summaries in BUMP are harder to discriminate and less probable under SOTA summarization models, 2) unlike non-pair-based datasets, BUMP can be used to measure the consistency of metrics, and reveals that the most discriminative metrics tend not to be the most consistent, and 3) unlike datasets containing generated summaries with multiple errors, BUMP enables the measurement of metrics’ performance on individual error types.
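
The consistency property that BUMP’s paired design enables reduces to a simple pairwise check; a minimal sketch, assuming a faithfulness metric with signature metric(document, summary) -> float:

    # Fraction of minimal pairs where introducing the error lowers the score.
    def consistency(metric, pairs):
        """pairs: list of (document, faithful_summary, unfaithful_summary) triples."""
        hits = sum(metric(doc, good) > metric(doc, bad) for doc, good, bad in pairs)
        return hits / len(pairs)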

Event Extraction as Question Generation and Answering
Di Lu | Shihao Ran | Joel Tetreault | Alejandro Jaimes
Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers)

Recent work on Event Extraction has reframed the task as Question Answering (QA), with promising results. The advantage of this approach is that it addresses the error propagation issue found in traditional token-based classification approaches by directly predicting event arguments without extracting candidates first. However, the questions are typically based on fixed templates and rarely leverage contextual information such as relevant arguments. In addition, prior QA-based approaches have difficulty handling cases where there are multiple arguments for the same role. In this paper, we propose QGA-EE, which enables a Question Generation (QG) model to generate questions that incorporate rich contextual information instead of using fixed templates. We also propose dynamic templates to assist the training of the QG model. Experiments show that QGA-EE outperforms all prior single-task-based models on the ACE05 English dataset.
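
The two-stage pipeline can be sketched as follows; the off-the-shelf models and the QG prompt below are stand-ins for the paper’s trained components, not the authors’ code.

    # Sketch of the QG-then-QA idea behind QGA-EE.
    from transformers import pipeline

    qg = pipeline("text2text-generation", model="google/flan-t5-base")
    qa = pipeline("question-answering", model="deepset/roberta-base-squad2")

    def extract_argument(sentence: str, trigger: str, role: str) -> str:
        # Stage 1: generate a context-aware question for the role (vs. a fixed template).
        question = qg(f"Ask a question about the {role} of the event triggered by "
                      f"'{trigger}' in: {sentence}")[0]["generated_text"]
        # Stage 2: answer the question over the sentence to recover the argument span.
        return qa(question=question, context=sentence)["answer"]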

2022

XLTime: A Cross-Lingual Knowledge Transfer Framework for Temporal Expression Extraction
Yuwei Cao | William Groves | Tanay Kumar Saha | Joel Tetreault | Alejandro Jaimes | Hao Peng | Philip Yu
Findings of the Association for Computational Linguistics: NAACL 2022

Temporal Expression Extraction (TEE) is essential for understanding time in natural language. It has applications in Natural Language Processing (NLP) tasks such as question answering, information retrieval, and causal inference. To date, work in this area has mostly focused on English as there is a scarcity of labeled data for other languages. We propose XLTime, a novel framework for multilingual TEE. XLTime works on top of pre-trained language models and leverages multi-task learning to prompt cross-language knowledge transfer both from English and within the non-English languages. XLTime alleviates problems caused by a shortage of data in the target language. We apply XLTime with different language models and show that it outperforms the previous automatic SOTA methods on French, Spanish, Portuguese, and Basque, by large margins. XLTime also closes the gap considerably on the handcrafted HeidelTime method.

CrisisLTLSum: A Benchmark for Local Crisis Event Timeline Extraction and Summarization
Hossein Rajaby Faghihi | Bashar Alhafni | Ke Zhang | Shihao Ran | Joel Tetreault | Alejandro Jaimes
Findings of the Association for Computational Linguistics: EMNLP 2022

Social media has increasingly played a key role in emergency response: first responders can use public posts to better react to ongoing crisis events and deploy the necessary resources where they are most needed. Timeline extraction and abstractive summarization are critical technical tasks to leverage large numbers of social media posts about events. Unfortunately, there are few datasets for benchmarking technical approaches for those tasks. This paper presents CrisisLTLSum, the largest dataset of local crisis event timelines available to date. CrisisLTLSum contains 1,000 crisis event timelines across four domains: wildfires, local fires, traffic, and storms. We built CrisisLTLSum using a semi-automated cluster-then-refine approach to collect data from the public Twitter stream. Our initial experiments indicate a significant gap between the performance of strong baselines and human performance on both tasks. Our dataset, code, and models are publicly available (https://github.com/CrisisLTLSum/CrisisTimelines).

Proceedings of the Second Workshop on NLP for Positive Impact (NLP4PI)
Laura Biester | Dorottya Demszky | Zhijing Jin | Mrinmaya Sachan | Joel Tetreault | Steven Wilson | Lu Xiao | Jieyu Zhao
Proceedings of the Second Workshop on NLP for Positive Impact (NLP4PI)

Mapping the Design Space of Human-AI Interaction in Text Summarization
Ruijia Cheng | Alison Smith-Renner | Ke Zhang | Joel Tetreault | Alejandro Jaimes-Larrarte
Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies

Automatic text summarization systems commonly involve humans for preparing data or evaluating model performance, yet we lack a systematic understanding of humans’ roles, experiences, and needs when interacting with or being assisted by AI. From a human-centered perspective, we map the design opportunities and considerations for human-AI interaction in text summarization and broader text generation tasks. We first conducted a systematic literature review of 70 papers, developing a taxonomy of five interactions in AI-assisted text generation and relevant design dimensions. We designed text summarization prototypes for each interaction. We then interviewed 16 users, aided by the prototypes, to understand their expectations, experience, and needs regarding efficiency, control, and trust with AI in text summarization, and propose design considerations accordingly.

An Exploration of Post-Editing Effectiveness in Text Summarization
Vivian Lai | Alison Smith-Renner | Ke Zhang | Ruijia Cheng | Wenjuan Zhang | Joel Tetreault | Alejandro Jaimes-Larrarte
Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies

Automatic summarization methods are efficient but can suffer from low quality. In comparison, manual summarization is expensive but produces higher quality. Can humans and AI collaborate to improve summarization performance? In similar text generation tasks (e.g., machine translation), human-AI collaboration in the form of “post-editing” AI-generated text reduces human workload and improves the quality of AI output. Therefore, we explored whether post-editing offers advantages in text summarization. Specifically, we conducted an experiment with 72 participants, comparing post-editing of provided summaries with manual summarization for summary quality, human efficiency, and user experience on formal (XSum news) and informal (Reddit posts) text. This study offers valuable insights into when post-editing is useful for text summarization: it helped in some cases (e.g., when participants lacked domain knowledge) but not in others (e.g., when provided summaries included inaccurate information). Participants’ different editing strategies and needs for assistance offer implications for future human-AI summarization systems.

2021

Evaluating the Evaluation Metrics for Style Transfer: A Case Study in Multilingual Formality Transfer
Eleftheria Briakou | Sweta Agrawal | Joel Tetreault | Marine Carpuat
Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing

While the field of style transfer (ST) has been growing rapidly, it has been hampered by a lack of standardized practices for automatic evaluation. In this paper, we evaluate leading automatic metrics on the oft-researched task of formality style transfer. Unlike previous evaluations, which focus solely on English, we expand our focus to Brazilian Portuguese, French, and Italian, making this work the first multilingual evaluation of metrics in ST. We outline best practices for automatic evaluation in (formality) style transfer and identify several models that correlate well with human judgments and are robust across languages. We hope that this work will help accelerate development in ST, where human evaluation is often challenging to collect.
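
The core meta-evaluation step, correlating a metric’s scores with human judgments over the same outputs, is simple to state in code; the numbers below are illustrative only.

    # Correlate automatic metric scores with human ratings of the same outputs.
    from scipy.stats import spearmanr

    human_ratings = [4.0, 2.5, 3.0, 5.0, 1.5]       # e.g., human formality judgments
    metric_scores = [0.81, 0.40, 0.55, 0.90, 0.20]  # automatic metric, same outputs

    rho, p = spearmanr(human_ratings, metric_scores)
    print(f"Spearman rho = {rho:.2f} (p = {p:.3f})")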

Journalistic Guidelines Aware News Image Captioning
Xuewen Yang | Svebor Karaman | Joel Tetreault | Alejandro Jaimes
Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing

The task of news article image captioning aims to generate descriptive and informative captions for news article images. Unlike conventional image captions that simply describe the content of the image in general terms, news image captions follow journalistic guidelines and rely heavily on named entities to describe the image content, often drawing context from the whole article they are associated with. In this work, we propose a new approach to this task, motivated by caption guidelines that journalists follow. Our approach, Journalistic Guidelines Aware News Image Captioning (JoGANIC), leverages the structure of captions to improve the generation quality and guide our representation design. Experimental results, including detailed ablation studies, on two large-scale publicly available datasets show that JoGANIC substantially outperforms state-of-the-art methods both on caption generation and named entity related metrics.

Olá, Bonjour, Salve! XFORMAL: A Benchmark for Multilingual Formality Style Transfer
Eleftheria Briakou | Di Lu | Ke Zhang | Joel Tetreault
Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies

We take the first step towards multilingual style transfer by creating and releasing XFORMAL, a benchmark of multiple formal reformulations of informal text in Brazilian Portuguese, French, and Italian. Results on XFORMAL suggest that state-of-the-art style transfer approaches perform close to simple baselines, indicating that style transfer becomes even more challenging in multilingual settings.

A Review of Human Evaluation for Style Transfer
Eleftheria Briakou | Sweta Agrawal | Ke Zhang | Joel Tetreault | Marine Carpuat
Proceedings of the 1st Workshop on Natural Language Generation, Evaluation, and Metrics (GEM 2021)

This paper reviews and summarizes human evaluation practices described in 97 style transfer papers with respect to three main evaluation aspects: style transfer, meaning preservation, and fluency. In principle, evaluations by human raters should be the most reliable. However, in style transfer papers, we find that protocols for human evaluations are often underspecified and not standardized, which hampers the reproducibility of research in this field and progress toward better human and automatic evaluation methods.

A Novel Framework for Detecting Important Subevents from Crisis Events via Dynamic Semantic Graphs
Evangelia Spiliopoulou | Tanay Kumar Saha | Joel Tetreault | Alejandro Jaimes
Proceedings of the Seventh Workshop on Noisy User-generated Text (W-NUT 2021)

Social media is an essential tool to share information about crisis events, such as natural disasters. Event Detection aims at extracting information in the form of an event, but considers each event in isolation, without combining information across sentences or events. Many posts in Crisis NLP contain repetitive or complementary information which needs to be aggregated (e.g., the number of trapped people and their location) for disaster response. Although previous approaches in Crisis NLP aggregate information across posts, they only use shallow representations of the content (e.g., keywords), which cannot adequately represent the semantics of a crisis event and its sub-events. In this work, we propose a novel framework to extract critical sub-events from a large-scale crisis event by combining important information across relevant tweets. Our framework first converts all the tweets from a crisis event into a temporally-ordered set of graphs. Then it extracts sub-graphs of 3 to 6 nodes that represent semantic relationships connecting verbs and nouns. It does this by learning edge weights via Dynamic Graph Convolutional Networks (DGCNs) and extracting the smaller, relevant sub-graphs. Our experiments show that our extracted structures (1) are semantically meaningful sub-events and (2) contain information important for the large crisis-event. Furthermore, we show that our approach significantly outperforms event detection baselines, highlighting the importance of aggregating information across tweets for our task.

GTN-ED: Event Detection Using Graph Transformer Networks
Sanghamitra Dutta | Liang Ma | Tanay Kumar Saha | Di Liu | Joel Tetreault | Alejandro Jaimes
Proceedings of the Fifteenth Workshop on Graph-Based Methods for Natural Language Processing (TextGraphs-15)

Recent works show that the graph structure of sentences, generated from dependency parsers, has potential for improving event detection. However, they often only leverage the edges (dependencies) between words, and discard the dependency labels (e.g., nominal-subject), treating the underlying graph edges as homogeneous. In this work, we propose a novel framework for incorporating both dependencies and their labels using a recently proposed technique called Graph Transformer Network (GTN). We integrate GTN to leverage dependency relations on two existing homogeneous-graph-based models and demonstrate an improvement in the F1 score on the ACE dataset.
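
Concretely, keeping dependency labels means building one adjacency matrix per label rather than a single homogeneous one; a toy sketch of that preprocessing step (not the paper’s code):

    # One adjacency matrix per dependency label, as GTN-style heterogeneous input.
    import numpy as np

    def label_adjacencies(n_tokens, edges, label_set):
        """edges: (head_idx, dep_idx, label) triples from a dependency parser."""
        A = {lab: np.zeros((n_tokens, n_tokens)) for lab in label_set}
        for h, d, lab in edges:
            A[lab][h, d] = 1.0  # edges keep their labels instead of being merged
        return np.stack([A[lab] for lab in label_set])  # (n_labels, n, n)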

2020

Proceedings of the First Joint Workshop on Narrative Understanding, Storylines, and Events
Claire Bonial | Tommaso Caselli | Snigdha Chaturvedi | Elizabeth Clark | Ruihong Huang | Mohit Iyyer | Alejandro Jaimes | Heng Ji | Lara J. Martin | Ben Miller | Teruko Mitamura | Nanyun Peng | Joel Tetreault
Proceedings of the First Joint Workshop on Narrative Understanding, Storylines, and Events

Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics
Dan Jurafsky | Joyce Chai | Natalie Schluter | Joel Tetreault
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics

Creating a Domain-diverse Corpus for Theory-based Argument Quality Assessment
Lily Ng | Anne Lauscher | Joel Tetreault | Courtney Napoles
Proceedings of the 7th Workshop on Argument Mining

Computational models of argument quality (AQ) have focused primarily on assessing the overall quality or just one specific characteristic of an argument, such as its convincingness or its clarity. Previous work has claimed that assessment based on theoretical dimensions of argumentation could benefit writers, but developing such models has been limited by the lack of annotated data. In this work, we describe GAQCorpus, the first large, domain-diverse annotated corpus of theory-based AQ. We discuss how we designed the annotation task to reliably collect a large number of judgments with crowdsourcing, formulating theory-based guidelines that helped make subjective judgments of AQ more objective. We demonstrate how to identify arguments and adapt the annotation task for three diverse domains. Our work will inform research on theory-based argumentation annotation and enable the creation of more diverse corpora to support computational AQ assessment.

The ApposCorpus: a new multilingual, multi-domain dataset for factual appositive generation
Yova Kementchedjhieva | Di Lu | Joel Tetreault
Proceedings of the 28th International Conference on Computational Linguistics

News articles, image captions, product reviews and many other texts mention people and organizations whose name recognition could vary for different audiences. In such cases, background information about the named entities could be provided in the form of an appositive noun phrase, either written by a human or generated automatically. We expand on the previous work in appositive generation with a new, more realistic, end-to-end definition of the task, instantiated by a dataset that spans four languages (English, Spanish, German and Polish), two entity types (person and organization) and two domains (Wikipedia and News). We carry out an extensive analysis of the data and the task, pointing to the various modeling challenges it poses. The results we obtain with standard language generation methods show that the task is indeed non-trivial, and leaves plenty of room for improvement.

Rhetoric, Logic, and Dialectic: Advancing Theory-based Argument Quality Assessment in Natural Language Processing
Anne Lauscher | Lily Ng | Courtney Napoles | Joel Tetreault
Proceedings of the 28th International Conference on Computational Linguistics

Though preceding work in computational argument quality (AQ) mostly focuses on assessing overall AQ, researchers agree that writers would benefit from feedback targeting individual dimensions of argumentation theory. However, a large-scale theory-based corpus and corresponding computational models are missing. We fill this gap by conducting an extensive analysis covering three diverse domains of online argumentative writing and presenting GAQCorpus: the first large-scale English multi-domain (community Q&A forums, debate forums, review forums) corpus annotated with theory-based AQ scores. We then propose the first computational approaches to theory-based assessment, which can serve as strong baselines for future work. We demonstrate the feasibility of large-scale AQ annotation, show that exploiting relations between dimensions yields performance improvements, and explore the synergies between theory-based prediction and practical AQ assessment.

2019

This Email Could Save Your Life: Introducing the Task of Email Subject Line Generation
Rui Zhang | Joel Tetreault
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics

Given the overwhelming number of emails, an effective subject line is essential to better inform the recipient of the email’s content. In this paper, we propose and study the task of email subject line generation: automatically generating an email subject line from the email body. We create the first dataset for this task and find that email subject line generation favors extremely abstractive summaries, which differentiates it from news headline generation and single-document news summarization. We then develop a novel deep learning method and compare it to several baselines as well as recent state-of-the-art text summarization systems. We also investigate the efficacy of several automatic metrics based on correlations with human judgments and propose a new automatic evaluation metric. Our system outperforms competitive baselines under both automatic and human evaluations. To our knowledge, this is the first work to tackle the problem of effective email subject line generation.

Enabling Robust Grammatical Error Correction in New Domains: Data Sets, Metrics, and Analyses
Courtney Napoles | Maria Nădejde | Joel Tetreault
Transactions of the Association for Computational Linguistics, Volume 7

Until now, grammatical error correction (GEC) has been primarily evaluated on text written by non-native English speakers, with a focus on student essays. This paper enables GEC development on text written by native speakers by providing a new data set and metric. We present a multiple-reference test corpus for GEC that includes 4,000 sentences in two new domains (formal and informal writing by native English speakers) and 2,000 sentences from a diverse set of non-native student writing. We also collect human judgments of several GEC systems on this new test set and perform a meta-evaluation, assessing how reliable automatic metrics are across these domains. We find that commonly used GEC metrics have inconsistent performance across domains, and therefore we propose a new ensemble metric that is robust on all three domains of text.

Dialogue Act Classification with Context-Aware Self-Attention
Vipul Raheja | Joel Tetreault
Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers)

Recent work in Dialogue Act classification has treated the task as a sequence labeling problem using hierarchical deep neural networks. We build on this prior work by leveraging the effectiveness of a context-aware self-attention mechanism coupled with a hierarchical recurrent neural network. We conduct extensive evaluations on standard Dialogue Act classification datasets and show significant improvement over state-of-the-art results on the Switchboard Dialogue Act (SwDA) Corpus. We also investigate the impact of different utterance-level representation learning methods and show that our method is effective at capturing utterance-level semantic text representations while maintaining high accuracy.

Personalizing Grammatical Error Correction: Adaptation to Proficiency Level and L1
Maria Nadejde | Joel Tetreault
Proceedings of the 5th Workshop on Noisy User-generated Text (W-NUT 2019)

Grammatical error correction (GEC) systems have become ubiquitous in a variety of software applications and have started to approach human-level performance on some datasets. However, very little is known about how to efficiently personalize these systems to the user’s characteristics, such as their proficiency level and first language, or to emerging domains of text. We present the first results on adapting a general-purpose neural GEC system to both the proficiency level and the first language of a writer, using only a few thousand annotated sentences. Our study is the broadest of its kind, covering five proficiency levels and twelve different languages, and comparing three different adaptation scenarios: adapting to the proficiency level only, to the first language only, or to both aspects simultaneously. We show that adapting to both aspects simultaneously achieves the largest performance improvement (3.6 F0.5) relative to a strong baseline.

Proceedings of the Third Workshop on Abusive Language Online
Sarah T. Roberts | Joel Tetreault | Vinodkumar Prabhakaran | Zeerak Waseem
Proceedings of the Third Workshop on Abusive Language Online

The Unbearable Weight of Generating Artificial Errors for Grammatical Error Correction
Phu Mon Htut | Joel Tetreault
Proceedings of the Fourteenth Workshop on Innovative Use of NLP for Building Educational Applications

In this paper, we investigate the impact of using four recent neural models to generate artificial errors for training neural grammatical error correction models. We conduct a battery of experiments on the effects of data size and model choice, and compare against a rule-based approach.

2018

Proceedings of the Thirteenth Workshop on Innovative Use of NLP for Building Educational Applications
Joel Tetreault | Jill Burstein | Ekaterina Kochmar | Claudia Leacock | Helen Yannakoudakis
Proceedings of the Thirteenth Workshop on Innovative Use of NLP for Building Educational Applications

Discourse Coherence in the Wild: A Dataset, Evaluation and Methods
Alice Lai | Joel Tetreault
Proceedings of the 19th Annual SIGdial Meeting on Discourse and Dialogue

To date there has been very little work on assessing discourse coherence methods on real-world data. To address this, we present a new corpus of real-world texts (GCDC) as well as the first large-scale evaluation of leading discourse coherence algorithms. We show that neural models, including two that we introduce here (SentAvg and ParSeq), tend to perform best. We analyze these performance differences and discuss patterns we observed in low coherence texts in four domains.

How do you correct run-on sentences it’s not as easy as it seems
Junchao Zheng | Courtney Napoles | Joel Tetreault | Kostiantyn Omelianchuk
Proceedings of the 2018 EMNLP Workshop W-NUT: The 4th Workshop on Noisy User-generated Text

Run-on sentences are common grammatical mistakes but little research has tackled this problem to date. This work introduces two machine learning models to correct run-on sentences that outperform leading methods for related tasks, punctuation restoration and whole-sentence grammatical error correction. Due to the limited annotated data for this error, we experiment with artificially generating training data from clean newswire text. Our findings suggest artificial training data is viable for this task. We discuss implications for correcting run-ons and other types of mistakes that have low coverage in error-annotated corpora.
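
One plausible recipe for the artificial-data step, fusing adjacent clean sentences into run-ons, is sketched below; this is a guess at the general idea, and the paper’s exact generation procedure may differ.

    # Synthesize (run-on, corrected) training pairs from clean newswire text.
    import random

    def make_run_on(sent_a: str, sent_b: str) -> str:
        fused = sent_a.rstrip(".!?")
        continuation = sent_b[0].lower() + sent_b[1:]
        if random.random() < 0.5:
            return f"{fused}, {continuation}"  # comma splice
        return f"{fused} {continuation}"       # bare fusion

    corrupted = make_run_on("It was late.", "We kept working.")
    pair = (corrupted, "It was late. We kept working.")  # (source, target)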

Dear Sir or Madam, May I Introduce the GYAFC Dataset: Corpus, Benchmarks and Metrics for Formality Style Transfer
Sudha Rao | Joel Tetreault
Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers)

Style transfer is the task of automatically transforming a piece of text in one particular style into another. A major barrier to progress in this field has been a lack of training and evaluation datasets, as well as benchmarks and automatic metrics. In this work, we create the largest corpus for a particular stylistic transfer (formality) and show that techniques from the machine translation community can serve as strong baselines for future work. We also discuss challenges of using automatic metrics.

2017

Finding Good Conversations Online: The Yahoo News Annotated Comments Corpus
Courtney Napoles | Joel Tetreault | Aasish Pappu | Enrica Rosato | Brian Provenzale
Proceedings of the 11th Linguistic Annotation Workshop

This work presents a dataset and annotation scheme for the new task of identifying “good” conversations that occur online, which we call ERICs: Engaging, Respectful, and/or Informative Conversations. We develop a taxonomy to reflect features of entire threads and individual comments which we believe contribute to identifying ERICs; code a novel dataset of Yahoo News comment threads (2.4k threads and 10k comments) and 1k threads from the Internet Argument Corpus; and analyze the features characteristic of ERICs. This is one of the largest annotated corpora of online human dialogues, with the most detailed set of annotations. It will be valuable for identifying ERICs and other aspects of argumentation, dialogue, and discourse.

Proceedings of the First Workshop on Abusive Language Online
Zeerak Waseem | Wendy Hui Kyong Chung | Dirk Hovy | Joel Tetreault
Proceedings of the First Workshop on Abusive Language Online

Proceedings of the 12th Workshop on Innovative Use of NLP for Building Educational Applications
Joel Tetreault | Jill Burstein | Claudia Leacock | Helen Yannakoudakis
Proceedings of the 12th Workshop on Innovative Use of NLP for Building Educational Applications

A Report on the 2017 Native Language Identification Shared Task
Shervin Malmasi | Keelan Evanini | Aoife Cahill | Joel Tetreault | Robert Pugh | Christopher Hamill | Diane Napolitano | Yao Qian
Proceedings of the 12th Workshop on Innovative Use of NLP for Building Educational Applications

Native Language Identification (NLI) is the task of automatically identifying the native language (L1) of an individual based on their language production in a learned language. It is typically framed as a classification task where the set of L1s is known a priori. Two previous shared tasks on NLI have been organized where the aim was to identify the L1 of learners of English based on essays (2013) and spoken responses (2016) they provided during a standardized assessment of academic English proficiency. The 2017 shared task combines the inputs from the two prior tasks for the first time. There are three tracks: NLI on the essay only, NLI on the spoken response only (based on a transcription of the response and i-vector acoustic features), and NLI using both responses. We believe this makes for a more interesting shared task while building on the methods and results from the previous two shared tasks. In this paper, we report the results of the shared task. A total of 19 teams competed across the three different sub-tasks. The fusion track showed that combining the written and spoken responses provides a large boost in prediction accuracy. Multiple classifier systems (e.g. ensembles and meta-classifiers) were the most effective in all tasks, with most based on traditional classifiers (e.g. SVMs) with lexical/syntactic features.
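
For a sense of what those traditional systems look like, here is a minimal sketch of a linear SVM over character n-grams; the feature set is a simplification of what competitive teams used.

    # Baseline NLI classifier: linear SVM over character n-gram features.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.pipeline import make_pipeline
    from sklearn.svm import LinearSVC

    nli = make_pipeline(
        TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 4)),
        LinearSVC(),
    )
    # essays: list of learner responses; l1_labels: their native languages
    # nli.fit(essays, l1_labels); predicted_l1 = nli.predict(test_essays)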

GEC into the future: Where are we going and how do we get there?
Keisuke Sakaguchi | Courtney Napoles | Joel Tetreault
Proceedings of the 12th Workshop on Innovative Use of NLP for Building Educational Applications

The field of grammatical error correction (GEC) has made tremendous strides in the last ten years, but new questions and obstacles are revealing themselves. In this position paper, we discuss the issues that need to be addressed, provide recommendations for the field to continue to make progress, and propose a new shared task. We invite suggestions and critiques from the audience to make the new shared task a community-driven venture.

JFLEG: A Fluency Corpus and Benchmark for Grammatical Error Correction
Courtney Napoles | Keisuke Sakaguchi | Joel Tetreault
Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics: Volume 2, Short Papers

We present a new parallel corpus, JHU FLuency-Extended GUG corpus (JFLEG) for developing and evaluating grammatical error correction (GEC). Unlike other corpora, it represents a broad range of language proficiency levels and uses holistic fluency edits to not only correct grammatical errors but also make the original text more native sounding. We describe the types of corrections made and benchmark four leading GEC systems on this corpus, identifying specific areas in which they do well and how they can improve. JFLEG fulfills the need for a new gold standard to properly assess the current state of GEC.

2016

An Empirical Analysis of Formality in Online Communication
Ellie Pavlick | Joel Tetreault
Transactions of the Association for Computational Linguistics, Volume 4

This paper presents an empirical study of linguistic formality. We perform an analysis of humans’ perceptions of formality in four different genres. These findings are used to develop a statistical model for predicting formality, which is evaluated under different feature settings and genres. We apply our model to an investigation of formality in online discussion forums, and present findings consistent with theories of formality and linguistic coordination.

Reassessing the Goals of Grammatical Error Correction: Fluency Instead of Grammaticality
Keisuke Sakaguchi | Courtney Napoles | Matt Post | Joel Tetreault
Transactions of the Association for Computational Linguistics, Volume 4

The field of grammatical error correction (GEC) has grown substantially in recent years, with research directed at both evaluation metrics and improved system performance against those metrics. One largely unexamined assumption, however, is GEC evaluation’s reliance on error-coded corpora, which contain specific labeled corrections. We examine current practices and show that GEC’s reliance on such corpora unnaturally constrains annotation and automatic evaluation, resulting in (a) sentences that do not sound acceptable to native speakers and (b) system rankings that do not correlate with human judgments. In light of this, we propose an alternate approach that jettisons costly error coding in favor of unannotated, whole-sentence rewrites. We compare the performance of existing metrics over different gold-standard annotations, and show that automatic evaluation with our new annotation scheme has very strong correlation with expert rankings (ρ = 0.82). As a result, we advocate for a fundamental and necessary shift in the goal of GEC, from correcting small, labeled error types to producing text that has native fluency.

Humor in Collective Discourse: Unsupervised Funniness Detection in the New Yorker Cartoon Caption Contest
Dragomir Radev | Amanda Stent | Joel Tetreault | Aasish Pappu | Aikaterini Iliakopoulou | Agustin Chanfreau | Paloma de Juan | Jordi Vallmitjana | Alejandro Jaimes | Rahul Jha | Robert Mankoff
Proceedings of the Tenth International Conference on Language Resources and Evaluation (LREC'16)

The New Yorker publishes a weekly captionless cartoon. More than 5,000 readers submit captions for it. The editors select three of them and ask the readers to pick the funniest one. We describe an experiment that compares a dozen automatic methods for selecting the funniest caption. We show that negative sentiment, human-centeredness, and lexical centrality most strongly match the funniest captions, followed by positive sentiment. These results are useful for understanding humor and also in the design of more engaging conversational agents in text and multimodal (vision+text) systems. As part of this work, a large set of cartoons and captions is being made available to the community.

Proceedings of the 11th Workshop on Innovative Use of NLP for Building Educational Applications
Joel Tetreault | Jill Burstein | Claudia Leacock | Helen Yannakoudakis
Proceedings of the 11th Workshop on Innovative Use of NLP for Building Educational Applications

Do Characters Abuse More Than Words?
Yashar Mehdad | Joel Tetreault
Proceedings of the 17th Annual Meeting of the Special Interest Group on Discourse and Dialogue

There’s No Comparison: Reference-less Evaluation Metrics in Grammatical Error Correction
Courtney Napoles | Keisuke Sakaguchi | Joel Tetreault
Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing

2015

It Depends: Dependency Parser Comparison Using A Web-based Evaluation Tool
Jinho D. Choi | Joel Tetreault | Amanda Stent
Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers)

Ground Truth for Grammatical Error Correction Metrics
Courtney Napoles | Keisuke Sakaguchi | Matt Post | Joel Tetreault
Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 2: Short Papers)

Proceedings of the Tenth Workshop on Innovative Use of NLP for Building Educational Applications
Joel Tetreault | Jill Burstein | Claudia Leacock
Proceedings of the Tenth Workshop on Innovative Use of NLP for Building Educational Applications

Oracle and Human Baselines for Native Language Identification
Shervin Malmasi | Joel Tetreault | Mark Dras
Proceedings of the Tenth Workshop on Innovative Use of NLP for Building Educational Applications

2014

Non-Monotonic Parsing of Fluent Umm I mean Disfluent Sentences
Mohammad Sadegh Rasooli | Joel Tetreault
Proceedings of the 14th Conference of the European Chapter of the Association for Computational Linguistics, volume 2: Short Papers

Proceedings of the Ninth Workshop on Innovative Use of NLP for Building Educational Applications
Joel Tetreault | Jill Burstein | Claudia Leacock
Proceedings of the Ninth Workshop on Innovative Use of NLP for Building Educational Applications

Proceedings of the First Joint Workshop on Statistical Parsing of Morphologically Rich Languages and Syntactic Analysis of Non-Canonical Languages
Yoav Goldberg | Yuval Marton | Ines Rehbein | Yannick Versley | Özlem Çetinoğlu | Joel Tetreault
Proceedings of the First Joint Workshop on Statistical Parsing of Morphologically Rich Languages and Syntactic Analysis of Non-Canonical Languages

Predicting Grammaticality on an Ordinal Scale
Michael Heilman | Aoife Cahill | Nitin Madnani | Melissa Lopez | Matthew Mulholland | Joel Tetreault
Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers)

Automated Grammatical Error Correction for Language Learners
Joel Tetreault | Claudia Leacock
Proceedings of COLING 2014, the 25th International Conference on Computational Linguistics: Tutorial Abstracts

2013

Joint Parsing and Disfluency Detection in Linear Time
Mohammad Sadegh Rasooli | Joel Tetreault
Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing

Robust Systems for Preposition Error Correction Using Wikipedia Revisions
Aoife Cahill | Nitin Madnani | Joel Tetreault | Diane Napolitano
Proceedings of the 2013 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies

Proceedings of the First Workshop on Metaphor in NLP
Ekaterina Shutova | Beata Beigman Klebanov | Joel Tetreault | Zornitsa Kozareva
Proceedings of the First Workshop on Metaphor in NLP

Proceedings of the Eighth Workshop on Innovative Use of NLP for Building Educational Applications
Joel Tetreault | Jill Burstein | Claudia Leacock
Proceedings of the Eighth Workshop on Innovative Use of NLP for Building Educational Applications

A Report on the First Native Language Identification Shared Task
Joel Tetreault | Daniel Blanchard | Aoife Cahill
Proceedings of the Eighth Workshop on Innovative Use of NLP for Building Educational Applications

Proceedings of the Seventeenth Conference on Computational Natural Language Learning: Shared Task
Hwee Tou Ng | Joel Tetreault | Siew Mei Wu | Yuanbin Wu | Christian Hadiwinoto
Proceedings of the Seventeenth Conference on Computational Natural Language Learning: Shared Task

The CoNLL-2013 Shared Task on Grammatical Error Correction
Hwee Tou Ng | Siew Mei Wu | Yuanbin Wu | Christian Hadiwinoto | Joel Tetreault
Proceedings of the Seventeenth Conference on Computational Natural Language Learning: Shared Task

2012

Identifying High-Level Organizational Elements in Argumentative Discourse
Nitin Madnani | Michael Heilman | Joel Tetreault | Martin Chodorow
Proceedings of the 2012 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies

Re-examining Machine Translation Metrics for Paraphrase Identification
Nitin Madnani | Joel Tetreault | Martin Chodorow
Proceedings of the 2012 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies

Correcting Comma Errors in Learner Essays, and Restoring Commas in Newswire Text
Ross Israel | Joel Tetreault | Martin Chodorow
Proceedings of the 2012 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies

Problems in Evaluating Grammatical Error Detection Systems
Martin Chodorow | Markus Dickinson | Ross Israel | Joel Tetreault
Proceedings of COLING 2012

Native Tongues, Lost and Found: Resources and Empirical Evaluations in Native Language Identification
Joel Tetreault | Daniel Blanchard | Aoife Cahill | Martin Chodorow
Proceedings of COLING 2012

Proceedings of the Seventh Workshop on Building Educational Applications Using NLP
Joel Tetreault | Jill Burstein | Claudia Leacock
Proceedings of the Seventh Workshop on Building Educational Applications Using NLP

Exploring Grammatical Error Correction with Not-So-Crummy Machine Translation
Nitin Madnani | Joel Tetreault | Martin Chodorow
Proceedings of the Seventh Workshop on Building Educational Applications Using NLP

Precision Isn’t Everything: A Hybrid Approach to Grammatical Error Detection
Michael Heilman | Aoife Cahill | Joel Tetreault
Proceedings of the Seventh Workshop on Building Educational Applications Using NLP

2011

They Can Help: Using Crowdsourcing to Improve the Evaluation of Grammatical Error Detection Systems
Nitin Madnani | Martin Chodorow | Joel Tetreault | Alla Rozovskaya
Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies

Exploiting Syntactic and Distributional Information for Spelling Correction with Web-Scale N-gram Models
Wei Xu | Joel Tetreault | Martin Chodorow | Ralph Grishman | Le Zhao
Proceedings of the 2011 Conference on Empirical Methods in Natural Language Processing

Proceedings of the Sixth Workshop on Innovative Use of NLP for Building Educational Applications
Joel Tetreault | Jill Burstein | Claudia Leacock
Proceedings of the Sixth Workshop on Innovative Use of NLP for Building Educational Applications

E-rating Machine Translation
Kristen Parton | Joel Tetreault | Nitin Madnani | Martin Chodorow
Proceedings of the Sixth Workshop on Statistical Machine Translation

2010

Using an Error-Annotated Learner Corpus to Develop an ESL/EFL Error Correction System
Na-Rae Han | Joel Tetreault | Soo-Hwa Lee | Jin-Young Ha
Proceedings of the Seventh International Conference on Language Resources and Evaluation (LREC'10)

This paper presents research on building a model of grammatical error correction, for preposition errors in particular, in English text produced by language learners. Unlike most previous work which trains a statistical classifier exclusively on well-formed text written by native speakers, we train a classifier on a large-scale, error-tagged corpus of English essays written by ESL learners, relying on contextual and grammatical features surrounding preposition usage. First, we show that such a model can achieve high performance values: 93.3% precision and 14.8% recall for error detection and 81.7% precision and 13.2% recall for error detection and correction when tested on preposition replacement errors. Second, we show that this model outperforms models trained on well-edited text produced by native speakers of English. We discuss the implications of our approach in the area of language error modeling and the issues stemming from working with a noisy data set whose error annotations are not exhaustive.
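
To make “contextual features surrounding preposition usage” concrete, a toy feature extractor might look like the following; the paper’s actual templates are richer and include grammatical features.

    # Window-based contextual features around a preposition token.
    def preposition_features(tokens, i, window=2):
        """tokens: a tokenized sentence; i: index of the preposition."""
        feats = {"prep": tokens[i].lower()}
        for k in range(1, window + 1):
            feats[f"w-{k}"] = tokens[i - k].lower() if i - k >= 0 else "<s>"
            feats[f"w+{k}"] = tokens[i + k].lower() if i + k < len(tokens) else "</s>"
        return feats

    # preposition_features("He arrived at the station".split(), 2)
    # -> {'prep': 'at', 'w-1': 'arrived', 'w+1': 'the', 'w-2': 'he', 'w+2': 'station'}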

Proceedings of the NAACL HLT 2010 Fifth Workshop on Innovative Use of NLP for Building Educational Applications
Joel Tetreault | Jill Burstein | Claudia Leacock
Proceedings of the NAACL HLT 2010 Fifth Workshop on Innovative Use of NLP for Building Educational Applications

Rethinking Grammatical Error Annotation and Evaluation with the Amazon Mechanical Turk
Joel Tetreault | Elena Filatova | Martin Chodorow
Proceedings of the NAACL HLT 2010 Fifth Workshop on Innovative Use of NLP for Building Educational Applications

Towards Using Structural Events To Assess Non-native Speech
Lei Chen | Joel Tetreault | Xiaoming Xi
Proceedings of the NAACL HLT 2010 Fifth Workshop on Innovative Use of NLP for Building Educational Applications

Using Parse Features for Preposition Selection and Error Detection
Joel Tetreault | Jennifer Foster | Martin Chodorow
Proceedings of the ACL 2010 Conference Short Papers

Using Entity-Based Features to Model Coherence in Student Essays
Jill Burstein | Joel Tetreault | Slava Andreyev
Human Language Technologies: The 2010 Annual Conference of the North American Chapter of the Association for Computational Linguistics

2009

Proceedings of the Fourth Workshop on Innovative Use of NLP for Building Educational Applications
Joel Tetreault | Jill Burstein | Claudia Leacock
Proceedings of the Fourth Workshop on Innovative Use of NLP for Building Educational Applications

Human Evaluation of Article and Noun Number Usage: Influences of Context and Construction Variability
John Lee | Joel Tetreault | Martin Chodorow
Proceedings of the Third Linguistic Annotation Workshop (LAW III)

2008

Proceedings of the Third Workshop on Innovative Use of NLP for Building Educational Applications
Joel Tetreault | Jill Burstein | Rachele De Felice
Proceedings of the Third Workshop on Innovative Use of NLP for Building Educational Applications

Native Judgments of Non-Native Usage: Experiments in Preposition Error Detection
Joel Tetreault | Martin Chodorow
Coling 2008: Proceedings of the workshop on Human Judgements in Computational Linguistics

The Ups and Downs of Preposition Error Detection in ESL Writing
Joel R. Tetreault | Martin Chodorow
Proceedings of the 22nd International Conference on Computational Linguistics (Coling 2008)

2007

Detection of Grammatical Errors Involving Prepositions
Martin Chodorow | Joel Tetreault | Na-Rae Han
Proceedings of the Fourth ACL-SIGSEM Workshop on Prepositions

Estimating the Reliability of MDP Policies: a Confidence Interval Approach
Joel Tetreault | Dan Bohus | Diane Litman
Human Language Technologies 2007: The Conference of the North American Chapter of the Association for Computational Linguistics; Proceedings of the Main Conference

Comparing User Simulation Models For Dialog Strategy Learning
Hua Ai | Joel Tetreault | Diane Litman
Human Language Technologies 2007: The Conference of the North American Chapter of the Association for Computational Linguistics; Companion Volume, Short Papers

Exploring Affect-Context Dependencies for Adaptive System Development
Kate Forbes-Riley | Mihai Rotaru | Diane Litman | Joel Tetreault
Human Language Technologies 2007: The Conference of the North American Chapter of the Association for Computational Linguistics; Companion Volume, Short Papers

2006

Comparing the Utility of State Features in Spoken Dialogue Using Reinforcement Learning
Joel Tetreault | Diane Litman
Proceedings of the Human Language Technology Conference of the NAACL, Main Conference

Using Reinforcement Learning to Build a Better Model of Dialogue State
Joel R. Tetreault | Diane J. Litman
11th Conference of the European Chapter of the Association for Computational Linguistics

2004

Semi-automatic Syntactic and Semantic Corpus Annotation with a Deep Parser
Mary D. Swift | Myroslava O. Dzikovska | Joel R. Tetreault | James F. Allen
Proceedings of the Fourth International Conference on Language Resources and Evaluation (LREC’04)

Evaluation of Transcription and Annotation Tools for a Multi-modal, Multi-party Dialogue Corpus
Saurabh Garg | Bilyana Martinovski | Susan Robinson | Jens Stephan | Joel Tetreault | David R. Traum
Proceedings of the Fourth International Conference on Language Resources and Evaluation (LREC’04)

Discourse Annotation in the Monroe Corpus
Joel Tetreault | Mary Swift | Preethum Prithviraj | Myroslava Dzikovska | James Allen
Proceedings of the Workshop on Discourse Annotation

Incremental Parsing with Reference Interaction
Scott C. Stoness | Joel Tetreault | James Allen
Proceedings of the Workshop on Incremental Parsing: Bringing Engineering and Cognition Together

2001

A Corpus-Based Evaluation of Centering and Pronoun Resolution
Joel R. Tetreault
Computational Linguistics, Volume 27, Number 4, December 2001

1999

A Flexible Architecture for Reference Resolution
Donna K. Byron | Joel R. Tetreault
Ninth Conference of the European Chapter of the Association for Computational Linguistics

Analysis of Syntax-Based Pronoun Resolution Methods
Joel R. Tetreault
Proceedings of the 37th Annual Meeting of the Association for Computational Linguistics
