Gabriel Murray


2023

Mixture-of-Linguistic-Experts Adapters for Improving and Interpreting Pre-trained Language Models
Raymond Li | Gabriel Murray | Giuseppe Carenini
Findings of the Association for Computational Linguistics: EMNLP 2023

In this work, we propose a method that combines two popular research areas by injecting linguistic structures into pre-trained language models in the parameter-efficient fine-tuning (PEFT) setting. In our approach, parallel adapter modules encoding different linguistic structures are combined using a novel Mixture-of-Linguistic-Experts architecture, where Gumbel-Softmax gates are used to determine the importance of these modules at each layer of the model. To reduce the number of parameters, we first train the model for a fixed small number of steps before pruning the experts based on their importance scores. Our experimental results with three different pre-trained models show that our approach can outperform state-of-the-art PEFT methods with a comparable number of parameters. In addition, we analyze the experts selected by each model at each layer to provide insights for future studies.
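
As a rough illustration of the gating mechanism described in the abstract, the sketch below combines parallel adapter "experts" with a Gumbel-Softmax gate at a single transformer layer. It is a minimal PyTorch sketch under assumed settings: the adapter bottleneck size, number of experts, and temperature are illustrative choices, not the paper's configuration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class BottleneckAdapter(nn.Module):
    """Standard down-project / non-linearity / up-project adapter with a residual connection."""
    def __init__(self, hidden_size=768, bottleneck=64):
        super().__init__()
        self.down = nn.Linear(hidden_size, bottleneck)
        self.up = nn.Linear(bottleneck, hidden_size)

    def forward(self, x):
        return x + self.up(F.relu(self.down(x)))

class MixtureOfAdapters(nn.Module):
    """Parallel adapter experts combined by a learned Gumbel-Softmax gate (illustrative sketch).
    Experts whose gate weights stay small after a fixed number of training steps could then be
    pruned, in the spirit of the procedure the abstract describes."""
    def __init__(self, hidden_size=768, num_experts=3, tau=1.0):
        super().__init__()
        self.experts = nn.ModuleList(
            [BottleneckAdapter(hidden_size) for _ in range(num_experts)])
        self.gate_logits = nn.Parameter(torch.zeros(num_experts))
        self.tau = tau

    def forward(self, x):
        # Soft (differentiable) sample over experts; hard=True would give a one-hot choice.
        weights = F.gumbel_softmax(self.gate_logits, tau=self.tau, hard=False)
        expert_out = torch.stack([expert(x) for expert in self.experts], dim=0)  # (E, B, T, H)
        return torch.einsum("e,ebth->bth", weights, expert_out)

hidden = torch.randn(2, 10, 768)          # (batch, tokens, hidden)
print(MixtureOfAdapters()(hidden).shape)  # torch.Size([2, 10, 768])
```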

Diversity-Aware Coherence Loss for Improving Neural Topic Models
Raymond Li | Felipe Gonzalez-Pizarro | Linzi Xing | Gabriel Murray | Giuseppe Carenini
Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers)

The standard approach for neural topic modeling uses a variational autoencoder (VAE) framework that jointly minimizes the KL divergence between the estimated posterior and prior, in addition to the reconstruction loss. Since neural topic models are trained by recreating individual input documents, they do not explicitly capture the coherence between words on the corpus level. In this work, we propose a novel diversity-aware coherence loss that encourages the model to learn corpus-level coherence scores while maintaining high diversity between topics. Experimental results on multiple datasets show that our method significantly improves the performance of neural topic models without requiring any pretraining or additional parameters.
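
For context, the sketch below spells out the standard VAE topic-model objective the abstract refers to (multinomial reconstruction of the bag-of-words plus the KL term), with a hook where an additional corpus-level term such as the proposed diversity-aware coherence loss would be added. The function name and the weight `lam` are illustrative assumptions, and the paper's actual coherence loss is not reproduced here.

```python
import torch
import torch.nn.functional as F

def ntm_loss(logits, bow, mu, logvar, coherence_term=None, lam=1.0):
    """VAE neural-topic-model objective: reconstruction + KL(q(z|x) || N(0, I)).
    `coherence_term` is a placeholder for an extra corpus-level loss."""
    # Negative log-likelihood of the observed word counts under the decoder distribution.
    recon = -(bow * F.log_softmax(logits, dim=-1)).sum(dim=-1).mean()
    # Analytic KL divergence between the Gaussian posterior and a standard-normal prior.
    kl = -0.5 * (1 + logvar - mu.pow(2) - logvar.exp()).sum(dim=-1).mean()
    loss = recon + kl
    if coherence_term is not None:
        loss = loss + lam * coherence_term
    return loss

# Toy shapes: 8 documents, 2000-word vocabulary, 50 topics.
bow = torch.randint(0, 3, (8, 2000)).float()
logits, mu, logvar = torch.randn(8, 2000), torch.zeros(8, 50), torch.zeros(8, 50)
print(ntm_loss(logits, bow, mu, logvar))
```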

2022

Human Guided Exploitation of Interpretable Attention Patterns in Summarization and Topic Segmentation
Raymond Li | Wen Xiao | Linzi Xing | Lanjun Wang | Gabriel Murray | Giuseppe Carenini
Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing

The multi-head self-attention mechanism of the transformer model has been thoroughly investigated in recent years. In one vein of study, researchers are interested in understanding why and how transformers work. In another vein, researchers propose new attention augmentation methods to make transformers more accurate, efficient, and interpretable. In this paper, we combine these two lines of research in a human-in-the-loop pipeline that first discovers important task-specific attention patterns. Those patterns are then injected not only into smaller models but also into the original model. The benefits of our pipeline and the discovered patterns are demonstrated in two case studies, on extractive summarization and topic segmentation. After discovering interpretable patterns in BERT-based models fine-tuned for these two downstream tasks, our experiments indicate that injecting the patterns into attention heads yields considerable improvements in accuracy and efficiency.
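
To make the idea of pattern injection concrete, the sketch below replaces a head's learned attention weights with one fixed, interpretable pattern (a local window around each token). This is an illustrative stand-in only: the patterns actually discovered and injected in the paper are task-specific and may look quite different.

```python
import torch

def local_window_pattern(seq_len, window=2):
    """Fixed attention pattern: each token attends uniformly to a +/- `window` neighbourhood."""
    idx = torch.arange(seq_len)
    mask = ((idx[None, :] - idx[:, None]).abs() <= window).float()
    return mask / mask.sum(dim=-1, keepdim=True)  # row-normalized, like softmax output

def inject_pattern(values, pattern):
    """Apply a fixed pattern in place of the learned query-key attention for one head."""
    # values: (batch, seq_len, head_dim); pattern: (seq_len, seq_len)
    return torch.einsum("st,bth->bsh", pattern, values)

values = torch.randn(2, 6, 64)                            # one head's value vectors
out = inject_pattern(values, local_window_pattern(6))
print(out.shape)                                          # torch.Size([2, 6, 64])
```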

2019

Discourse Analysis and Its Applications
Shafiq Joty | Giuseppe Carenini | Raymond Ng | Gabriel Murray
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics: Tutorial Abstracts

Discourse processing is a suite of Natural Language Processing (NLP) tasks that uncover linguistic structures in text at several levels, which can support many downstream applications. These structures include the topic structure, the coherence structure, the coreference structure, and, for conversational discourse, the conversation structure. Taken together, these structures can inform text summarization, machine translation, essay scoring, sentiment analysis, information extraction, question answering, and thread recovery. The tutorial starts with an overview of basic concepts in discourse analysis – monologue vs. conversation, synchronous vs. asynchronous conversation, and key linguistic structures in discourse analysis. We also give an overview of the linguistic structures and corresponding discourse analysis tasks that discourse researchers are generally interested in, as well as key applications on which these discourse structures have an impact.

2018

Language-Based Automatic Assessment of Cognitive and Communicative Functions Related to Parkinson’s Disease
Lesley Jessiman | Gabriel Murray | McKenzie Braley
Proceedings of the First International Workshop on Language Cognition and Computational Models

We explore the use of natural language processing and machine learning for detecting evidence of Parkinson’s disease from transcribed speech of subjects who are describing everyday tasks. Experiments reveal the difficulty of treating this as a binary classification task, and a multi-class approach yields superior results. We also show that these models can be used to predict cognitive abilities across all subjects.

NLP for Conversations: Sentiment, Summarization, and Group Dynamics
Gabriel Murray | Giuseppe Carenini | Shafiq Joty
Proceedings of the 27th International Conference on Computational Linguistics: Tutorial Abstracts

2017

Detecting Dementia through Retrospective Analysis of Routine Blog Posts by Bloggers with Dementia
Vaden Masrani | Gabriel Murray | Thalia Field | Giuseppe Carenini
BioNLP 2017

We investigate whether writers with dementia can be automatically distinguished from those without by analyzing linguistic markers in written text, in the form of blog posts. We have built a corpus of several thousand blog posts, some by people with dementia and others by people whose loved ones have dementia. We use this dataset to train and test several machine learning methods, and achieve prediction performance at a level far above the baseline.
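
As a rough illustration of the classification setup, the sketch below trains a simple lexical baseline (TF-IDF features with a linear classifier) to separate the two groups of bloggers. The example posts and labels are invented for demonstration, and the pipeline is not the paper's feature set or model suite.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Invented stand-in documents; the actual corpus contains several thousand blog posts.
posts = [
    "went to the shop today but could not remember why i was there",
    "the nurses are kind and the garden outside my window is lovely",
    "my mother was diagnosed last spring and we are adjusting as a family",
    "caring for dad means planning every appointment weeks in advance",
]
labels = [1, 1, 0, 0]  # 1 = blogger with dementia, 0 = blogger writing about a loved one

clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)),
                    LogisticRegression(max_iter=1000))
clf.fit(posts, labels)
print(clf.predict(["i keep losing the thread of my own sentences"]))
```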

Modelling Participation in Small Group Social Sequences with Markov Rewards Analysis
Gabriel Murray
Proceedings of the Second Workshop on NLP and Computational Social Science

We explore a novel computational approach for analyzing member participation in small group social sequences. Using a complex state representation combining information about dialogue act types, sentiment expression, and participant roles, we investigate which sequence states are associated with high levels of member participation. Within a Markov Rewards framework, we associate particular states with immediate positive and negative rewards, and employ a Value Iteration algorithm to calculate the expected value of all states. In our findings, we focus on discourse states belonging to team leaders and project managers that are either very likely or very unlikely to lead to participation from the rest of the group members.
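
The sketch below shows the core value-iteration computation for a Markov reward process: each state's expected value is its immediate reward plus the discounted expected value of its successors. The three states, transition probabilities, rewards, and discount factor are made-up illustrations, not figures from the paper.

```python
import numpy as np

def value_iteration(transitions, rewards, gamma=0.9, tol=1e-8):
    """Iteratively solve V = R + gamma * P @ V for a Markov reward process.
    `transitions` is an (S, S) row-stochastic matrix; `rewards` has length S."""
    values = np.zeros(len(rewards))
    while True:
        updated = rewards + gamma * transitions @ values
        if np.max(np.abs(updated - values)) < tol:
            return updated
        values = updated

# Illustrative states: 0 = leader question, 1 = member contribution, 2 = negative sentiment.
P = np.array([[0.2, 0.7, 0.1],
              [0.3, 0.6, 0.1],
              [0.5, 0.2, 0.3]])
R = np.array([0.0, 1.0, -0.5])  # immediate reward associated with each state
print(value_iteration(P, R))
```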

2012

Using the Omega Index for Evaluating Abstractive Community Detection
Gabriel Murray | Giuseppe Carenini | Raymond Ng
Proceedings of Workshop on Evaluation Metrics and System Comparison for Automatic Summarization

2010

Domain Adaptation to Summarize Human Conversations
Oana Sandu | Giuseppe Carenini | Gabriel Murray | Raymond Ng
Proceedings of the 2010 Workshop on Domain Adaptation for Natural Language Processing

Generating and Validating Abstracts of Meeting Conversations: a User Study
Gabriel Murray | Giuseppe Carenini | Raymond Ng
Proceedings of the 6th International Natural Language Generation Conference

Interpretation and Transformation for Abstracting Conversations
Gabriel Murray | Giuseppe Carenini | Raymond Ng
Human Language Technologies: The 2010 Annual Conference of the North American Chapter of the Association for Computational Linguistics

Exploiting Conversation Structure in Unsupervised Topic Segmentation for Emails
Shafiq Joty | Giuseppe Carenini | Gabriel Murray | Raymond T. Ng
Proceedings of the 2010 Conference on Empirical Methods in Natural Language Processing

2009

Predicting Subjectivity in Multimodal Conversations
Gabriel Murray | Giuseppe Carenini
Proceedings of the 2009 Conference on Empirical Methods in Natural Language Processing

2008

Summarizing Spoken and Written Conversations
Gabriel Murray | Giuseppe Carenini
Proceedings of the 2008 Conference on Empirical Methods in Natural Language Processing

2007

Automatic Segmentation and Summarization of Meeting Speech
Gabriel Murray | Pei-Yun Hsueh | Simon Tucker | Jonathan Kilgour | Jean Carletta | Johanna D. Moore | Steve Renals
Proceedings of Human Language Technologies: The Annual Conference of the North American Chapter of the Association for Computational Linguistics (NAACL-HLT)

2006

Incorporating Speaker and Discourse Features into Speech Summarization
Gabriel Murray | Steve Renals | Jean Carletta | Johanna Moore
Proceedings of the Human Language Technology Conference of the NAACL, Main Conference

Dimensionality Reduction Aids Term Co-Occurrence Based Multi-Document Summarization
Ben Hachey | Gabriel Murray | David Reitter
Proceedings of the Workshop on Task-Focused Summarization and Question Answering

Prosodic Correlates of Rhetorical Relations
Gabriel Murray | Maite Taboada | Steve Renals
Proceedings of the Analyzing Conversations in Text and Speech

2005

Evaluating Automatic Summaries of Meeting Recordings
Gabriel Murray | Steve Renals | Jean Carletta | Johanna Moore
Proceedings of the ACL Workshop on Intrinsic and Extrinsic Evaluation Measures for Machine Translation and/or Summarization