Matthew Marge


2022

DOROTHIE: Spoken Dialogue for Handling Unexpected Situations in Interactive Autonomous Driving Agents
Ziqiao Ma | Benjamin VanDerPloeg | Cristian-Paul Bara | Yidong Huang | Eui-In Kim | Felix Gervits | Matthew Marge | Joyce Chai
Findings of the Association for Computational Linguistics: EMNLP 2022

In the real world, autonomous driving agents navigate highly dynamic environments full of unexpected situations in which pre-trained models are unreliable. In these situations, often the only resource immediately available to a vehicle is a human operator. Empowering autonomous driving agents to navigate a continuous, dynamic environment and to communicate with humans through sensorimotor-grounded dialogue therefore becomes critical. To this end, we introduce Dialogue On the ROad To Handle Irregular Events (DOROTHIE), a novel interactive simulation platform that enables the creation of unexpected situations on the fly to support empirical studies of situated communication with autonomous driving agents. Based on this platform, we created Situated Dialogue Navigation (SDN), a navigation benchmark of 183 trials with a total of 8,415 utterances, around 18.7 hours of control streams, and 2.9 hours of trimmed audio. SDN is designed to evaluate an agent's ability to predict dialogue moves from humans as well as to generate its own dialogue moves and physical navigation actions. We further developed a transformer-based baseline model for these SDN tasks. Our empirical results indicate that language-guided navigation in a highly dynamic environment is an extremely difficult task for end-to-end models. These results provide insight for future work on robust autonomous driving agents.
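
To make the dialogue-move prediction task concrete, here is a minimal sketch of a transformer-based classifier over a tokenized dialogue history, in the spirit of (but not taken from) the SDN baseline; the move inventory, model sizes, and all names below are illustrative assumptions.

```python
# Minimal sketch of a transformer-based dialogue-move classifier,
# loosely in the spirit of the SDN baseline described above.
# The label set, dimensions, and all names are illustrative, not from the paper.
import torch
import torch.nn as nn

DIALOGUE_MOVES = ["instruct", "clarify", "confirm", "inform"]  # hypothetical inventory

class DialogueMovePredictor(nn.Module):
    def __init__(self, vocab_size=10000, d_model=256, n_heads=4, n_layers=2):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model)
        layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, n_layers)
        self.head = nn.Linear(d_model, len(DIALOGUE_MOVES))

    def forward(self, token_ids):
        # token_ids: (batch, seq_len) tokenized dialogue history
        h = self.encoder(self.embed(token_ids))
        return self.head(h.mean(dim=1))  # pool over time, score each move

model = DialogueMovePredictor()
logits = model(torch.randint(0, 10000, (1, 32)))  # stand-in dialogue history
print(DIALOGUE_MOVES[logits.argmax(dim=-1).item()])
```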

A System For Robot Concept Learning Through Situated Dialogue
Benjamin Kane | Felix Gervits | Matthias Scheutz | Matthew Marge
Proceedings of the 23rd Annual Meeting of the Special Interest Group on Discourse and Dialogue

Robots operating in unexplored environments with human teammates will need to learn unknown concepts on the fly. To this end, we demonstrate a novel system that combines a computational model of question generation with a cognitive robotic architecture. The model supports dynamic production of back-and-forth dialogue for concept learning given observations of an environment, while the architecture supports symbolic reasoning, action representation, one-shot learning, and other capabilities for situated interaction. The system is able to learn about new concepts, including objects, locations, and actions, using an underlying approach that is generalizable and scalable. We evaluate the system by comparing its learning efficiency to a human baseline in a collaborative reference resolution task, and show that the system is effective and efficient in learning new concepts and that it can informatively generate explanations about its behavior.
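
As a toy illustration of question generation for unknown concepts, far simpler than the cognitive architecture the paper describes, the sketch below shows how an agent might ground a clarification question in an observed attribute; the concept set and templates are entirely hypothetical.

```python
# Illustrative sketch only: a toy question generator for unknown concepts,
# far simpler than the system described above. All names are hypothetical.

KNOWN_CONCEPTS = {"table", "mug", "hallway"}

def generate_question(mention: str, observed_properties: dict) -> str:
    """Produce a clarification question about an unfamiliar concept."""
    if mention in KNOWN_CONCEPTS:
        return f"OK, I know what a {mention} is."
    if "color" in observed_properties:
        # Ground the question in a perceived attribute when one is available.
        return (f"Is the {observed_properties['color']} object "
                f"I see the {mention}?")
    return f"I don't know the word '{mention}'. Can you describe it?"

print(generate_question("spanner", {"color": "red"}))
# -> Is the red object I see the spanner?
```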

2021

How Should Agents Ask Questions For Situated Learning? An Annotated Dialogue Corpus
Felix Gervits | Antonio Roque | Gordon Briggs | Matthias Scheutz | Matthew Marge
Proceedings of the 22nd Annual Meeting of the Special Interest Group on Discourse and Dialogue

Intelligent agents that are confronted with novel concepts in situated environments will need to ask their human teammates questions to learn about the physical world. To better understand this problem, we need data about asking questions in situated task-based interactions. To this end, we present the Human-Robot Dialogue Learning (HuRDL) Corpus, a novel dialogue corpus collected in an online interactive virtual environment in which human participants play the role of a robot performing a collaborative tool-organization task. We describe the corpus data and a corresponding annotation scheme to offer insight into the form and content of the questions humans ask to facilitate learning in a situated environment. We provide the corpus as an empirically grounded resource for improving question generation in situated intelligent agents.

2020

Dialogue-AMR: Abstract Meaning Representation for Dialogue
Claire Bonial | Lucia Donatelli | Mitchell Abrams | Stephanie M. Lukin | Stephen Tratz | Matthew Marge | Ron Artstein | David Traum | Clare Voss
Proceedings of the Twelfth Language Resources and Evaluation Conference

This paper describes a schema that enriches Abstract Meaning Representation (AMR) in order to provide a semantic representation for facilitating Natural Language Understanding (NLU) in dialogue systems. AMR offers a valuable level of abstraction of the propositional content of an utterance; however, it does not capture the illocutionary force or the speaker's intended contribution in the broader dialogue context (e.g., making a request or asking a question), nor does it capture tense or aspect. We explore dialogue in the domain of human-robot interaction, where a conversational robot is engaged in search and navigation tasks with a human partner. To address the limitations of standard AMR, we develop an inventory of speech acts suitable for our domain, and present "Dialogue-AMR", an enhanced AMR that represents not only the content of an utterance, but also the illocutionary force behind it, as well as tense and aspect. To showcase the coverage of the schema, we use both manual and automatic methods to construct "DialAMR", a corpus of human-robot dialogue annotated with both standard AMR and our enriched Dialogue-AMR schema. Our automated methods can be used to incorporate AMR into a larger NLU pipeline supporting human-robot dialogue.
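
To illustrate the kind of enrichment described, the sketch below contrasts a standard AMR for a command with a Dialogue-AMR-style graph that adds a speech-act frame plus tense and aspect. It uses the `penman` library for reading and writing AMR graphs; the frame and role names (`command-SA`, `:time`, `:aspect`) are illustrative approximations, not necessarily the paper's exact schema.

```python
# Sketch contrasting standard AMR with an enriched, Dialogue-AMR-style graph.
# Uses the `penman` library (pip install penman); the speech-act frame and
# role names below are illustrative, not the paper's exact inventory.
import penman

# Standard AMR for "Move to the door": propositional content only.
standard = penman.decode("""
(m / move-01
   :ARG1 (r / robot)
   :destination (d / door))
""")

# Dialogue-AMR-style graph: the same content wrapped in a speech act
# (a command), with tense and aspect made explicit.
dialogue = penman.decode("""
(c / command-SA
   :ARG2 (m / move-01
            :ARG1 (r / robot)
            :destination (d / door)
            :time (f / future)
            :aspect (p / performable)))
""")

print(penman.encode(dialogue))
```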

2019

A Research Platform for Multi-Robot Dialogue with Humans
Matthew Marge | Stephen Nogar | Cory J. Hayes | Stephanie M. Lukin | Jesse Bloecker | Eric Holder | Clare Voss
Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics (Demonstrations)

This paper presents a research platform that supports spoken dialogue interaction with multiple robots. The demonstration showcases our crafted MultiBot testing scenario, in which users can verbally issue search, navigate, and follow instructions to two robotic teammates: a simulated ground robot and an aerial robot. This flexible language and robotics platform takes advantage of existing tools for speech recognition and dialogue management that are compatible with new domains, and implements an inter-agent communication protocol (tactical behavior specification) whereby verbal instructions are encoded as tasks and assigned to the appropriate robot.
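
The abstract does not give the tactical behavior specification format, so the following is only a hypothetical sketch of the underlying idea: encoding a parsed verbal instruction as a task message and routing it to whichever robot supports the requested behavior. Every field name and capability listing here is invented for illustration.

```python
# Hypothetical sketch of routing a parsed instruction to the right robot.
# The actual tactical behavior specification is not described in the
# abstract; all fields and capability listings here are illustrative.
from dataclasses import dataclass

@dataclass
class TaskMessage:
    robot_id: str      # which teammate should act
    behavior: str      # e.g., "search", "navigate", "follow"
    target: str        # argument extracted from the instruction

CAPABILITIES = {
    "ground_robot": {"navigate", "follow"},
    "aerial_robot": {"search"},
}

def route_instruction(behavior: str, target: str) -> TaskMessage:
    """Assign a behavior to the first robot that supports it."""
    for robot, skills in CAPABILITIES.items():
        if behavior in skills:
            return TaskMessage(robot, behavior, target)
    raise ValueError(f"No robot can perform '{behavior}'")

print(route_instruction("search", "the courtyard"))
# -> TaskMessage(robot_id='aerial_robot', behavior='search', target='the courtyard')
```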

B. Rex: a dialogue agent for book recommendations
Mitchell Abrams | Luke Gessler | Matthew Marge
Proceedings of the 20th Annual SIGdial Meeting on Discourse and Dialogue

We present B. Rex, a dialogue agent for book recommendations. B. Rex aims to exploit the cognitive ease of natural dialogue and the excitement of a whimsical persona in order to engage users who might not enjoy using more common interfaces for finding new books. B. Rex succeeds in making good-quality book recommendations based only on the information the user reveals in the dialogue.

2018

Consequences and Factors of Stylistic Differences in Human-Robot Dialogue
Stephanie Lukin | Kimberly Pollard | Claire Bonial | Matthew Marge | Cassidy Henry | Ron Artstein | David Traum | Clare Voss
Proceedings of the 19th Annual SIGdial Meeting on Discourse and Dialogue

This paper identifies stylistic differences in instruction-giving observed in a corpus of human-robot dialogue. Differences in verbosity and structure (i.e., single-intent vs. multi-intent instructions) arose naturally, without restrictions or prior guidance on how users should speak with the robot. Different styles were found to produce different rates of miscommunication, and correlations were found between style differences and individual user variation, trust, and interaction experience with the robot. Understanding the potential consequences of style and the factors that influence it can inform the design of dialogue systems that are robust to natural variation from human users.

Dialogue Structure Annotation for Multi-Floor Interaction
David Traum | Cassidy Henry | Stephanie Lukin | Ron Artstein | Felix Gervits | Kimberly Pollard | Claire Bonial | Su Lei | Clare Voss | Matthew Marge | Cory Hayes | Susan Hill
Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC 2018)

ScoutBot: A Dialogue System for Collaborative Navigation
Stephanie M. Lukin | Felix Gervits | Cory J. Hayes | Pooja Moolchandani | Anton Leuski | John G. Rogers III | Carlos Sanchez Amaro | Matthew Marge | Clare R. Voss | David Traum
Proceedings of ACL 2018, System Demonstrations

ScoutBot is a dialogue interface to physical and simulated robots that supports collaborative exploration of environments. The demonstration will allow users to issue unconstrained spoken language commands to ScoutBot. ScoutBot will prompt for clarification if the user’s instruction needs additional input. It is trained on human-robot dialogue collected from Wizard-of-Oz experiments, where robot responses were initiated by a human wizard in previous interactions. The demonstration will show a simulated ground robot (Clearpath Jackal) in a simulated environment supported by ROS (Robot Operating System).
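
As a rough sketch of the last link in such a pipeline, the snippet below shows how a confirmed navigation command might be relayed to a ROS-controlled robot like the simulated Clearpath Jackal. This is not ScoutBot's actual code; the `/cmd_vel` topic is the conventional ROS velocity topic, assumed here rather than taken from the paper.

```python
# Illustrative only: relaying a confirmed command to a ROS robot.
# Assumes a standard ROS 1 setup with rospy; /cmd_vel is the
# conventional velocity topic, not a detail given in the abstract.
import rospy
from geometry_msgs.msg import Twist

rospy.init_node("dialogue_command_relay")
pub = rospy.Publisher("/cmd_vel", Twist, queue_size=10)

def drive_forward(speed: float = 0.5, duration: float = 2.0) -> None:
    """Publish forward-velocity commands for `duration` seconds."""
    msg = Twist()
    msg.linear.x = speed
    rate = rospy.Rate(10)  # 10 Hz command stream
    end = rospy.Time.now() + rospy.Duration(duration)
    while rospy.Time.now() < end and not rospy.is_shutdown():
        pub.publish(msg)
        rate.sleep()
    pub.publish(Twist())  # zero velocity to stop the robot

if __name__ == "__main__":
    drive_forward()
```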

2017

Exploring Variation of Natural Human Commands to a Robot in a Collaborative Navigation Task
Matthew Marge | Claire Bonial | Ashley Foots | Cory Hayes | Cassidy Henry | Kimberly Pollard | Ron Artstein | Clare Voss | David Traum
Proceedings of the First Workshop on Language Grounding for Robotics

Robot-directed communication is variable, and may change based on human perception of robot capabilities. To collect training data for a dialogue system and to investigate possible communication changes over time, we developed a Wizard-of-Oz study that (a) simulates a robot's limited understanding, and (b) collects dialogues in which human participants build a progressively better mental model of the robot's understanding. With ten participants, we collected ten hours of human-robot dialogue. We analyzed the structure of the instructions that participants gave to a remote robot before it responded. Our findings show a general initial preference for including metric information (e.g., move forward 3 feet) over landmarks (e.g., move to the desk) in motion commands, but this preference decreased over time, suggesting changes in users' perception of the robot's capabilities.

2015

Miscommunication Recovery in Physically Situated Dialogue
Matthew Marge | Alexander Rudnicky
Proceedings of the 16th Annual Meeting of the Special Interest Group on Discourse and Dialogue

2010

Using the Amazon Mechanical Turk to Transcribe and Annotate Meeting Speech for Extractive Summarization
Matthew Marge | Satanjeev Banerjee | Alexander Rudnicky
Proceedings of the NAACL HLT 2010 Workshop on Creating Speech and Language Data with Amazon’s Mechanical Turk

Towards Improving the Naturalness of Social Conversations with Dialogue Systems
Matthew Marge | João Miranda | Alan Black | Alexander Rudnicky
Proceedings of the SIGDIAL 2010 Conference

Comparing Spoken Language Route Instructions for Robots across Environment Representations
Matthew Marge | Alexander Rudnicky
Proceedings of the SIGDIAL 2010 Conference

2008

Creation of a New Domain and Evaluation of Comparison Generation in a Natural Language Generation System
Matthew Marge | Amy Isard | Johanna Moore
Proceedings of the Fifth International Natural Language Generation Conference