13th Conference of the European Chapter
of the Association for Computational Linguistics


Avignon, France, April 23–27, 2012



Invited Speakers

 

Regina Barzilay

Massachusetts Institute of Technology, USA

Regina Barzilay is an Associate Professor in the Department of Electrical Engineering and Computer Science and a member of the Computer Science and Artificial Intelligence Laboratory. Her research interests are in natural language processing. Recent achievements of her group include the decipherment of the ancient Semitic language Ugaritic and the development of reinforcement learning algorithms for language grounding. She is a recipient of various awards, including the NSF CAREER Award, the Microsoft Faculty Fellowship, the MIT Technology Review TR-35 Award, and best paper awards at the top NLP conferences. She serves as an associate editor of the Journal of Artificial Intelligence Research.

Learning to Behave by Reading

Abstract

In this talk, I will address the problem of grounding linguistic analysis in control applications, such as game playing and robot navigation. We assume access to natural language documents that describe the desired behavior of a control algorithm (e.g., game strategy guides). Our goal is to demonstrate that knowledge automatically extracted from such documents can dramatically improve the performance of the target application. First, I will present a reinforcement learning algorithm for learning to map natural language instructions to executable actions. This technique has enabled automation of tasks that until now have required human participation: for example, automatically configuring software by consulting how-to guides. Next, I will present a Monte-Carlo search algorithm for game playing that incorporates information from game strategy guides. In this framework, the task of text interpretation is formulated as a probabilistic model that is trained based on feedback from Monte-Carlo search. When applied to the Civilization strategy game, a language-empowered player outperforms its traditional counterpart by a significant margin.
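
As a rough illustration of the first component (a hypothetical toy, not the system described in the talk), the sketch below trains a log-linear word-to-action policy with REINFORCE from a binary task-completion reward. The instruction texts, action names, and reward scheme are all invented for the example.

# Hypothetical toy illustration (not the speaker's system): a log-linear
# policy maps each instruction word to an executable action and is
# trained with REINFORCE from a binary task-completion reward.
import math
import random
from collections import defaultdict

ACTIONS = ["click_start", "click_run", "click_stop"]

# Toy "how-to guide" sentences paired with a hidden correct action
# sequence; the learner only observes success/failure, not the mapping.
DATA = [
    ("press start", ["click_start"]),
    ("press run", ["click_run"]),
    ("press start then press run", ["click_start", "click_run"]),
    ("press stop", ["click_stop"]),
]

weights = defaultdict(float)  # (word, action) -> weight

def policy(word):
    """Softmax distribution over actions given one instruction word."""
    scores = [math.exp(weights[(word, a)]) for a in ACTIONS]
    z = sum(scores)
    return [s / z for s in scores]

def sample(probs):
    """Draw one action from the softmax distribution."""
    r, acc = random.random(), 0.0
    for a, p in zip(ACTIONS, probs):
        acc += p
        if r < acc:
            return a
    return ACTIONS[-1]

LR = 0.5
for _ in range(500):
    text, gold = random.choice(DATA)
    content = [w for w in text.split() if w not in ("press", "then")]
    chosen = []
    for w in content:
        probs = policy(w)
        chosen.append((w, sample(probs), probs))
    # Binary reward: did the sampled action sequence complete the task?
    reward = 1.0 if [a for w, a, p in chosen] == gold else -0.1
    # REINFORCE: move sampled actions' log-probabilities with the reward.
    for w, a, probs in chosen:
        for a2, p in zip(ACTIONS, probs):
            grad = (1.0 if a2 == a else 0.0) - p
            weights[(w, a2)] += LR * reward * grad

for w in ("start", "run", "stop"):
    probs = policy(w)
    print(w, "->", ACTIONS[probs.index(max(probs))])

After training, the printout shows which executable action each content word has come to select, learned purely from success and failure signals.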

Martin Cooke

Ikerbasque (Basque Foundation for Science), Spain

Martin Cooke is Ikerbasque Professor at the Basque Foundation for Science and a member of the Language and Speech Laboratory at the University of the Basque Country in Vitoria, Spain. Prior to this, he was Professor of Computer Science at the University of Sheffield, UK. His recent work focuses on computer models of human speech perception and production in adverse conditions in first and second languages. He is the author of Modelling Auditory Processing and Organisation (Cambridge University Press) and Visual Representations of Speech Signals (John Wiley). Cooke received a BSc in Computer Science and Mathematics from the University of Manchester and a PhD in Computer Science from the University of Sheffield.

 

Speech Communication in the Wild

Abstract

Much of what we know about speech perception comes from laboratory studies with clean, canonical speech, ideal listeners and artificial tasks. But how do interlocutors manage to communicate effectively in the seemingly less-than-ideal conditions of everyday listening, which frequently involve trying to make sense of speech while listening in a non-native language, or in the presence of competing sound sources, or while multitasking? In this talk I'll examine the effect of real-world conditions on speech perception and quantify the contributions made by factors such as binaural hearing, visual information and prior knowledge to speech communication in noise. I'll present a computational model that trades off stimulus-related cues against information from learnt speech models, and examine how well it handles both energetic and informational masking in a two-sentence separation task. Speech communication also involves listening-while-talking. In the final part of the talk I'll describe some ways in which speakers might be making communication easier for their interlocutors, and demonstrate the application of these principles to improving the intelligibility of natural and synthetic speech in adverse conditions.
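
One stimulus-level notion from this line of work is the "glimpse": a spectro-temporal region in which the target speech dominates the masker. The sketch below computes a crude glimpse proportion for a synthetic target and a noise masker; the STFT parameters, 3 dB threshold, and test signals are assumptions for illustration, not the model presented in the talk.

# A minimal sketch (not Cooke's published implementation) of a
# "glimpse"-style analysis: time-frequency cells where the target
# speech locally exceeds the masker are counted as usable evidence.
import numpy as np

def spectrogram_power(x, n_fft=256, hop=128):
    """Magnitude-squared STFT via a Hann-windowed sliding FFT."""
    win = np.hanning(n_fft)
    frames = [x[i:i + n_fft] * win
              for i in range(0, len(x) - n_fft, hop)]
    spec = np.abs(np.fft.rfft(np.array(frames), axis=1)) ** 2
    return spec.T  # (freq_bins, time_frames)

def glimpse_proportion(speech, masker, threshold_db=3.0):
    """Fraction of T-F cells whose local SNR exceeds threshold_db."""
    s = spectrogram_power(speech)
    m = spectrogram_power(masker)
    local_snr_db = 10 * np.log10((s + 1e-12) / (m + 1e-12))
    return np.mean(local_snr_db > threshold_db)

rng = np.random.default_rng(0)
t = np.arange(16000) / 16000.0
# Crude amplitude-modulated tone standing in for speech, plus a
# stationary noise masker; both are stand-ins for real recordings.
speech = np.sin(2 * np.pi * 220 * t) * (1 + np.sin(2 * np.pi * 3 * t))
masker = rng.normal(scale=0.5, size=t.shape)
print(f"glimpse proportion: {glimpse_proportion(speech, masker):.2f}")

A fluctuating masker leaves more such glimpses than stationary noise at the same level, which is one way stimulus-related cues and learnt speech models can trade off: sparser glimpses force more reliance on prior knowledge.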

Raymond Mooney

University of Texas at Austin, USA

Raymond J. Mooney is a Professor in the Department of Computer Science at the University of Texas at Austin. He received his Ph.D. in 1988 from the University of Illinois at Urbana-Champaign. He is an author of over 150 published research papers, primarily in the areas of machine learning and natural language processing. He is the former President of the International Machine Learning Society, was program co-chair for the 2006 AAAI Conference on Artificial Intelligence, general chair of the 2005 Human Language Technology Conference and Conference on Empirical Methods in Natural Language Processing, and co-chair of the 1990 International Conference on Machine Learning. He is a Fellow of both the American Association for Artificial Intelligence and the Association for Computing Machinery, and the recipient of best paper awards from the National Conference on Artificial Intelligence, the SIGKDD International Conference on Knowledge Discovery and Data Mining, the International Conference on Machine Learning, and the Annual Meeting of the Association for Computational Linguistics. His recent research has focused on learning for natural-language processing, connecting language and perception, statistical relational learning, and transfer learning.

Learning Language from Perceptual Context

Abstract

Machine learning has become the dominant approach to building natural-language processing systems. However, current approaches generally require a great deal of laboriously constructed human-annotated training data. Ideally, a computer would be able to acquire language like a child by being exposed to linguistic input in the context of a relevant but ambiguous perceptual environment. As a step in this direction, we have developed systems that learn to sportscast simulated robot soccer games and to follow navigation instructions in virtual environments by simply observing sample human linguistic behavior in context. This work builds on our earlier work on supervised learning of semantic parsers that map natural language into a formal meaning representation. In order to apply such methods to learning from observation, we have developed methods that estimate the meaning of sentences given just their ambiguous perceptual context.
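
A minimal sketch of learning under such ambiguous supervision (illustrative only, not the sportscasting system itself): EM-style re-estimation of word-given-meaning probabilities when each sentence is paired with several candidate meanings from its perceptual context. The toy soccer-commentary sentences and meaning representations are invented for the example.

# Illustrative sketch (not the speaker's actual system): EM-style
# estimation of sentence meanings under ambiguous supervision, where
# each comment is paired with several candidate events from context.
from collections import defaultdict

# Each training item: (sentence words, set of candidate meanings),
# only one of which the sentence actually describes.
DATA = [
    (["purple7", "passes", "to", "purple4"], {"pass(p7,p4)", "kick(p7)"}),
    (["purple4", "shoots"], {"kick(p4)", "pass(p7,p4)"}),
    (["purple7", "kicks"], {"kick(p7)", "turnover(p7)"}),
    (["purple4", "passes", "to", "purple7"], {"pass(p4,p7)", "kick(p4)"}),
]

MEANINGS = sorted(set().union(*(c for _, c in DATA)))
vocab = sorted({w for s, _ in DATA for w in s})
# p(word | meaning), initialised uniformly over the vocabulary.
prob = {m: {w: 1.0 / len(vocab) for w in vocab} for m in MEANINGS}

def score(sentence, meaning):
    """Likelihood of a sentence under one candidate meaning."""
    p = 1.0
    for w in sentence:
        p *= prob[meaning][w]
    return p

for _ in range(20):
    counts = defaultdict(lambda: defaultdict(float))
    for sentence, candidates in DATA:
        # E-step: weight each candidate meaning by how well it
        # explains the sentence under the current model.
        scores = {m: score(sentence, m) for m in candidates}
        z = sum(scores.values()) or 1.0
        for m, s in scores.items():
            for w in sentence:
                counts[m][w] += s / z
    # M-step: re-estimate p(word | meaning) from expected counts,
    # with a small additive smoothing term.
    for m in MEANINGS:
        total = sum(counts[m].values())
        if total > 0:
            for w in vocab:
                prob[m][w] = (counts[m][w] + 0.01) / (total + 0.01 * len(vocab))

for sentence, candidates in DATA:
    best = max(candidates, key=lambda m: score(sentence, m))
    print(" ".join(sentence), "->", best)

The final loop prints the highest-scoring candidate meaning for each sentence; with enough shared structure across examples, the co-occurrence statistics resolve much of the ambiguity without any annotated sentence-meaning pairs.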