2017Q3 Reports: Tutorial Chairs


Tutorial Chairs

Maja Popović, Humboldt-Universität zu Berlin
Jordan Boyd-Graber, University of Colorado, Boulder

In total, we received 26 submissions for the joint ACL/EACL/EMNLP call, and we accepted 19 of them. Nine tutorials had ACL as the preferred conference: one was rejected, two were redirected to EMNLP, and the rest were accepted.

The following six tutorials were accepted for ACL 2017:

1) Multimodal Machine Learning - Louis-Philippe Morency and Tadas Baltrusaitis

Multimodal machine learning is a vibrant multi-disciplinary research field that addresses some of the original goals of artificial intelligence by integrating and modeling multiple communicative modalities, including linguistic, acoustic, and visual messages. From the initial research on audio-visual speech recognition to more recent image and video captioning projects, this field presents unique challenges for researchers, given the heterogeneity of the data and the contingency often found between modalities. The tutorial will review fundamental concepts of machine learning and deep neural networks before describing the five main challenges in multimodal machine learning.

2) Deep Learning for Dialogue Systems - Yun-Nung Chen, Asli Celikyilmaz, and Dilek Hakkani-Tur

Traditional conversational systems have rather complex and/or modular pipelines. Recent advances in deep learning have given rise to neural models for dialogue modeling. Nevertheless, applying deep learning to build robust and scalable dialogue systems remains a challenging task and an open research area, as it requires a deeper understanding of the classic pipelines as well as detailed knowledge of benchmark models from prior work and the recent state of the art. This tutorial therefore provides an overview of dialogue system development, describes the most recent research on building dialogue systems, and summarizes the remaining challenges.

3) Deep Learning for Semantic Composition - Xiaodan Zhu and Edward Grefenstette

Learning representations that model the meaning of text has been a core problem in NLP. The last several years have seen extensive interest in distributional approaches, in which text spans of different granularities are encoded as vectors of numerical values. Properly learned, such representations have been shown to achieve state-of-the-art performance on a wide range of NLP problems. This tutorial covers the fundamentals and the state of the art in neural network-based modeling for semantic composition, which aims to learn distributed representations for different granularities of text (e.g., phrases, sentences, or even documents) from the meaning representations of their sub-components (e.g., word embeddings).

4) Beyond Words: Deep Learning for Multi-word Expressions and Collocations - Valia Kordoni

This tutorial goes beyond the learning of word vectors to present methods for learning vector representations of Multiword Expressions (MWEs) and bilingual phrase pairs, which are useful for a variety of NLP applications. It aims to give attendees a clear notion of the linguistic and distributional characteristics of MWEs, their relevance to the intersection of deep learning and natural language processing, the methods and resources available to support their use, and what more could be done in the future.

5) Natural Language Processing for Precision Medicine - Hoifung Poon, Chris Quirk, Kristina Toutanova, and Wen-tau Yih

The tutorial will introduce precision medicine and showcase the vast opportunities for NLP in this burgeoning field with great societal impact. It will review pressing NLP problems, state-of-the-art methods, and important applications, as well as datasets, medical resources, and practical issues. The tutorial provides an accessible overview of biomedicine and does not presume knowledge of biology or healthcare; the ultimate goal is to lower the entry barrier for NLP researchers to contribute to this exciting domain.

6) Making Better Use of the Crowd - Jennifer Wortman Vaughan

Over the last decade, crowdsourcing has been used to harness the power of human computation for tasks that are notoriously difficult for computers alone, such as determining whether an image contains a tree, rating the relevance of a website, or verifying the phone number of a business. This tutorial will show innovative uses of crowdsourcing that go beyond data collection and annotation: applications to natural language processing and machine learning, hybrid-intelligence or "human-in-the-loop" AI systems that leverage the complementary strengths of humans and machines to achieve more than either could alone, and large-scale studies of human behavior online.