Difference between revisions of "2020Q1 Reports: ACL 2020"


Revision as of 08:36, 25 February 2020

General Chair

Dan Jurafsky, Stanford University

The 58th annual meeting of the Association for Computational Linguistics (ACL) will take place in Seattle, Washington at the Hyatt Regency Seattle in downtown Seattle from July 5th through July 10th, 2020.

We have a great set of chairs! We are continuing 2019's new roles (Diversity and Inclusion Chairs, Remote Presentation Chairs, AV Chairs) and adding a new one (Sustainability Chair), and we are doing well in demographic representation among our chairs (gender and region).

Following advice from last year, we have been using Slack for most intra-committee communication (and we put the Slack channel into the ACL pro space, so it can be preserved for future years), and using email only when absolutely necessary.

As usual, the growing size of the conference (both in papers and attendees) is a challenge, but in both paper handling and venue space we have been doing well (see the individual chair summaries below).

On Mar 11, we will have a site visit at the hotel in Seattle which, besides Priscilla, will include the General Chair and representatives from the Program Chairs, the D&I chairs, and the AV chairs. We will also use that occasion to have a committee meeting including those folks plus the relatively large number of ACL2020 organizing committee members who are local to Seattle.


[some highlights from the below chair summaries to be added here]

Program Chairs

Joyce Chai, University of Michigan

Natalie Schluter, IT University of Copenhagen, Denmark

Joel Tetreault, Dataminr, USA

---

New Initiatives This Year

Earlier Submission Deadline and Notification

To accommodate a more realistic workflow, given (1) the rapid growth in the number of submissions to ACL conferences and (2) the need to avoid the Dec. 15-Jan. 15 period for authors while giving us more time to implement and test new infrastructure, we moved the submission deadline earlier, to December 9. Previous PCs advised us to do this to set a precedent for future PCs in accommodating a more realistic timeline. The timeline is still packed, but workable. We also plan to send notifications out earlier than usual, to provide an extra 1-2 weeks for visa applicants as an inclusion measure.

Four New Tracks

ACL2020 introduced four new tracks: (1) Ethics and NLP. Ethical issues have become increasingly important as more advanced tools become available for NLP research and development. We dedicated a new track to this topic and explicitly invited contributions that study ethical issues and impact in NLP research and applications. (2) Interpretability and Analysis of Models for NLP. As the community strives to push performance boundaries, understanding the behavior of state-of-the-art models becomes critical. (3) Theory and Formalism. This track is designed to encourage submissions on the theoretical underpinnings of NLP models, which have had little presence at past ACL conferences. (4) Theme: Taking Stock of Where We’ve Been and Where We’re Going. The last few years have witnessed unprecedented growth in NLP since the field began over sixty years ago. This track invites submissions that can help the community assess how much we have accomplished relative to the past and where the field should be heading. Because the theme track is different from the other tracks, we made some modifications to the review form to reflect that.

Extended Automatic COI Detection/Automatic Reviewer-Paper Assignment

We carried out offline COI detection and automatic paper assignment for the first time for an *ACL conference. The code consisted of ACL2020-customised implementations of Amanda Stent’s COI detection software and Graham Neubig’s automatic reviewer-paper assignment software.
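
The report does not include the assignment code itself, and the actual tools by Amanda Stent and Graham Neubig are considerably more sophisticated. As a rough sketch of how similarity-based reviewer-paper assignment works, the hypothetical Python below scores reviewers against a submission via bag-of-words cosine similarity over their past-paper text, then greedily assigns the best non-COI reviewers subject to a load cap. All names, thresholds, and the similarity measure are illustrative assumptions.

```python
from collections import Counter
import math

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two bags of words."""
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def assign(papers, reviewers, cois, per_paper=3, max_load=6):
    """Greedy assignment: for each paper, pick the most similar
    reviewers who are not in COI and still have capacity.

    papers:    {paper_id: abstract text}
    reviewers: {reviewer_id: text of the reviewer's past work}
    cois:      set of (paper_id, reviewer_id) pairs to exclude
    """
    load = {r: 0 for r in reviewers}
    assignment = {}
    for pid, ptext in papers.items():
        pbag = Counter(ptext.lower().split())
        ranked = sorted(
            reviewers,
            key=lambda r: cosine(pbag, Counter(reviewers[r].lower().split())),
            reverse=True,
        )
        chosen = []
        for r in ranked:
            if (pid, r) in cois or load[r] >= max_load:
                continue
            chosen.append(r)
            load[r] += 1
            if len(chosen) == per_paper:
                break
        assignment[pid] = chosen
    return assignment
```

In practice a global optimization (rather than per-paper greedy choice) is needed to balance loads across thousands of papers, which is part of what the production software handles.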

Mandatory Reviewer Duty and Recruitment

To meet the reviewer demands of a growing conference, we made reviewer volunteering mandatory for submission authors. This resulted in a record number of candidate reviewers (over 11K). We note that these volunteers were candidates, and only a subset of them were actually given reviewing assignments. Using a Microsoft reviewer/author form, we collected a variety of information on potential reviewers (ACL Anthology page, website, self-declared reviewer experience, 1st & 2nd track preferences, etc.) in order to (1) provide information sheets on reviewers to SACs and ACs as a tool when manually correcting the automatic reviewer-paper assignments, (2) manually balance the reviewer pools among tracks, and (3) filter the list of reviewers based on whether the reviewer (i) had seniority of PhD student or higher, (ii) had reviewed for at least 4 previous *ACL conferences, and (iii) had a minimum number of ACL Anthology publications. To counterbalance (3ii), we provided SACs with a list of novice reviewers and introduced a Reviewer Mentoring Program (see below).
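
The filtering in (3) amounts to partitioning the volunteer pool into experienced reviewers and novices (the latter feeding the mentoring program below). The sketch below is a hypothetical illustration, not the actual tooling: the seniority labels and `min_papers` threshold are assumptions, since the report only says "a minimum number of ACL Anthology publications"; `min_reviews=4` comes from criterion (ii).

```python
from dataclasses import dataclass

@dataclass
class Candidate:
    name: str
    seniority: str            # e.g. "phd_student", "postdoc", "faculty" (assumed labels)
    prior_acl_reviews: int    # number of past *ACL conferences reviewed for
    anthology_papers: int     # publications indexed in the ACL Anthology

# Criterion (i): PhD student or higher counts as senior enough.
SENIOR_ENOUGH = {"phd_student", "postdoc", "faculty"}

def split_pool(candidates, min_reviews=4, min_papers=2):
    """Partition volunteers into an experienced reviewer pool and a
    novice list; min_papers is an illustrative threshold."""
    experienced, novices = [], []
    for c in candidates:
        if (c.seniority in SENIOR_ENOUGH
                and c.prior_acl_reviews >= min_reviews
                and c.anthology_papers >= min_papers):
            experienced.append(c)
        else:
            novices.append(c)
    return experienced, novices
```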

New Reviewer Mentoring Program

Given the rapid growth of NLP in terms of the number of papers and new students, it is very important for our community to mentor and train new reviewers. ACL2020 launched a pilot program that calls for each AC to mentor at least one novice reviewer. Ultimately, the goal is to provide long-needed mentoring to new reviewers. At the very least, this process will inform ACL on how to construct a reviewer mentoring program that is more scalable in the future. For most tracks, each AC was paired with at least one mentee (often a Ph.D. student, or a junior researcher who has just graduated). The AC works with the mentee, providing feedback and helping the mentee improve the quality of their reviews. Close to 300 junior researchers were selected to participate in this program. We will put together a detailed report on this program after the conference.

Updated Review Form with New Rating Scale and Evaluation Item

We have separate review forms for the regular tracks and the theme track. Our review forms were built upon the forms from EMNLP-IJCNLP2019 and ACL2019 with two new extensions. (1) We removed the rating 3 (ambivalent) from the overall recommendation, as we would like reviewers to take a stand on whether a paper is above the borderline (3.5) or below it (2.5). The reason for this change is that ambivalent cases often take a long time to discuss; by taking a stand, reviewers provide more informative feedback for ACs/SACs to make a recommendation. ICLR 2020 adopted a similar rating strategy (although with a different scale). (2) As ethical concerns and societal impacts are an important consideration for NLP research, we explicitly ask reviewers to evaluate the ethical implications of each submission. On the review form, we ask reviewers whether there are any ethical concerns about a submission that the area chairs and program chairs should be aware of. We also encourage reviewers to flag such concerns to the authors.


Other Efforts

Initial submission reviews and desk rejects

We received a record number of 3,429 submissions (approximately a 15% increase over ACL2019). All papers were carefully inspected for violations of ACL policies (ranging from formatting to anonymization to use of supplementary material). As at ACL2019, we used assistants to speed up an otherwise long process; all issues identified by assistants were cross-examined by two PCs. We noticed that many papers did not strictly follow the ACL style sheet, so we were lenient about margins, line numbers, fonts, and other formatting issues. In the end, 29 submissions were desk rejected for violating ACL policies on anonymity, page length, double-blind review, etc.

Manual adjustment of submission tracks

Many papers were not submitted to the track where they could receive reviews from the most relevant reviewers. SACs were instructed to flag papers that should be moved to a different track. We went through every single suggestion and moved papers where warranted. This turned out to be a major effort: in total, 500-600 papers were moved across tracks.

Manual adjustment of AC and reviewer assignment

As the automatic reviewer assignment is not perfect, SACs did much manual work adjusting AC assignments as well as reviewer assignments. This effort varied among tracks. Given the current setup in SoftConf, the ACs' role is quite limited: ACs are essentially meta-reviewers who do not have access to reviewer accounts and therefore cannot add reviewers, make reviewer assignments, or contact reviewers directly. We have given this feedback to SoftConf, and hopefully the system will be updated to support extended AC roles for future conferences.

Communication

Because of the several new initiatives implemented this year, we made extensive efforts to communicate these changes to SACs, ACs, reviewers, and authors. Besides direct emails, we used blog posts and Twitter as additional communication channels, assisted by the publicity chair and the web chairs.


Submission Status

We received 3,429 submissions (2,244 long and 1,185 short). Here is the distribution of papers per track (long, short, total):

  • Cognitive Modeling and Psycholinguistics: 49 39 88
  • Computational Social Science and Social Media: 73 38 111
  • Dialogue and Interactive Systems: 204 71 275
  • Discourse and Pragmatics: 36 20 56
  • Ethics and NLP: 30 22 52
  • Generation: 142 71 213
  • Information Extraction: 159 83 242
  • Information Retrieval and Text Mining: 55 41 96
  • Interpretability and Analysis of Models for NLP: 110 54 164
  • Language Grounding to Vision, Robotics and Beyond: 69 24 93
  • Machine Learning for NLP: 186 109 295
  • Machine Translation: 158 104 262
  • NLP Applications: 169 99 268
  • Phonology, Morphology and Word Segmentation: 38 15 53
  • Question Answering: 109 63 172
  • Resources and Evaluation: 88 48 136
  • Semantics: Lexical: 57 37 94
  • Semantics: Sentence Level: 66 29 95
  • Semantics: Textual Inference and Other Areas of Semantics: 81 31 112
  • Sentiment Analysis, Stylistic Analysis, and Argument Mining: 112 66 178
  • Speech and Multimodality: 38 27 65
  • Summarization: 90 37 127
  • Syntax: Tagging, Chunking and Parsing: 47 28 75
  • Theme: 67 26 93
  • Theory and Formalism in NLP (Linguistic and Mathematical): 11 3 14
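
The per-track figures above can be cross-checked against the stated totals. A minimal Python sanity check, with the (long, short) counts copied verbatim from the list:

```python
# (long, short) submission counts per track, copied from the table above.
counts = {
    "Cognitive Modeling and Psycholinguistics": (49, 39),
    "Computational Social Science and Social Media": (73, 38),
    "Dialogue and Interactive Systems": (204, 71),
    "Discourse and Pragmatics": (36, 20),
    "Ethics and NLP": (30, 22),
    "Generation": (142, 71),
    "Information Extraction": (159, 83),
    "Information Retrieval and Text Mining": (55, 41),
    "Interpretability and Analysis of Models for NLP": (110, 54),
    "Language Grounding to Vision, Robotics and Beyond": (69, 24),
    "Machine Learning for NLP": (186, 109),
    "Machine Translation": (158, 104),
    "NLP Applications": (169, 99),
    "Phonology, Morphology and Word Segmentation": (38, 15),
    "Question Answering": (109, 63),
    "Resources and Evaluation": (88, 48),
    "Semantics: Lexical": (57, 37),
    "Semantics: Sentence Level": (66, 29),
    "Semantics: Textual Inference and Other Areas of Semantics": (81, 31),
    "Sentiment Analysis, Stylistic Analysis, and Argument Mining": (112, 66),
    "Speech and Multimodality": (38, 27),
    "Summarization": (90, 37),
    "Syntax: Tagging, Chunking and Parsing": (47, 28),
    "Theme": (67, 26),
    "Theory and Formalism in NLP (Linguistic and Mathematical)": (11, 3),
}

long_total = sum(l for l, _ in counts.values())
short_total = sum(s for _, s in counts.values())
# The columns sum to exactly the stated 2,244 long, 1,185 short, 3,429 total.
```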


Summary of Timelines

  • Oct 15 - Nov 30: SACs invite ACs and reviewers
  • Nov 25: Reviewer profiles completed
  • Dec 09: ACL Paper Submission Deadline (long and short papers)
  • Dec 10 - Jan 14: initial submission reviews and desk rejects; automatic reviewer assignment and COI detection; manual adjustment of assignment;
  • Jan 17 - Feb 07: Review Period
  • Feb 08 - Feb 11: ACs chase late reviews
  • Feb 12 - Feb 17: Author Response
  • Feb 18 - Feb 25: Reviewer Discussion Period (ACs lead discussion), ACs provide feedback to mentees.
  • Feb 25 - Mar 03: ACs produce meta-reviews
  • Mar 03 - Mar 10: SACs rank papers based on meta-reviews and make recommendations to PC chairs
  • Mar 11 - Apr 02: PC chairs make decisions (they may consult SACs during this time); SACs and ACs recommend best reviewers
  • Apr 03 - Accept / Reject Notifications
  • Apr 24: Camera ready


List of SAC/ACs and recruitment

Following ACL2019, we have adopted a hierarchical structure where each area is chaired by one or two senior ACs, who are supported by a group of area chairs. We have a total of 40 Senior Area Chairs and 299 Area Chairs. Recruitment: We individually created preference lists for SACs, discussed these and made decisions. ACs were selected by SACs.

Cognitive Modeling and Psycholinguistics

  • SACs: Emily Prud’hommeaux
  • ACs: Cassandra L. Jacobs, Cecilia Ovesdotter Alm, Christos Christodoulopoulos, Masoud Rouhizadeh, Serguei Pakhomov, Yevgeni Berzak

Computational Social Science and Social Media

  • SACs: Tim Baldwin, Nikolaos Aletras
  • ACs: A. Seza Doğruöz, Afshin Rahimi, Alice Oh, Brendan O'Connor, Daniel Preotiuc-Pietro, David Bamman, David Jurgens, David Mimno, Diana Inkpen, Diyi Yang, Eiji Aramaki, Jacob Eisenstein, Jonathan K. Kummerfeld, Kalina Bontcheva

Dialogue and Interactive Systems

  • SACs: Jason Williams, Mari Ostendorf
  • ACs: Alborz Geramifard, Amanda Stent, Asli Celikyilmaz, Casey Kennington, David Traum, Dilek Hakkani-Tur, Gabriel Skantze, Helen Hastie, Heriberto Cuayahuitl, Kai Yu, Kallirroi Georgila, Luciana Benotti, Luis Fernando D'Haro, Nina Dethlefs, Ryuichiro Higashinaka, Stefan Ultes, Sungjin Lee, Tsung-Hsien Wen, Y-Lan Boureau, Yun-Nung Chen, Zhou Yu

Discourse and Pragmatics

  • SACs: Annie Louis (taking over for Diane Litman)
  • ACs: Chloé Braud, Junyi Jessy Li, Manfred Stede, Shafiq Joty, Sujian Li, Yangfeng Ji

Ethics and NLP

  • SACs: Dirk Hovy
  • ACs: Alan W Black, Emily M. Bender, Vinodkumar Prabhakaran, Yulia Tsvetkov

Generation

  • SACs: Wei Xu, Alexander Rush
  • ACs: John Wieting, Laura Perez-Beltrachini, Lu Wang, Miltiadis Allamanis, Mohit Iyyer, Nanyun Peng, Sam Wiseman, Shashi Narayan, Sudha Rao, Tatsunori Hashimoto, Xiaojun Wan, Xipeng Qiu

Information Extraction

  • SACs: Doug Downey, Hoifung Poon
  • ACs: Alan Ritter, Chandra Bhagavatula, Gerard de Melo, Kai-Wei Chang, Marius Pasca, Mo Yu, Radu Florian, Ruihong Huang, Sameer Singh, Satoshi Sekine, Snigdha Chaturvedi, Sumithra Velupillai, Timothy Miller, Vivek Srikumar, William Yang Wang, Yunyao Li

Information Retrieval and Text Mining

  • SACs: Chin-Yew Lin, Nazli Goharian
  • ACs: Andrew Yates, Arman Cohan, Bing Qin, Craig Macdonald, Danai Koutra, Elad Yom-Tov, Franco Maria Nardini, Kalliopi Zervanou, Luca Soldaini, Nicola Tonellotto, Pu-Jen Cheng, Seung-won Hwang, Yangqiu Song, Yansong Feng

Interpretability and Analysis of Models for NLP

  • SACs: Yoav Goldberg
  • ACs: Adina Williams, Afra Alishahi, Douwe Kiela, Grzegorz Chrupała, Marco Baroni, Yonatan Belinkov, Zachary C. Lipton

Language Grounding to Vision, Robotics and Beyond

  • SACs: Yoav Artzi
  • ACs: Angeliki Lazaridou, Dan Goldwasser, Jason Baldridge, Jesse Thomason, Lisa Anne Hendricks, Parisa Kordjamshidi, Raffaella Bernardi, Vicente Ordonez, Yonatan Bisk

Machine Learning for NLP

  • SACs: Andre Martins, Isabelle Augenstein
  • ACs: Ankur Parikh, Anna Rumshisky, Bruno Martins, Caio Corro, Dani Yogatama, Daniel Beck, Dipanjan Das, Edouard Grave, Emma Strubell, Gholamreza Haffari, Ivan Titov, Joseph Le Roux, Jun Suzuki, Kevin Gimpel, Michael Auli, Ming-Wei Chang, Shay B. Cohen, Vlad Niculae, Waleed Ammar, Wilker Aziz, Yejin Choi, Zita Marinho, Zornitsa Kozareva

Machine Translation

  • SACs: Marine Carpuat, Alexandra Birch
  • ACs: Ann Clifton, Antonio Toral, Atsushi Fujita, Boxing Chen, Carolina Scarton, Chi-kiu Lo, Christian Hardmeier, Deyi Xiong, François Yvon, George Foster, Jiajun Zhang, Jörg Tiedemann, Maja Popović, Marcello Federico, Marcin Junczys-Dowmunt, Marco Turchi, Marta R. Costa-jussà, Matt Post, Nadir Durrani, Qun Liu, Rico Sennrich, Taro Watanabe, Yuki Arase, Yvette Graham

Multidisciplinary and Area Chair COI

  • SACs: Michael Strube
  • ACs: Anders Søgaard, David Schlangen, Katrin Erk, Kentaro Inui, Kevin Duh, Massimo Poesio, Mausam, Pascal Denis

NLP Applications

  • SACs: Preslav Nakov, Karin Verspoor
  • ACs: Alexander Fraser, Antonio Jimeno Yepes, Aoife Cahill, Daniel Cer, Diarmuid Ó Séaghdha, Giovanni Da San Martino, Hassan Sajjad, Kevin Cohen, Marcos Zampieri, Michel Galley, Min Zhang, Pierre Zweigenbaum, Razvan Bunescu, Sara Rosenthal, Tristan Naumann, Vincent Ng, Wei Gao, Wei Lu

Phonology, Morphology and Word Segmentation

  • SACs: Kemal Oflazer
  • ACs: Christo Kirov, David R. Mortensen, Kareem Darwish, Reut Tsarfaty, Yue Zhang, Özlem Çetinoğlu

Question Answering

  • SACs: Eugene Agichtein, Alessandro Moschitti
  • ACs: Avi Sil, Dina Demner-Fushman, Evangelos Kanoulas, Gerhard Weikum, Idan Szpektor, Jimmy Lin, Oleg Rokhlenko, Sanda Harabagiu, Wen-tau Yih, William Cohen

Resources and Evaluation

  • SACs: Nathan Schneider, Barbara Plank
  • ACs: Allyson Ettinger, Annemarie Friedrich, Antonios Anastasopoulos, Arianna Bisazza, Claire Bonial, Daniel Zeman, Emmanuele Chersoni, Ines Rehbein, Lonneke van der Plas, Maria Liakata, Sara Tonelli, Sarvnaz Karimi, Tim Van de Cruys, Vered Shwartz, Walid Magdy, Çağrı Çöltekin

Semantics: Lexical

  • SACs: Ekaterina Shutova, Aline Villavicencio
  • ACs: Alessandro Lenci, Anna Feldman, Aurélie Herbelot, Beata Beigman Klebanov, Carlos Ramisch, Chris Biemann, Enrico Santus, Fabio Massimo Zanzotto, Helen Yannakoudakis, Ivan Vulić, Jose Camacho-Collados, Marianna Apidianaki, Paul Cook, Saif Mohammad

Semantics: Sentence Level

  • SACs: Mohit Bansal
  • ACs: Andreas Vlachos, Christopher Potts, Danqi Chen, Eunsol Choi, He He, Jonathan Berant, Kevin Small, Marek Rei, Sebastian Ruder, Siva Reddy, Swabha Swayamdipta, Thomas Wolf, Veselin Stoyanov

Semantics: Textual Inference and Other Areas of Semantics

  • SACs: Sam Bowman
  • ACs: Anette Frank, Eduardo Blanco, Edward Grefenstette, Jacob Andreas, Jonathan May, Kenton Lee, Lasha Abzianidze, Luheng He, Mehrnoosh Sadrzadeh, Rachel Rudinger, Roy Schwartz, Valeria de Paiva

Sentiment Analysis, Stylistic Analysis, and Argument Mining

  • SACs: Smaranda Muresan, Swapna Somasundaran
  • ACs: Bing Liu, Claire Cardie, Elena Musi, Iryna Gurevych, Julian Brooke, Lun-Wei Ku, Marie-Francine Moens, Minlie Huang, Paolo Rosso, Roman Klinger, Serena Villata, Soujanya Poria, Thamar Solorio, Yulan He

Speech and Multimodality

  • SACs: Eric Fosler-Lussier
  • ACs: Bhuvana Ramabhadran, Florian Metze, Gerasimos Potamianos, Hamid Palangi, Martha Larson

Summarization

  • SACs: Fei Liu
  • ACs: Caiming Xiong, Giuseppe Carenini, Katja Markert, Manabu Okumura, Michael Elhadad, Ramesh Nallapati, Sebastian Gehrmann, Wenjie Li, Xiaodan Zhu, Yang Gao

Syntax: Tagging, Chunking and Parsing

  • SACs: David Chiang
  • ACs: Carlos Gómez-Rodríguez, Emily Pitler, Liang Huang, Miguel Ballesteros, Miryam de Lhoneux, Slav Petrov, Stephan Oepen, Weiwei Sun

THEME

  • SACs: Marilyn Walker (taking over for Ellen Riloff)
  • ACs: Donia Scott, Johan Bos, Luke Zettlemoyer, Philipp Koehn, Raymond Mooney

Theory and Formalism in NLP (Linguistic and Mathematical)

  • SACs: Daniel Gildea
  • ACs: Alexander Koller, Laura Kallmeyer, Marco Kuhlmann

Local Organisation Chairs

Priscilla Rasmussen, ACL

With advice from:

Jianfeng Gao, Microsoft Research

Luke Zettlemoyer, University of Washington

Tutorial Chairs

Agata Savary, University of Tours, France

Yue Zhang, Westlake University, Hangzhou, China

The call, submission, reviewing and selection of tutorials was coordinated jointly for 4 conferences: ACL, AACL-IJCNLP, COLING and EMNLP.

Before drafting the call, we collected lists of tutorials offered within the past 4 years. We analysed previous calls for tutorials and reports from tutorial chairs (from 2016, 2017, 2018 and 2019). We consulted previous tutorial chairs with a questionnaire including questions about: the number of submissions, encouraging submissions on specific topics or from specific lecturers, the review procedure, the evaluation criteria, the post-tutorial availability of the slides/codes, and lessons learned from tutorial coordination. We also discussed the publication of slides and video recordings from future tutorials with those in charge of the ACL Anthology. As a result of these steps, we created two new sections for the ACL Conference Handbook; future chairs might consider updating these documents yearly.

The final call differs from previous calls in several respects: (i) the expectations about tutorial proposals were made clearer, (ii) following the central ACL decision, the teachers' payment policy was replaced by a fee-waiving policy, (iii) the required submission details include two new items, diversity considerations and agreement to open-access publication of slides, code, data and video recordings, and (iv) the evaluation criteria (see below) were announced.

We recruited a review committee of 19 members, including the 8 tutorial chairs and 11 external members selected for their broad understanding of the NLP domain and their experience in reviewing and/or tutorial teaching:

Review Committee

  • Timothy Baldwin (University of Melbourne, Australia) - AACL-IJCNLP 2020 tutorial chair
  • Daniel Beck (University of Melbourne, Australia) - COLING 2020 tutorial chair
  • Emily M. Bender (University of Washington, WA, USA)
  • Erik Cambria (Nanyang Technological University, Singapore)
  • Gaël Dias (University of Caen Normandie, France)
  • Stefan Evert (Friedrich-Alexander-Universität Erlangen-Nürnberg, Germany)
  • Yang Liu (Tsinghua University, Beijing, China)
  • Agata Savary (University of Tours, France) - ACL 2020 tutorial chair
  • João Sedoc (Johns Hopkins University, Baltimore, MD, USA)
  • Lucia Specia (Sheffield University, UK) - COLING 2020 tutorial chair
  • Xu Sun (Peking University, China)
  • Yulia Tsvetkov (Carnegie Mellon University, Pittsburgh, PA, USA)
  • Benjamin Van Durme (Johns Hopkins University, Baltimore, MD, USA) - EMNLP 2020 tutorial chair
  • Aline Villavicencio (University of Sheffield, UK and Federal University of Rio Grande do Sul, Brazil) - EMNLP 2020 tutorial chair
  • Taro Watanabe (Google, Inc., Tokyo, Japan)
  • Aaron Steven White (University of Rochester, NY, USA)
  • Fei Xia (University of Washington, WA, USA) - AACL-IJCNLP 2020 tutorial chair
  • Yue Zhang (Westlake University, Hangzhou, China) - ACL 2020 tutorial chair
  • Meishan Zhang (Tianjin University, China)

In total, we received 43 submissions for the 4 conferences. Each reviewer was assigned 6-7 proposals and each proposal received 3 reviews. The selection criteria included: clarity and preparedness, novelty or timely character of the topic, lecturers' experience, likely audience interest, open access of the teaching material, diversity aspects (multilingualism, gender, age and country of the lecturers), and compatibility with the preferred venues. We accepted 31 proposals.
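
The workload figures quoted above are internally consistent, as a quick arithmetic check shows:

```python
# Sanity check of the tutorial reviewing workload figures.
proposals = 43            # tutorial proposals across the 4 conferences
reviews_per_proposal = 3  # each proposal received 3 reviews
committee_size = 19       # 8 tutorial chairs + 11 external members

total_reviews = proposals * reviews_per_proposal   # 129 reviews in total
per_reviewer = total_reviews / committee_size      # just under 7 each,
                                                   # matching "6-7 proposals"
```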

Decision making was handled via an online meeting of the 8 tutorial chairs. In particular, the selection of tutorials for each conference was done via expressions of interest by the tutorial chairs on a round-robin basis. Some slight adjustments were made after the meeting to better fit the authors' preferences. In total, 8, 8, 8 and 7 proposals were selected for ACL, AACL-IJCNLP, COLING and EMNLP, respectively. Upon the announcement of the results, 2 of the proposals accepted for AACL-IJCNLP were withdrawn.

The submission, review, selection and collection of final material for all tutorials was handled via a dedicated SoftConf space, shared by the 4 coordinating conferences. After the selection of proposals, a separate track was created on SoftConf for each conference. The final submission page (one per conference) was set up so as to collect all the necessary data including notably: the tutorial slides, URLs for course material (if any), printable material (if any) and agreement for open access publication.

The final selection for ACL 2020 consists of the following 8 tutorials of 3 hours each (each of them had ACL as the preferred or the second preferred venue):

Morning Tutorials

T1: Interpretability and Analysis in Neural NLP (cutting-edge)
Yonatan Belinkov, Sebastian Gehrmann and Ellie Pavlick
While deep learning has transformed the NLP field and impacted the larger computational linguistics community, the rise of neural networks is stained by their opaque nature: It is challenging to interpret the inner workings of neural network models, and explicate their behavior. Therefore, in the last few years, an increasingly large body of work has been devoted to the analysis and interpretation of neural network models in NLP.
This body of work is so far lacking a common framework and methodology. Moreover, approaching the analysis of modern neural networks can be difficult for newcomers to the field. This tutorial aims to fill this gap and introduce the nascent field of interpretability and analysis of neural networks in NLP.
The tutorial covers the main lines of analysis work, such as probing classifiers, behavior studies and test suites, psycholinguistic methods, visualizations, adversarial examples, and other methods. We highlight not only the most commonly applied analysis methods, but also the specific limitations and shortcomings of current approaches, in order to inform participants where to focus future efforts.

T2: Multi-modal Information Extraction from Text, Semi-structured, and Tabular Data on the Web (cutting-edge)
Xin Luna Dong, Hannaneh Hajishirzi, Colin Lockard and Prashant Shiralkar
The World Wide Web contains vast quantities of textual information in several forms: unstructured text, template-based semi-structured webpages (which present data in key-value pairs and lists), and tables. Methods for extracting information from these sources and converting it to a structured form have been a target of research from the natural language processing (NLP), data mining, and database communities. While these researchers have largely separated extraction from web data into different problems based on the modality of the data, they have faced similar problems such as learning with limited labeled data, defining (or avoiding defining) ontologies, making use of prior knowledge, and scaling solutions to deal with the size of the Web.
In this tutorial we take a holistic view toward information extraction, exploring the commonalities in the challenges and solutions developed to address these different forms of text. We will explore the approaches targeted at unstructured text that largely rely on learning syntactic or semantic textual patterns, approaches targeted at semi-structured documents that learn to identify structural patterns in the template, and approaches targeting web tables which rely heavily on entity linking and type information.
While these different data modalities have largely been considered separately in the past, recent research has started taking a more inclusive approach toward textual extraction, in which the multiple signals offered by textual, layout, and visual clues are combined into a single extraction model made possible by new deep learning approaches. At the same time, trends within purely textual extraction have shifted toward full-document understanding rather than considering sentences as independent units. With this in mind, it is worth considering the information extraction problem as a whole to motivate solutions that harness textual semantics along with visual and semi-structured layout information. We will discuss these approaches and suggest avenues for future work.

T3: Reviewing Natural Language Processing Research (introductory)
Kevin Cohen, Karën Fort, Margot Mieskes and Aurélie Névéol
As the demand for reviewing grows, so must the pool of reviewers. As the survey presented by Graham Neubig at ACL 2019 showed, a considerable number of reviewers are junior researchers, who might lack the experience and expertise necessary for high-quality reviews. Some of them might also lack the environment or the opportunities that would allow them to learn the necessary skills. A tutorial on reviewing for the NLP community might increase reviewers’ confidence, as well as the quality of the reviews. This introductory tutorial will cover the goals, processes, and evaluation of reviewing research papers in natural language processing.

T4: Stylized Text Generation: Approaches and Applications (cutting-edge)
Lili Mou and Olga Vechtomova
Text generation has played an important role in various applications of natural language processing (NLP), and in recent studies, researchers are paying increasing attention to modeling and manipulating the style of the generated text, which we call stylized text generation. In this tutorial, we will provide a comprehensive literature review in this direction. We start from the definition of style and different settings of stylized text generation, illustrated with various applications. Then, we present different settings of stylized generation, such as parallel supervised, style label-supervised, and unsupervised. In each setting, we delve deep into machine learning methods, including embedding learning techniques to represent style, as well as adversarial learning and reinforcement learning with cycle consistency to match content while distinguishing different styles. We also introduce current approaches to evaluating stylized text generation systems. We conclude our tutorial by presenting the challenges of stylized text generation and discussing future directions, such as small-data training, non-categorical style modeling, and a generalized scope of style transfer (e.g., controlling the syntax as a style).

Afternoon Tutorials

T5: Achieving Common Ground in Multi-modal Dialogue (cutting-edge)
Malihe Alikhani and Matthew Stone
All communication aims at achieving common ground (grounding): interlocutors can work together effectively only with mutual beliefs about what the state of the world is, about what their goals are, and about how they plan to make their goals a reality. Computational dialogue research offers some classic results on grounding, which unfortunately offer scant guidance to the design of grounding modules and behaviors in cutting-edge systems. In this tutorial, we focus on three main topic areas: 1) grounding in human-human communication; 2) grounding in dialogue systems; and 3) grounding in multi-modal interactive systems, including image-oriented conversations and human-robot interactions. We highlight a number of achievements of recent computational research in coordinating complex content, show how these results lead to rich and challenging opportunities for doing grounding in more flexible and powerful ways, and canvass relevant insights from the literature on human-human conversation. We expect that the tutorial will be of interest to researchers in dialogue systems, computational semantics and cognitive modeling, and hope that it will catalyze research and system building that more directly explores the creative, strategic ways conversational agents might be able to seek and offer evidence about their understanding of their interlocutors.

T6: Commonsense Reasoning for Natural Language Processing (introductory)
Maarten Sap, Vered Shwartz, Antoine Bosselut, Dan Roth and Yejin Choi
In our tutorial, we (1) outline the various types of commonsense (e.g., physical, social), and (2) discuss techniques to gather and represent commonsense knowledge, while highlighting the challenges specific to this type of knowledge (e.g., reporting bias). We will then (3) discuss the types of commonsense knowledge captured by modern NLP systems (e.g., large pretrained language models), and (4) present ways to measure systems' commonsense reasoning abilities. We finish with (5) a discussion of various ways in which commonsense reasoning can be used to improve performance on NLP tasks, exemplified by an (6) interactive session on integrating commonsense into a downstream task.

T7: Integrating Ethics into the NLP Curriculum (introductory)
Emily M. Bender, Dirk Hovy and Alexandra Schofield
Our goal in this tutorial is to empower NLP researchers and practitioners with tools and resources to teach others about how to ethically apply NLP techniques. Our tutorial will present both high-level strategies for developing an ethics-oriented curriculum, based on experience and best practices, as well as specific sample exercises that can be brought to a classroom. We plan to make this a highly interactive work session culminating in a shared online resource page that pools lesson plans, assignments, exercise ideas, reading suggestions, and ideas from the attendees. We consider three primary topics in our session that frequently underlie ethical issues in NLP research: dual use, bias, and privacy.
In this setting, a key lesson is that there is no single approach to ethical NLP: each project requires thoughtful consideration about what steps can be taken to best support people affected by that project. However, we can learn (and teach) what kinds of issues to be aware of and what kinds of strategies are available for mitigating harm. To teach this process, we apply and promote interactive exercises that provide an opportunity to ideate, discuss, and reflect. We plan to facilitate this in a way that encourages positive discussion, emphasizing the creation of ideas for the future instead of negative opinions of previous work.

T8: Recent Advances in Open-Domain Question Answering (cutting-edge)
Danqi Chen and Scott Wen-tau Yih
Open-domain (textual) question answering (QA), the task of finding answers to open-domain questions by searching a large collection of documents, has been a long-standing problem in NLP, information retrieval (IR) and related fields (Voorhees et al., 1999; Moldovan et al., 2000; Brill et al., 2002; Ferrucci et al., 2010). Traditional QA systems were usually constructed as a pipeline, consisting of many different components such as question processing, document/passage retrieval and answer processing. With the rapid development of neural reading comprehension (Chen, 2018), modern open-domain QA systems have been restructured by combining traditional IR techniques and neural reading comprehension models (Chen et al., 2017; Yang et al., 2019) or even implemented in a fully end-to-end fashion (Lee et al., 2019; Seo et al., 2019). While the system architecture has been drastically simplified, two technical challenges remain critical: (1) “Retriever”: finding documents that (might) contain an answer from a large collection of documents; (2) “Reader”: finding the answer in a given paragraph or a document.
In this tutorial, we aim to provide a comprehensive and coherent overview of recent advances in this line of research. We will start by first giving a brief historical background of open-domain question answering, discussing the basic setup and core technical challenges of the research problem. The focus will then shift to modern techniques and resources proposed for open-domain QA, including the basics of the latest neural reading comprehension systems, new datasets and models. The scope will also be broadened to cover the information retrieval component on how to effectively identify passages relevant to the questions. Moreover, in-depth discussions will be given on the use of traditional/neural IR modules, as well as the trade-offs between modular design and end-to-end training. If time permits, we also plan to discuss some hybrid approaches for answering questions using both text and large knowledge bases (e.g. (Sun et al., 2018)) and give a critical review on how structured data complements the information from unstructured text.
At the end of our tutorial, we will discuss some important questions, including (1) How much progress have we made compared to the QA systems developed in the last decade? (2) What are the main challenges and limitations of current approaches? (3) How to trade off the efficiency (computational time and memory requirements) and accuracy in the deep learning era? We hope that our tutorial will not only serve as a useful resource for the audience to efficiently acquire the up-to-date knowledge, but also provide new perspectives to stimulate the advances of open-domain QA research in the next phase.

Workshop Chairs

Milica Gašić, Heinrich Heine University Düsseldorf

Dilek Hakkani-Tur, Amazon Alexa AI

Saif M. Mohammad, National Research Council Canada

Ves Stoyanov, Facebook AI

Student Research Workshop Chairs and Faculty Advisors

Student Research Workshop Co-chairs

Rotem Dror, Technion - Israel Institute of Technology

Jiangming Liu, The University of Edinburgh

Shruti Rijhwani, Carnegie Mellon University


Student Research Workshop Faculty Advisors

Omri Abend, Hebrew University of Jerusalem

Sujian Li, Peking University

Zhou Yu, University of California, Davis


Information about the Student Research Workshop (SRW) has been posted on the workshop's website: https://sites.google.com/view/acl20studentresearchworkshop/. The SRW Call for Papers has been distributed to ACL mailing lists, as well as on our official Twitter account (@acl_srw) and the ACL meeting's Twitter account (@acl_meeting).


Pre-submission Mentoring Phase (completed mid-February 2020)

Before submission to the main deadline, the SRW offered pre-submission mentoring by experienced researchers of the ACL community. The pre-submission mentoring primarily serves to provide feedback on the writing style, readability and presentation of the paper.

We recruited 30 mentors for providing pre-submission feedback. The deadline for the pre-submission phase was January 17, 2020. We had 57 pre-submissions in total.

Mentors were matched to pre-submissions according to their research areas. All mentors have already provided feedback for the submissions and it was sent to the authors mid-February 2020. The majority of mentors have also offered to participate in follow-up discussions with the authors via email until the main submission deadline.

Vouchers for one month's free use of Grammarly Premium have been sent to all the pre-submission authors. These were provided by the ACL 2020 Diversity and Inclusion Committee.


Main submission

For the main submission, the START (softconf) submission page has been set up. Currently, we have recruited 200 members of the ACL community (both students and senior researchers) to serve as the Program Committee for reviewing submissions to the SRW. We plan on inviting more PC members, as the number of submissions is likely to be larger than originally estimated.

Submission deadlines for the SRW are as follows:

  • Paper submission deadline: March 6, 2020
  • Review deadline: April 10, 2020
  • Acceptance notification: April 15, 2020
  • Camera-ready deadline: May 6, 2020
  • Travel grant application deadline: to be decided.
  • Travel grant notification: to be decided.

We also plan to have a post-acceptance mentoring process, for all papers accepted to the SRW.


Funding

The SRW has applied for an NSF grant of $18,000. The Don and Betty Walker international fund will also be able to provide student support. The SRW organizers have contacted a number of companies to obtain industry sponsorship, but have not yet secured additional funding. Contact has been made with the ACL 2020 sponsorship chairs and with Priscilla to investigate other funding opportunities, as well as the Student Volunteer Program, which helps students cover registration fees for the main conference.

Audio-Video Chairs

Hamid Palangi, Microsoft Research, Redmond

Lianhui Qin, University of Washington

Conference Handbook Chair

Nanyun Peng, University of Southern California

Demo Chairs

Asli Celikyilmaz, Microsoft Research, Redmond

Shawn Wen, PolyAI


Details of Activities:

The website for the ACL 2020 Demonstrations Track is https://acl2020.org/calls/demos/, which includes details about submissions, deadlines, reviewing policy and important dates.

Compared to last year, we have made a few changes to the track. Specifically, in the submission details, we encouraged authors to include visual aids (e.g., screenshots, snapshots, or diagrams) in the paper. This year submissions are single-blind: authors are allowed to disclose their names on the submitted manuscript. We kept the style files the same as last year.

The deadline for submissions was January 31, 2020.

This year we received a record number of demonstration paper submissions: over 130. After a few desk rejects, a total of 122 papers are being reviewed. The technical Program Committee is in place. To guarantee a minimum of three reviewers for each paper, we reached out to close to 300 reviewers, of whom 213 accepted. We managed to assign 3 reviewers to every submitted paper, with no more than 3 papers per reviewer. Currently we have 152 technical program committee members. The program committee is scheduled to submit their reviews by March 10, 2020.

Important Dates

Paper submission deadline: Friday, January 31st, 2020

Notification of acceptance: Friday, April 3rd, 2020

Camera-ready submission: Friday, April 24th, 2020

Diversity & Inclusion (D&I) Chairs

Cecilia Ovesdotter Alm, Rochester Institute of Technology

Vinodkumar Prabhakaran, Google


1. We created five different sub-committees (listed below) to tackle ACL D&I related activities. In the interest of transparency and institutional memory, we prepared a separate memorandum of understanding (MoU) for each sub-committee, which articulates a mission statement, five minimum tasks the sub-committee is responsible for (with the fifth task being a blog post), useful links, and detailed guidelines per task. In these guidelines, each task entry contains:

  • Task title
  • Interfaces (recommendations for whom to communicate with to address the task)
  • Subtasks (an enumerated list of subtask descriptions)
  • Timeline (when to begin)

In designing the task, we built and expanded on NAACL2019 D&I activities and lessons learned. We will hand over the MoUs for future conferences; we hope that this resource will facilitate future D&I committees’ planning activities.

2. For communication and teamwork, we set up:

  • An ACL 2020 D&I Slack channel, facilitating keeping records of interactions.
  • A Google folder with designated subfolders for D&I subcommittees.
  • An ACL 2020 D&I chairs Google Groups email handle: <acl2020-diversity-inclusion-chairs@googlegroups.com>

3. We recruited 13 volunteers across the 5 subcommittees, constituting the ACL 2020 D&I Team, recognized on the conference website: https://acl2020.org/committees/diversity-inclusion.

Academic Inclusion Chairs Mission: Ensure the venue is welcoming to researchers from diverse subdisciplines, conducive to building academic networks across disciplines and career stages.

  • Aakanksha Naik, Carnegie Mellon University
  • Alla Rozovskaya, Queens College (City University of New York)
  • Emily Prud’hommeaux, Boston College

Accessibility Chairs Mission: Ensure the venue is accessible for researchers with any disability, including provision of requested access services.

  • Masoud Rouhizadeh, Johns Hopkins University
  • Naomi Saphra, University of Edinburgh
  • Sushant Kafle, Google/Rochester Institute of Technology

Childcare Chairs Mission: Ensure adequate childcare provisions to help researchers who are caregivers of children to attend the conference.

  • Khyathi Chandu, Carnegie Mellon University
  • Stephen Mayhew, Duolingo

Financial Access Chairs Mission: Ensure provision of financial access to researchers from underrepresented demographics and geographies to attend the conference.

  • Allyson Ettinger, University of Chicago
  • Ryan Georgi, KPMG
  • Tirthankar Ghosal, Indian Institute of Technology (IIT) Patna

Socio-cultural Inclusion Chairs Mission: Ensure a welcoming and inclusive environment for researchers from various socio-cultural subgroups, accommodate for diverse needs for food and drinks at the conference, as well as support initiatives for groups to socialize and network.

  • Maarten Sap, University of Washington
  • Shruti Palaskar, Carnegie Mellon University

Kick-off meetings with all subcommittees took place in December before the winter holidays. Correspondence is mostly taking place on Slack, alternatively by email.

4. A message distributed on ACL2020 social media on September 17 2019 invited community members to share comments and suggestions with the D&I chairs. We received some important feedback.

5. A blog post entitled "The ACL 2020 Diversity and Inclusion Committee" appeared on the ACL 2020 website, and subsequently on social media, on February 4, 2020. We received some important feedback as well as inquiries about accommodations.

6. The sponsorship booklet has been updated for D&I sponsorships. In consultation with Priscilla, we added a third sponsorship level category. The resulting levels are Champion, Ally, and Contributor. The list of benefits is now also up to date. We noted that multipacks may result in lower cost than single-conference sponsorship.

7. Grammarly has provided a generous in-kind donation in the form of writing support software licenses. Codes have been distributed to SRW and WiNLP for distribution among their authors, together with an outreach email template (adapted from NAACL 2019). Joel Tetreault and Tirthankar Ghosal (Financial Access sub-committee) were instrumental in this process. In this context, we also reached consensus on how to recognize in-kind sponsors.

8. We coordinated a room request across subcommittees, submitted to Priscilla as a spreadsheet, detailing space and furniture requirements for sub-committees’ activities.

9. We have submitted a request for a set of updates to D&I items in the registration form and are at work on recommending updates to the D&I special request form.

10. We recommended reconsidering onsite childcare at ACL 2020. We illustrated that onsite childcare is a standard feature at comparative conference venues. Onsite childcare service is missing at ACL conferences and may especially impact junior researchers. Data shared by two comparable AI conferences indicate that childcare usage can increase substantially from one year to another, such that a multiyear commitment should be made for establishing a meaningful utility assessment of onsite childcare. Data on ACL 2019 usage was retrieved by Priscilla, while we obtained proposals from 3 providers. We have chosen KiddieCorp as a potential vendor for this service.

11. With help from the General Chair, we initiated a conversation about the need for a D&I budget. Subsequently, we prepared a detailed budget request, split into costs and back-stop costs (items that apply when there is a request), which was passed on to the ACL Exec. Sushant Kafle (Accessibility sub-committee) was instrumental in the process of obtaining proposals by vendors for access services. Our requested budget is detailed below, which includes the onsite childcare costs as well.

In conclusion, the D&I activities are progressing and awaiting a decision on budget. In addition, several resources prepared or enhanced may facilitate future D&I committees’ planning activities, for instance the MOUs, the coordinated room request, the revised sponsorship booklet section, the detailed budget request summary, the process for distributing the writing support software in-kind donation, and the onsite childcare proposal summary.

Local Sponsorship Chairs

Hoifung Poon, Microsoft

Kristina Toutanova, Google


Publication Chairs

Steven Bethard, University of Arizona

Ryan Cotterell, University of Cambridge

Rui Yan, Peking University

Starting from the ACL 2019 style files, we have produced new LaTeX style files for ACL 2020. Most of the description was retained, but the order of sections was overhauled so that important information is no longer scattered haphazardly across the document. Other improvements were also made, such as using the recommended citation style consistently throughout the LaTeX source and separating all LaTeX-specific material into clearly marked sections. The MS Word version was derived from the LaTeX version to match it as closely as possible. The LaTeX version was also posted to the Overleaf gallery. The most recent .bib file for the entire ACL Anthology was included in the style file distribution to encourage authors to use the official citations for ACL Anthology publications. All style file changes were merged into https://github.com/acl-org/acl-pub/tree/gh-pages/paper_styles.

Publicity Chair

Emily M. Bender, University of Washington

Dissemination

Durable accounts for the ACL meeting on Twitter and Facebook have been created:

* https://twitter.com/aclmeeting
* https://www.facebook.com/aclmeeting/

These will be passed along to the ACL 2021 publicity chair(s) so that they don't have to build up followers separately. As of Feb 4, 2020, the Twitter account has 4,061 followers and the Facebook account has 181. We have not yet been making use of the Instagram account, but we have been using the Twitter and Facebook accounts to publicize important dates as well as blog posts. The Twitter account especially has been useful for fielding questions from the community. Calls for papers have also gone out over the ACL member portal and several mailing lists, as well as websites such as WikiCFP. (These are maintained in a spreadsheet which can be handed off to the ACL 2021 publicity chair(s).)

Next Steps

* Recruit co-chairs, especially to coordinate live-tweeting of the conference
* Contact local media for coverage
* Develop land acknowledgement in consultation with the Duwamish Tribe (on whose land the meeting will take place). The Duwamish publish this information about land acknowledgments: https://www.duwamishtribe.org/land-acknowledgement


Remote Presentation Chairs

Hao Fang, Microsoft Semantic Machines

Yi Luan, Google AI Language

Sustainability Chairs

Ananya Ganesh, Educational Testing Service

Klaus Zechner, Educational Testing Service

Our main goal for this new focus area is to engage the ACL community in discussions about how best to reduce the carbon footprint of future ACL conferences, in order to contribute to sustainable and livable conditions on this planet. Since air travel is the main contributor to the carbon footprint of international conferences, one of the main directions we are currently envisioning is to encourage and support virtual participation by conference attendees using live streaming of conference events.

Website & Conference App Chairs

Sudha Rao, Microsoft Research, Redmond

Yizhe Zhang, Microsoft Research, Redmond

We are hosting the conference website on GitHub using the easily adaptable website architecture built by Nitin Madnani for NAACL 2019: https://github.com/naacl-org/naacl-hlt-2019.
We are using the Whova event app for hosting the conference app this year similar to NAACL 2019.

Business Office

Priscilla Rasmussen, ACL